
The Increasing Importance of Proximity
for Exports from U.S. States
Cletus C. Coughlin

Cletus C. Coughlin is deputy director of research at the Federal Reserve Bank of St. Louis. Molly D. Castelazo provided research assistance.
Federal Reserve Bank of St. Louis Review, November/December 2004, 86(6), pp. 1-18.
© 2004, The Federal Reserve Bank of St. Louis.

Income, trade policies, transportation costs,
technology, and many other variables combine
to determine the levels of international trade
flows. Not only do changes in these determinants
affect the levels of trade flows, they can also have
important consequences for the geographic pattern
of a country’s trade. Some changes are generally
thought to increase the proportion of a country’s
trade with nearby countries relative to its other
trading partners, while other changes tend to
decrease this proportion. For example, if a country
enters into a trade agreement with nearby countries,
it is likely that the country’s share of trade with
nearby countries will increase relative to its trade
with other trading partners. On the other hand,
declining transportation costs can reduce the cost
disadvantage of trading with distant countries and
could thereby increase trade with more distant
countries relative to those nearby.
This paper focuses on the changing geography
of merchandise exports from individual U.S. states
to foreign countries. Due to data limitations, exports
of services are not examined. Two basic questions are
addressed. First, how has the geographic distribution of exports from individual U.S. states changed?
Second, which changes in the economic environment appear to account for the observed changes
in the geographic distribution of state exports?
A useful measure for analyzing the changing
geography of trade is the distance of trade, which is
simply the average distance that a country’s (state’s)
international trade is transported.1 If a country’s
(state’s) distance of trade is declining (increasing)
over time, then its trade is becoming more (less)
intense with nearer countries relative to countries
farther away. In other words, a declining (increasing) distance of trade means that the shares of a country’s (state’s) trade with nearby trading partners are rising (falling) relative to trade with its more distant trading partners.

1. The calculation is straightforward. Assume a state’s exports are shipped to two countries and that the value of exports sent to one country, which is 1,000 miles away, is $800 and the value sent to the other country, which is 3,000 miles away, is $1,200. Thus, 40 percent of the state’s exports are transported 1,000 miles and 60 percent are transported 3,000 miles. The distance of trade is 2,200 miles (40% × 1,000 + 60% × 3,000).

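To make footnote 1’s calculation concrete, here is a minimal sketch of the distance-of-trade computation in Python; the data values are the footnote’s hypothetical example, and the function name is mine.

    # Distance of trade: export-share-weighted average shipping distance.
    def distance_of_trade(values, distances):
        # values[i] = export value sent to country i; distances[i] = miles to country i.
        total = sum(values)
        shares = [v / total for v in values]
        return sum(s * d for s, d in zip(shares, distances))

    # Footnote 1 example: $800 shipped 1,000 miles, $1,200 shipped 3,000 miles.
    print(distance_of_trade([800, 1200], [1000, 3000]))  # 2200.0 miles
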
The analysis begins by summarizing the facts
and the explanations concerning the geographic
distribution of exports throughout the world. An
important feature of the economic geography of
trade flows is the distance that separates a state from
its trading partners. Distance is generally thought
to play a key role in the geographic distribution of
trade for two reasons. First, transportation costs are
higher for longer distances. Second, the costs of
accessing information about foreign markets and
establishing a trade relationship in those markets
are higher for longer distances.2 Thus, a country’s
trade with more distant countries is deterred.
Despite the “death of distance” associated with
the communications revolution, proximity appears
to be increasingly important for trade flows.3 Using
the bilateral trade flows of 150 countries, Carrere
and Schiff (2004) find that during 1962-2000 the
distance of (non-fuel merchandise) trade declined for the average country and that countries with a declining distance of trade were twice as numerous as those with an increasing distance of trade.
After reviewing the geography of exports from
the perspective of individual countries throughout
the world, I examine the geography of the exports
from individual U.S. states to their trading-partner
countries. The distance of trade is calculated annually for each state beginning in 1988, the first year
of detailed geographic data for individual states.
Similar to the finding for the majority of countries, most, but not all, states show a declining distance of trade.
The findings for individual states allow for an
examination of some explanations that may account
for the changing geographic distribution of exports
at the state level. The uneven income growth of
trading partners, the implementation of the North
American Free Trade Agreement (NAFTA), and changing transportation costs are the “usual suspects.”
Possibly, incomes of nearby trading partners have
increased more rapidly than incomes of more distant trading partners. Such a development might
stimulate trade with nearby trading partners (relative to those more distant) so that a state’s distance
of trade declines. Similarly, the implementation of
NAFTA, by reducing trade barriers between the
United States and its major North American trading
partners, might tend to decrease a state’s distance
of trade. Finally, it is possible that transportation
costs have changed to increase the attractiveness
of trading with nearby countries. My goal is to provide suggestive evidence on how these three factors
have changed the geography of the exports of states,
which in turn provides insights concerning the
changing geography of total U.S. exports.

2. See Rauch (1999) for additional discussion of this point.

3. The death of distance has become a popular term because of The Death of Distance: How the Communications Revolution Will Change Our Lives by Frances Cairncross (1997). The book focuses on the economic and social importance of how advances in technology have virtually eliminated distance as a cost in communicating ideas and data. Possibly, this death of distance has made foreign direct investment and trade with proximate countries a more efficient way to serve markets than trade over long distances.

THE CHANGING GEOGRAPHY OF
WORLD TRADE
During the second half of the twentieth century,
the volume of international trade throughout the
world increased more rapidly than output. Baier and
Bergstrand (2001) attempted to identify the reasons
for the growth of international trade between the
late 1950s and the late 1980s. They estimated that
declines in transportation costs explained about 8
percent of the average trade growth of several
developed countries, tariff-rate reductions about
25 percent, and income growth the remaining 67
percent. The question for the current study is
straightforward. Have these determinants, which
are related to one another, changed in such a way
that would alter systematically the geography of
state export flows? To date, systematic evidence
relating these determinants to state export flows is
lacking. In fact, little evidence exists as to how
changes in these determinants of trade have affected
the geography of world trade flows.

Changing Transportation Costs—
Usual Suspect No. 1
The costs of transporting goods from a producer
in one country to a final user in another country are
large. Putting a precise number on “large” is very
difficult and undoubtedly varies across goods and
countries. Despite this difficulty, Anderson and
van Wincoop (forthcoming) estimate international
transportation costs for industrialized countries to
be equivalent to a tax of 21 percent. Additional transportation costs are incurred to move internationally
traded goods within exporting countries and within
importing countries. Not surprisingly, changes in
transportation costs can have large effects on trade
flows. Not only can reductions in transportation
costs lead to increased trade flows directly, but also
indirectly by affecting the profitability of production
in specific locations.
A point that might not be intuitively obvious is
that a decline in transportation costs might cause
either an increase or a decrease in a country’s (state’s)
distance of trade. In the context of ocean shipping
costs, it depends on the nature of the change in
transportation costs.
Ocean shipping transportation costs can be
divided into those unrelated to distance, known as
dwell costs, and those related to distance, known as
distance costs. Dwell costs cover various aspects,
such as the cost of loading and unloading ships and
the cost (including time) of queuing outside a port
waiting to be serviced. On the other hand, distance
costs are related positively to the distance from
port to port. For example, the longer the distance
between ports, the larger the fuel costs of transporting a given shipment.
In theory, reductions in both dwell costs and
distance costs increase international trade flows;
however, their effects on the distance of trade differ.
A reduction in dwell costs increases the incentive to
trade with nearby locations relative to distant locations; this is so because dwell costs make up a larger
proportion of total transport costs for shorter distances.4 Thus, a reduction in dwell costs tends to
reduce the distance of trade. On the other hand, a
reduction in distance costs increases the incentive
to trade with distant locations relative to nearby
locations. The reduced cost per mile causes a larger
proportional decrease in transport costs for longer
distances. Thus, a reduction in distance costs tends
to increase the distance of trade.5

4. For example, assume dwell costs of $100,000 and distance costs per mile of $200. If so, then the cost of a trip of 1,000 miles is $300,000 and a trip of 4,500 miles is $1 million. Thus, for the shorter (longer) trip the respective shares of the transportation costs are 33 (10) percent for the dwell costs and 67 (90) percent for the distance costs. As a result, a reduction in dwell costs, say from $100,000 to $50,000, has a larger proportional effect on costs for the shorter trip; a reduction in distance costs, say from $200 per mile to $100 per mile, has a larger proportional effect on costs for the longer trip.

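A small script paraphrasing footnote 4’s arithmetic shows why a cut in dwell costs favors short hauls while a cut in per-mile costs favors long hauls; all numbers are the footnote’s hypothetical values.

    def trip_cost(dwell, per_mile, miles):
        # Total shipping cost = distance-independent dwell cost + per-mile distance cost.
        return dwell + per_mile * miles

    for miles in (1_000, 4_500):
        base = trip_cost(100_000, 200, miles)
        low_dwell = trip_cost(50_000, 200, miles)      # dwell cost halved
        low_per_mile = trip_cost(100_000, 100, miles)  # per-mile cost halved
        print(f"{miles} miles: dwell cut saves {1 - low_dwell / base:.0%}, "
              f"per-mile cut saves {1 - low_per_mile / base:.0%}")
    # 1000 miles: dwell cut saves 17%, per-mile cut saves 33%
    # 4500 miles: dwell cut saves 5%, per-mile cut saves 45%
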
Because evidence on dwell and distance costs
is limited, it is very difficult to reach firm conclusions
concerning their evolution and, in turn, their effects
on the distance of trade. Hummels (1999) provides
some evidence suggesting technological changes
associated with containerization have reduced both
dwell and distance costs.6 Containerization is a
system of inter-modal transport that uses standard-sized containers that can be loaded directly onto
container ships, freight trains, and trucks. Dwell
costs are reduced because ships spend less time in
port and the cargo can be handled more efficiently.
Meanwhile, the larger and faster ships allowed by
containerization have reduced shipping costs on a
ton-mile basis while the ship is moving between
ports. It is likely, however, that containerization
lowered dwell costs relatively more than distance
costs. In addition, containerization, by eliminating
the unpacking and packing of cargoes at every
change in transport mode, likely reduced the cost
of the inland movement of goods by making the
inter-modal transfer of goods easier. Such changes
should tend to reduce the distance of trade.
Containerization, however, is only one of the
many changes that have affected transportation
costs. Regulatory policies and energy prices are two
additional factors. Whether transportation costs
have in fact declined in recent decades is uncertain
because of the lack of evidence on this issue. For
example, Carrere and Schiff (2004) conclude that
transportation costs have not necessarily declined
across all modes of transportation. First, they cite
evidence provided by Hummels (1999), who found
that ocean freight rates have increased, while air
freight rates have declined rapidly. Hummels also
found evidence that overland transport costs in the
United States have declined relative to ocean freight
rates. In fact, according to Glaeser and Kohlhase
(2004), the costs of moving goods by rail and by
truck within the United States have fallen substantially in a nearly continuous manner since 1890.7

5. A decline in transportation costs might not affect the distance of trade. Eichengreen and Irwin (1998) note that the cost of transporting goods over various distances could decline proportionately. In this case, which they call “distance-neutral” technological progress, such a decline in transportation costs would tend to leave the distance of trade unchanged.

6. Hummels (1999) identified as important the following institutional changes that have affected ocean shipping: open registry shipping, which allows ships to be registered under flags of convenience to avoid some regulatory and manning costs imposed by some countries, and cargo reservation policies, which were designed to ensure that a country’s own ships were granted a substantial share of that country’s liner traffic.

While far from precise in terms of quantifying
the changes in transportation costs, these findings
are consistent with recent changes in the relative
shares of the methods used to transport U.S. exports.
Over time, air and land shipments have displaced
ocean shipments. Figure 1 shows that between 1980
and 2002 the shares of air and land shipments
increased by 11.9 and 14.5 percentage points, respectively, while the share of ocean shipments declined
by 26.4 percentage points. As a result, the majority
of U.S. exports are no longer shipped on ocean
vessels. In fact, in 2002, shipments by air and land
accounted for larger shares of exports than shipments by sea.

Figure 1
U.S. Exports by Transport Mode, 1980-2002
[Figure: line chart of the percentage shares of U.S. exports transported by sea, land, and air, 1980-2002.]
NOTE: The variable Land was created by subtracting the sum of air and vessel exports from total exports.
SOURCE: U.S. Bureau of the Census.

A second source of evidence relevant to changes
in transportation costs relies on studies that estimate
the relationship between distance and international
trade flows.8 Numerous studies have generated estimates of the distance sensitivity of trade or, using
more precise terminology, the distance elasticity of
trade: that is, the percentage change in trade flows
associated with a given percentage increase in the
distance separating one country from its trading
partners. These studies find, not surprisingly, that
the larger the distance that separates two countries,
the smaller the value of trade moving between them.
More important for the current discussion is the
common finding that distance is playing a changing
role over time in the geographic distribution of trade.
For example, results by Frankel (1997) indicate that
if the distance separating a country from two of its
trading partners differed by 10 percent, then trade
flows between the country and its more distant trading partner (relative to the country and its nearby
partner) were 4 percent less during the 1960s and
7 percent less during the 1990s. Overall, the majority of studies indicate that the distance sensitivity of trade is not shrinking, but rather increasing.9 Such a change would tend to decrease the distance of trade.

7. Glaeser and Kohlhase (2004) find that the cost of moving goods has declined by roughly 90 percent since 1890. The costs of transporting goods by rail and by truck have declined at annual rates of 2.5 percent and 2.0 percent, respectively. As a result, they conclude that the cost of moving goods within the United States is no longer an important component of the production process.

8. Using distance as a proxy for transportation costs is problematic for numerous reasons. Distance is generally measured with the “great circle” formula. Actual transportation routes are not this direct. In addition, the use of distance assumes one route between trading regions. Trade between two geographically large countries, such as the United States and Canada, is conducted over many routes. Multiple routes and multiple modes of transportation increase the doubts that distance is a good proxy for transportation costs. As discussed in the text, many transportation costs, such as dwell costs, clearly do not vary with distance. Finally, actual freight rates often bear little connection to distance traveled.

9. Disdier and Head (2004), in a thorough examination of numerous studies using gravity models, conclude that the impact of distance is increasing, albeit slightly, over time. Brun et al. (2003) and Coe et al. (2002) reach a similar conclusion. Berthelon and Freund (2004) find that, rather than a shift in the composition of trade, an increase in the distance sensitivity for more than 25 percent of the industries examined accounts for this result. Research by Rauch (1999), contrary to most studies, finds that the effect of increased distance on trade has declined since 1970.

10. Hummels (2001) identified numerous costs associated with shipping time and its variability. Lengthy and variable shipping times cause firms to incur inventory and depreciation costs. Inventory-holding costs include the financing costs of goods in transit and the costs of maintaining larger inventories at final destinations to handle variation in arrival times. Examples of depreciation, which reflect any reason to prefer a newer good to an older good, include the spoilage of goods (fresh produce), goods with timely information content (newspapers), and goods with characteristics whose demand is difficult to forecast (fashion apparel).

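To see what such distance-elasticity estimates imply, the short sketch below computes the trade reduction from a 10 percent distance difference in a constant-elasticity setting. The elasticities of –0.4 and –0.7 are chosen only to reproduce the approximate 4 and 7 percent effects attributed to Frankel (1997) above; they are illustrative, not his published coefficients.

    # In a constant-elasticity (gravity-style) relationship, trade scales with
    # distance**elasticity, so a 10 percent longer distance reduces trade by:
    def trade_reduction(elasticity, distance_ratio=1.10):
        return 1 - distance_ratio ** elasticity

    for eps in (-0.4, -0.7):  # illustrative elasticities, not estimated values
        print(f"elasticity {eps}: {trade_reduction(eps):.1%} less trade")
    # elasticity -0.4: 3.7% less trade
    # elasticity -0.7: 6.5% less trade
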
A third piece of evidence concerning transportation costs highlights the impact of time.10 The cost
consequences of delays can be quite large. Hummels
(2001) has estimated that each day saved in shipping
time was worth 0.8 percent of the value of manufactured goods. Overall, faster transport between
1958 and 1998 due to increased air shipping and
speedier ocean vessels was equivalent to reducing
tariffs on manufactured goods from 32 percent to 9
percent.11
Time costs have likely played a key role in the
change in shipping modes. Because shipping by air
is much faster than shipping by sea, the decline in
air shipping prices relative to ocean shipping prices
has made the saving of time less expensive. This has
led to relatively large increases in air shipping and
contributed to an increasing frequency of the various
stages of the production of final goods occurring in
different countries. The timeliness of air shipping
can play a key role in the international trade of intermediate goods that characterizes international
production fragmentation.

11. In general, transportation costs, including time costs, have risen as a result of the terrorist attacks of September 11, 2001. Insurance rates, especially for shipping in the Middle East, have increased sharply. Additional scrutiny of containers has also increased costs. According to the Organisation for Economic Co-operation and Development (2002), these costs could run from 1 to 3 percent of trade. Moreover, additional security measures cause delays for importers and exporters that further increase transportation costs.

Some doubts arise about whether transportation
costs have truly declined when distances of trade
are examined. Carrere and Schiff (2004) examined
the distance of trade for approximately 150 countries between 1962 and 2000.12 They found that the
distance of trade declined for the average country
worldwide. For every country with an “empirically
significant” increasing distance of trade, there are
nearly two countries with a decreasing distance of
trade.13 For the average country’s exports during
2000, the distance of trade was slightly less than
4,000 miles. The average decline in the distance of
trade between 1962 and 2000 was approximately
5 percent. For the United States the distance of trade
based on exports was roughly 4,160 miles; however,
contrary to the average country, the distance of trade
for U.S. exports increased. Over the entire period,
the distance of trade for U.S. exports increased by
slightly less than 8 percent. The U.S. distance of
trade did not increase in a consistent pattern throughout the 39-year period. In fact, during the 1980s
and 1990s, the U.S. distance of trade declined.
A declining distance of trade, however, does
not preclude declining transportation costs. As discussed previously, a decline in transportation costs
can cause either an increase or a decrease in the
distance of trade. If the decline is due to a reduction
in dwell costs, then the distance of trade will tend
to decline. It is possible that the distance of trade is
trending downward because of changes in other
determinants. I now turn to one possibility, a proliferation of regional trade agreements.
12. The calculation of the distance of trade (DOT) from country i during time period t is straightforward: DOT_i = Σ_j d_ij × s_ij, where d_ij is the (spherical) distance from the leading (economic) city in country i to the leading city in destination country j and s_ij is the share of i’s exports to country j. The summation occurs over all destination countries. A simple numerical example is contained in footnote 1.

13. The authors define empirically significant as an absolute change in the estimated distance of trade of more than 5.5 percent. For the sample of 150 countries, 77 countries had an empirically significant negative change in either the distance of exports or imports, while 39 countries had an empirically significant positive change.

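Footnotes 8 and 12 refer to spherical (“great circle”) distances between leading cities. Below is a minimal sketch of that standard calculation using the haversine formula; the coordinates are approximate and purely illustrative, and this is not necessarily the exact formula variant used in the studies cited.

    from math import asin, cos, radians, sin, sqrt

    def great_circle_miles(lat1, lon1, lat2, lon2, earth_radius_miles=3959.0):
        # Haversine great-circle distance between two latitude/longitude points.
        dlat = radians(lat2 - lat1)
        dlon = radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * earth_radius_miles * asin(sqrt(a))

    # Illustrative: St. Louis to Toronto, roughly 660 miles.
    print(great_circle_miles(38.63, -90.20, 43.65, -79.38))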

Regional Trade Agreements—
Usual Suspect No. 2
A regional trade agreement eliminates barriers
for trade flows between members, while maintaining
the barriers for trade flows between members and
nonmembers. Standard customs union theory predicts that these agreements will lead to increased
trade between member countries (termed trade
creation) and decreased trade between members
and nonmembers (termed trade diversion). Because
such agreements tend to be formed between neighboring countries, it is reasonable to expect that
regional trade agreements will decrease a member’s
distance of trade. The stronger the effects of both
trade creation and trade diversion, the larger is the
decline in the distance of trade for a member.
Carrere and Schiff (2004) examine the impact
of NAFTA and seven other regional integration
agreements on the evolution of the members’ distance of trade. Almost without exception, they find
that regional trade agreements tend to reduce the
distance of trade for exports. In other words, regional
trade agreements, such as NAFTA, change the geographic trade pattern toward larger shares of trade
with nearby relative to more distant trading partners.

Uneven Income Growth—
Usual Suspect No. 3
A country’s distance of trade can be affected by
the pattern of income growth of its trading partners.
Other things the same, if a country’s nearby trading
partners have greater income growth relative to its
more distant trading partners, the country’s distance
of trade will decline because trade with its nearby
partners will increase relative to trading with more
distant partners.
Carrere and Schiff (2004) provide some examples, as well as regression results, to suggest this
explanation may be important. They note that
countries in the East Asia–Pacific region tended to
grow faster than the world average during 1962-79,
1980-89, and 1990-2000. For each period, the trend
distance of trade is negative for this region. A similar
example involves the countries in NAFTA. The distance of trade tended to increase during 1962-89
and decrease during 1990-2000. Consistent with
this explanation, growth in the NAFTA countries
was below the world average during 1962-1989
and above the world average during 1990-2000.


International Production Fragmentation—
A New Suspect
In addition to the usual suspects, one new suspect has emerged: international production fragmentation. This development has led to major changes
in the location of production and trade flows. A lack
of data precludes an empirical examination of this
explanation for state-level exports. Nonetheless, for
completeness, a brief discussion of the relationship
between international production fragmentation
and the distance of trade seems warranted.
One feature of the expanding integration of
world markets is that companies are outsourcing
increasing amounts of the production process. This
internationalization of production allows firms to
achieve productivity gains by taking advantage of
proximity to markets and/or low-cost labor. The net effect on the distance of trade is unclear: although locating production close to markets tends to reduce the distance of trade, the increased use of low-cost labor can push it in either direction. One can easily find
examples for the United States, such as the growth
of maquiladoras in Mexico, which are associated
with a declining distance of trade. On the other hand,
the increased use by U.S. firms of low-cost labor in
China tends to increase the distance of trade.
Another factor contributing to a declining distance of trade for the United States is the increasing
use of “just-in-time” inventory management. New information and communications technology has propelled this practice. For industries, such
as apparel, in which timely delivery has become
increasingly important, the distance of trade has
decreased. Evans and Harrigan (2003) show that U.S.

GEOGRAPHY OF U.S. STATE EXPORTS
The distance of trade for the United States can
be analyzed by taking a close look at the geography
of exports using state data. In view of the declining
U.S. distance of trade during the past two decades, it
is reasonable to expect that the change in the distance of trade (using state export data summed over
all states) will indicate relatively more intense trade
with proximate regions than with distant regions.
The data in Table 1 show how the destination
of U.S. exports has changed for three five-year
periods during 1988-2002.15 The destinations for
U.S. exports are split into Canada and Mexico, the
two major North American trading partners of the
United States as a whole, and then the rest of the
world is split roughly into continents. Comparing
1988-92 with 1998-2002, it is clear that Canada,
Mexico, and Latin America and the Caribbean are
the destinations for increasing shares of U.S. exports,
while Europe, Asia, Africa, and Oceania are the
destinations for decreasing shares of U.S. exports.
The shift in the share between the regions with an
increasing share and those with a decreasing share
was 9.8 percentage points. Most noteworthy were
the increases by Mexico and Canada of 5.8 and 2.9
percentage points, respectively, and the decreases
in export shares for Europe and Asia of 4.6 and 4.1
percentage points, respectively.

14. Abernathy et al. (1999) argue that three retail apparel/textile regions are developing in the world—the United States plus Mexico and the Caribbean Basin, Japan plus East and Southeast Asia, and Western Europe plus Eastern Europe and North Africa.

15. Export shares are calculated by dividing U.S. exports to each region (averaged across five-year periods) by total U.S. exports to all seven regions (averaged across five-year periods). Regions are constructed using the top 50 export markets for each state, which account for more than 90 percent of each state’s total exports. The definition of regions used for Tables 1 and 6 is not identical to the one used by Coughlin and Wall (2003).

Table 1
Export Destination Share by Region (%)

Region                              1988-92    1993-97    1998-2002
Canada                                 20.4       22.4         23.4
Mexico                                  7.9        9.6         13.7
Latin America and the Caribbean         6.4        7.8          7.4
Europe                                 27.8       22.8         23.2
Asia                                   33.2       33.9         29.1
Africa                                  1.7        1.4          1.2
Oceania                                 2.5        2.2          2.0

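A sketch of the share calculation described in footnote 15, with invented export values chosen to mimic the 1988-92 column of Table 1; the real series average each state’s top 50 export markets over five-year windows.

    # Five-year-average export values by region (illustrative, in billions).
    avg_exports = {"Canada": 204.0, "Mexico": 79.0,
                   "Latin America and the Caribbean": 64.0, "Europe": 278.0,
                   "Asia": 332.0, "Africa": 17.0, "Oceania": 25.0}

    total = sum(avg_exports.values())
    for region, value in avg_exports.items():
        # Share = region's average exports divided by the all-region total.
        print(f"{region}: {100 * value / total:.1f}%")
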
The changing export shares of proximate and
distant regions are suggestive of the changing distance of trade for the nation as a whole. Figure 2
shows the yearly national distance of trade from
1988-2002.16 The distance of trade is substantially
lower for the years at the end of the period compared
with the earlier years. The range of national distance
of trade was 5,664 to 5,702 miles for 1998 through
2002, while no year prior to 1998 had a distance of
trade less than 5,930 miles.17
Figure 2
National Distance of Trade, Calculated Using State-to-Country Distances, 1988-2002
[Figure: line chart of the national distance of trade in miles, 1988-2002, with a trend line.]

16. This measure was calculated using the top 50 export markets for each state.

17. The fact that the U.S. distance of trade measure calculated by Carrere and Schiff (2004) is substantially less than my measure reflects various factors, but most notably how our different measures deal with the impact of trade with Canada and Mexico. Carrere and Schiff use the distance between national capital cities (e.g., Washington, D.C., to Ottawa and Mexico City), while I use the distance between the major economic city in a state to the major economic city in Canada (Toronto) and in Mexico (Mexico City). For 2002 this methodological difference contributes 369 miles to the gap between the two measures.

Because showing the distance of trade for each state (51) for each year (15) would yield a very large number of observations (765), I have chosen to summarize the data.18 Figure 3 shows the distribution of
distance of trade across all states at the beginning
of the sample period in the upper histogram and
at the end of the sample period in the lower histogram.19 Measured on the horizontal axis is the distance of trade for the following ranges in miles:
1,000 to 3,000; 3,000 to 5,000; 5,000 to 7,000; 7,000
to 9,000; 9,000 to 11,000; 11,000 to 13,000; and
13,000 to 15,000. The vertical axis shows the percentage of states with a distance of trade falling into
the given ranges. For 1988, two-thirds of the states
had a distance of trade within the 5,000 to 7,000 mile
range. Two states, Hawaii and Alaska, were outliers
with distances of trade in the 13,000 to 15,000 mile
range.

Figure 3
Probability Distribution of State Distance of Trade in 1988 and 2002
[Figure: two histograms showing the percentage of states by distance-of-trade range, in miles, in 1988 and in 2002.]

Comparing 2002 with 1988, one can easily see the declining distance of trade in the figure. Two-thirds of the states fell into the 5,000 to 7,000 mile range in 1988, whereas 45.1 percent fell into that range in 2002. Generally speaking, the decrease in the 5,000 to 7,000 mile range was matched by an
increase in the 3,000 to 5,000 mile range. For 1988 the percentage of states in the 3,000 to 5,000 mile range was 17.6 percent, while for 2002 the percentage of states in this range was 41.2 percent.

18. For convenience, Washington, D.C., is referred to as a state.

19. A state’s distance of trade was calculated using its top 50 export markets.

Additional suggestive information about changes
in the state-level export distance of trade, especially
how the changes vary across states, was generated
by estimating a simple regression equation. Similar
to Carrere and Schiff (2004), the natural logarithm
of a state’s distance of trade (lnDOT) was regressed
against time (t). For each state, a separate regression
was estimated relating the state’s distance of trade
to time using annual data from 1988-2002. The
specific equation was as follows:
(1) lnDOT = α + βt + ε,

where α is the intercept term, β is the coefficient
relating time to the distance of trade, and ε is the
error term.
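A minimal sketch of the equation (1) regression for a single state, fit by ordinary least squares with numpy; the distance-of-trade series here is synthetic, and the final line applies the paper’s conversion of β to a total percentage change over the 14-year span (see footnote 21).

    import numpy as np

    # Synthetic annual distance-of-trade series for one state, 1988-2002.
    rng = np.random.default_rng(0)
    t = np.arange(15)  # years since 1988
    dot = 6000.0 * np.exp(-0.01 * t + rng.normal(0.0, 0.01, t.size))

    # Equation (1): regress ln(DOT) on time; the slope is beta.
    beta, alpha = np.polyfit(t, np.log(dot), 1)

    # Footnote 21: beta is an instantaneous growth rate, so the implied
    # percentage change over 1988-2002 is (e^(14*beta) - 1) * 100.
    print(f"beta = {beta:.4f}, implied change = {(np.exp(14 * beta) - 1) * 100:.1f}%")
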
Table 2 shows the estimated β for each state,
ordered from the smallest (i.e., most negative) value
to the largest. The estimate for Montana, –0.0429,
was the smallest, while the estimate for Vermont,
0.0388, was the largest. Regressions for 40 of the
51 states generated negative estimates for β, while
regressions for 11 of the 51 states generated positive
estimates.20 Table 2 also shows the percentage
change in the distance of trade based on the coefficient estimate.21 The smaller (i.e., more negative) the
coefficient, the larger is the estimated percentage
decline in a state’s distance of trade. Twenty-seven
states showed declines in their distance of trade
that exceeded 10 percent, while five states showed
increases of more than 10 percent.

20. Using the 5 percent level, only 1 of the 51 estimates is not statistically significant.

21. The calculation of the estimated percentage change in the distance of trade follows from the fact that the coefficient estimate of β is an instantaneous rate of growth. The formula is (e^(β × 14) – 1) × 100.


Table 2
Time-Trend Analysis of the Distance of Trade, 1988-2002

State                   Rank   Coefficient estimate, β   t Statistic   Estimated % change in distance of trade
Montana                   1          –0.0429                –42.2                      –45.2
Indiana                   2          –0.0253                –86.5                      –29.8
South Carolina            3          –0.0248                –65.6                      –29.3
Mississippi               4          –0.0235                –49.3                      –28.1
Wyoming                   5          –0.0230                –48.2                      –27.6
Texas                     6          –0.0224                –47.4                      –27.0
Alabama                   7          –0.0213                –53.7                      –25.7
North Carolina            8          –0.0191                –86.5                      –23.4
Tennessee                 9          –0.0171                –54.6                      –21.3
Ohio                     10          –0.0170                –41.0                      –21.1
South Dakota             11          –0.0141                –18.9                      –17.9
Illinois                 12          –0.0113                –26.5                      –14.7
Utah                     13          –0.0110                –18.1                      –14.2
Iowa                     14          –0.0108                –31.9                      –14.1
Oklahoma                 15          –0.0107                –21.4                      –14.0
Kentucky                 16          –0.0106                –23.6                      –13.8
Pennsylvania             17          –0.0102                –86.1                      –13.3
Nevada                   18          –0.0102                –11.8                      –13.3
New York                 19          –0.0098                –34.3                      –12.8
Louisiana                20          –0.0097                –42.2                      –12.7
Alaska                   21          –0.0093                –48.5                      –12.2
Arizona                  22          –0.0090                –15.2                      –11.8
Georgia                  23          –0.0086                –27.1                      –11.3
Kansas                   24          –0.0082                –28.4                      –10.8
Florida                  25          –0.0081                –20.0                      –10.7
California               26          –0.0081                –29.3                      –10.7
Wisconsin                27          –0.0079                –27.2                      –10.4
Missouri                 28          –0.0072                –16.0                       –9.5
Nebraska                 29          –0.0070                –14.1                       –9.3
New Hampshire            30          –0.0060                –18.2                       –8.1
Connecticut              31          –0.0054                –18.9                       –7.3
Arkansas                 32          –0.0048                –14.4                       –6.5
Washington               33          –0.0039                –11.3                       –5.4
Idaho                    34          –0.0034                –13.1                       –4.6
Oregon                   35          –0.0033                –15.1                       –4.5
Minnesota                36          –0.0025                –12.9                       –3.4
Virginia                 37          –0.0025                –13.6                       –3.4
Colorado                 38          –0.0021                 –6.3                       –2.9
Rhode Island             39          –0.0007                 –2.9                       –0.9
New Jersey               40          –0.0005                 –2.4                       –0.7
Massachusetts            41           0.0003                  1.0                        0.4
Maine                    42           0.0024                  4.8                        3.4
West Virginia            43           0.0031                  9.2                        4.4
Michigan                 44           0.0044                 18.9                        6.4
Hawaii                   45           0.0051                 10.2                        7.4
Maryland                 46           0.0058                 13.0                        8.4
District of Columbia     47           0.0155                 18.7                       24.2
North Dakota             48           0.0163                 35.2                       25.6
Delaware                 49           0.0209                 42.9                       34.0
New Mexico               50           0.0277                 25.6                       47.4
Vermont                  51           0.0388                 56.6                       72.1


POSSIBLE DETERMINANTS OF THE CHANGING GEOGRAPHY OF STATE EXPORTS

In light of the changing geography of state exports—toward relatively larger shares of trade with proximate countries—I examine the same explanations that apply to the changing world geography of trade. As will become apparent, the existing data preclude any strong conclusions.

Changing Transportation Costs
To generate some basic facts about state exports
and distance, the following regression was estimated
for each state using its top 30 export markets for
each year from 1988 through 2002:
(2) EXPSHARE = α + β RGDP + γ DIST + ε,

where EXPSHARE is the share of a state’s exports
shipped to a specific country; RGDP is the real gross
domestic product (GDP) of the destination country;
DIST is the distance from the state to the destination
country; α, β, and γ are the parameters to be estimated; and ε is the error term.22 Because higher real
GDP should be associated with larger export shares,
the expected sign for the estimate of β is positive.
Because longer distances between the exporting
state and the destination country should proxy for
higher transportation costs, the expected sign for
the estimate of γ is negative.
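A sketch of the equation (2) regression for one state, again with synthetic data (the paper uses each state’s top 30 export markets and annual data for 1988-2002); the least-squares fit should recover a positive β and a negative γ.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 30                                    # a state's top 30 export markets
    rgdp = rng.uniform(0.1, 10.0, n)          # destination real GDP (synthetic)
    dist = rng.uniform(1_000.0, 12_000.0, n)  # state-to-country miles (synthetic)
    expshare = 2.0 + 0.9 * rgdp - 0.0004 * dist + rng.normal(0.0, 0.5, n)

    # EXPSHARE = alpha + beta*RGDP + gamma*DIST + eps, fit by least squares.
    X = np.column_stack([np.ones(n), rgdp, dist])
    alpha, beta, gamma = np.linalg.lstsq(X, expshare, rcond=None)[0]
    print(f"beta = {beta:.3f} (expected > 0), gamma = {gamma:.6f} (expected < 0)")
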
The results indicate that the higher the real
GDP of a country, the higher is its export share.23
Not surprisingly, for the vast majority of states (45),
the larger the distance that separates a state from
an export destination, the smaller the export share
of the destination country. Summary results for the
estimate of γ are listed in Table 3.24 An important
question is how the estimated relationship between
distance and export share is changing over time. In
Table 3 the “Trend” column provides this information. Similar to the results cited earlier for the relationship between distance and trade flows using
country data, the relationship between distance
and trade shares using state data indicates that, over time, distance increasingly shifts trade shares toward proximate countries at the expense of trade with
distant countries. This holds for 42 of the 51 states—
for 38 states the sign of the parameter estimate for
distance is negative, with a declining trend (i.e.,
becoming more negative), and for 4 states the sign of
the parameter estimate for distance is positive, with
a declining trend (i.e., becoming less positive).25
Thus, for most states the results suggest that the
parameter estimate for distance is declining over
time. Such a change should tend to decrease a given
state’s distance of trade over time because the export
shares of more distant countries are declining more
rapidly in later periods. One possible explanation for these results is that changes in transportation costs now favor land transportation.

22. The top 30 export markets can vary over time for a given state and vary across states. The countries used in the regressions for each state are available upon request from the author.

23. These results are not reported; however, they are available upon request from the author.

24. Strong statements concerning this evidence are not justified: Statistical significance at the 10 percent level was found for the relationship between distance and export share for 20 percent of the estimates.

25. Additional statistical analysis has been undertaken, the foundations of which can be found in Cheng and Wall (forthcoming), and has yielded similar results concerning how the distance coefficient has changed over time. For each of the five leading U.S. export markets—Canada, Mexico, Japan, Germany, and the United Kingdom—the following two-step procedure was used. First, using annual observations covering five years (1988-92, 1993-97, and 1998-2002) and all states, the share of a state’s exports sent to a specific country was regressed on a time dummy and a state-country dummy. Second, the state-country fixed-effect estimate was regressed on the distance from the state to the specific country. These results are available upon request from the author.

As mentioned previously, Glaeser and Kohlhase
(2004) found that the costs of moving goods by rail
and by truck within the United States have fallen
substantially in a nearly continuous manner since
1890. Whether such declines also apply to trade
with Canada and Mexico is unclear, but there are
some reasons to think that these international transport costs have declined. Exports from the United
States to Canada and Mexico are generally over land.
From 1988 through 2002, roughly 90 percent of
U.S. exports to Canada and Mexico were transported
over land. Declining costs of transportation over
land have tended to favor state exports to Canada
and Mexico relative to trade with more distant locations. The importance of such a change, however,
is difficult to separate from the effects of NAFTA.

Table 3
The Distance Coefficient, γ

State                    Sign   Trend      State              Sign   Trend
Alabama                   –     Down       Montana             –     Down
Alaska                    +     Down       Nebraska            –     Down
Arizona                   –     Down       Nevada              –     Down
Arkansas                  –     Down       New Hampshire       –     Down
California                –     Down       New Jersey          –     Down
Colorado                  –     Down       New Mexico          +     Up
Connecticut               –     Down       New York            –     Down
Delaware                  –     Up         North Carolina      –     Down
District of Columbia      –     Up         North Dakota        –     Up
Florida                   –     Down       Ohio                –     Down
Georgia                   –     Down       Oklahoma            –     Down
Hawaii                    +     Up         Oregon              +     Down
Idaho                     –     Down       Pennsylvania        –     Down
Illinois                  –     Down       Rhode Island        –     Down
Indiana                   –     Down       South Carolina      –     Down
Iowa                      –     Down       South Dakota        –     Down
Kansas                    –     Down       Tennessee           –     Down
Kentucky                  –     Down       Texas               –     Down
Louisiana                 +     Down       Utah                –     Down
Maine                     –     Up         Vermont             –     Up
Maryland                  –     Up         Virginia            –     Down
Massachusetts             –     Down       Washington          +     Down
Michigan                  –     Up         West Virginia       –     Down
Minnesota                 –     Down       Wisconsin           –     Down
Mississippi               –     Down       Wyoming             –     Down
Missouri                  –     Down

NAFTA
NAFTA has the potential to affect a variety of
trade barriers. Extending the previous discussion, a
question is whether NAFTA has had any impact on
transportation costs associated with crossing the
border between the United States and Mexico. The
answer appears to be no.
Seamless border crossings were envisioned as
a feature of NAFTA; however, Haralambides and
Londoño-Kent (2004) note that reality differs substantially from this vision. To complete the physical
transfer of goods from the United States to Mexico
at the key United States–Mexico border crossing—
that is, from Laredo, Texas, to Nuevo Laredo,
Tamaulipas—requires a significant commitment
of time, vehicles, and manpower. The cross-border
transfer may take from two to four days, involve
three or more trucks and trailers, and require three
or four drivers. For comparison, the driving time
from Chicago to Laredo is two days.
The original NAFTA agreement provided that,
as of December 18, 1995, Mexican and U.S. trucking
companies would have full access to and from each
country’s border states. Then, as of January 1, 2000,
this reciprocal access was to have been extended
throughout both countries. Given the inefficiencies

affecting the movement of goods between the United
States and Mexico, the implementation of NAFTA
had the potential to substantially reduce cross-border
transport costs.26 However, for the period under
consideration, the provisions governing cross-border
trucking services were not in effect.

26. Using estimates of the inefficiencies developed by Haralambides and Londoño-Kent (2004), Fox, Francois, and Londoño-Kent (2003) estimated that the elimination of the inefficiencies would cause U.S. exports to Mexico to increase by roughly $6 billion per year.

The Clinton administration, citing safety concerns, decided not to comply with the cross-border
trucking services provisions. The lack of U.S. compliance produced gridlock in terms of implementing
NAFTA’s trucking services provisions. Following the
U.S. decision, a lengthy process involving much
negotiation and a ruling by an arbitration panel to
resolve the resulting disagreement ensued.27 What
appears to be the last roadblock to implementing
the trucking services provisions was eliminated in
June 2004 when the U.S. Supreme Court gave the
Bush administration the authority to open U.S. roads
to Mexican trucks without first completing an extensive environmental study. Thus, despite the potential
for improvements, the actual effects of NAFTA on
cross-border transport costs have been negligible
to date and provide no reason for the declining distance of trade experienced by most states.

27. For details on these deliberations, see North American Free Trade Agreement Arbitral Panel Established Pursuant to Chapter Twenty in the Matter of Cross-Border Trucking Services (Hunter et al., 2001).

Despite having little impact on cross-border
transport costs with respect to Mexico, NAFTA did
reduce trade barriers for U.S. exporters. Let’s examine
regional trade agreements from the perspective of
the state. Economic theory, known formally as customs union theory, suggests that NAFTA should
cause any given region in the United States to trade
more with Canada and Mexico and less with the rest
of the world. Thus, NAFTA should be associated with
a declining distance of trade for each state. However,
recent theoretical advances as part of the new economic geography suggest that the trade creation/trade
diversion dichotomy can be inadequate when factor
mobility is taken into account. This mobility can
shift resources across regions within a member
country or across member-country borders. When
resources are reallocated across regions, production
locations and trade flows are altered as well.
Coughlin and Wall (2003) provide examples to
illustrate the possible consequences of factor mobility. For example, consider a firm initially located in
New Jersey. The formation of NAFTA, by adding
Mexico to the United States–Canada free trade area,
expands the spatial distributions of the firm’s customers and suppliers southward. A firm that locates closer to Mexico will likely increase its potential for profits. If the firm relocates, goods that had
been exported to NAFTA members from New Jersey
would be exported from, perhaps, California. This
relocation might also change the potential profitability of exporting to non-NAFTA markets by altering
shipping costs. Shipments to Asia might become
less expensive, while shipments to Europe might
become more expensive. The key point is that the
consequences of NAFTA for a given state’s distance

of trade are uncertain. Obviously, the effects on the
distance of trade are likely to vary across states.
Coughlin and Wall (2003) use a gravity model
to estimate how the effects of NAFTA differ across
states.28 The estimated percentage change in exports
due to NAFTA is listed in Table 4.29 The effect on a
state’s exports are disaggregated into five regions—
Mexico, Canada, Europe, Asia, and Latin America.
For example, Coughlin and Wall estimated that
NAFTA caused Alabama’s exports to Mexico, Canada,
and Latin America to increase by 43.9 percent, 35.1
percent, and 14.7 percent, respectively. Meanwhile,
NAFTA caused Alabama’s exports to Europe and Asia
to decline by 1.5 percent and 24.6 percent, respectively. The preceding changes caused Alabama’s
total exports, regardless of destination, to increase
by 12.1 percent.

28. A companion article by Wall (2003) estimates the effects of NAFTA on trade flows between subnational regions within North America and between the same subnational regions and non-NAFTA regions.

29. Coughlin and Wall (2003) define regions for their NAFTA estimates differently than regions are defined in this paper. When using Coughlin and Wall’s NAFTA estimates, their regional definitions are used.

Table 4
Estimated Percentage Change in Exports—Effect of NAFTA

State                    Mexico   Canada   Europe     Asia   Latin America    World
Alabama                    43.9     35.1     –1.5    –24.6            14.7     12.1
Alaska                     55.1     35.4     10.5     –0.9           –22.0      3.6
Arizona                    20.9     23.2      8.8     34.8           –24.0     22.5
Arkansas                   33.8     35.6    –19.3      9.8             5.2     17.9
California                 20.2     24.5     –2.8     33.0             4.6     21.2
Colorado                   12.3     17.2      6.6     59.0             3.2     28.5
Connecticut                11.5     14.5      8.3    –10.6           –14.5      4.2
Delaware                   40.6    –60.3     13.4      9.1            16.6    –12.5
District of Columbia       –2.9     42.2    –40.7     –9.1           –18.8    –15.0
Florida                   –10.2     –6.2     –8.9      3.2            –4.2     –4.4
Georgia                    15.9     26.2      2.4     30.0             3.4     16.3
Hawaii                    –22.9    –30.8      1.3     –6.9           –31.2     –8.2
Idaho                     –21.3      9.4    –14.3     41.8           –37.2     15.2
Illinois                    7.0     22.5    –11.8     37.3            18.7     16.5
Indiana                     3.6     42.8      9.3      7.9             0.3     25.9
Iowa                       27.2     24.1      5.7     18.0             6.6     16.7
Kansas                      3.3     42.0     –5.6     27.3             1.4     21.9
Kentucky                    8.0     62.0     10.8     14.6            43.7     35.4
Louisiana                 –11.3      9.7    –13.2     24.0             4.4      6.3
Maine                     –10.0     10.3     –2.6     –2.5           –17.9      1.8
Maryland                    3.1     –0.3    –30.9     49.5             3.3      0.3
Massachusetts              13.7     23.9     –4.9     –8.4           –14.9      1.2
Michigan                   32.6    –16.1    –13.5     14.3            –1.0     –3.6
Minnesota                 –21.9     21.4     –6.4     16.9           –25.3      8.4
Mississippi                 7.3     –4.4    –11.1    –21.6           –32.1    –13.7
Missouri                    4.3     18.1     34.6      2.0            –3.1     16.5
Montana                    54.1     –5.7     18.1    –23.8           –36.9      0.0
Nebraska                   64.4     27.6    –10.7     19.4            –7.7     21.5
Nevada                    –79.4     38.2     31.7     –3.3           –19.8     24.2
New Hampshire              33.4     14.1    –14.7    –10.7           –35.3     –2.2
New Jersey                 –1.1     20.6     –9.1     –0.4            –8.1      2.0
New Mexico                 62.8     –9.5    –13.9     43.9           –26.5     37.2
New York                  –19.3     26.2    –19.0     –9.3           –30.4     –2.9
North Carolina             77.6     42.8      7.2     –8.3            20.0     21.4
North Dakota               18.1     10.2      5.7    –20.1           –26.9      7.3
Ohio                        5.0     20.0    –11.7      8.9             5.5     10.6
Oklahoma                   29.2     –7.1    –12.7     15.4            14.4      2.0
Oregon                     24.5      5.7      9.0     34.7            –0.8     23.1
Pennsylvania                1.6     26.6     –1.5      6.0             5.8     12.0
Rhode Island               –9.0     18.4    –12.2     –7.1           –20.7     –0.9
South Carolina             96.4     42.7     –0.5      4.2             5.1     21.1
South Dakota                5.7     42.8      2.9     –2.8           –27.3     17.9
Tennessee                  38.2     40.7     –2.1     14.6            15.4     22.7
Texas                      13.8     37.9      0.2     12.1            –5.0     13.0
Utah                       26.2     –6.4     32.2    –27.0             2.4      5.5
Vermont                    19.8      8.0     27.1     45.9           –24.3     18.8
Virginia                   46.8     20.8     10.5     –2.2            12.9     10.9
Washington                 –9.9    –14.5    –24.6      8.0           –13.2     –4.8
West Virginia             –44.2     10.9     –7.9     10.8           –43.9     –1.4
Wisconsin                  38.7     23.3     10.1     16.0            –7.9     16.9
Wyoming                    52.8     11.8    –47.2    –22.2             7.0     –4.0
US total                   15.7     15.2     –5.6     15.2            –2.7      7.8

SOURCE: Coughlin and Wall (2003, Table 1C).

Overall, most states did experience increased
exports to the other members of NAFTA. Exports to
Mexico increased by more than 10 percent for 28
states. However, 13 states were estimated to have
experienced declines in exports to Mexico as a result
of NAFTA. Meanwhile, exports to Canada increased
by more than 10 percent for 36 states. On the other
hand, 11 states showed a decline in exports to Canada.
With respect to exports to nonmember countries, exports to Europe declined roughly 6 percent.
Exports to Europe declined for 29 states; however,
contrary to standard customs union theory, exports
to Europe increased for 22 states. As with NAFTA’s
effects on exports to Europe, its effect on state
exports to Asia was far from uniform. Exports to
Asia declined for 20 states and increased for 31
states. Overall, NAFTA had a small negative effect
on exports to Latin America. Exports declined for
29 states and increased for 22 states.
The last column in Table 4 shows the effect on
each state’s exports weighted by the export shares
of the five regions. For most states (38), the effect of
NAFTA was to increase exports. For 12 states, however, the effect was estimated to be negative. For
one state (Montana) the estimated effect was zero.
Suggestive evidence for U.S. states indicates
that NAFTA is associated with a declining distance
of trade. Two pieces of evidence are available. First,
Table 5 shows the average distance of trade by state
for two periods, 1988-93 and 1994-2002. This split
reflects the official beginning of NAFTA in 1994.
A comparison of the means for the two periods
shows 40 states with a declining distance of trade
and 11 with an increasing distance of trade. This
evidence simply reflects the fact that the distance
of trade has trended downward for most states. The
last column in Table 5 shows the ratio of the means
for each state for the two periods. Values exceeding
1 indicate an increasing distance of trade, while
values less than 1 reflect a declining distance of trade.
Of the 40 states with a decreasing distance of trade,
Montana and Wyoming stand out because their
distance of trade decreased by more than 1,000 miles
between the two periods. Using a 10 percent significance level, 23 of these 40 states had a statistically
significant lower mean for 1994-2002 relative to
1988-93. Of the 11 states with an increasing distance
of trade, New Mexico and Vermont stand out because
their distance of trade increased by more than 1,000
miles between the two periods. Of these 11 states,
7 had a statistically significant higher mean for
1994-2002 relative to 1988-93.

Table 5
Mean Distance of Trade (in Miles) for 1994-2002 Relative to Mean Distance of Trade for 1988-93

State                    1988-93    1994-2002    Ratio of means
Alabama                    6,125        5,155         0.84*
Alaska                    13,702       12,771         0.93*
Arizona                    6,714        6,513         0.97
Arkansas                   5,583        5,443         0.97
California                 8,388        8,100         0.97
Colorado                   7,042        7,148         1.02
Connecticut                5,373        5,108         0.95*
Delaware                   3,495        4,160         1.19*
District of Columbia       5,093        5,881         1.15*
Florida                    3,604        3,440         0.95
Georgia                    5,596        5,389         0.96
Hawaii                    13,087       13,779         1.05
Idaho                      8,641        8,368         0.97*
Illinois                   5,495        5,185         0.94
Indiana                    5,062        4,202         0.83*
Iowa                       5,791        5,309         0.92*
Kansas                     6,130        5,860         0.96*
Kentucky                   5,434        4,875         0.90*
Louisiana                  6,737        6,362         0.94*
Maine                      5,351        5,599         1.05
Maryland                   5,056        5,474         1.08*
Massachusetts              5,607        5,591         1.00
Michigan                   3,245        3,366         1.04*
Minnesota                  6,158        6,016         0.98*
Mississippi                5,420        4,659         0.86*
Missouri                   4,507        4,326         0.96
Montana                    5,325        4,189         0.79*
Nebraska                   7,059        6,830         0.97
Nevada                     5,906        5,531         0.94
New Hampshire              5,110        4,846         0.95*
New Jersey                 5,117        5,085         0.99
New Mexico                 7,027        8,881         1.26*
New York                   5,506        5,110         0.93*
North Carolina             5,841        5,009         0.86*
North Dakota               3,129        3,596         1.15*
Ohio                       4,733        4,273         0.90*
Oklahoma                   5,357        5,113         0.95
Oregon                     9,411        9,248         0.98
Pennsylvania               5,177        4,832         0.93*
Rhode Island               4,997        4,964         0.99
South Carolina             5,799        4,856         0.84*
South Dakota               4,884        4,593         0.94
Tennessee                  5,278        4,741         0.90*
Texas                      4,676        4,000         0.86*
Utah                       7,862        7,303         0.93
Vermont                    3,253        4,402         1.35*
Virginia                   5,700        5,527         0.97*
Washington                 9,525        9,337         0.98
West Virginia              5,491        5,640         1.03
Wisconsin                  5,225        4,947         0.95*
Wyoming                    8,143        7,027         0.86*

NOTE: *Using a 10 percent significance level, the hypothesis of equal means for the two periods is rejected.

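The note to Table 5 reports tests of equal means across the two periods. The paper does not spell out the exact test variant, so the sketch below uses a standard two-sample t-test from scipy on a synthetic series for one state.

    import numpy as np
    from scipy import stats

    # One state's distance of trade by year, in miles (synthetic).
    dot_1988_93 = np.array([6125.0, 6090.0, 6150.0, 6210.0, 6080.0, 6120.0])
    dot_1994_02 = np.array([5155.0, 5230.0, 5100.0, 5180.0, 5090.0,
                            5210.0, 5140.0, 5120.0, 5160.0])

    t_stat, p_value = stats.ttest_ind(dot_1988_93, dot_1994_02)
    ratio = dot_1994_02.mean() / dot_1988_93.mean()
    # Reject equal means at the 10 percent level when p_value < 0.10.
    print(f"ratio of means = {ratio:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
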
The second piece of evidence uses the estimates
of Coughlin and Wall (2003). Using the estimates for
the impact of NAFTA on state exports to five regions,
I calculate a distance-weighted measure of NAFTA’s
effect on each state. For each state, this measure is
calculated as follows: Multiply the NAFTA effect
estimated by Coughlin and Wall by the share of a
state’s exports to that region; divide by the distance
from the state to the region; and then sum over the
five regions.30 Larger values of this measure indicate
that NAFTA has had larger impacts on trade with
nearby regions (i.e., Canada, Mexico, and Latin
America) relative to distant regions (i.e., Europe and
Asia). In turn, larger values of this measure should
be associated with larger percentage declines in a
state’s distance of trade.31 In fact, the simple correlation coefficient between this distance-weighted measure of NAFTA and the percentage change in a state’s distance of trade is –0.33, which is statistically significant at the 5 percent level. In other words, across states, larger values of the overall, distance-weighted effect of NAFTA are associated with larger declines in the distance of trade.

30

In equation form, the calculation is DISNAFTA_i = Σ_j (NAFTA_ij × Share_ij)/Distance_ij, where i indicates a specific state, j indicates a specific export region (i.e., Canada, Mexico, Europe, Asia, and Latin America), NAFTA is the estimated change in exports, Share is the percentage of a state’s exports destined for a specific export region, and Distance is the distance from the state to a specific export region.

31

An illustration of the reasoning using two states might be useful. Assume one state’s exports throughout the world were completely unaffected by NAFTA. Meanwhile, assume the other state’s exports to its NAFTA partners increased substantially and its exports to the rest of the world were unaffected. In the preceding scenario one would expect the state affected by NAFTA to show a larger decline in its distance of trade than the state unaffected by NAFTA.
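To make the footnote 30 calculation concrete, here is a minimal sketch; the helper function, region names, and all figures are hypothetical illustrations, not estimates from Coughlin and Wall (2003). The same helper, fed regional GDP growth rates in place of NAFTA effects, yields the DISGROWTH measure defined later in footnote 35.

```python
# Minimal sketch of DISNAFTA_i = sum_j (NAFTA_ij * Share_ij) / Distance_ij
# (footnote 30). All inputs below are hypothetical illustrations.

def distance_weighted(effects, shares, distances):
    """Sum effect x share / distance across export regions for one state."""
    return sum(effects[region] * shares[region] / distances[region]
               for region in effects)

# Hypothetical state: estimated NAFTA effects on exports (%), export shares (%),
# and distances to each region (miles).
effects   = {"Canada": 20.0, "Mexico": 35.0, "Europe": -2.0, "Asia": 0.0, "Latin America": 5.0}
shares    = {"Canada": 40.0, "Mexico": 15.0, "Europe": 20.0, "Asia": 20.0, "Latin America": 5.0}
distances = {"Canada": 500.0, "Mexico": 1200.0, "Europe": 4000.0, "Asia": 7000.0, "Latin America": 3000.0}

# Larger values indicate NAFTA effects concentrated in nearby regions.
print(distance_weighted(effects, shares, distances))
```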

Uneven Income Growth
The last usual suspect that I examine is the
possibility that the growth of U.S. trading partners
has evolved in a manner that would cause demand
for U.S. exports to increase faster at proximate as
opposed to distant locations. Previous research has
explored the connection between income growth
and state exports by using two approaches. One
approach uses regression analysis to estimate the
extent to which foreign incomes affect state exports.
These studies, exemplified by Erickson and Hayward
(1991), Cronovich and Gazel (1998), and Coughlin
and Wall (2003), find a strong, statistically significant
relationship.
A second approach analyzes the connection
between foreign incomes and state exports using
shift-share analysis. Shift-share analyses separate
the change in a state’s exports into potentially meaningful components, one of which is the destination
of a state’s exports. Gazel and Schwer (1998) find
that destination is as important as any other factor,
such as the industry composition of exports, in
accounting for state export performance between
1989 and 1992.32
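For readers unfamiliar with the technique, the sketch below implements a generic destination-based shift-share decomposition, splitting a state’s export change into a world-growth effect, a destination-mix effect, and a competitive effect. It illustrates the general idea, not Gazel and Schwer’s modified procedure, and all data are hypothetical.

```python
# Generic destination-based shift-share decomposition (illustrative sketch;
# not Gazel and Schwer's modified version). The three effects sum exactly to
# the state's actual change in exports.

def shift_share(state0, state1, world0, world1):
    """state0/state1: a state's exports by destination in two years;
    world0/world1: all-origin exports by destination in the same years."""
    g_world = sum(world1.values()) / sum(world0.values()) - 1.0
    world_effect = mix_effect = competitive_effect = 0.0
    for dest in state0:
        g_dest  = world1[dest] / world0[dest] - 1.0   # destination growth
        g_state = state1[dest] / state0[dest] - 1.0   # state's growth to dest
        world_effect       += state0[dest] * g_world
        mix_effect         += state0[dest] * (g_dest - g_world)
        competitive_effect += state0[dest] * (g_state - g_dest)
    return world_effect, mix_effect, competitive_effect

# Hypothetical example (exports in $ millions):
state0 = {"Canada": 50.0, "Europe": 30.0, "Asia": 20.0}
state1 = {"Canada": 65.0, "Europe": 31.0, "Asia": 21.0}
world0 = {"Canada": 500.0, "Europe": 400.0, "Asia": 300.0}
world1 = {"Canada": 600.0, "Europe": 420.0, "Asia": 330.0}
print(shift_share(state0, state1, world0, world1))
```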
To examine the impact of uneven income
growth, I first examine the change in growth in the
major geographic destinations for U.S. exports.33
Table 6 is constructed using compound annual GDP
growth during each of the five-year periods: 1987-92,
1992-97, and 1997-2002.34 The GDP growth calculations, then, essentially reflect the same time periods
as those used in Table 1. Focusing on 1997-2002,
GDP grew relatively more rapidly in Mexico and
Canada than in the other regions.
32

Coughlin and Pollard (2001), however, find that the competitive
effect dominates both the industry mix and destination effects in
accounting for state export growth between 1988 and 1998.

33

Note that the regions discussed here, in Table 1 and in Table 6, are
not composed of the same countries as the regions associated with
the NAFTA measures. See Coughlin and Wall (2003) for a discussion
on the construction of the NAFTA regions.

34

This was calculated using the top 30 export markets for which GDP
is available.


Table 6
Compound Annual GDP Growth by Major Geographic Destination (%)

                                   1987-92   1992-97   1997-2002
Canada                                 3.0      –0.7         1.6
Mexico                                17.0      –2.2        10.2
Latin America and the Caribbean        4.9       9.5       –10.0
Europe                                 5.6      –0.6        –0.4
Asia                                   0.6       2.5        –1.5
Africa                                –0.7       1.7        –1.8
Oceania                                3.2       4.5        –1.8
World                                  3.4       1.3        –1.2

In light of the major importance of these two trading partners, it
is not surprising that the distance of trade for most
states tended to be lower for 1997-2002 relative to
earlier in my sample. The poor economic performance in Latin America and the Caribbean likely
tempered some of the decline in the distance of
trade stemming from the relatively rapid growth in
Mexico and Canada.
Second, I construct a distance-weighted measure
of the growth of each state’s trading partners. This
measure is calculated analogously to the distance-weighted measure of NAFTA used in the preceding
section.35 Larger values of this measure indicate
relatively faster growth for nearby trading partners
than for distant trading partners. Thus, this measure
should be related negatively to the percentage
changes in the distance of trade. The simple correlation coefficient is –0.31, which is statistically
significant at the 5 percent level.
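The correlations reported here and in the preceding section are simple Pearson coefficients computed across states. A minimal sketch of the computation follows, using scipy; the arrays are placeholders standing in for the actual state-level data.

```python
# Minimal sketch of the correlation test; the arrays are placeholders, not the
# underlying state data (the article reports r = -0.33 and r = -0.31).
from scipy.stats import pearsonr

dis_measure    = [0.8, 1.5, 0.2, 2.1, 0.9, 1.7]        # DISNAFTA or DISGROWTH by state
pct_change_dot = [-3.0, -8.0, 1.0, -9.0, -2.0, -6.0]   # % change in distance of trade

r, p_value = pearsonr(dis_measure, pct_change_dot)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # significant at 5 percent if p < 0.05
```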

CONCLUSION
The preceding analysis has addressed two basic
questions concerning the geography of state exports.
First, how has the geographic distribution of state
exports changed? Second, which changes in the
economic environment appear to account for the
observed changes in the geographic distribution of
state exports?
Overall, the geographic distribution of exports has changed so that trade has become relatively more intense with nearby as opposed to distant countries. State trade shares with Mexico, Canada, and Latin America and the Caribbean have increased, while shares with Europe, Asia, Africa, and Oceania have decreased. Reflecting the change in trade shares, the distance of trade for the aggregate of states has declined. However, not all states experienced similar changes: 40 states experienced a declining distance of trade, while 11 states experienced an increasing distance of trade.
Three related changes in the economic environment were examined. Suggestive evidence indicates that all three changes might have contributed to the observed changes in the geographic distribution of state exports and, in turn, overall U.S. exports. Declining costs of transportation over land have tended to favor state exports to Canada and Mexico relative to trade with distant locations. Trade with Canada and Mexico has also been propelled by NAFTA. Coughlin and Wall (2003) estimated the effect of NAFTA on a state-by-state basis. NAFTA was found to have had different effects across states. These differential effects were found to be related to the changes in the distance of trade experienced by states. Finally, income growth by nearby trading partners was found to be related to the changes in the distance of trade experienced by states.
One issue that remains for future research is the extent to which specific industries contribute to the declining distance of trade. Berthelon and Freund (2004) suggest that technological changes might be stimulating production fragmentation within regions. Thus, for a number of industries, changing technology that enhances the advantages of proximity might be an important reason for the declining distance of trade.
35

The formula for calculating the distance-weighted measure of export region growth for each state is DISGROWTH_i = Σ_j (Growth_ij × Share_ij)/Distance_ij. All variables, except Growth, were defined in footnote 30. Growth is simply the annualized growth in GDP between 1987 and 2002 in a region.


REFERENCES

Abernathy, Frederick H.; Dunlop, John T.; Hammond, Janice H. and Weil, David. A Stitch in Time: Lean Retailing and the Transformation of Manufacturing—Lessons from the Apparel and Textile Industries. New York: Oxford University Press, 1999.

Anderson, James E. and van Wincoop, Eric. “Trade Costs.” Journal of Economic Literature (forthcoming).

Baier, Scott L. and Bergstrand, Jeffrey H. “The Growth of World Trade: Tariffs, Transport Costs, and Income Similarity.” Journal of International Economics, February 2001, 53(1), pp. 1-27.

Berthelon, Matias and Freund, Caroline. “On the Conservation of Distance in International Trade.” Working Paper 3293, World Bank Policy Research, May 2004.

Brun, Jean-François; Carrere, Céline; Guillaumont, Patrick and de Melo, Jaime. “Has Distance Died? Evidence from a Panel Gravity Model.” Unpublished manuscript, March 2003.

Cairncross, Frances. The Death of Distance: How the Communications Revolution Will Change Our Lives. Boston: Harvard Business School Press, 1997.

Carrere, Céline and Schiff, Maurice. “On the Geography of Trade: Distance Is Alive and Well.” Working Paper 3206, World Bank Policy Research Center, February 2004.

Cheng, I-Hui and Wall, Howard. “Controlling for Heterogeneity in Gravity Models of Trade and Integration.” Federal Reserve Bank of St. Louis Review (forthcoming).

Coe, David T.; Subramanian, Arvind; Tamirisa, Natalia T. and Bhavnani, Rikhil. “The Missing Globalization Puzzle.” Working Paper WP/02/171, International Monetary Fund, October 2002.

Coughlin, Cletus C. and Pollard, Patricia S. “Comparing Manufacturing Export Growth Across States: What Accounts for the Differences?” Federal Reserve Bank of St. Louis Review, January/February 2001, 83(1), pp. 25-40.

Coughlin, Cletus C. and Wall, Howard J. “NAFTA and the Changing Pattern of State Exports.” Papers in Regional Science, October 2003, 82(4), pp. 427-50.

Cronovich, Ron and Gazel, Ricardo C. “Do Exchange Rates and Foreign Incomes Matter for Exports at the State Level?” Journal of Regional Science, November 1998, 38(4), pp. 639-57.

Disdier, Anne-Célia and Head, Keith. “The Puzzling Persistence of the Distance Effect on Bilateral Trade.” Unpublished manuscript, August 2004.

Eichengreen, Barry and Irwin, Douglas A. “The Role of History in Bilateral Trade Flows,” in Jeffrey A. Frankel, ed., The Regionalization of the World Economy. Chicago: University of Chicago Press, 1998, pp. 33-57.

Erickson, Rodney A. and Hayward, David J. “The International Flows of Industrial Exports from U.S. Regions.” Annals of the Association of American Geographers, September 1991, 81(3), pp. 371-90.

Evans, Carolyn L. and Harrigan, James. “Distance, Time, and Specialization.” Working Paper 9729, National Bureau of Economic Research, May 2003.

Fox, Alan K.; Francois, Joseph F. and Londoño-Kent, María del Pilar. “Measuring Border Crossing Costs and Their Impact on Trade Flows: The United States-Mexican Trucking Case.” Unpublished manuscript, April 2003.

Frankel, Jeffrey A., with Stein, Ernesto and Wei, Shang-Jin. Regional Trading Blocs in the World Economic System. Washington, DC: Institute for International Economics, 1997.

Gazel, Ricardo C. and Schwer, R. Keith. “Growth of International Exports among the States: Can a Modified Shift-Share Analysis Explain It?” International Regional Science Review, 1998, 21(2), pp. 185-204.

Glaeser, Edward L. and Kohlhase, Janet E. “Cities, Regions, and the Decline of Transport Costs.” Papers in Regional Science, January 2004, 83(1), pp. 197-228.

Haralambides, Hercules E. and Londoño-Kent, María del Pilar. “Supply Chain Bottlenecks: Border Crossing Inefficiencies between Mexico and the United States.” International Journal of Transport Economics, June 2004, 31(2), pp. 183-95.

Hummels, David. “Have International Transportation Costs Declined?” Unpublished manuscript, November 1999.

Hummels, David. “Time as a Trade Barrier.” Working Paper No. 18, Global Trade Analysis Project, July 2001.

Hunter, J. Martin; Diaz, Luis Miguel; Gantz, David A.; Hathaway, C. Michael and Ogarrio, Alejandro. North American Free Trade Agreement Arbitral Panel Established Pursuant to Chapter Twenty in the Matter of Cross-Border Trucking Services, Secretariat File No. USA-MEX-98-2008-01, Final Report of the Panel, February 6, 2001.

Organisation for Economic Co-operation and Development. “Economic Consequences of Terrorism.” Economic Outlook, June 2002, No. 71, Chap. 4, pp. 117-40.

Rauch, James E. “Networks versus Markets in International Trade.” Journal of International Economics, June 1999, 48(1), pp. 7-35.

Wall, Howard J. “NAFTA and the Geography of North American Trade.” Federal Reserve Bank of St. Louis Review, March/April 2003, 85(2), pp. 13-26.


Monetary Policy and Asset Prices:
A Look Back at Past U.S. Stock Market Booms
Michael D. Bordo and David C. Wheelock
Michael D. Bordo is a professor of economics at Rutgers University and an associate of the National Bureau of Economic Research. David C. Wheelock is an assistant vice president and economist at the Federal Reserve Bank of St. Louis. Research for this article was conducted while Bordo was a visiting scholar at the Federal Reserve Bank of St. Louis. The authors thank Bill Gavin, Hui Guo, Ed Nelson, Anna Schwartz, and Eugene White for comments on a previous version of this article. Heidi L. Beyer, Joshua Ulrich, and Neil Wiggins provided research assistance.
Federal Reserve Bank of St. Louis Review, November/December 2004, 86(6), pp. 19-44.
© 2004, The Federal Reserve Bank of St. Louis.

Large swings in asset prices and economic
activity in the United States, Japan, and other
countries over the past several years have
brought new attention to the linkages between
monetary policy and asset markets. Monetary policy
has been cited as both a possible cause of asset
price booms and a tool for defusing those booms
before they can cause macroeconomic instability.
Economists and policymakers have focused on
how monetary policy might cause an asset price
boom or turn a boom caused by real phenomena,
such as an increase in aggregate productivity
growth, into a bubble. They have also addressed
how monetary policy authorities should respond
to asset price booms.
This article examines the economic environments in which past U.S. stock market booms
occurred as a first step toward understanding how
asset price booms come about. Have past booms
reflected real economic growth and advances in
productivity, expansionary monetary policy, inflation, or simply “irrational exuberance” that defies
explanation? We use a simple metric to identify
several episodes of sustained, rapid rises in equity
prices in the 19th and 20th centuries and then assess
both narrative and quantitative information about
the growth of real output, productivity, the price
level, the money supply, and credit during each
episode. Across some two hundred years, we find
that two U.S. stock market booms stand out in terms
of their length and rate of increase in market prices—
the booms of 1923-29 and 1994-2000. In general,
we find that booms occurred in periods of rapid
real growth and advances in productivity. We find,
however, no consistent relationship between inflation and stock market booms, though booms have
typically occurred when money and credit growth
were above average. Finally, contrary to conventional

wisdom, we find that wars have not always been
good for the market.
This article begins by reviewing relevant issues
concerning the links between monetary policy
and asset prices. The following section presents a
monthly time series index of U.S. equity prices
spanning two hundred years and identifies boom
episodes. Subsequent sections present a descriptive
history of U.S. stock market booms since 1834,
summarize our findings, and offer conclusions.

MONETARY POLICY ISSUES
The literature on the linkages between monetary
policy and asset markets is vast. Here, we focus on
two issues—the role of asset prices in the transmission of monetary policy to the economy as a whole
and the appropriate response of monetary policy
to asset price booms. The first concerns the extent
to which monetary policy might cause an asset price
boom. The second concerns the circumstances in
which monetary policymakers should attempt to
defuse asset price booms.

Asset Prices and the Transmission Mechanism
There are many views about how monetary
policy might cause an asset price boom. For example,
a traditional view focuses on the response of asset
prices to a change in money supply. In this view,
added liquidity increases the demand for assets,
thereby causing their prices to rise, stimulating the
economy as a whole. A second view, voiced by
Austrian economists in the 1920s and more recently
by economists of the Bank for International Settlements (BIS), argues that asset price booms are more
likely to arise in an environment of low, stable inflation. In this view, monetary policy can encourage



asset price booms simply by credibly stabilizing
the price level. Still another view, coming from the
dynamic general-equilibrium macroeconomics
literature, argues that asset price bubbles can result
from the failure of monetary policy to credibly
stabilize the price level.
The liquidity view has a long history. Some early
Keynesian IS-LM models, such as that of Metzler
(1951), had central bank operations affecting stock
prices directly. A next generation of models, variants
of which are presented in Friedman and Schwartz
(1963b), Tobin (1969), and Brunner and Meltzer
(1973), introduce a broader range of assets into the
traditional Keynesian liquidity mechanism. In these
models, central bank operations that increase liquidity will cause the prices of assets that comprise the
private sector’s portfolio, including equities and
real estate, to rise and thereby lower their returns.
Substitution from more- to less-liquid assets occurs
as the returns on the former decline relative to the
latter. The impact of expansionary monetary policy
will be apparent first in the price of short-term
government securities; then longer-term securities;
then other assets such as stocks, real estate, and
commodities such as gold; and finally in the overall
price level. Thus, this view sees rising asset prices
as a possible harbinger of future inflation.
The Austrian-BIS view argues that an asset
price boom, whatever its fundamental cause, can
degenerate into a bubble if monetary policy passively
allows bank credit to expand to fuel the boom. This
view holds that, unless policymakers act to defuse
a boom, a crash will inevitably follow that in turn
may cause a downturn in economic activity. The
Austrians tended to equate rising asset prices with
general price inflation. For example, although the
level of U.S. consumer prices was virtually unchanged
between 1923 and 1929, the Austrians viewed the
period as one of rapid inflation fueled by loose
Federal Reserve policy and excessive growth of
bank credit (e.g., Rothbard, 1983).1
This view has carried forward into the modern
discussion of asset price booms. Two issues are relevant. The first is whether the price index targeted
by the central bank should include asset prices.
Alchian and Klein (1973) contend that a theoretically
correct measure of inflation is the change in the
price of a given level of utility, which includes the
present value of future consumption. An accurate
1

See Laidler (2003) and the references therein for more on the Austrian
view.


estimate of inflation, they argue, requires a broader
price index than one consisting of only the prices
of current consumption goods and services. To capture the price of future consumption, Alchian and
Klein (1973) contend that monetary authorities
should target a price index that includes asset prices.
Bryan, Cecchetti, and O’Sullivan (2002) concur,
arguing that because it omits asset prices (especially
housing prices), the consumer price index (CPI)
seriously understated inflation during the 1990s.2
A second connection of the Austrian view to the
recent experience concerns the issue of “financial
imbalances,” which Borio and Lowe (2002) define
as rapid growth of credit in conjunction with rapid
increases in asset prices and, possibly, investment.3
Borio and Lowe (2002) argue that a buildup of such
imbalances can increase the risk of a financial crisis
and macroeconomic instability. They construct an
index of imbalances based on a credit gap (deviations of credit growth from trend), an equity price
gap, and an output gap to identify incipient asset
price declines that lead to significant real output
losses, and they advocate its use as a guide for proactive policy action. Eichengreen and Mitchener
(2003) find that a similar index for the 1920s helps
explain the severity of the Great Depression.
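A stylized sketch of such an index follows; it measures each gap as the deviation from a trailing moving-average trend and flags joint threshold breaches. The window and thresholds here are arbitrary illustrations, whereas Borio and Lowe calibrate their gaps and thresholds to historical crisis episodes.

```python
# Stylized "financial imbalance" flag (illustration only; Borio and Lowe, 2002,
# define the gaps and thresholds differently).

def gap(series, window):
    """Deviation of the latest observation from a trailing moving-average trend."""
    trend = sum(series[-window:]) / window
    return series[-1] - trend

def imbalance_flag(credit, equity, output, window=8,
                   credit_thresh=4.0, equity_thresh=40.0, output_thresh=2.0):
    """Flag a buildup when the credit, equity, and output gaps are jointly large."""
    return (gap(credit, window) > credit_thresh and
            gap(equity, window) > equity_thresh and
            gap(output, window) > output_thresh)

# Hypothetical annual index levels:
credit = [100, 103, 106, 110, 115, 122, 131, 142, 158]
equity = [100, 105, 112, 118, 130, 150, 180, 230, 300]
output = [100, 102, 104, 107, 110, 113, 117, 121, 126]
print(imbalance_flag(credit, equity, output))  # True: joint buildup
```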
Borio and Lowe (2002) argue that low inflation
can promote financial imbalances, regardless of
the underlying cause of an asset price boom. For
example, by generating optimism about the macroeconomic environment, low inflation might cause
asset prices to rise more in response to an increase
in productivity growth than they otherwise would.
Similarly, an increase in demand is more likely to
cause asset prices to rise if the central bank is viewed
as credibly committed to price stability. A commitment to price stability that is viewed as credible,
Borio and Lowe (2002) argue, will make product
prices less sensitive and output and profits more
sensitive in the short run to an increase in demand.
At the same time, the absence of inflation may cause
monetary policymakers to delay tightening policy
as demand pressures build. Thus, Borio and Lowe
2

See also Goodhart and Hofmann (2000). Filardo (2000), by contrast,
concludes that including housing prices in an index of inflation would
not substantially improve U.S. economic performance.

3

See also Borio, English, and Filardo (2003) and Borio and White (2003).
See Laidler (2003) and Eichengreen and Mitchener (2003) for discussion
of the similarities and differences between the modern “imbalance”
view and the Austrian emphasis on bank credit induced “forced saving”
as the cause of “overinvestment” in the 1920s that led to the stock
market crash and the Great Depression.


(2002, pp. 30-31) contend that “these endogenous
responses to credible monetary policy [can] increase
the probability that the latent inflation pressures
manifest themselves in the development of imbalances in the financial system, rather than immediate
upward pressure in higher goods and services price
inflation.”
The possibility that monetary policy can produce
asset price bubbles has also been studied extensively
in equilibrium rational-expectations models. In such
models, poorly designed monetary policies, such
as the use of interest rate rules without commitment
to a steady long-run inflation rate, can lead to selffulfilling prophesies and asset price bubbles. Such
outcomes are less likely, Woodford (2003) argues, if
monetary policymakers follow a clear rule in which
the interest rate target is adjusted sufficiently to
stabilize inflation. The theoretical literature thus
suggests that consideration of the monetary policy
environment may be crucial to understanding why
asset booms come about.

Proactive Policy in Response to Asset Price Booms?
The appropriate response, if any, of monetary
policy to an asset price boom was the subject of
extensive debate during the U.S. stock market boom
of 1994-2000 and the recession that followed. Since
periods of explosive growth in asset prices have
often preceded financial crises and contractions in
economic activity, some economists argue that by
defusing asset price booms, monetary policy can
limit the adverse impact of financial instability on
economic activity. The likelihood of a price collapse
and subsequent macroeconomic decline might, however, depend on why asset prices are rising in the
first place. Many analysts believe that asset booms
do not pose a threat to economic activity or the
outlook for inflation so long as they can be justified
by realistic prospects of future earnings growth.
On the other hand, if rising stock prices reflect
“irrational exuberance,” they may pose a threat to
economic stability and, in the minds of many, justify
a monetary policy response to encourage market
participants to revalue equities more realistically.
The traditional view holds that monetary policy
should react to asset price movements only to the
extent that they provide information about future
inflation. This view holds that monetary policy will
contribute to financial stability by maintaining stability of the price level (Bordo, Dueker, and Wheelock,


2002, 2003; Schwartz, 1995), and that financial
imbalances or crises should be dealt with separately
by regulatory or lender-of-last-resort policies
(Schwartz, 2002).4
Many economists do not accept the traditional
view, at least not entirely. Smets (1997), for example,
argues that monetary policy tightening is optimal
in response to “irrational exuberance” in financial
markets (see also Detken and Smets, 2003). Similarly, Cecchetti et al. (2000) contend that monetary
policy should react when asset prices become misaligned with fundamentals. Bernanke and Gertler
(2001) express doubt that policymakers can judge
reliably whether asset prices are being driven by
“irrational exuberance” or that an asset price collapse is imminent. Cecchetti (2003) replies, however,
that asset price misalignments are no more difficult
to identify than other components of the Taylor rule,
such as potential output.
Bordo and Jeanne (2002a,b) offer a novel argument in support of a monetary policy response to
asset price booms. They argue that preemptive
actions to defuse an asset price boom can be
regarded as insurance against the high cost of lost
output should a bust occur. Bordo and Jeanne contend that policymakers should attempt to contain
asset price misalignments when the risk of a bust
(or the consequences of a bust) is large or when
the cost of defusing a boom is low in terms of foregone output. Bordo and Jeanne show that a tension
exists between these two conditions. As investors
become more exuberant, the risk associated with a
reversal in market sentiment increases, but leaning
against the wind of investor optimism requires more
costly monetary actions. Thus, the monetary authorities must evaluate both the probability of a costly
crisis and the extent to which they can reduce this
probability.

FOMC Deliberations About the Stock Market
The debate about the appropriate response of
monetary policy to asset price booms has not taken
place solely in professional journals and working
papers. The implications of rising asset prices
became an increasingly important component of
Federal Reserve policy discussions during the U.S.
4

Bernanke and Gertler (1999, 2001) present the traditional view in the
context of a Taylor rule. Bullard and Schaling (2002), Schinasi and
Hargraves (1993), and White (2004) are among other studies supporting the traditional view.


stock market boom of 1994-2000. Cecchetti (2003)
presents evidence suggesting that movements in
equity prices help explain adjustments in the Federal
Open Market Committee’s (FOMC’s) federal funds
rate target during this period.5
Transcripts of FOMC meetings in 1996 and 1997
reveal that Fed officials focused on a potential
“wealth effect” of rising stock prices on consumer
confidence and spending and worried that a sudden
reversal of equity prices could cause real economic
activity to decline sharply. For example, at a meeting
on March 26, 1996, Chairman Greenspan stated
that “It’s hard to believe that if any series of adverse
developments were to occur, the market would not
come down rather substantially and reverse the
wealth effect. That probably would dampen economic activity quite substantially” (FOMC transcript,
March 26, 1996, p. 29).
Policymakers grew increasingly concerned as
equity prices continued to rise, and the FOMC discussed how to respond. At a Committee meeting
on February 4-5, 1997, Chairman Greenspan stated
that the prevailing level of equity prices, along with
unusually narrow interest rate credit spreads, “suggest[s] that product prices alone should not be the
sole criterion [for conducting monetary policy] if
we are going to maintain a stable, viable financial
system whose fundamental goal…is the attainment
of maximum sustainable economic growth” (FOMC
transcript, February 4-5, 1997, p. 103).
Greenspan saw a conundrum in the use of monetary policy to defuse an asset price boom, however,
and expressed the view that stock market booms
are more likely to occur when inflation is low:
We have very great difficulty in monetary
policy when we confront stock market bubbles. That is because, to the extent that we
are successful in keeping product price inflation down, history tells us that price-earnings
ratios under those conditions go through the
roof. What is really needed to keep stock
market bubbles from occurring is a lot of
product price inflation, which historically
has tended to undercut stock markets almost
everywhere. There is a clear tradeoff. If
monetary policy succeeds in one, it fails in
the other. Now, unless we have the capability
5

Additional evidence of a monetary policy response to the stock market
is presented by Rigobon and Sack (2003). Hayford and Malliaris
(2004), by contrast, find that the Fed did not respond to the market.


of playing in between and managing to know
exactly when to push a little here and to pull
a little there, it is not obvious to me that there
is a simple set of monetary policy solutions
that deflate the bubble. (FOMC transcript,
September 24, 1996, pp. 30-31)
We next turn to the history of past U.S. stock
market booms to try to identify the macroeconomic
environments in which booms have occurred as a
first step toward identifying lessons for the conduct
of monetary policy in these cases.

Historical Data on the U.S. Stock Market
We focus on the stock market because long-term
data on the prices of other assets, e.g., real estate,
are not available and, moreover, because stock prices
are often the focus of policy concerns about the
causes and effects of booms and busts (e.g., during
the late 1990s and the 1920s).6 Our interest is with
the performance of broad stock market averages,
not in the performance of individual stocks or groups
of stocks. Booms, of course, are typically centered
in particular sectors—usually the “high-tech” sectors
of the day—but the booms that capture the attention
of macroeconomists and policymakers are broadly
based. In the 1990s, computer, telecommunications,
and internet stocks were at the epicenter of the stock
market boom. The stock prices of a wide range of
companies also rose sharply, however, and the
broader market averages, such as the Standard and
Poor’s (S&P) 500 and the Wilshire 5000, all increased
substantially, though not as much as the NASDAQ,
which quintupled (see Figure 1 for comparison of
the S&P 500 and NASDAQ from 1990 to 2003).
Schwert (1990) constructed a continuous
monthly stock market index for the United States
for the period 1802-70, linking indices created by
Smith and Cole (1935) for 1802-62 and Macaulay
(1938) for 1863-70. Banks were the first large corporate enterprises in the United States, and for
1802-34, the stock market index consists of only
bank stocks. Railroads, the largest corporate sector
throughout much of the 19th century, got their start
in the 1830s. For 1835-45, the stock market index
comprises both bank and railroad stock prices, and
for 1846-70 only railroad stocks.
6

Helbling and Terrones (2004) examine both housing and stock market
booms for several countries since 1970.


Figure 1: S&P 500 and NASDAQ Composite Indices, January 1990–October 2003 (monthly averages of daily closes). [Chart]

Figure 2: Schwert-S&P Stock Price Index, 1802-2002, actual and nine-year trailing moving average (1941-43 = log(10)). [Chart]


Table 1
U.S. Stock Market Booms: Alternative Starting Dates

Boom beginning in trough | Avg. % change in index during boom (months duration) | Beginning when local peak surpassed | Avg. % change (months) | Beginning when global peak surpassed | Avg. % change (months) | Boom ending month
Feb 1834 | 35.06 (16) | Apr 1834 | 26.45 (14) | Apr 1834 | 26.45 (14) | May 1835
Jan 1843 | 23.35 (36) | Dec 1843 | 12.31 (25) | Dec 1852 | NA | Dec 1845
Nov 1848 | 9.80 (50) | May 1852 | 20.05 (7) | Dec 1852 | NA | Dec 1852
Jul 1861 | 40.20 (33) | Oct 1862 | 37.28 (20) | May 1863 | 27.24 (10) | Mar 1864
Apr 1867 | 8.83 (61) | Jan 1868 | 7.05 (52) | Jan 1868 | 7.05 (52) | Apr 1872
Jun 1877 | 22.58 (49) | Oct 1879 | 21.13 (21) | Apr 1880 | 21.22 (15) | Jun 1881
Aug 1896 | 20.74 (33) | Sep 1896 | 19.47 (32) | Dec 1900 | NA | Apr 1899
Sep 1900 | 22.02 (25) | Nov 1900 | 17.76 (23) | Dec 1900 | 15.34 (22) | Sep 1902
Oct 1903 | 16.74 (36) | Mar 1905 | 12.82 (19) | Mar 1905 | 12.82 (19) | Sep 1906
Aug 1896 | 10.33 (122) | Sep 1896 | 9.92 (121) | Dec 1900 | 7.32 (70) | Sep 1906
Oct 1923 | 23.70 (72) | Nov 1924 | 25.12 (59) | Jan 1925 | 23.97 (45) | Sep 1929
Mar 1935 | 41.32 (24) | Aug 1935 | 30.28 (19) | Sep 1954 | NA | Feb 1937
Apr 1942 | 21.92 (50) | Dec 1944 | 25.78 (18) | Sep 1954 | NA | May 1946
Jun 1949 | 18.08 (44) | Jan 1950 | 15.17 (37) | Sep 1954 | NA | Jan 1953
Sep 1953 | 26.87 (35) | Mar 1954 | 26.88 (29) | Sep 1954 | 24.85 (23) | Jul 1956
Jun 1949 | 18.27 (86) | Jan 1950 | 16.95 (79) | Sep 1954 | 24.85 (23) | Jul 1956
Jun 1962 | 14.79 (44) | Sep 1963 | 10.84 (29) | Sep 1963 | 10.84 (29) | Jan 1966
Jul 1984 | 26.04 (38) | Jan 1985 | 25.96 (32) | Jan 1985 | 25.96 (32) | Aug 1987
Dec 1987 | 11.57 (74) | Jul 1989 | 8.34 (55) | Jul 1989 | 8.34 (55) | Jan 1994
Apr 1994 | 19.64 (77) | Feb 1995 | 21.23 (67) | Feb 1995 | 21.23 (67) | Aug 2000

After the Civil War, the U.S. industrial sector
grew to include large publicly traded manufacturers
of steel, petroleum products, chemicals, and other
goods, and available indices of stock prices reflect
the increasing breadth of the market. We link
Schwert’s (1990) index for 1802-70 to the Cowles
(1939) index of New York Stock Exchange prices
covering 1871-1920 and then to the S&P composite
index. A consistent S&P series is available from 1921
to the present in its modern form, the S&P 500 index.7
Figure 2 plots the entire index, from 1802 to 2002.8
7

Data for 1871-1920 are from the National Bureau of Economic
Research macro-history database (series m11025a), and those for
1921-2002 are from Haver Analytics. Alternative indices are available,
e.g., the Dow Jones Industrial Average, which began in 1895, but the
episodes of “boom” and “bust” that appear in one index are common
to all of the alternative broad indices.

8

The New York Stock Exchange was closed during August-November
1914 and, hence, there are no index values for those months.


Identifying Booms
Our objective is to describe the macroeconomic
environments in which sustained, rapid rises in
stock prices have occurred. Over our entire sample,
two boom episodes stand out, both in terms of their
length and rate of advance in the market index: the
bull markets of 1923-29 and 1994-2000. The rate
of advance in the market index has been faster at
other times, but only for short periods. Similarly,
there have been other long bull markets, but none
with such a large average rate of increase in the
market index. Since the bull market of the 1920s
stands out, we examine both the macroeconomic
environment in which that boom occurred and the
debate it generated among monetary policymakers.
We also examine other episodes of rapid, sustained
increases in the stock market index, however, in our
attempt to identify environmental characteristics
of stock market booms in general.


There is, of course, no precise empirical definition of an asset price boom, and researchers have
imposed a number of filters to identify specific
episodes that they then define as booms. We begin
by using the methodology of Pagan and Sossounov
(2003) to identify sustained periods of rising stock
prices. Beginning in 1834, we identify the peak and
the trough months for the market within a rolling
25-month window.9 We require that peaks and
troughs alternate, and so we eliminate all but the
highest (lowest) of peaks (troughs) that occur before
a subsequent trough (peak). Finally, we identify
booms as periods lasting at least three years from
trough to peak. Table 1 lists all such booms in our
data, plus a few shorter episodes of exceptional
increase in the market index, such as February
1834–May 1835. The table also lists a few periods
that include two or more consecutive booms that
were interrupted by short market declines and that
might be better thought of as a single episode. For
each period, the table also lists the average, annualized increase in the index during the boom period,
i.e., from the month following the trough to the peak
month.
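A compact sketch of this dating procedure is given below. It is a simplified reading of the steps just described (rolling 25-month window, alternation of peaks and troughs, minimum three-year trough-to-peak duration) rather than a full implementation of Pagan and Sossounov (2003); in particular, it ignores endpoint censoring and ties.

```python
# Simplified boom-dating sketch after the procedure described in the text
# (Pagan and Sossounov, 2003). `prices` is a list of monthly index values.

def turning_points(prices, half_window=12):
    """Local peaks/troughs within a rolling 25-month window."""
    peaks, troughs = [], []
    for t in range(half_window, len(prices) - half_window):
        window = prices[t - half_window : t + half_window + 1]
        if prices[t] == max(window):
            peaks.append(t)
        elif prices[t] == min(window):
            troughs.append(t)
    return peaks, troughs

def booms(prices, min_months=36):
    """Trough-to-peak phases lasting at least three years."""
    peaks, troughs = turning_points(prices)
    events = sorted([(t, "T") for t in troughs] + [(p, "P") for p in peaks])
    kept = []
    for t, kind in events:
        if kept and kept[-1][1] == kind:
            # Alternation: among consecutive same-type points keep the extreme one.
            better = (prices[t] > prices[kept[-1][0]] if kind == "P"
                      else prices[t] < prices[kept[-1][0]])
            if better:
                kept[-1] = (t, kind)
        else:
            kept.append((t, kind))
    return [(a, b) for (a, ka), (b, kb) in zip(kept, kept[1:])
            if ka == "T" and kb == "P" and b - a >= min_months]
```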
One might question whether stock market
booms should be defined to include recoveries
from prior stock market declines. Indeed, some of
the booms listed in Table 1 include long periods in
which the market average remained below a prior
peak. Hence, for each boom, we also indicate the
month in which the previous (local) peak was
reached, as well as the month in which the previous
all-time market (global) peak was reached.10 For
each episode, we report the average, annualized
increase in the index from the month following the
attainment of both the prior local and global peaks
to the new peak month. Clearly, a few sustained
booms, such as that of July 1861–March 1864,
involved several months in which the market was
merely recovering to a prior peak (or to the prior
all-time high). Indeed, a few booms ended without
reaching a prior global peak. In general, the average
increase in the market index was larger during the
recovery phase of booms than during the phase in
which the market index exceeded its prior high.
9

We begin with 1834 because the stock market index before that year
comprised only a small number of bank stocks, and, as shown in
Figure 2, there appear to have been no large movements in the index
before then.

10

Helbling and Terrones (2004) use a similar approach in their cross-country study of stock market and real estate booms since 1970.


Interestingly, two exceptions are the booms of the
1920s and of the late 1990s, suggesting again that
these two booms were unique in character as well
as magnitude.
Some studies define booms as sustained periods
of increase in an asset price index above a trend
growth rate (e.g., Bordo and Jeanne, 2002a; Detken
and Smets, 2003). Figures 2 through 4 plot values
of our stock price index alongside a nine-year trailing
moving average of the index. From these charts,
episodes when the market average increased (or
decreased) rapidly relative to its recent trend are
evident. Booms are evident in the mid-1830s, during
the Civil War, from about 1879 to 1881, and, with
interruptions, from about 1896 to 1906. The bull
market of the 1920s is clearly evident, as is the rise
from 1994 to 2000. In addition, the market advanced
well above trend in the early 1950s and again from
about 1984 to 1987.
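For reference, the trend line in Figures 2 through 4 is a simple trailing average of the log index; a minimal sketch, assuming monthly observations so that nine years corresponds to a 108-month window, is given below.

```python
# Nine-year trailing moving average of a monthly series, as plotted in
# Figures 2-4; entries are None until a full 108-month history exists.

def trailing_ma(series, window=108):
    return [sum(series[t - window + 1 : t + 1]) / window if t >= window - 1 else None
            for t in range(len(series))]
```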
Figure 5 plots the “real” (i.e., inflation-adjusted)
stock market index and nine-year trailing moving
average for 1924-2002. In theory, stock prices should
not be affected by inflation that is anticipated.11
Nevertheless, this plot illustrates more clearly that the
bull markets of the 1920s and 1994-2000 stand out
as exceptional periods of sustained, large increases
in real as well as nominal stock prices. The real
stock market index also rose substantially during
the mid-1950s and between 1984 and 1987. Thus,
regardless of how one looks at the data, the same
boom episodes stand out.

The Economic Environment of Booms
Table 2 reports information about the growth
rates of labor productivity, real gross domestic product (GDP), industrial production, money stock, bank
credit, and the price level during the boom episodes
identified in Table 1. Here we define the start of a
boom as the month following a market trough. For
comparison, we also report growth rates of these
variables over longer periods. Unfortunately, few
macroeconomic data exist for early boom periods,
and what data there are usually consist of annual
11

The traditional capital asset pricing model posits that the current
market price of a stock will equal the present discounted value of the
expected dividend stream to the stockholder. Expected inflation should
not affect the current price of the stock because even though expected
inflation may increase the nominal dividend stream, the relevant
interest rate for discounting those earnings also will reflect the
expected inflation. Unanticipated inflation can, of course, wreak havoc
with an investor’s ex post real return on asset holdings, as occurred
during the 1970s.
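The logic of footnote 11 can be checked numerically: in a minimal sketch with hypothetical numbers, scaling both the expected dividend stream and the discount rate by the same fully anticipated inflation leaves the present value, and hence the stock price, unchanged.

```python
# Numeric check of footnote 11 (hypothetical numbers): fully anticipated
# inflation raises nominal dividends and the nominal discount rate together,
# leaving the present value of the stream unchanged.

def present_value(dividends, rate):
    return sum(d / (1 + rate) ** t for t, d in enumerate(dividends, start=1))

real_dividends = [5.0, 5.0, 5.0, 105.0]   # real dividend stream
real_rate = 0.04                          # real discount rate
pi = 0.03                                 # fully anticipated inflation

nominal_dividends = [d * (1 + pi) ** t for t, d in enumerate(real_dividends, start=1)]
nominal_rate = (1 + real_rate) * (1 + pi) - 1   # exact Fisher relation

print(present_value(real_dividends, real_rate))        # valuation in real terms
print(present_value(nominal_dividends, nominal_rate))  # identical valuation in nominal terms
```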


Figure 3: Stock Price Index, 1830-1913, actual and nine-year trailing moving average (logs). [Chart]

Figure 4: S&P Stock Price Index, 1922-2002, actual and nine-year trailing moving average (logs). [Chart]


Figure 5: Real Stock Price Index and Nine-Year Trailing Moving Average, 1924-2002 (logs). [Chart]

observations. It appears, however, that market booms
generally occurred during periods of relatively rapid
growth of output and productivity. Pre-World War II
booms also tended to occur during periods of above-average growth in the money stock, bank credit, and,
sometimes, the price level. The growth rates of the
money stock, bank credit, and the price level were
not above average during the boom of 1923-29,
however, nor during most post-World War II booms.
We next examine specific historical episodes in
more detail. Obviously, not every episode identified
in Table 1 deserves attention. From looking closely
at a few episodes, however, we identify certain
characteristics about the environments in which
booms have occurred.

Antebellum Stock Market Booms
The stock market booms of the 19th century
were closely associated with the development of
the nation’s infrastructure—first canals and steamships, then railroads.
1834-35. Schwert’s (1990) stock market index
for this period combines indices of bank and railroad stocks from Smith and Cole (1935), with more
weight put on the railroad stock index. Smith and
Cole (1935) document a close relationship between
public land sales and railroad stock prices in 1834-35, though stock prices peaked and began to fall

before land sales started to decline in 1836 (p. 82).
The close correlation between land sales and railroad stock prices throughout the antebellum period
led Smith and Cole to conclude that “both series…
may be regarded as reflecting a common element—
that of the well-known speculative spirit of the
country” (p. 82).
Federal government land sales rose from under
$2 million a year in the 1820s to $5 million in 1834,
$15 million in 1835, and $25 million in 1836. The
land-sales and stock market booms occurred during
a period of commodity price inflation. Temin (1969,
p. 92) argues that the land boom was sparked by a
sharp increase in the price of cotton, which rose
some 50 percent during 1834 alone. The money
stock increased sharply in 1835-36, spurred by large
inflows of Mexican silver, which increased the growth
rate of the monetary base (Temin, 1969, pp. 68-69).12
It appears from limited data that the boom also
occurred during a period of fairly strong growth of
real economic activity. Smith and Cole’s (1935, p. 73)
index of the volume of trade shows a 14 percent
rise in domestic trade in 1834-35 and even larger
percentage gains in exports in 1835-36. Davis’s
12

At the time, the United States was on a bimetallic—gold and silver—
standard. An increase in British investment in U.S. securities, coupled
with a decline in silver exports to China, caused the inflow of Mexican
silver to increase the monetary base.


Table 2
The Macroeconomic Environment of U.S. Stock Market Booms
(All columns other than the dates report average percentage changes.)

Trough month | Peak month | Stock index | Productivity | Money stock | Bank credit | Price level | Industrial production | Real GDP | Crashes and wars*
Feb 1834 | May 1835 | 35.06 | NA | 6.58 | 13.33 | 13.26 | 11.91 | NA | Crash, May 1835
Jan 1843 | Dec 1845 | 23.35 | NA | 16.57 | 7.29 | 7.83 | 11.13 | NA |
Nov 1848 | Dec 1852 | 9.80 | NA | 11.65 | 5.25 | 6.75 | 7.45 | NA |
Jul 1861 | Mar 1864 | 40.20 | NA | NA | –2.24 | 25.80 | 9.68 | NA | Civil War
Apr 1867 | Apr 1872 | 8.83 | NA | 4.00 | 5.26 | –3.54 | 5.93 | NA |
Jun 1877 | Jun 1881 | 22.58 | 1.20 | 11.93 | 3.20 | –1.22 | 12.76 | 8.40 | Crash, 1880
Aug 1896 | Apr 1899 | 20.74 | 3.44 | 12.38 | 10.29 | 4.58 | 11.48 | 6.76 | Crash, summer 1896
Sep 1900 | Sep 1902 | 22.02 | 2.30 | 12.10 | 11.64 | 2.33 | 10.04 | 6.58 | Crash, 1900
Oct 1903 | Sep 1906 | 16.74 | 3.16 | 8.47 | 7.82 | 1.79 | 12.68 | 7.03 | Crash, Jul-Oct 1903; 1907
Aug 1896 | Sep 1906 | 10.33 | 2.57 | 9.88 | 9.41 | 3.27 | 8.98 | 5.90 |
Oct 1923 | Sep 1929 | 23.70 | 2.08 | 3.93 | 5.02 | 0.02 | 10.95 | 4.33 | Crash, Oct 1929
Mar 1935 | Feb 1937 | 41.32 | 2.49 | 10.93 | 5.78 | 1.51 | 19.49 | 10.63 | Crash, Oct 1937–March 1938
Apr 1942 | May 1946 | 21.92 | 1.90 | 17.91 | 21.42 | 3.42 | –1.01 | 1.67 | World War II; crash, Sep 1946
Jun 1949 | Jan 1953 | 18.08 | 3.86 | 3.75 | 5.17 | 3.01 | 10.08 | 6.60 |
Sep 1953 | Jul 1956 | 26.87 | 1.69 | 2.85 | 3.51 | 0.53 | 2.09 | 2.84 |
Jun 1949 | Jul 1956 | 18.27 | 2.71 | 3.30 | 4.63 | 1.89 | 6.00 | 4.70 |
Jun 1962 | Jan 1966 | 14.79 | 3.68 | 7.83 | 8.47 | 1.53 | 7.77 | 5.61 | Crash, Apr-Jun 1962
Jul 1984 | Aug 1987 | 26.04 | 1.54 | 7.37 | 9.29 | 3.04 | 2.41 | 3.38 | Crash, Oct 1987
Dec 1987 | Jan 1994 | 11.57 | 1.62 | 3.43 | 5.50 | 3.88 | 1.71 | 2.34 | Crash, Aug-Oct 1990
Apr 1994 | Aug 2000 | 19.64 | 1.98 | 5.13 | 7.43 | 2.53 | 5.06 | 3.76 | Crash, Aug 2000–Sep 2001

Comparison periods
Jan 1834 | Dec 1859 | –0.62 | NA | 5.93 | 4.00 | 0.51 | 6.93 | NA |
Jan 1866 | Dec 1913 | 2.31 | 1.30† | 5.71‡ | 6.64 | –1.06 | 5.41 | 4.02† |
Jan 1919 | Dec 1940 | 3.99 | 2.05 | 3.29 | 1.81 | –0.67 | 12.73 | 2.29 |
Jan 1946 | Dec 2002 | 7.67 | 2.21§ | 6.09 | 7.24 | 4.05 | 3.62 | 3.37¶ |

NOTE: *Crashes and wars that occurred immediately prior to, during, or immediately after a boom. 20th century crashes are documented by Mishkin and White (2002). † Average for 1875-1913; ‡ average for 1867-1913; § average for 1949-2002; ¶ average for 1947-2002.
DEFINITIONS and SOURCES:
Percentage changes (%∆) are computed as annualized percentage changes in monthly data, i.e., %∆_t = 1200 × [(x_t/x_{t–1}) – 1] (similar formulas are used for quarterly or annual data; a short numeric sketch follows these notes). The figures reported in the table are averages of these percentage changes from the month (quarter or year) following the trough month to the peak month, except as noted below.
Productivity: For 1879-1946, labor productivity data are from Gordon (2000b). The data are annual; we report the average annual
percentage change in productivity from the year after the year in which the trough occurs to the year in which the peak occurs. For
1947-2002, data for non-farm business sector labor productivity (output/hour, seasonally adjusted, 1992=100) are from the Commerce
Department. The data are quarterly; we report average annualized growth rates from the quarter following the trough to the quarter
of the peak, unless the peak occurred in the first month of a quarter, in which case our averages are based on data through the previous
quarter.
Money stock: For 1834-1906, data are annual, and we report the average annual percent change in the money stock from the trough
year to the peak year. For 1907-2002, data are monthly, and we report the average annualized percent change from the month following
the trough to the peak month. The data for 1834-59 are the broad money stock series in Friedman and Schwartz (1970). For 1860-62,
we use estimates provided by Hugh Rockhoff. Data for 1863-66 are not available. The data for 1867-1946 are the broad money stock



series in Friedman and Schwartz (1963a). The data for 1947-58 are a broad money stock series from the National Bureau of Economic
Research Macro-History Database (series m14195b). For 1960-2002, we use the M2 money stock (seasonally adjusted) from the Board
of Governors of the Federal Reserve System.
Bank credit: For 1834-1946, data are annual (June figures), and we report the average annual percent change in total bank credit from
the trough year to the peak year from Historical Statistics of the United States (1976, series X580). Data prior to 1896 are incomplete. For
1947-2002, data are monthly from the Board of Governors of the Federal Reserve System.
Price level: For 1834-1912, monthly wholesale price index data (Warren-Pearson and Bureau of Labor Statistics) are from Cole (1938)
and the National Bureau of Economic Research Macro-History database (series m04048a/b). For 1913-2002, we use CPI-U (all items,
seasonally adjusted).
Industrial Production: For 1834-95, data are from Davis (2002). Davis’s data are annual; we report the average annual growth rate from
the year following the trough (except that for the boom beginning in January 1843, we include 1843) through the peak year. For 1896-1940, (monthly) data are from Miron and Romer (1989). For 1941-2002, we use the Federal Reserve monthly Index of Industrial
Production (seasonally adjusted).
GDP: For 1879-1946, (quarterly) data are from Balke and Gordon (1986). For 1949-2002, we use real GDP (chained $1996) (quarterly
data). We report the average annual growth rate from the quarter following the trough to the peak quarter.
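As a numeric illustration of this convention, a minimal sketch follows; the index values and turning-point positions are hypothetical.

```python
# Sketch of the percentage-change convention in the notes: annualized monthly
# percent changes, averaged from the month after the trough to the peak month.

def avg_annualized_pct_change(x, trough, peak):
    changes = [1200.0 * (x[t] / x[t - 1] - 1.0) for t in range(trough + 1, peak + 1)]
    return sum(changes) / len(changes)

prices = [100, 101, 103, 102, 105, 108, 112, 118]  # hypothetical monthly index
print(avg_annualized_pct_change(prices, trough=1, peak=7))
```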


(2002) index of industrial production shows an
increase of about 12 percent between 1834 and
1835 (Table 2). Hence, the boom episode coincided
with both a general price inflation and rapid real
economic growth.
The boom was short-lived. Stock prices peaked
in May 1835, and land sales peaked in the first six
months of 1836. Monetary policy actions appear to
explain the end of the boom and a subsequent banking panic in 1837. Acting under the Deposit Act of
June 1836, the Secretary of the Treasury ordered a
redistribution of public balances from New York
City banks to banks in other states. Subsequently,
President Andrew Jackson issued an executive order,
known as the Specie Circular, mandating the use of
specie (gold and silver) rather than bank notes in
the purchase of federal land. In the absence of a
well-functioning interregional reserves market, the
ensuing outflow of reserves left the New York money
market vulnerable to shocks and, according to
Rousseau (2002), precipitated the Panic of 1837.13
Limited data make it impossible to determine
whether the stock market and land booms of the
1830s were justified by reasonable expectations of
profit growth. The success of New York’s Erie Canal,
which was completed in 1825, brought heavy investment in other canal projects. Railroad building took
off about the same time. The prospect of greatly
reduced transportation costs, combined with rising
export prices (chiefly cotton), were real phenomena
that could cause equity prices and public land sales
to increase. Nevertheless, monetary shocks, and
perhaps a dose of irrational exuberance, may have
also contributed to the boom, and the end of the
boom was caused by monetary policy actions.
The 1840s. The stock market recovered quickly
from a trough in 1843. Much of the 1843-45 boom
was a recovery to a prior (local) peak (see Table 1).
Smith and Cole (1935, p. 136) attribute the recovery
to “cheap money” and rapid expansion of economic activity, with capital inflows from abroad
sustaining the boom (p. 111). As in the 1830s, the
stock market boom coincided with a sharp increase
in public land sales. The period also was marked
by rapid growth of the money stock, price level, and
industrial production (see Table 2).
After a pause in the mid-1840s, stock prices
increased sharply in 1847 but fell back quickly
13

Temin (1969), by contrast, argues that the U.S. money market tightened
when the Bank of England raised its discount rate (i.e., bank rate) to
discourage capital outflows from the United Kingdom.


during the Panic of 1847. Stock prices began to rise
again in 1848 and rose at about a 10 percent annual
rate through 1852. As during the prior boom, the
period 1848-52 was characterized by above average
growth of the money stock, price level, and industrial production. Thus, all three of the antebellum
stock market booms we identify occurred during
periods of rapid growth of the money stock and
price level as well as strong economic activity.

The Civil War Boom
Equity prices rose sharply from July 1861 to
March 1864, though the real, inflation-adjusted
returns to investors were more modest. The stock
price index rose at an average annual rate of 40.2
percent during the boom, whereas the price level
rose at an average annual rate of 25.8 percent
(Table 2). Adjusted for inflation, the market peak
occurred in October 1863, and the real stock price
index declined precipitously until early 1865, as
shown in Figure 6.
It was once thought that the Civil War had
encouraged the development of manufacturing
and thereby increased the subsequent growth rate
of the U.S. economy. Industrial production rose fairly
rapidly during the war (see Table 2). Estimates of
the economic cost of the Civil War and its impact
on growth indicate, however, that although specific
firms and industries experienced high profits during
the war, the economy as a whole suffered and the
war did not increase growth (Goldin and Lewis,
1975). Recent studies have related break points in
various asset-price time series to war news, with
major Union victories producing increases in asset
prices (e.g., McCandless, 1996).

From the Civil War to World War I
The United States experienced a great industrial
expansion during the late 19th and early 20th centuries, with many new corporations formed and
listed on the stock exchanges. Our stock market
index shows a sustained, though not especially rapid,
rise from April 1867 to April 1872, a more rapid
rise from June 1877 to June 1881, and a long rise
(with two significant interruptions) from August
1896 to September 1906.
The U.S. price level declined almost continuously
from 1866 to 1896, with the cost of living falling at
an average annual rate of 2 percent (David and Solar,
1977). Figure 7A plots our stock market index alongside a commodity price index for 1866-1913. Like


Figure 6: Real and Nominal Stock Price Indices, 1860-66 (logs). [Chart]

the cost of living, commodity prices fell almost
continuously until 1896, except from mid-1879 to
mid-1882, when commodity prices rose at an average annual rate of 9 percent. At the ends of the stock
market booms of 1867-72 and 1877-81, the level
of commodity prices was below where it had been
at the start of each boom.
By contrast, the price level rose during the
booms of 1896-1906. Commodity prices rose at a
fairly rapid 4.58 percent annual rate during the 1896-99 boom and again rose during the subsequent
booms of 1900-02 and 1903-06 (see Table 2). Hence,
the evidence from the post-Civil War era indicates
that stock market booms can occur during periods
of inflation, deflation, and a fairly stable price level.
Figure 7B plots end-of-quarter values of the
stock market index alongside Balke-Gordon’s (1986)
quarterly estimates of real gross national product
(GNP) from 1875 to 1913.14 Real output growth
accelerated in 1879, after several years of modest
growth following a cycle peak in October 1873, and
achieved an astounding 8.4 percent average rate
during the boom of 1877-81 (see Table 2). The growth
rates of the stock price index and of both real GNP
and industrial production were closely correlated
during 1890-1913 (see Table 2). Hence, as in the antebellum era, our evidence indicates that late 19th

14

Estimates for years before 1875 are not available.

century stock market booms occurred during periods
of unusually rapid growth in real economic activity.
Linking stock market booms to productivity
growth during this era is more difficult because
productivity data are limited. Figure 7C plots annual
estimates of labor productivity growth from 1875
to 1913 alongside June values of our stock price
index. In 1896, productivity growth appears to have
increased before the stock market did, and the ups
and downs in the market that follow are correlated
positively with changes in productivity growth.15
Next we examine growth of the money and
credit stocks. We plot end-of-quarter values of our
stock market index and a broad money stock measure (“M2”) for 1875-1913 in Figure 7D (M2 data are
from Balke and Gordon, 1986). Like real output
and commodity prices, M2 grew rapidly during the
course of the 1877-81 stock market boom. M2 also
grew at double-digit rates during the stock market
booms of 1896-99, 1900-02, and 1903-06.
15 The pattern of growth in total factor productivity is similar to that of labor productivity in this period.

[Figure 7: Stock Market Index and Macroeconomic Data, 1866-1913. Panel A: stock market index and commodity price index; Panel B: stock market index and real GNP; Panel C: June stock market index and labor productivity; Panel D: stock market index and M2 (all in logs)]

[Figure 8: Stock Market Index, Bank Credit, and New York Stock Market Loans, 1866-1913. Panel A: June stock market index and total bank credit; Panel B: September stock market index and New York stock market loans (logs)]

The relationship between the stock market and bank credit is more difficult to ascertain because the only comprehensive credit data for this period are annual. Figure 8A plots total bank credit alongside June values of the stock market index for 1866-1913, and Figure 8B plots annual data on the stock market loans of New York City national banks alongside the stock market index for 1880-1913. Whereas
the correlation between the growth of total bank
credit and the stock market index is low, Figure 8B
indicates that stock market loan growth increased
during the booms of 1896-99, 1900-02, and 1903-06, as well as during the recovery of 1908-09.16
Railroads were the most visible industry in the
economic expansion and stock market between
1867 and 1873. Railroad investment hit a peak in
1871-72, as did stock prices (Fels, 1959, p. 98). The
collapse of Jay Cooke and Company, the principal
financier of the Northern Pacific Railroad, triggered
the financial crisis of 1873. Railroad building was
stagnant until 1876, but began to expand rapidly in
1877, and the stock market revived. Although the
railroads grew faster than any other industry, the
1870s and early 1880s also witnessed rapid growth in manufacturing, as well as agricultural output and productivity (Friedman and Schwartz, 1963a, p. 35).

16 Data on stock market loans are as of call report dates (usually September) from Bordo, Rappaport, and Schwartz (1992). Although stock market loans appear to rise and peak before the stock market, because the data on loans and the stock market index are not for the same month in each year, we are hesitant to draw any conclusions about timing.
On the monetary side, in 1879 the United
States returned to the gold standard, which had
been suspended during the Civil War. Hence, the
stock market boom occurred in an environment of
strong growth of real economic activity and successful resumption of the international monetary standard. While there were reasons to be optimistic
about the growth of corporate earnings in this
environment, contemporary accounts, cited by Fels
(1959, pp. 120-25), suggest that risk premiums fell
unjustifiably and investors were swept up in a
“bubble of overoptimism.”
Similar contemporary and historical accounts
cite “speculative activity” as one reason for rapid
increases in equity prices during subsequent booms.
For example, Friedman and Schwartz (1963a, p. 153)
write that “The years from 1902 to 1907 were
characterized by industrial growth…by speculative
activity in the stock market, and by a wave of
immigration.”
In summary, we find that the 19th and early
20th century booms occurred when growth of real
output and the money stock were high, but we
observe no consistent pattern with respect to the
price level, i.e., booms occurred during periods of
deflation, inflation, and more-or-less stable prices.
Anecdotes suggest that speculation also characterized most booms, but a quantitative assessment of
the extent to which the rise in stock prices during
booms exceeded rational pricing based on fundamentals is beyond the scope of this article.

20th Century Booms
The period between the Panic of 1907 and the
beginning of the bull market of the 1920s was
characterized by a choppy market. A significant
panic occurred at the start of World War I in 1914,
and the U.S. stock market was closed for four months.
There were no sustained movements in the market
between 1914 and 1923.
1923-29. In terms of duration and amplitude,
the U.S. stock market boom of 1994-2000 has but
one historical rival—the boom of 1923-29. The
market index rose at an annual average rate of
about 20 percent during both six-year booms. Both
periods were also characterized by low and stable
inflation and high average growth of real GNP and
industrial production. Productivity growth also
increased during both the 1920s and 1990s. In the
1920s, however, the increase in productivity growth
occurred in the three years preceding the stock
market boom, and productivity growth slowed
during the boom period. By contrast, in the 1990s,
productivity began to accelerate around 1995, and
rapid productivity growth coincided with the stock
market boom.
Figure 9A plots our stock market index alongside
the CPI for 1915-40. A rapid increase in the price
level during World War I was followed by deflation
in 1920-21. The consumer price level was virtually
unchanged over the remainder of the 1920s. As
illustrated in Figure 9B, real GNP exhibited positive
growth during 1923-29, interrupted by brief recessions in 1923-24 and 1927. GNP growth during the
boom averaged above the historical norm, but not
above the growth rates experienced during prior
booms (see Table 2). Industrial production also grew
rapidly during the 1923-29 boom, shown in Figure
9C, and reached a peak a few months before the
stock market peak in September 1929. Average
growth of industrial production during the boom
was similar to that experienced in late 19th century
booms (Table 2).
Figure 10 plots annual estimates of nonfarm
labor and total factor productivity for the U.S. economy, from Kendrick (1961), alongside June values
of our stock market index for 1889-1940. Both labor
and total factor productivity grew relatively rapidly
in the early 1920s. Economists have attributed this
growth to the diffusion of technological breakthroughs that had occurred in the late 19th and early
20th centuries, including the internal combustion
engine and inventions that made the industrial use of
electric power practical. Although the stock market
boom of the 1920s did not coincide precisely with
the productivity acceleration, as it did during 1994-2000, both booms have been associated with technological breakthroughs that revolutionized production
in numerous existing industries as well as created
entirely new industries. The high-flying stocks of
the 1920s, such as RCA, Aluminum Company of
America, United Aircraft and Transportation Corporation, and General Motors were direct beneficiaries
of the new general-purpose technologies and were
expected to have high profit potential, not unlike the
“dot-com” stocks that led the boom of the 1990s.17
Whereas technological progress and accelerating productivity would be expected to generate an
increase in the growth of corporate profits, and
thereby justify an increase in stock prices, the question remains whether such “fundamentals” can
explain the entire increase in the market. Contemporary observers disagreed about whether the
stock market boom of 1923-29 was justified by realistic expectations of future earnings, as do economists who look back at the episode. Yale economist
Irving Fisher famously defended the level of the
stock market. For example, he argued that the
increase in corporate profits during the first nine
months of 1929 "is eloquent justification of a heightened level of common stock prices" (quoted in White,
2004, p. 10). There were naysayers, however, including Paul M. Warburg, a leading banker and former
member of the Federal Reserve Board. Warburg
argued in March 1929 that the market reflected
“unrestrained speculation” that, if continued, would
result in a collapse and a "general depression involving the entire country."18

17 The internal combustion engine and electric motors are often referred to as general-purpose technologies because of their wide applicability and potential to increase productivity in many industries. The microprocessor is also regarded as a general-purpose technology, and the increase in productivity growth that occurred around 1995 is commonly attributed to the widespread application of computer technology. Greenspan (2000), for example, contends that "When historians look back at the latter half of the 1990s a decade or two hence, I suspect that they will conclude we are now living through a pivotal period in American economic history. New technologies that evolved from the cumulative innovations of the past half-century have now begun to bring about dramatic changes in the way goods and services are produced and in the way they are distributed to final users. Those innovations, exemplified most recently by the multiplying uses of the Internet, have brought on a flood of startup firms, many of which claim to offer the chance to revolutionize and dominate large shares of the nation's production and distribution system." See also David (1990), David and Wright (1999), Gordon (2000a), and Jovanovic and Rousseau (2004).

18 Quoted in Galbraith (1961, p. 77).

[Figure 9: Stock Market Index and Macroeconomic Data, 1915-40. Panel A: stock market index and CPI; Panel B: stock market index and real GNP; Panel C: stock market index and industrial production, 13-month centered moving average (all in logs)]

In addition to rapid earnings growth, Fisher (1930) cited improved management methods, a decline in labor disputes, and high levels of investment in research and development as reasons why
stocks were not overvalued in 1929. McGrattan
and Prescott (2003) argue that Fisher was correct.
Although the total market value of U.S. corporations
in 1929 exceeded the value of their tangible capital
stock by 30 percent, McGrattan and Prescott (2003)
estimate that the value of intangible corporate assets,
e.g., the value of R&D investment, fully justified
the level of equity prices.
[Figure 10: Productivity and Stock Price Index, 1889-1940. June values of the stock price index plotted with labor and total factor productivity (logs)]

[Figure 11: Stock Price Index and Brokers' Loans, 1926-30 (January 1926 = 100)]

Other researchers have examined the growth of corporate earnings and dividends during the 1920s, and most conclude that equity prices rose far higher than could be justified by reasonable expectations of future dividends. White (2004), who
surveys and extends this literature, concludes that
the increase in stock prices during 1928-29 exceeded
what could be explained by earnings growth, the
earnings payout rate, the level of interest rates, or
changes in the equity premium, all of which are
components of a standard equity pricing model.
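For reference, the simplest textbook member of that class is the constant-growth valuation formula, in which each of the four components White examines appears explicitly (a sketch only; White's actual specification is richer):

\[
P_t = \frac{k E_t (1+g)}{(r_f + \rho) - g},
\]

where \(E_t\) is current earnings, \(k\) the earnings payout rate, \(g\) the expected growth rate of earnings, \(r_f\) the level of interest rates, and \(\rho\) the equity premium. Justifying the 1928-29 run-up within this benchmark requires some combination of a higher \(g\) or \(k\), a lower \(r_f\), or a smaller \(\rho\), and White concludes that no plausible combination suffices.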
Several Federal Reserve officials, Secretary of
Commerce Herbert Hoover, and a number of other
prominent public officials attributed the stock market
boom to loose monetary policy and the rapid growth
of credit. Neither the money stock nor total bank
credit grew at an unusually fast pace during 1923-29
(see Table 2). Brokers’ loans rose rapidly and in line
with stock prices, however, as Figure 11 illustrates.
Federal Reserve officials viewed the growth in loans
to stock brokers and dealers with alarm. Many
adhered to the so-called Real Bills Doctrine, which
focuses on the composition, rather than total quantity, of bank credit. According to this view, banks
should make only short-term commercial and agricultural loans to finance the production of real goods
and services because loans to finance purchases of
financial assets tend to promote speculation, misallocation of economic resources, and inflation.19
Moreover, asset price bubbles inevitably lead to
crashes and depressions, which are required to
“purge the rottenness out of the system,” as U.S.
Treasury Secretary Andrew Mellon famously once
said (Hoover, 1952, p. 30).
Federal Reserve officials debated whether their
actions had contributed to the growth of brokers’
loans and financial speculation. Some officials complained that the Fed was fueling the stock market
boom by making discount window loans to banks
that in turn lent to stock brokers and dealers. Although
only short-term commercial and agricultural loans
could be used as collateral for discount window
loans, some Federal Reserve Board members argued
that banks should be forced to liquidate their loans
to stock brokers and dealers before being allowed
to borrow at the discount window with eligible collateral. In February 1929, the Federal Reserve Board
directed the Reserve Banks to ensure that Federal
Reserve credit was not used to finance speculative
activity: “The Board…has a grave responsibility
whenever there is evidence that member banks are
maintaining speculative security loans with the aid of Federal Reserve credit. When such is the case the Federal Reserve Bank becomes either a contributing or a sustaining factor in the current volume of speculative security credit. This is not in harmony with the intent of the Federal Reserve Act nor is it conducive to the wholesome operation of the banking and credit system of the country."20

19 Contemporaries argued that banks "pushed" loans to purchase stocks on an unsophisticated public. Rappoport and White (1994) show, however, that the risk premium on brokers' loans increased sharply in the late 1920s, indicating that the growth of brokers' loan volume reflected growing demand rather than increasing loan supply.
Open market operations constituted a second
channel by which Federal Reserve credit contributed
to the stock market boom, according to critics. Open
market purchases made during economic recessions
in 1924 and 1927 came when “business could not
use, and was not asking for increased money,”
Federal Reserve Board member Adolph Miller
alleged.21 In the absence of increased demand for
Fed credit for “legitimate” business needs, according
to this view, open market purchases increased the
supply of funds available to purchase stocks and
thereby inflated the bubble.
Fed officials were not unanimous in their views.
In general, Federal Reserve Bank officials disagreed
with the idea that it was desirable, or even possible,
to control commercial banks’ use of funds obtained
from the discount window. Reserve Bank officials
tended to argue for discount rate increases, rather
than any form of “direct pressure,” to curtail discount
window borrowing.
Fed officials also disagreed about the relationship
between open market operations and the stock
market. Disagreement centered on whether the
large open market purchases of 1924 and 1927 had
been desirable or harmful. Benjamin Strong, the
governor of the Federal Reserve Bank of New York
from its inception in 1914, was the Fed’s dominant
figure and head of the System’s Open Market Investment Committee until his death in 1928. Although
other System officials acquiesced, the open market
purchases of 1924 and 1927 were largely Strong’s
idea.22

20 Quoted in Chandler (1971, pp. 56-57).

21 Testifying before the Senate Banking Committee in 1931 (quoted in Wheelock, 1991, pp. 98-99).

22 Strong's motives for engaging in open market purchases in 1924 and 1927 have been debated. Meltzer (2002, pp. 197-221) finds that the actions were undertaken both to encourage domestic economic recovery from recessions and to assist the Bank of England in attracting and maintaining gold reserves by lowering U.S. interest rates relative to those in the United Kingdom. Meltzer concludes, however, that international cooperation was relatively more important than domestic recovery in 1927. Wheelock (1991) reports empirical evidence that both domestic and international goals were important throughout 1924-29.

[Figure 12: Stock Market Index and Macroeconomic Data (1), 1947-2002. Panel A: stock market index and real GDP; Panel B: stock market index and industrial production; Panel C: stock market index and labor productivity (logs)]

When asked by the Senate Banking Committee in 1931 whether those purchases had been
appropriate, some Federal Reserve Bank officials
argued that they had been useful but perhaps too
large, while other Fed officials contended that no
open market purchases should have been made in
those years. For example, officials of the Federal
Reserve Bank of Chicago argued that the purchases
in 1924 had been too large and “in 1927 the danger
of putting money into the market was greater than
in 1924 as speculation was well under way.” Officials
of the Federal Reserve Bank of Richmond went further, arguing that “we think…securities should not
have been purchased in these periods, and the aim
should have been to decrease rather than augment
the total supply of Federal Reserve Credit.”23
Although not reflected in growth of the money
stock or total bank credit, critics charged that the
Fed had pursued a dangerously loose monetary
policy as reflected in the growth of brokers’ loans
and the rise in the stock market.
23 These quotes are from Wheelock (1991, p. 100).

[Figure 13: Stock Market Index and Macroeconomic Data (2), 1947-2002. Panel A: stock market index and CPI; Panel B: stock market index and M2, with an M2 series break in January 1959 (logs)]

The 1930s. At its nadir in June 1932, the S&P stock market index stood at just 15 percent of its September 1929 peak. The market staged a brief recovery in 1933, then surged from March 1935 to February 1937. The boom of 1935-37 coincided with a period of rapid growth in real output and the
money stock and came in the middle of a decade
of unusually rapid growth in total factor productivity (Field, 2003). Inflation, however, remained low
(see Table 2). The adoption of highly restrictive
monetary and fiscal policies in 1936-37 snuffed
out the economic recovery and brought a halt to
the stock market boom in early 1937.
World War II. The next stock market boom
occurred during World War II. U.S. equity prices
declined when the war began in Europe, hitting a
low point in March 1942. The stock market then
rose as the U.S. economy was being mobilized for
war. Similar to the view that the Civil War had a
positive effect on postwar economic growth by
hastening the development of manufacturing
industries, World War II has also been viewed as
an important source of technological progress and
postwar economic growth. Field (2003) shows, however, that most of the seeds of postwar growth were
sown during the 1930s and that productivity growth
was slow during the war outside the munitions
industries. Thus, the stock market boom probably
reflected more the rapid increases in output and
liquidity during the war as the economy finally
reached full employment than productivity-driven


expectations of a long-run increase in the growth
of corporate profits.

The Post-World War II Era
The nearly 60 years since the end of World War II
can be divided into three distinct eras. The first, from
the end of the war to the early 1970s, was characterized by a rising stock market, strong real economic
growth, a high average rate of productivity growth,
and (toward the end of the period) rising inflation.
The second era, covering the 1970s and early 1980s,
was characterized by stagflation—high inflation
coupled with both highly variable and low average
output and productivity growth. Nominal stock
returns were flat, and ex post real returns were negative. In the third era, from the mid-1980s to the
present, real output growth has been more stable
and, on average, higher than it was before 1980.
Inflation has fallen markedly, and, since the mid-1990s, productivity growth has returned to the high
average rates observed in the 1950s and early 1960s.
Stock returns have been high, both in nominal and
real terms, especially during the booms of 1984-87
and 1994-2000. Figures 12 (panels A through C) and
13 (panels A and B) illustrate these patterns.
Technological breakthroughs in chemicals,
electronics, and other industries during the 1930s
and 1940s enabled rapid growth in labor productivity and living standards during the 1950s and 1960s
(Field, 2003; Gordon, 2000a). These decades also
witnessed high levels of investment in public infrastructure and education. Our metric identifies three
specific stock market booms in these decades, though
one might characterize the entire period as a boom.
The first boom began as the economy pulled
out of a mild recession in 1949. The market rose at
an average annual rate of 18 percent between June
1949 and January 1953. Following a pause during
another recession in 1953-54, the market rose at
an average annual rate of nearly 27 percent from
September 1953 to July 1956. Although the growth
rates of output and productivity were somewhat
slower during the latter boom, the return to more
stable monetary and fiscal policies after the Korean
War may explain why the stock market increased
at a faster pace. The third distinct boom lasted from
June 1962 to January 1966 and coincided with a
long period of economic expansion, characterized
by high average growth of GDP, industrial production,
and productivity, as well as low inflation.
The stock market performed poorly during the
1970s, with the market peak of January 1973 not
reached again until July 1980. No stock market
booms occurred during this decade of adverse supply
shocks, low productivity growth, highly variable
output growth, and high inflation. The next market
boom occurred during the three-year period from
July 1984 to August 1987. This period of economic
recovery was characterized by moderately strong
real economic growth and falling inflation, but
productivity growth that was below its post-World
War II average.
The macroeconomic environment of the April
1994–August 2000 boom period is well known.
This period was characterized by somewhat above-average growth of real output and industrial production and low and stable inflation. An increase
in the trend growth rate of productivity to approximately the rate that had prevailed during the 1950s
and 1960s was the feature of this boom period that
has received the most attention; it has often been
cited as the main fundamental cause of the stock
market boom.

CONCLUSION
Our survey finds that U.S. stock market booms
have occurred in a variety of macroeconomic environments. Nevertheless, some common patterns
are evident:
1. Most booms occurred during periods of relatively rapid economic growth and, to the extent
it can be measured, increases in productivity
growth. This suggests that booms were driven
at least to some extent by fundamentals.
2. Many booms also occurred during periods of
relatively rapid growth of the money stock
and bank credit, reflecting either passive
accommodation of booms by the banking
system or expansion of the monetary base
by means of gold inflows or monetary policy
actions.
3. Stock market booms have occurred in periods
of deflation (e.g., the late 1870s and early
1880s), in periods of inflation (e.g., the 1830s,
1840s, late 1890s, and early 1900s) and in
periods of price stability (e.g., the 1920s and
1990s). In general, booms appear to have been
driven by increases in the growth of real output and productivity and can persist despite
either inflation or deflation so long as the
growth of output and productivity remains
strong.24 The tendency for the money stock,
but not the price level, to grow rapidly during
booms suggests that money growth accommodated increases in productivity, which
fueled booms. In the absence of an increase in money growth, the quantity theory predicts that an increase in productivity and potential output growth would lead to deflation (see the identity sketched after this list). In
future work, we intend to examine formally
whether accelerations in money stock growth
during booms were quantitatively consistent with increases in long-run productivity
growth.
4. Wartime experience seems to have been
different from peacetime, but no consistent wartime pattern emerges. The old adage that "war is good for the market" does not seem to always hold up. Stock market booms occurred during World War II and to a lesser extent the Civil War, but market performance was relatively poor during World War I and the Vietnam War.

24 Periods of depressed stock market returns occurred during periods of declining productivity growth or other adverse supply shocks. Some such periods were characterized by deflation (e.g., 1929-32), while others were characterized by inflation (e.g., the 1970s). In deflationary periods when aggregate supply growth outpaced aggregate demand, such as in the late 1870s and early 1880s, the market did as well as it did in the inflationary 1830s, 1840s, late 1890s, and early 1900s, when rapid aggregate supply growth was surpassed by demand growth (Bordo, Lane, and Redish, 2004; Bordo and Redish, 2004). This contrasts sharply with the experiences of the Great Depression, when collapsing aggregate demand coincided with a decline in aggregate supply, and the Stagflation of the 1970s, which was characterized by excessive growth of aggregate demand in the face of low or negative aggregate supply growth.
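To make the quantity-theory step in point 3 explicit, take the equation of exchange, \(MV = PY\), in growth rates (a standard identity, sketched here for concreteness):

\[
\%\Delta M + \%\Delta V = \%\Delta P + \%\Delta Y.
\]

With velocity growth roughly constant, inflation is approximately money growth less real output growth, \(\%\Delta P \approx \%\Delta M - \%\Delta Y\). A productivity-driven acceleration in \(\%\Delta Y\) with \(\%\Delta M\) unchanged therefore implies deflation, while money growth that keeps pace with output growth leaves the price level roughly stable, which is the accommodation pattern conjectured above.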
The stock market booms of 1923-29 and 1994-2000 stand alone in terms of their length and the
extent to which market averages increased. Both
bull markets have been attributed to increased productivity growth associated with the widespread
application of new general-purpose technologies
that promised new eras of rapid economic growth.
The macroeconomic environments in which these
two booms occurred were strikingly similar. Both
decades saw above-average, though not exceptional,
growth of real output and industrial production,
while consumer price inflation was quite low and
stable. Productivity growth did increase in both the
1920s and 1990s; though, in the 1920s, productivity
growth appears to have occurred prior to the stock
market boom, whereas the increase in productivity
growth during the 1990s coincided with the boom.
Policymakers paid a great deal of attention to
the stock market during each of the great booms.
In the 1920s, debate centered on whether the Fed
had fostered the boom by oversupplying Federal
Reserve credit through open market purchases and
inadequate administration of the discount window.
Many Fed officials adhered to the Real Bills Doctrine,
which held that an increase in credit beyond that
required to finance short-term production and distribution of real goods would end up fostering
speculation and inflation. Despite the absence of
consumer price inflation, officials interpreted the
stock market boom as evidence of inflation. Accordingly, the Fed tightened policy in 1928 and 1929,
which may have hastened the collapse of both stock
prices and the economy (Schwartz, 1981; Hamilton,
1987). Lingering doubts about the efficacy of using
monetary policy to foster economic recovery then
contributed to the Fed’s failure to ease aggressively
to fight the Great Depression (Wheelock, 1991;
Meltzer, 2002).
The Fed’s understanding of the role of monetary
policy was quite different in the 1990s and 2000s.
Transcripts of FOMC meetings indicate that, during
the 1990s, the Fed was mainly concerned about
the potential consequences of a sharp decline in
stock prices, fearing that falling stock prices would reduce consumption by reducing wealth. Although
the Fed did tighten policy in the later stages of the
boom by raising its target for the federal funds rate
in 1999-2000, it eased aggressively when stock prices
declined and the economy entered recession. In
sharp contrast to its policy in the early 1930s, the
Fed maintained an aggressively accommodative
monetary policy well after the stock market decline
had ended, with the objectives of preventing deflation and encouraging economic recovery.
Our survey of U.S. stock market booms finds
that booms do not occur in the absence of increases
in real economic growth and perhaps productivity
growth. We find little indication that booms were
caused by excessive growth of money or credit,
though 19th century booms tended to occur during
periods of monetary expansion. The view that monetary authorities can cause asset market speculation
by failing to control the use of credit has been largely
discarded. Nevertheless, anecdotal evidence suggests
that the stock market sometimes rises more than
can be justified by fundamentals, though economists
continue to debate whether even the market peak
of 1929 was too high. Not surprisingly, these questions leave unsettled the issue of how monetary
policy should respond to an asset price boom.
Although one can offer plausible theoretical arguments for responding proactively to an asset price
boom, our survey suggests that policymakers should
be cautious about attempting to deflate asset prices
without strong evidence that a collapse of asset
prices would have severe macroeconomic costs.

REFERENCES
Alchian, Armen A. and Klein, Benjamin. “On a Correct
Measure of Inflation.” Journal of Money, Credit, and Banking,
February 1973, 5(1, Part 1), pp. 173-191.
Balke, Nathan S. and Gordon, Robert J. “Appendix B,
Historical Data,” in Robert J. Gordon, ed., The American
Business Cycle: Continuity and Change. Chicago: University
of Chicago Press, 1986, pp. 781-850.
Bernanke, Ben S. and Gertler, Mark. “Should Central Banks
Respond to Movements in Asset Prices?” American
Economic Review, May 2001, 91(2), pp. 253-57.
Bernanke, Ben S. and Gertler, Mark. “Monetary Policy and
Asset Volatility.” Federal Reserve Bank of Kansas City
Economic Review, Fourth Quarter 1999, 84(4), pp. 17-52.
Bordo, Michael D.; Dueker, Michael J. and Wheelock, David C. "Aggregate Price Shocks and Financial Stability: The
United Kingdom, 1796-1999.” Explorations in Economic
History, April 2003, 40(2), pp. 143-69.
Bordo, Michael D.; Dueker, Michael J. and Wheelock, David C.
“Aggregate Price Shocks and Financial Stability: A
Historical Analysis.” Economic Inquiry, October 2002,
40(4), pp. 521-38.
Bordo, Michael D. and Jeanne, Olivier. “Boom-Busts in
Asset Prices, Economic Instability, and Monetary Policy.”
NBER Working Paper No. 8966, National Bureau of
Economic Research, May 2002a.
Bordo, Michael D. and Jeanne, Olivier. “Monetary Policy
and Asset Prices: Does ‘Benign Neglect’ Make Sense?”
International Finance, 2002b, 5(2), pp. 139-64.
Bordo, Michael D.; Rappoport, Peter and Schwartz, Anna J.
“Money versus Credit Rationing: Evidence for the National
Banking Era, 1880-1914,” in Claudia Goldin and Hugh
Rockhoff, eds., Strategic Factors in Nineteenth Century
American Economic History, A Volume to Honor Robert
W. Fogel. Chicago: University of Chicago Press, 1992, pp.
189-224.
Bordo, Michael D.; Lane, John Landon and Redish, Angela.
“Good Versus Bad Deflation: Lessons from the Gold
Standard Era.” NBER Working Paper No. 10329, National
Bureau of Economic Research, February 2004.
Bordo, Michael D. and Redish, Angela. “Is Deflation
Depressing? Evidence from the Classical Gold Standard,”
in Richard C.K. Burdekin and Pierre L. Siklos, eds.,
Deflation: Current and Historical Perspectives. New York:
Cambridge University Press, 2004.
Borio, Claudio; English, William and Filardo, Andrew. “A Tale
of Two Perspectives: Old or New Challenges for Monetary
Policy?” Working Paper No. 127, Bank for International
Settlements, February 2003.
Borio, Claudio and Lowe, Philip. “Asset Prices, Financial
and Monetary Stability: Exploring the Nexus.” Working
Paper No. 114, Bank for International Settlements, July
2002.
Borio, Claudio and White, William. “Whither Monetary
and Financial Stability? The Implications of Evolving
Policy Regimes.” Presented at the Federal Reserve Bank
of Kansas City Symposium on Monetary Policy and
Uncertainty: Adapting to a Changing Economy, Jackson
Hole, Wyoming, August 2003.
Brunner, Karl and Meltzer, Allan H. “Mr. Hicks and the
‘Monetarists’.” Economica, February 1973, 40(157), pp.
44-59.
Bryan, Michael F.; Cecchetti, Stephen G. and O’Sullivan,
Roisin. “Asset Prices in the Measurement of Inflation.”
NBER Working Paper No. 8700, National Bureau of
Economic Research, January 2002.
Bullard, James B. and Schaling, Eric. “Why the Fed Should
Ignore the Stock Market.” Federal Reserve Bank of St.
Louis Review, March/April 2002, 84(2), pp. 35-42.
Cecchetti, Stephen G. “What the FOMC Says and Does
When the Stock Market Booms.” Prepared for the Reserve
Bank of Australia conference on Asset Prices and Monetary
Policy, Sydney, Australia, August 2003.
Cecchetti, Stephen G.; Genberg, Hans; Lipsky, John and
Wadhwani, Sushil. Asset Prices and Central Bank Policy.
Geneva Reports on the World Economy. Volume 2. Geneva:
International Center for Monetary and Banking Studies;
London: Centre for Economic Policy Research, July 2000.
Chandler, Lester V. American Monetary Policy 1928-1941.
New York: Harper and Row, 1971.
Cole, Arthur H. Wholesale Commodity Prices in the United
States, 1700-1861. Cambridge: Harvard University Press,
1938.
Cowles, Alfred III and Associates. Common Stock Indexes.
Cowles Commission for Research in Economics Monograph
No. 3. Second Ed. Bloomington, IN: Principia Press, 1939.
David, Paul A. “The Dynamo and the Computer: An
Historical Perspective on the Modern Productivity
Paradox.” American Economic Review, May 1990, 80(2),
pp. 355-61.
David, Paul A. and Solar, Peter. “A Bicentenary Contribution
to the History of the Cost of Living in America.” Research
in Economic History, 1977, 2, pp. 1-80.
David, Paul A. and Wright, Gavin. “General Purpose
Technologies and Surges in Productivity: Historical
Reflections on the Future of the ICT Revolution.”
Presented at the International Symposium on Economic
Challenges of the 21st Century in Historical Perspective,
Oxford, July 1999.


Davis, Joseph H. “An Annual Index of U.S. Industrial
Production, 1790-1915.” Working paper, Department of
Economics, Duke University, October 2002.

Gordon, Robert J. “Does the ‘New Economy’ Measure Up to
the Great Inventions of the Past?” Journal of Economic
Perspectives, Fall 2000a, 14(4), pp. 49-74.

Detken, Carsten and Smets, Frank. “Asset Price Booms and
Monetary Policy.” Presented at the Kiel Institute of World
Economics conference on Macroeconomic Policies in the
World Economy, June 2003 (revised, December 2003).

Eichengreen, Barry and Mitchener, Kris. “The Great
Depression as a Credit Boom Gone Wrong.” Working
paper, Department of Economics, University of California-Berkeley, March 2003.
Fels, Rendig. American Business Cycles, 1865-1897. Chapel
Hill: University of North Carolina Press, 1959.
Field, Alexander J. “The Most Technologically Progressive
Decade of the Century.” American Economic Review,
September 2003, 93(4), pp. 1399-413.
Filardo, Andrew J. “Monetary Policy and Asset Prices.”
Federal Reserve Bank of Kansas City Economic Review,
Third Quarter 2000, 85(3), pp. 11-37.
Fisher, Irving. The Stock Market Crash—And After. New York:
Macmillan, 1930.
Friedman, Milton and Schwartz, Anna J. Monetary Statistics
of the United States: Estimates, Sources, and Methods.
New York: Columbia University Press, 1970.
Friedman, Milton and Schwartz, Anna J. A Monetary History
of the United States, 1867-1960. Princeton: Princeton
University Press, 1963a.
Friedman, Milton and Schwartz, Anna J. “Money and
Business Cycles.” Review of Economics and Statistics,
February 1963b, 45(1 Suppl, Part 2), pp. 32-64.
Galbraith, John Kenneth. The Great Crash, 1929. Boston:
Houghton Mifflin, 1961.
Goldin, Claudia Dale and Lewis, Frank G. “The Economic
Cost of the American Civil War: Estimates and
Implications.” Journal of Economic History, June 1975,
35(2), pp. 299-326.
Goodhart, Charles A. and Hofmann, Boris. “Do Asset Prices
Help Predict Consumer Price Inflation?” Manchester
School, 2000, 68(Suppl), pp. 122-40.

Gordon, Robert J. "Does the 'New Economy' Measure Up to the Great Inventions of the Past?" Journal of Economic Perspectives, Fall 2000a, 14(4), pp. 49-74.

Gordon, Robert J. Macroeconomics. Eighth Edition. Reading, MA: Addison-Wesley, 2000b.

Greenspan, Alan. “Technology Innovation and its Economic
Impact.” Remarks before the National Technology Forum,
St. Louis, Missouri, April 7, 2000.
Hamilton, James D. “Monetary Factors in the Great
Depression,” Journal of Monetary Economics, March 1987,
19(2), pp. 145-69.
Hayford, Marc D. and Malliaris, A.G. “Monetary Policy and
the U.S. Stock Market,” Economic Inquiry, July 2004,
42(3), pp. 387-401.
Helbling, Thomas and Terrones, Marco. “Asset Price Booms
and Busts—Stylized Facts from the Last Three Decades
of the 20th Century.” Working paper, International
Monetary Fund, March 2004.
Hoover, Herbert C. The Memoirs of Herbert Hoover: The
Great Depression, 1929-1941. New York: Macmillan, 1952.
Jovanovic, Boyan and Rousseau, Peter L. “Measuring
General Purpose Technologies.” Presented at the Duke
University and University of North Carolina conference
on Understanding the 1990s: The Economy in Long-Run
Perspective, Durham, North Carolina, March 26-27, 2004.
Kendrick, John W. Productivity Trends in the United States.
Princeton: Princeton University Press, 1961.
Laidler, David. “The Price Level, Relative Prices, and
Economic Stability: Aspects of the Interwar Debate.”
Prepared for the Bank for International Settlements conference on Monetary Stability, Financial Stability and the
Business Cycle, Basel, Switzerland, 2003.
Macaulay, Frederick R. Some Theoretical Problems Suggested
by the Movements of Interest Rates, Bond Yields, and Stock
Prices in the United States Since 1856. New York: National
Bureau of Economic Research, 1938.
McCandless, George T. Jr. “Money, Expectations, and the
U.S. Civil War.” American Economic Review, June 1996,
86(3), pp. 661-71.
McGrattan, Ellen R. and Prescott, Edward C. “The 1929
Stock Market: Irving Fisher Was Right." Staff Report 294, Federal Reserve Bank of Minneapolis Research
Department, May 2003.
Meltzer, Allan H. A History of the Federal Reserve, Volume 1:
1913-1951. Chicago: University of Chicago Press, 2002.
Metzler, Lloyd A. “Wealth, Saving, and the Rate of Interest.”
Journal of Political Economy, April 1951, 59(2), pp. 93-116.
Miron, Jeffrey A. and Romer, Christina D. “A New Monthly
Index of Industrial Production, 1884-1940.” NBER
Working Paper No. 3172, National Bureau of Economic
Research, November 1989.
Mishkin, Frederic S. and White, Eugene N. “U.S. Stock
Market Crashes and Their Aftermath: Implications for
Monetary Policy.” Prepared for the Federal Reserve Bank
of Chicago and World Bank conference on Asset Bubbles,
Chicago, April 23, 2002.
Pagan, Adrian R. and Sossounov, Kirill A. “A Simple Framework for Analysing Bull and Bear Markets.” Journal of
Applied Econometrics, January/February 2003, 18(1), pp.
23-46.
Rappoport, Peter and White, Eugene N. “Was the Crash of
1929 Expected?” American Economic Review, March 1994,
84(1), pp. 271-81.
Rigobon, Roberto and Sack, Brian. "Measuring the Reaction of
Monetary Policy to the Stock Market.” Quarterly Journal
of Economics, May 2003, 118(2), pp. 639-69.
Rothbard, Murray. America’s Great Depression. Fourth
Edition. New York: Richardson and Snyder, 1983.
Rousseau, Peter L. “Jacksonian Monetary Policy, Specie
Flows, and the Panic of 1837.” Journal of Economic
History, June 2002, 62(2), pp. 457-88.
Schinasi, Garry and Hargraves, Monica. “Boom and Bust”
in Asset Markets in the 1980s: Causes and Consequences.
Staff Studies for the World Economic Outlook. Washington,
DC: International Monetary Fund, 1993.

Schwartz, Anna J. “Asset Price Inflation and Monetary
Policy.” NBER Working Paper No. 9321, National Bureau
of Economic Research, November 2002.
Schwartz, Anna J. “Why Financial Stability Depends on
Price Stability.” Economic Affairs, Autumn 1995, 15(4),
pp. 21-25.
Schwartz, Anna J. “Understanding 1929-1933,” in Karl
Brunner, ed., The Great Depression Revisited. Boston:
Martinus Nijhoff, 1981, pp. 5-48.
Schwert, G. William. “Indexes of U.S. Stock Prices from
1802 to 1987.” Journal of Business, July 1990, 63(3), pp.
399-426.
Smets, Frank. “Financial Asset Prices and Monetary Policy:
Theory and Evidence.” Working Paper No. 47, Bank for
International Settlements, September 1997.
Smith, Walter B. and Cole, Arthur H. Fluctuations in
American Business, 1790-1860. Cambridge, MA: Harvard
University Press, 1935.
Temin, Peter. The Jacksonian Economy. New York: Norton,
1969.
Tobin, James. “A General Equilibrium Approach to Monetary
Theory.” Journal of Money, Credit, and Banking, February
1969, 1(1), pp. 15-29.
Wheelock, David C. The Strategy and Consistency of Federal
Reserve Monetary Policy, 1924-1933. New York: Cambridge
University Press, 1991.
White, Eugene N. “Bubbles and Busts: The 1990s in the
Mirror of the 1920s.” Paper presented at Duke University
and the University of North Carolina conference,
Understanding the 1990s: The Long-Run Perspective,
Durham, North Carolina, March 26-27, 2004.
Woodford, Michael. Interest and Prices: Foundations of a
Theory of Monetary Policy. Princeton: Princeton University
Press, 2003.

What Does the Federal Reserve’s Economic
Value Model Tell Us About Interest Rate Risk at
U.S. Community Banks?
Gregory E. Sierra and Timothy J. Yeager

Interest rate risk at commercial banks is the
risk that changes in interest rates will adversely
affect income or capital. Such risk is an inherent part of banking because banks typically originate loans with longer maturities than the deposits
they accept. This maturity mismatch between loans
and deposits causes the net interest margin (NIM)—
the spread between loan rates and deposit rates—
to fall when interest rates rise, because interest rates
on deposits adjust more quickly than interest rates
on loans. Further, when interest rates rise, the
economic value of longer-term instruments (assets)
falls by more than the economic value of shorter-term instruments (liabilities), thus reducing the
bank’s capital.
Bankers became increasingly concerned about
interest rate risk following the savings and loan (S&L)
crisis. In the early 1980s, many thrifts became
insolvent after interest rates rose sharply, setting
off a crisis that eventually required a $150 billion
taxpayer bailout (Curry and Shibut, 2000). Thrifts
were particularly vulnerable to interest rate risk
because of the large maturity mismatch that resulted
from using short-term deposits to fund long-term
home loans. Nevertheless, banks devoted considerable resources to measuring and managing their
exposure to interest rate risk. Many regional and
money-center banks implemented elaborate models
to measure their exposure and began to use sophisticated asset and liability management to manage
their risk.
Bank supervisors also were challenged to stay
abreast of the industry’s ability to take on interest
rate risk, and they responded with three related initiatives. First, bank examiners received capital
markets training to help them understand better the
techniques for measuring and managing interest
rate risk. Second, bank supervisors explicitly incorporated interest rate risk into their ratings system
in 1997, transforming the “CAMEL” rating system
into “CAMELS.”1 The “S” rating stands for a bank’s
sensitivity to market risk, which includes interest
rate risk and exposure to trading account assets,
exchange rates, and commodity prices.2 The third
supervisory initiative was to develop a measure of
interest rate risk that examiners could use to risk-scope a bank—that is, to pinpoint the areas of the
bank that warrant closer scrutiny—and to conduct
off-site surveillance. Economists at the Board of
Governors of the Federal Reserve System developed
a proprietary economic value of equity model called
the economic value model (EVM), which is a duration-based estimate of interest rate sensitivity for each
U.S. commercial bank (Houpt and Embersit, 1991;
Wright and Houpt, 1996). The Federal Reserve
operationalized the model in the first quarter of
1998 by producing a quarterly report (called the
Focus Report) for each bank. The Focus reports are
the confidential supervisory reports that provide
the detailed output of the Fed’s EVM.
The EVM’s interest rate sensitivity assessment
is most relevant for community banks, which we
define as those with less than $1 billion in assets
and no interest rate derivatives. Larger banks often
1

Board of Governors, SR 96-38. CAMEL stands for capital adequacy,
asset quality, management, earnings, and liquidity.

2

For the majority of banks that have no trading accounts or foreign
currency exposures, market risk and interest rate risk are equivalent.

Gregory E. Sierra is an associate economist and Timothy J. Yeager is an economist and senior manager at the Federal Reserve Bank of St. Louis.
The authors thank workshop participants at the Federal Reserve Board of Governors, the Federal Reserve Banks of Kansas City and Chicago, the
Federal Reserve System Capital Markets Working Group 2003 Conference, the Olin School of Business at Washington University in St. Louis, and
the University of Missouri–St. Louis for their comments. The authors are especially grateful for comments from Nick Dopuch, Jim Embersit, Greg
Geisler, Rohinton Karanjia, David Kerns, and Jim O’Brien and also thank their colleagues at the Federal Reserve Bank of St. Louis—Alton Gilbert,
Tom King, Julie Stackhouse, Mark Vaughan, and David Wheelock—for their insightful comments.
Federal Reserve Bank of St. Louis Review, November/December 2004, 86(6), pp. 45-60.

© 2004, The Federal Reserve Bank of St. Louis.

N OV E M B E R / D E C E M B E R 2 0 0 4

45

Sierra and Yeager

have derivatives or other balance-sheet complexities
that the EVM ignores, making the output from the
EVM more questionable. The EVM also is more appropriately applied to community banks because community banks are examined less often than larger
banks and the EVM is usually the only tool for off-site
interest rate risk assessment available to examiners
for those banks. Community banks devote fewer
resources to modeling and measuring their interest
rate risk than do regional and money-center banks,
which normally have full-time staff devoted to such
tasks. Consequently, examiners of larger institutions
usually have access to more sophisticated and often
more timely information than that provided by the
EVM.
This paper investigates the effectiveness of the
EVM by examining whether model estimates are
correlated with community bank measures of
interest rate sensitivity during recent periods of both
rising and falling interest rates. Because the model
is relatively new, it has yet to be validated against
actual bank performance. The Federal Open Market
Committee (FOMC) increased the federal funds rate
six times in 1999 and 2000, and then lowered the
federal funds rate 12 times in 2001 and 2002. A
strong correlation between the EVM’s estimate of
interest rate sensitivity and measures of interest
rate risk during these periods would suggest that
the model provides a useful surveillance tool to
community bank supervisors.
We find that estimates from the Fed’s EVM are
correlated with the performance of U.S. community
banks in the manner the EVM suggests. Specifically,
the banks that the EVM identifies as the most liability
sensitive—those most sensitive to rising rates—show
the biggest deterioration in performance during the
period of rising interest rates between 1998 and
2000. The most liability-sensitive banks also show
the greatest improvement in performance measures
during the 2000-02 period of falling rates. The evidence indicates, then, that the EVM is a useful tool
for supervisors interested in identifying the minority
of banks that are highly sensitive to interest rate
changes.

RELATED LITERATURE
Researchers have examined the interest rate
sensitivity of depository institutions in some detail.
There are two general lines of inquiry. The first line
of inquiry asks whether depository institutions are
exposed to interest rate changes and, if so, how large
is that exposure on average? The motivation for this
research often is to assess the impact that monetary
policy or unexpected inflation might have on financial intermediation. Most studies measure interest
rate sensitivity by regressing the firm’s stock return
on a market index and an interest rate. Flannery and
James (1984), Aharony, Saunders, and Swary (1986),
Saunders and Yourougou (1990), Yourougou (1990),
and Robinson (1995) find that bank stock prices
react to (unexpected) interest rate changes. A major
limitation of this research is that the vast majority
of U.S. banks are excluded from the analysis because
they have no publicly traded equity. Flannery (1981,
1983) constructs a model that estimates the effect
of rate changes on a bank’s net operating income.
The model has the added advantage that it indirectly
estimates the maturities of the assets and liabilities.
Flannery finds that the impact of rate changes on
long-run bank earnings is small, averaging only 5.6
percent of net operating earnings. He also finds that
banks are slightly asset sensitive; that is, profits
increase with rising interest rates. These results,
however, contradict much of the literature—including
some of Flannery’s later work—which shows that
banks tend to be exposed to rising rates.
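These studies share a broadly common two-index design; a representative simplified specification (the exact return and interest rate definitions vary across the papers cited) is

\[
R_{it} = \alpha_i + \beta_i R_{mt} + \gamma_i \Delta r_t + \varepsilon_{it},
\]

where \(R_{it}\) is the stock return of bank \(i\), \(R_{mt}\) the return on a market index, and \(\Delta r_t\) the (often unexpected) change in an interest rate. The estimate of \(\gamma_i\) summarizes interest rate sensitivity: \(\hat{\gamma}_i < 0\) marks a bank whose equity value falls when rates rise, that is, a liability-sensitive bank.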
A second line of inquiry attempts to isolate a
bank-specific measure of interest rate risk to separate
banks by their interest rate sensitivity. Regulators
are interested in this process because bank-specific
measures provide opportunities to identify high-risk
banks. Flannery and James (1984) construct a one-year gap measure and quantify the correlation
between this measure and the portion of a bank’s
stock return driven by interest rate changes. They
find that this simple maturity variable has statistically significant explanatory power. Gilkeson,
Hudgins, and Ruff (1997) use output from a regulatory gap model for thrifts between 1984 and 1988.
They also find a statistically significant correlation
between net interest income and the one-year gap
measure. Robinson and Klemme (1996) find that
bank holding companies with relatively high levels
of mortgage activity have higher degrees of interest
rate sensitivity than other bank holding companies,
as reflected by changes in stock prices. Finally,
Lumpkin and O’Brien (1997) construct a comprehensive measure of thrifts’ portfolio revaluations
caused by interest rate changes. They fail to find
evidence that such revaluations influence stock
returns beyond the influence already captured by
more general movements in interest rates.
This article adds to the evidence that banks are
liability sensitive, though the interest rate sensitivity, on average, is small. Our results are consistent with Gilkeson et al. (1997) and show that even accounting-based measures of interest rate sensitivity can have
significant explanatory power to aid bank supervisors
in risk-scoping and monitoring the interest rate exposure of commercial banks. Our results imply, as
well, that large rate increases are unlikely to have
significant adverse effects on the banking industry,
which is also consistent with previous literature.

A MEASURE OF RATE SENSITIVITY:
THE EVM
Interest rate risk is the product of a bank’s rate
sensitivity and subsequent rate changes. If rate
changes are unpredictable, then measurement of a
bank’s rate sensitivity is crucial to monitoring and
controlling interest rate risk. Models that measure
interest rate sensitivity fall into one of two categories.
Earnings-at-risk models estimate changes in a bank’s
net interest margin or net income in response to
changes in interest rates. Equity-at-risk models
estimate changes in a bank’s market value of equity,
or its economic capital, in response to changes in
interest rates.
Federal Reserve economists used the concept
of duration to develop an equity-at-risk model of a
bank’s interest rate sensitivity. Duration is the
present-value weighted-average time to maturity
of a financial instrument.3 Conceptually, it is the
price sensitivity of a financial instrument to a change
in interest rates. If, for example, the (modified) duration of a Treasury bond is –3.0, the bond is projected
to lose 3 percent of its value, given a 100-basis-point
increase in interest rates. The price of a financial
instrument with a larger duration will fluctuate more
in response to interest rate changes than the price
of an instrument with a smaller duration.
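In symbols, the approximation at work in this example is the standard duration relation (written here with duration as a positive number and the sign in the formula; the text's convention attaches the minus sign to the duration itself):

\[
\frac{\Delta P}{P} \approx -D_{mod}\,\Delta y.
\]

With \(D_{mod} = 3.0\) and \(\Delta y = 0.01\) (a 100-basis-point rise), \(\Delta P/P \approx -0.03\), the 3 percent loss projected for the Treasury bond above.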
To ease banks’ regulatory burden, the Fed’s EVM
uses call report data, most of which is recorded at
historical cost (rather than marked to market). The
EVM aggregates balance sheet items into various
categories, an example of which is shown in Table 1
for a hypothetical bank.4 The model then matches
each category with a proxy financial instrument—
an instrument with a known market price that has a
duration similar to those items in a given category—
and assigns a "risk weight." The risk weight for each category is the estimated change in economic value of those items, given a 200-basis-point instantaneous rise in rates.5 For example, the EVM places all fixed-rate mortgage products that reprice or mature in more than five years into the same category. In this example, the risk weight for that category is –8.50, indicating that the value of those mortgages is estimated to decline by 8.5 percent following an immediate 200-basis-point rate hike. The calculation of the change in economic value is repeated for each balance sheet category. The predicted change in the economic value of equity, then, is the difference between the predicted change in assets and the predicted change in liabilities. The net change is scaled either by assets or equity. In this paper, we scale the change in equity by assets and refer to the output of the EVM as the "EVE" score. The example bank in Table 1 has an EVE score of –1.97; that is, the bank is expected to lose equity equal to 1.97 percent of assets when interest rates rise by 200 basis points.

3. A number of financial textbooks discuss duration in detail. See, for example, Saunders and Cornett (2003).

4. Table 1 is adapted from Wright and Houpt (1996). This table does not show the exact categories and risk weights used in the EVM.
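The EVE arithmetic can be sketched in a few lines of Python. The figures below are taken from Table 1; the function is illustrative rather than the Fed's actual implementation, whose categories and risk weights are confidential:

    # A sketch of the EVE arithmetic illustrated in Table 1 (illustrative only).
    def category_change(balance: float, risk_weight_pct: float) -> float:
        """Estimated dollar change in economic value for a 200-basis-point
        instantaneous rise in rates."""
        return balance * risk_weight_pct / 100.0

    # One category from Table 1: fixed-rate mortgages repricing or maturing
    # in more than five years, with a risk weight of -8.50 percent.
    print(round(category_change(233_541, -8.50)))  # about -19,851

    # Summing over every category gives the bottom line of Table 1:
    change_in_assets = -32_317       # total for interest-sensitive assets
    change_in_liabilities = 18_817   # positive: falling liability values add to equity
    total_assets = 684_351
    eve_score = 100.0 * (change_in_assets + change_in_liabilities) / total_assets
    print(round(eve_score, 2))       # -1.97, the example bank's EVE score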
The model’s simplicity and generality make it
a potentially powerful surveillance tool, but those
same characteristics lead practitioners to question
its usefulness. First, a precise economic-value-of-equity model would require an exact calculation of
the duration for each financial instrument, which
in turn requires detailed information on the cash
flows and optionality of those instruments—data
that the call reports do not contain. Because of this
information limitation, the Fed’s EVM may perform
poorly for banks with a significant share of assets
invested in complex instruments, such as collateralized mortgage obligations (CMOs) or callable securities, because their durations are more difficult to
estimate.6 Should interest rates fall, CMOs, for example, may mature much more quickly than anticipated
by the EVM because homeowners will exercise their
refinancing option. The maturity of core deposits
may be another source of error. A community bank
in a rural area with strong ties to its depositors may
have a duration of demand deposits that is significantly longer than the duration at larger urban banks
because the rural customers are less likely to withdraw their funds should market rates increase.
5. The (confidential) risk weights are derived by economists at the Board of Governors of the Federal Reserve System, and they do not change over our sample period.

6. The EVM also fails to account for derivatives, another class of complex instruments. To avoid any bias from this source, we eliminate from our sample banks with derivatives.


Table 1
How Does an Accounting-Based Duration Model Work?

                                          Total ($)   Risk weight (%)   Change in economic value ($)
                                             (1)            (2)                  (1) × (2)
Interest-sensitive assets
 Fixed rate mortgage products
  0-3 months                                     0          –0.20                         0
  3-12 months                                    0          –0.70                         0
  1-5 years                                      0          –3.90                         0
  More than 5 years                        233,541          –8.50                   –19,851
 Adjustable rate mortgage products           2,932          –4.40                      –129
 Other amortizing loans and securities
  0-3 months                                     0          –0.20                         0
  3-12 months                                    0          –0.70                         0
  1-5 years                                 28,858          –2.90                      –837
  More than 5 years                              0         –11.10                         0
 Nonamortizing assets
  0-3 months                               132,438          –0.25                      –331
  3-12 months                                7,319          –1.20                       –88
  1-5 years                                182,373          –5.10                    –9,301
  More than 5 years                         11,194         –15.90                    –1,780
 Total interest-sensitive assets           598,655                                  –32,317
 All other assets                           85,696
 Total assets                              684,351

Interest-sensitive liabilities
 Core deposits
  0-3 months                                56,082           0.25                       140
  3-12 months                               39,634           1.20                       476
  1-3 years                                157,785           3.70                     5,838
  3-5 years                                 50,600           7.00                     3,542
  5-10 years                                28,167          12.00                     3,380
 CDs and other borrowings
  0-3 months                               117,491           0.25                       294
  3-12 months                               77,303           1.20                       928
  1-5 years                                 78,140           5.40                     4,220
  More than 5 years                              0          12.00                         0
 Total interest-sensitive liabilities      605,204                                   18,817
 Other liabilities                             112
 Total liabilities                         605,316

Summary
 Change in asset values                                                             –32,317
 Change in liability values                                                          18,817
 Net change in economic value                                                       –13,500
 Change in economic value as a percent of total assets                                –1.97

SOURCE: Adapted from Wright and Houpt (1996).

A second reason to question the applicability of the
EVM is that a precise equity-at-risk calculation
requires current market prices on all balance sheet
items because the estimated change in the value of
an asset or liability is equal to the duration multiplied
by its price. Strictly speaking, the term “economic
value” in this context is a misnomer because the
EVM uses book values as estimates of market prices.
A third weakness is that the EVM simulates just one
interest rate scenario. Specifically, the model projects
changes to a bank’s economic value of equity given
an instantaneous 200-basis-point upward parallel
shift in the yield curve. The model does not account
for changes in the slope of the yield curve, nor does
it simulate a reduction in interest rates.
Despite these weaknesses, the Fed’s EVM still
may serve as a useful measure of interest rate sensitivity for community banks. Even if the actual EVE
score of a given bank is imprecise, the ordinal ranking of banks by EVE scores may help supervisors
detect the outlier banks that are vulnerable to an
interest rate shock.

MEASURING THE IMPACT OF RATE
CHANGES WITH ACCOUNTING DATA
Tests of the ability of the EVM to measure interest
rate sensitivity require assessments of bank performance following interest rate changes. The ideal
performance indicator for testing the EVM is the
change in the economic value of equity following a
change in interest rates. In such a world, the ex post
interest rate sensitivity of a bank could be measured
via an econometric model by estimating the change
in publicly traded equity due to the change in rates.
Indeed, a number of studies have estimated in this
manner the interest rate sensitivity of large banks.
Unfortunately, such data are available only for
the approximately 300 bank holding companies
with actively traded equity. To assess community
bank performance following interest rate changes,
we must rely exclusively on accounting information
produced under generally accepted accounting
principles (GAAP). Because we are limited to accounting data, our methodology simultaneously tests the
usefulness of regulatory accounting information
and the Fed’s EVM. The adequacy of GAAP-based
measures to capture interest rate risk is a question
we leave for future research.
The accounting-based bank performance measures we utilize include changes in the NIM, return
on assets (ROA), and the book value of equity (BVE).
NIM is the ratio of net interest income to
average earning assets, ROA (as defined here) is net
income (before extraordinary items) divided by
average assets, and BVE is simply the accounting
value of total equity capital divided by total assets.
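In code, the three measures are simple ratios; the sketch below uses our own variable names rather than call report mnemonics:

    # Sketches of the three accounting measures defined above (in percent).
    def nim(net_interest_income: float, avg_earning_assets: float) -> float:
        return 100.0 * net_interest_income / avg_earning_assets

    def roa(net_income_before_extraordinary: float, avg_assets: float) -> float:
        return 100.0 * net_income_before_extraordinary / avg_assets

    def bve(total_equity_capital: float, total_assets: float) -> float:
        return 100.0 * total_equity_capital / total_assets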
The change in BVE is a straightforward, albeit
imperfect, performance measure to assess the EVM
because the EVM directly estimates the change in
equity given an interest rate change. Unlike the
economic value of equity, BVE will change slowly
as items are gradually marked to market (recorded
at market prices). The usefulness of NIM and ROA requires further explanation.
Although the theoretical link between earnings
and a duration-based equity-at-risk model is somewhat loose, an empirical relationship should be
discernable over a large number of observations.
Banks that the EVM estimates to be vulnerable to
rising interest rates (those with large negative EVE
scores) are those that have weighted-average asset
durations greater than weighted-average liability
durations. When interest rates rise, assets decline
in value more than liabilities, reducing the bank’s
economic capital. Because maturity is one component of duration, those same banks should be liability
sensitive, on average, such that liabilities tend to
mature or reprice faster than assets. In the short term,
interest expense on liabilities will tend to increase
more quickly than interest income on assets in a
rising rate environment, reducing the NIM. The
change in ROA captures not only the effect on NIM,
but also any other noninterest impact of rate changes
on earnings. Loan origination income might decline,
for example, when interest rates rise, because refinancing activity slows. We expect, therefore, that
banks with large negative values of EVE will exhibit
a more pronounced deterioration in these income
measures when interest rates rise, and those same
banks will see a larger surge in income when interest
rates fall.
A cursory look at bank NIMs suggests that banks
are modestly rate sensitive. Figure 1 plots the effective federal funds rate on the right axis and the four-quarter moving average NIM on the left axis. We
employ the four-quarter moving average to control
for the seasonality in the data. Although the effective federal funds rate fluctuated by more than 400
basis points between 1998 and 2002, the average
NIM of U.S. community banks changed little, staying
within a range of about 20 basis points.7

7. Clearly, this analysis is suggestive because other factors, such as the 2001 recession, may have affected NIM.

Figure 1
NIM and the Effective Federal Funds Rate: Commercial Banks Are, on Average, Liability Sensitive

[Figure: trailing four-quarter NIM (left axis, percent) and the effective federal funds rate (right axis, percent), quarterly, 1998:Q4-2002:Q4, with the rising rate and falling rate eras marked.]

NOTE: We plot the trailing four-quarter NIM for banks with less than $1 billion in total assets and the effective quarterly federal funds rate. The movement in NIM is consistent with commercial banks being modestly rate sensitive on average.

EMPIRICAL ANALYSIS OF THE
ECONOMIC VALUE MODEL
We test the ability of the Fed’s EVM to distinguish
interest rate sensitivity differences among U.S. community banks by comparing the measured interest
rate sensitivity of the EVM with accounting performance measures. Observing a bank's ex post experience of interest rate risk requires an interest rate change, a degree of rate sensitivity, and a time period sufficiently long for the interest rate risk to flow through the accounting data.8
To control for rate changes and time lags, we
split the sample into two periods: a period of rising

rates and a period of falling rates. Doing this, we
ensure that the banks are hit by rate changes in the
same direction, as opposed to offsetting rate changes.
We chose the fourth quarter of 1998 through the
fourth quarter of 2000 as the rising rate period and
the fourth quarter of 2000 through the fourth quarter
of 2002 as the falling rate period.

8. Results not presented here indicate that the Fed's EVM, using quarterly accounting data, cannot distinguish effectively between banks with different rate sensitivities. This result is likely a combination of the accounting data that react with lags and low absolute levels of interest rate risk at most commercial banks. In tests using stock market returns, the Fed's EVM can distinguish among firms on a quarterly basis (Sierra, 2004).

The quarterly effective federal funds rate increased 161 basis points in
the first period and fell 503 basis points during the
second period. Moreover, the yield curve steepened
considerably in the falling rate period. The yield
spread between the 10-year and 6-month Treasuries
averaged 27 basis points between year-ends 1998
and 2000 and 235 basis points between year-ends
2000 and 2002. Yield spreads on Treasuries more
consistent with bank asset and liability durations
also increased in the latter era. The spread between the 3-year and 1-year Treasuries averaged 27 basis points in the former period and 89 basis points in the latter period. Hence, we should expect larger
changes in bank performance measures during the
falling rate era.
Our bank performance measures include the
changes in NIM, ROA, and BVE over the relevant time
period. We compute the changes in NIM and ROA
using four-quarter averages to control for seasonality.
For example, the rising interest rate environment
begins in the fourth quarter of 1998 and ends in the
fourth quarter of 2000. The change in NIM, then, is
the trailing four-quarter NIM ending in the fourth
quarter of 2000 less the trailing four-quarter NIM
ending in the fourth quarter of 1998. We perform
ordinary least-squares regression analysis, matched-pairs analysis, and correlation analysis with the "S"
rating to test the EVM.
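For concreteness, the change-in-NIM computation can be sketched with pandas; the quarterly NIM series below is hypothetical:

    # A sketch of the change-in-NIM computation described above.
    import pandas as pd

    # Hypothetical quarterly NIM (percent) for one bank, 1998:Q1-2000:Q4.
    nim = pd.Series(
        [4.25, 4.22, 4.20, 4.18, 4.21, 4.19, 4.15, 4.14, 4.16, 4.13, 4.12, 4.10],
        index=pd.period_range("1998Q1", periods=12, freq="Q"),
    )

    # The trailing four-quarter average controls for seasonality.
    trailing = nim.rolling(4).mean()

    # Change over the rising rate era: the trailing average ending 2000:Q4
    # less the trailing average ending 1998:Q4.
    change_in_nim = trailing.loc["2000Q4"] - trailing.loc["1998Q4"]
    print(round(change_in_nim, 2))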

Bank Sample
Our bank sample is split into the rising rate era
and the falling rate era. We exclude banks with more
than $1 billion in assets in any given quarter, de novo
banks (those less than five years old), and banks
that merged during the respective time period.9 In
addition, we eliminate the very smallest banks—
those with less than $5 million in assets—and banks
with measures that are extreme outliers, because
these values fall outside of the realm of reasonable
values for typical banks.10 For each era, the sample contains about 6,000 banks and represents about 11 percent of all commercial banking assets. Descriptive statistics for the full regression sample appear in Table 2.

9. Excluding banks involved in mergers potentially creates a survivorship bias. The bias would emerge if banks with high interest rate risk are involved in mergers to a greater extent than banks with low interest rate risk. We empirically examine this bias by comparing the average EVE scores of the merger banks in the quarters before merger with the average EVE scores of the sample banks. We find that the mean EVE scores from the two groups are not significantly different from one another, suggesting that survivorship bias is not important.

10. We remove banks with NIM, ROA, BVE, or nonperforming loans greater than the 99.75th percentile. We also remove banks with ROA below the 0.25th percentile. Banks with asset growth rates less than or equal to –100 percent are excluded. Finally, we exclude banks with a NIM, BVE, or nonperforming loan ratio less than zero.
As Table 2 reveals, changes in the accounting
performance measures—the dependent variables—
are modest. Mean NIM decreased 3 basis points in
the rising rate era (Panel A) and fell again by 15 basis
points in the falling rate era (Panel B). Changes in
ROA were smaller, with ROA essentially unchanged
in both the rising and falling rate eras. BVE declined
by 2 basis points in the rising rate environment
and increased by 22 basis points in the falling rate
environment.
Table 2 also lists summary statistics for the
independent variables, and EVE is the independent
variable of primary interest. We multiply EVE by –1
to make its interpretation more intuitive. Because the
Fed’s EVE measure becomes more negative as the
liability sensitivity of the bank increases, EVE and
exposure to rising interest rates are inversely related.
Flipping the sign on the EVE measure allows us to
associate larger EVE values with greater exposure
to rising interest rates. The mean EVE in Panel A of
Table 2 is 0.87, which says that the average bank is predicted to lose equity equal to 0.87 percent of assets given a 200-basis-point parallel shift in
the yield curve. The mean EVE in the falling rate era
is 0.99 percent. The average sample bank, therefore,
is estimated to be liability sensitive.

The Regression Model
We use regression analysis to assess the average
correlation between a bank’s EVE and a change in
NIM, net income, and BVE, for a given change in
interest rates. EVE is computed as the average of
each quarterly EVE value within the given time
period. We use the average EVE value rather than
the beginning-of-period EVE value because we are
more interested in the correlation of EVE with the
dependent variables, and less interested in the predictive power of EVE in a given quarter.11 The average
EVE score accounts for changes in EVE during the
two-year sample period, an important factor if bank
managers endogenously alter their interest rate
sensitivity as interest rates begin to move in a particular direction. The EVE coefficient should be negative
in the rising rate era because rising rates reduce earnings and equity at liability-sensitive banks. Conversely, the EVE coefficient should be positive in the falling rate era.

11. As a robustness check, we ran the regressions using beginning-of-period EVE and obtained qualitatively similar results.

Table 2
Descriptive Statistics of Regression Samples

Panel A: Rising interest rate era, 1998:Q4-2000:Q4 (6,016 observations)

                    Mean   Standard deviation   Minimum      Q1   Median      Q3   Maximum
NIM                 4.20        0.73              0.00     3.73     4.14    4.60      8.57
ROA                 1.19        0.58             –3.15     0.90     1.17    1.46      5.94
BVE                10.36        3.46              3.06     8.09     9.55   11.70     51.36
Change in NIM      –0.03        0.47             –4.81    –0.24    –0.03    0.19      4.69
Change in ROA      –0.01        0.51             –4.94    –0.20    –0.01    0.18      5.19
Change in BVE      –0.02        1.59            –20.38    –0.66     0.01    0.67     21.49
EVE                 0.87        1.24             –3.26     0.03     0.78    1.57      8.85
NPL                 0.56        0.67              0.00     0.11     0.34    0.76      6.06
LNTA               11.22        0.94              8.59    10.56    11.20   11.85     13.81
AGR                11.35       12.32            –92.49     3.97    10.09   17.66     86.07

Panel B: Falling interest rate era, 2000:Q4-2002:Q4 (5,773 observations)

                    Mean   Standard deviation   Minimum      Q1   Median      Q3   Maximum
NIM                 4.18        0.77              0.24     3.69     4.11    4.60      8.74
ROA                 1.20        0.62             –3.25     0.86     1.17    1.50      7.86
BVE                10.38        3.52              4.51     8.02     9.44   11.77     51.24
Change in NIM      –0.15        0.63             –5.98    –0.45    –0.10    0.21      4.42
Change in ROA      –0.02        0.58             –5.63    –0.25     0.00    0.25      5.64
Change in BVE       0.22        1.54            –16.56    –0.44     0.27    0.97     10.59
EVE                 0.99        1.25             –3.54     0.15     0.86    1.70     10.10
NPL                 0.66        0.77              0.00     0.13     0.41    0.91      6.41
LNTA               11.34        0.95              8.67    10.69    11.32   12.00     13.80
AGR                12.64       11.92            –86.17     5.78    11.67   18.71     78.62

NOTE: Change in NIM: the trailing four-quarter NIM at the end of the period less the trailing four-quarter NIM at the start of the period. Change in ROA: the trailing four-quarter ROA at the end of the period less the trailing four-quarter ROA at the start of the period. Change in BVE: BVE at the end of the period less BVE at the start of the period. EVE: average over all quarters in the given era of the Fed EVE score scaled by total assets. NPL: average nonperforming loans to total assets in the given era. LNTA: the natural log of average total assets in the given era. AGR: the growth rate of total assets during the given era.

In the regressions, we attempt to control for
factors other than interest rate changes that could
influence income and equity ratios. Specifically, we
include the ratio of nonperforming loans to total
assets (NPL)—loans that are 90 days or more past
due or are no longer accruing interest—as a credit
risk control variable because nonperforming loans
can directly and indirectly affect all three dependent

variables. Most nonperforming loans do not accrue
interest, which means that interest income, and
hence NIM and ROA, are lower than they otherwise
would be. In addition, a higher ratio of nonperforming loans may be associated with changes in asset
quality, which would cause a bank to set aside more
provisions and lower ROA. Finally, the change in
BVE is smaller if net income and, hence, retained
earnings are smaller. We expect the signs of the nonperforming loans coefficients to be negative in both


the rising and falling rate periods. The mean nonperforming loan-to-total-asset ratio is 0.56 percent
in Panel A of Table 2 and 0.66 percent in Panel B of
Table 2.
We also control for bank size by including the
natural log of total assets in the regression. NIM,
net income, and BVE may respond to the economic
environment differently at larger banks than at
smaller banks. For example, changes in interest rates
may trigger the use of lines of credit, which are more
prevalent at larger institutions. The sign of this coefficient could be positive or negative.
Asset growth is an explanatory variable that
controls for portfolio turnover. More rapid asset
growth brings assets and liabilities onto the books
faster at market prices, which may either exacerbate
or dampen the sensitivity of earnings and BVE to
changes in interest rates. Asset growth will exacerbate interest rate sensitivity if the new assets and
liabilities reinforce or increase the bank’s interest
rate position. Conversely, asset growth will dampen
sensitivity if the new assets and liabilities mitigate
the bank’s interest rate position. The average EVE
score will partially capture these asset-growth effects,
but the EVE scores are not asset weighted. The signs
of the asset-growth coefficients, therefore, are uncertain. Table 2 shows that banks grew quickly in
each sample period. The mean growth rate in the
rising rate era is 11.35 percent; asset growth in the
falling rate era is 12.64 percent. The standard deviation of asset growth is also quite large, exceeding
11 percent in both rate eras.
We use ordinary least squares to run cross-sectional regressions on the following model:

(1)    ΔY_i = α_0 + α_1 EVE_i + α_2 NPL_i + α_3 LNTA_i + α_4 AGR_i + ε_i,
where ∆Yi represents the change in the dependent
variable (NIM, ROA, or BVE) of bank i, LNTA is the
natural log of total assets, and AGR is asset growth.
The dependent variables are computed as the end-of-period value less the beginning-of-period value,
while the independent variables (except asset growth)
are the quarterly averages over the time period. Asset
growth is simply the percentage change in assets
over the period. We report two specifications of
equation (1). Model 1 excludes asset growth; model 2
includes asset growth. For both models, the primary
focus is on the EVE coefficient.
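As an illustration of how equation (1) might be estimated, the sketch below uses statsmodels with heteroskedasticity-robust (HC1) standard errors, consistent with the note to Table 3; the data are simulated, with moments loosely matched to Panel A of Table 2:

    # A sketch of estimating equation (1) by OLS with heteroskedasticity-robust
    # p values. The data here are simulated for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 6016  # the Panel A sample size
    df = pd.DataFrame({
        "eve": rng.normal(0.87, 1.24, n),          # moments from Table 2, Panel A
        "npl": np.abs(rng.normal(0.56, 0.67, n)),
        "lnta": rng.normal(11.22, 0.94, n),
        "agr": rng.normal(11.35, 12.32, n),
    })
    # Simulated dependent variable with a negative EVE loading, as in Panel A.
    df["d_nim"] = -0.05 * df["eve"] + rng.normal(0.0, 0.45, n)

    # Model 2; model 1 simply drops agr from the formula.
    fit = smf.ols("d_nim ~ eve + npl + lnta + agr", data=df).fit(cov_type="HC1")
    print(fit.params["eve"], fit.pvalues["eve"])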


Regression Results
Regression results in Table 3 show that the Fed’s
EVM is indeed correlated with the accounting performance measures. In the rising rate era, we expect
the high-EVE banks to perform worse than low-EVE
banks. Specifically, the EVE coefficient should be
negative for each regression presented in Panel A
of Table 3. Across the columns of the EVE row in
Panel A, the EVE coefficients are negative and statistically significant for every specification and every
dependent variable. The results from model 2 indicate that a bank with an EVE score 1 percentage
point higher than another bank would experience,
all else equal, a drop in NIM, ROA, and BVE equal to
5.0, 5.4, and 18.5 basis points, respectively, over the
two-year period. Put another way, the results imply
that for the average bank, which has an EVE score of
0.87, NIM, ROA, and BVE were about 4.4 (5.0 × 0.87),
4.7, and 16.1 basis points lower, respectively, than
they would have been had the bank had an EVE score
of zero. The results in Panel A are consistent with
the ability of the Fed’s EVM to identify a bank’s
sensitivity to rising rates.
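As a check on this arithmetic, the implied effects are simply the model 2 coefficients multiplied by the mean EVE score:

    # Implied effects for the mean EVE of 0.87, using the model 2
    # coefficients (in basis points) from Panel A of Table 3.
    mean_eve = 0.87
    for measure, coef_bp in [("NIM", 5.0), ("ROA", 5.4), ("BVE", 18.5)]:
        # prints roughly 4.4, 4.7, and 16.1 after rounding
        print(measure, coef_bp * mean_eve)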
In the falling rate era, high-EVE banks are projected to be more liability sensitive such that the
high-EVE banks should perform better than low-EVE banks. If EVE is able to distinguish effectively
between banks with high and low liability sensitivity,
the EVE coefficients should be positive in Panel B
of Table 3. Across the columns of the EVE row in
Panel B, the EVE coefficients are positive and statistically significant for both model specifications and
each dependent variable. The EVE coefficients imply
that changes in NIM, ROA, and BVE over the two-year
period are expected to increase 15.2, 8.6, and 3.7
basis points, respectively, for each 1-percentage-point increase in EVE. The results in Panel B are consistent with the ability of the Fed's EVM to identify
banks that are the most sensitive to falling interest
rates.
The EVE coefficients for NIM and ROA are much
larger in Panel B of Table 3 than in Panel A, a result
that most likely reflects the greater interest rate
changes in the falling rate era. Recall that the federal
funds rate fell 503 basis points in the falling rate era,
which is 3.1 times the 161-basis-point rise in the
rising rate era. In addition, the average yield spread
between the 1- and 3-year Treasuries increased 3.5
times relative to the rising rate era. According to
model 2, a bank with an EVE score 1 percentage
point higher than another bank in the rising rate era

Table 3
Regression Analysis of the Fed's Economic Value Model

Panel A: Rising interest rate era, 1998:Q4-2000:Q4 (6,016 observations)

                     Change in NIM          Change in ROA          Change in BVE
Variable           Model 1    Model 2     Model 1    Model 2     Model 1    Model 2
Intercept            0.656      0.421       0.230      0.098       1.051     –0.344
(p value)           (0.000)    (0.000)     (0.002)    (0.205)     (0.000)    (0.000)
EVE                 –0.044     –0.050      –0.051     –0.054      –0.149     –0.185
(p value)           (0.000)    (0.000)     (0.000)    (0.000)     (0.000)    (0.000)
NPL                 –0.018     –0.028      –0.111     –0.116       0.021     –0.034
(p value)           (0.017)    (0.001)     (0.000)    (0.000)     (0.051)    (0.001)
LNTA                –0.057     –0.026      –0.012      0.005      –0.084      0.095
(p value)           (0.000)    (0.000)     (0.072)    (0.472)     (0.000)    (0.000)
AGR                            –0.008                 –0.005                 –0.049
(p value)                      (0.000)                (0.000)                (0.000)
Adjusted R2           0.03       0.07        0.03       0.05        0.02       0.15

Panel B: Falling interest rate era, 2000:Q4-2002:Q4 (5,773 observations)

                     Change in NIM          Change in ROA          Change in BVE
Variable           Model 1    Model 2     Model 1    Model 2     Model 1    Model 2
Intercept           –0.285     –0.467      –0.311     –0.303      –0.488     –1.773
(p value)           (0.005)    (0.000)     (0.002)    (0.004)     (0.000)    (0.000)
EVE                  0.155      0.152       0.086      0.086       0.058      0.037
(p value)           (0.000)    (0.000)     (0.000)    (0.000)     (0.000)    (0.000)
NPL                 –0.013     –0.028      –0.116     –0.115       0.029     –0.071
(p value)           (0.266)    (0.020)     (0.000)    (0.000)     (0.014)    (0.000)
LNTA                –0.001      0.025       0.025      0.024       0.056      0.235
(p value)           (0.945)    (0.012)     (0.006)    (0.018)     (0.000)    (0.000)
AGR                            –0.007                  0.000                 –0.052
(p value)                      (0.000)                (0.770)                (0.000)
Adjusted R2           0.09       0.11        0.07       0.07        0.00       0.15

NOTE: We regress three different measures of ex post interest rate sensitivity on the Fed's ex ante measure of interest rate sensitivity (EVE), nonperforming loans (NPL), log of total assets (LNTA), and asset growth rate (AGR). The three dependent variables are the change in net interest margin (NIM), the change in return on assets (ROA), and the change in book value of equity (BVE). The coefficient on the EVE variable is the focus of the regression analysis.
We divide the sample period into two eras. The first era is from the fourth quarter of 1998 through the fourth quarter of 2000 and is a time period during which interest rates were more-or-less uniformly increasing. The second era is from the fourth quarter of 2000 through the fourth quarter of 2002 and is a time period during which interest rates were more-or-less uniformly falling. Banks that the Fed's model predicts are more liability sensitive should perform worse over the rising rate era, and the EVE coefficients in Panel A should be negative, which they are. Banks that the Fed's model predicts are more liability sensitive should perform better over the decreasing rate era, and the EVE coefficients in Panel B should be positive, which they are. P values are corrected for heteroskedasticity.


(Panel A) experiences a 5.0-basis-point drop in NIM.
However, in the falling rate era (Panel B) a bank with
an EVE score 1 percentage point higher than another bank experiences a NIM increase of 15.5 basis points, 3.1 times the change in the rising rate era.
Moreover, ROA in the falling rate era increased by
1.6 times (8.6 divided by 5.4) the change in the rising
rate era. The BVE results, however, do not show the
same pattern in magnitude between Panels A and B.
The EVE coefficient for the BVE in the falling rate era
changed by just 0.2 times the change in the rising
rate era.
With a few exceptions, the coefficients on the
control variables have the expected signs. Nonperforming loan coefficients are negative in 10 of
12 regressions and statistically significant at the 5
percent level or lower in 9 regressions. The coefficients on bank size (natural log of total assets) suggest that, all else equal, larger banks have amplified
swings in income and equity relative to smaller
banks. With three exceptions, the coefficients on
the natural log of total assets are negative in the
rising rate era and positive in the falling rate era,
implying that NIM, ROA, and BVE at the larger banks
move in the same direction as the interest rate risk.
Finally, the coefficients on asset growth are negative
and statistically significant in the rising rate era, but
remain negative in the falling rate era for all the
specifications except that with ROA as the dependent
variable. These results suggest that asset growth
increased interest rate sensitivity in the rising rate
era but partially offset the interest rate sensitivity
in the falling rate era.

Matched-Pairs Analysis
Although regression analysis describes the
average relationship between EVE and accounting
performance measures, we are also interested in the
ordinal properties of the EVM. Can the EVM separate
the riskiest banks from the safer ones? This question
is important to Federal Reserve examiners and supervisors because they use the model to help assess
interest rate risk at a large number of community
banks. The model may help them detect banks in
the riskiest tail of the distribution.
We begin the matched-pairs analysis by separating the same sample of community banks used in
the regression analysis into deciles based on their
predicted exposure to rising interest rates as measured by their average EVE score. We then compare
changes in the performance measures across deciles.

Sierra and Yeager

By grouping the banks into deciles, we are exploring
whether the EVM broadly ranks banks by interest
rate risk, allowing for the possibility that the ordinal
rankings within a given decile may not be very tight.
Banks that are the most liability sensitive are in the
top deciles, whereas banks with low liability sensitivity or those that are asset sensitive (exposed to
falling rates) are in the bottom deciles. This ranking
does not imply that banks in the bottom deciles have
low interest rate risk because such banks may be
extremely asset sensitive. Interest rate risk is best
captured by the absolute value of the EVE measure.
We compare bank performance in the top decile
(the most liability-sensitive banks) with banks in
consecutively lower deciles. Based on two characteristics, each bank in the top decile is matched with
banks in lower deciles. Total assets at the match
bank must be within 50 percent of the sample bank
to control for the influence of size on performance
ratios, and the nonperforming loan-to-total-asset
ratios must be within 12.5 basis points to ensure
that differences in nonaccruing loans do not unduly
account for the banks’ differences in NIM and ROA.
If several banks qualify as matches with a bank in
the top decile, we average the performance ratios
of the matching banks.
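The matching rule can be sketched as follows; the DataFrame and column names are hypothetical:

    # A sketch of the matching criteria described above.
    import pandas as pd

    def find_matches(bank: pd.Series, candidates: pd.DataFrame) -> pd.DataFrame:
        """Candidate banks whose total assets are within 50 percent of the
        top-decile bank's and whose NPL ratio (in percent) is within
        12.5 basis points."""
        size_ok = (candidates["total_assets"] - bank["total_assets"]).abs() <= (
            0.5 * bank["total_assets"]
        )
        npl_ok = (candidates["npl"] - bank["npl"]).abs() <= 0.125
        return candidates[size_ok & npl_ok]

    # If several banks qualify, average their performance ratios, e.g.:
    # matches = find_matches(top_decile_bank, lower_decile_banks)
    # matched_change_in_nim = matches["d_nim"].mean()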
To visualize the different reactions to falling
interest rates, we plot in Figure 2 the average cumulative change in NIM by quarter of the banks in the
top decile and the average cumulative change in
NIM of the banks in the bottom decile. The average
NIM at the most liability-sensitive banks declines
during the first three quarters—probably due to
the lag from the rising interest rate environment in
2000—and then begins to climb in the third quarter
of 2001. In contrast, the average NIM at the least
liability-sensitive banks declines continuously
between the fourth quarter of 2000 and the fourth
quarter of 2002. By the fourth quarter of 2002, the
difference in the change in NIM between the top
decile and the bottom decile is about 76 basis points.
Most of that difference is due to falling NIMs at the
least liability-sensitive banks; rising NIMs at the most
liability-sensitive banks account for only about 10
basis points of the total difference.
In addition to Figure 2, we conduct a series of
t-tests on the differences in means between the
most liability-sensitive banks and progressively less
liability-sensitive match-banks, for both the rising
interest rate environment and the falling interest rate
environment. The results appear in Table 4.

Figure 2
Changes in NIM at EVE-Predicted High- and Low-Liability-Sensitive Banks

[Figure: average cumulative change in NIM (basis points, left axis) for the EVE-predicted high- and low-liability-sensitive banks and the effective federal funds rate (percent, right axis), quarterly, 2000:Q4-2002:Q4.]

NOTE: We plot the average cumulative change in net interest margin (NIM) for high-EVE and low-EVE banks for the falling rate time period. The figure shows that high-EVE banks are indeed more liability sensitive than low-EVE banks. Furthermore, the direction of change is consistent with the EVM predictions; high-EVE banks' NIMs improve, while low-EVE banks' NIMs decrease. The chart also plots the effective federal funds rate on the right axis.

Panel A lists the results for the rising rate era, whereas Panel B
lists the results for the falling rate era. The first row
of each panel compares the average changes in NIM,
ROA, and BVE of banks in the top (tenth) decile with
the average changes for banks in the ninth decile;
the second row compares the top decile with the
eighth decile; and so on. We expect the differences
to widen as the deciles in the comparison widen.
With some notable exceptions, the matched-pair
results indicate that the Fed’s EVM detects relatively
fine quantitative differences in interest rate risk
across deciles. The distinctions are the most pronounced for NIM and ROA in the falling rate environment, reported in Panel B of Table 4. We expect the
differences in changes in NIM and ROA to widen
(become more positive) as the deciles compared
become more extreme, because banks that are less
liability sensitive will respond less favorably to a
drop in rates compared with banks that are more
liability sensitive. Indeed, the spread does widen as
the gaps between the deciles widen, and the differences in the changes are statistically different from
zero at the 5 percent level or lower for every comparison. Differences in the changes of BVE are less
robust. In fact, differences in BVE changes have the
wrong signs in five of nine comparisons of Panel B.


Table 4
Relative Interest Rate Sensitivity of Pairs Matched by Extremity of the Fed's EVE Model Interest Rate Sensitivity Prediction

Panel A: Rising interest rate era, 1998:Q4-2000:Q4
(Change columns show the top decile / the matched decile.)

Deciles      N     Change in NIM     Difference in    Change in ROA     Difference in    Change in BVE      Difference in
                                     NIM changes                        ROA changes                         BVE changes
10 vs. 9   595    –11.18 / –9.60        –1.58         –8.08 / –6.30        –1.79         –43.55 / –8.59       –34.96***
10 vs. 8   595    –11.30 / –8.58        –2.72         –8.29 / –3.37        –4.92**       –45.18 / –16.41      –28.77***
10 vs. 7   598    –10.92 / –5.05        –5.87***      –8.13 / –3.11        –5.01**       –44.81 / –5.99       –38.81***
10 vs. 6   594    –11.04 / –6.57        –4.48***      –8.48 / 1.28         –9.76***      –45.75 / –0.87       –44.88***
10 vs. 5   599    –10.98 / –3.70        –7.28***      –8.25 / 0.46         –8.70***      –44.78 / –2.41       –42.37***
10 vs. 4   595    –10.85 / –4.08        –6.78***      –7.76 / –1.34        –6.42***      –44.40 / 2.40        –46.79***
10 vs. 3   597    –11.16 / 6.46        –17.63***      –8.44 / 4.57        –13.02***      –43.69 / 7.56        –51.24***
10 vs. 2   598    –11.05 / 1.32        –12.37***      –7.89 / 4.43        –12.32***      –42.86 / 8.24        –51.10***
10 vs. 1   597    –10.97 / 12.72       –23.68***      –8.79 / 14.78       –23.57***      –43.49 / 35.80       –79.29***

Panel B: Falling interest rate era, 2000:Q4-2002:Q4
(Change columns show the top decile / the matched decile.)

Deciles      N     Change in NIM     Difference in    Change in ROA     Difference in    Change in BVE      Difference in
                                     NIM changes                        ROA changes                         BVE changes
10 vs. 9   571      9.72 / 4.02          5.70**       14.33 / 8.50          5.82***       25.34 / 32.68        –7.34
10 vs. 8   569      9.93 / –1.21        11.14***      14.74 / 7.14          7.59***       25.22 / 26.26        –1.04
10 vs. 7   570      9.84 / –5.19        15.03***      14.71 / 3.39         11.32***       24.90 / 29.03        –4.12
10 vs. 6   567     10.09 / –6.65        16.75***      14.92 / 4.93          9.99***       25.31 / 32.65        –7.33
10 vs. 5   572      9.78 / –8.37        18.15***      14.53 / 1.25         13.28***       25.00 / 20.16         4.84**
10 vs. 4   572      9.67 / –19.17       28.85***      14.37 / 1.25         13.11***       24.99 / 23.90         1.09
10 vs. 3   569      9.93 / –16.66       26.59***      14.55 / –0.46        15.01***       25.09 / 15.14         9.95
10 vs. 2   572      9.34 / –33.81       43.15***      14.26 / –9.38        23.64***       24.69 / 10.06        14.63*
10 vs. 1   571     10.19 / –65.79       75.98***      14.45 / –28.76       43.21***       25.18 / –2.56        27.74***

NOTE: Values are in basis points. */**/*** indicate significance at the 10/5/1 percent levels, respectively. In this table, we divide community banks into deciles based upon their degree of liability sensitivity. We then match banks in the top decile with similar banks in the lower decile. With few exceptions, we find that the more liability-sensitive banks perform more poorly in the rising rate era, but they perform better in the falling rate era. These results show that the Fed's EVM accurately separates banks by their interest rate sensitivity. The banks in the higher (lower) deciles are predicted to be the most (least) liability sensitive.


Only three of the comparisons in Panel B yield differences that are statistically significant with the expected sign.
The matched-pair results for NIM and ROA in
the rising rate era are not as dramatic as the results
in the falling rate era, but the results for the BVE are
much stronger than those of the falling rate era.
The rising rate era results appear in Panel A of
Table 4. We expect the differences in the changes
in NIM, ROA, and BVE to become more negative as
the decile comparisons widen, because the most
liability-sensitive banks in the top decile should have
larger declines in these performance measures relative to less liability-sensitive banks. All of the differences in the changes in NIM, ROA, and BVE have the
expected signs, and they generally become more
negative as the decile differences widen. In addition,
most are statistically significant at the 1 percent level.
Results for the BVE show that banks in the top decile
experienced a drop in equity of 43.55 basis points,
whereas banks in the ninth decile had a drop in
equity of 8.59 basis points. The –34.96-basis-point
difference is statistically significant at the 1 percent
level. The differences in the changes of BVE generally
become more negative, as expected, such that the
spread between the top and bottom deciles is more
than –79 basis points.
In sum, matched-pair analysis indicates that the
Fed’s EVM can detect differences in interest rate risk
when banks are grouped by deciles according to
their exposure to rising interest rates. These results
confirm the robustness of the regression results
and suggest that bank supervisors can use the EVM
as a useful tool to rank community banks by interest
rate sensitivity.

Correlation with the “S” Rating
Each time a bank is examined, examiners assign
the bank a sensitivity (S) rating from 1 to 5, with 1
being the best rating. A strong and positive correlation between the EVE score and the S rating would
be consistent with the assertion that the EVM captures information about a bank’s interest rate risk.
This analysis also serves as a robustness check
against the prior tests, which rely solely on accounting numbers.
We assess the correlation between EVE and S
ratings both in decile groupings and on a bank-by-bank basis. As with matched pairs, the decile analysis
allows for the possibility that the EVE rankings within
a given decile may not be very tight. We rank all the

community banks in our sample by their EVE scores
and split the banks into deciles. We then compute
the mean EVE and S ratings and rank the deciles by the absolute value of the decile's mean EVE score. If the EVE model
is calibrated such that banks with the lowest interest
rate risk have EVE scores near zero, then banks with
large absolute-value EVE scores should have relatively worse (higher) examiner ratings. The top half
of Table 5 lists the mean EVE score and the mean S
rating for each decile, listed in descending order by
the absolute value of the mean EVE score.
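The decile exercise can be sketched as follows, where df is a hypothetical bank-level DataFrame with columns eve and s_rating:

    # A sketch of the decile grouping and correlation described above.
    import pandas as pd

    def decile_correlation(df: pd.DataFrame) -> float:
        """Correlation between the absolute value of each decile's mean EVE
        score and the decile's mean S rating. Deciles are formed on EVE."""
        deciles = pd.qcut(df["eve"], 10, labels=False)
        by_decile = df.groupby(deciles).agg(
            mean_eve=("eve", "mean"), mean_s=("s_rating", "mean")
        )
        return by_decile["mean_eve"].abs().corr(by_decile["mean_s"])

    # The bank-level counterpart correlates |EVE| with the S rating directly:
    # df["eve"].abs().corr(df["s_rating"])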
The correlation coefficients listed in the bottom
half of Table 5 show a consistent positive relationship between the absolute value of EVE scores and
S ratings. The correlation coefficient on a decile basis
is 0.99, and it is 0.14 on a bank-by-bank basis. Both
are statistically different from zero at the 1 percent
level. The high degree of correlation suggests that
either the EVM captures information about interest
rate risk that examiners confirm on site or the examiners use the EVM to help assess interest rate risk.
Even though Federal Reserve examiners are
instructed not to incorporate the EVM directly into
the S rating, one may be skeptical. One simple test
to help discern the direction of causation between
EVE and S ratings is to examine the correlation
between EVE scores and S ratings in 1998. Because
1998 was the first year that the Focus reports were
made available to examiners, a period of transition
undoubtedly took place for the examiners to learn
about and understand the report. A positive correlation in 1998 would add to the evidence that the
EVM captures information that examiners confirm
on site. At the bottom of Table 5 we report the correlation coefficient between EVE scores and S ratings
assigned in 1998. The decile correlation is 0.82 and
the bank-level correlation is 0.10. Again, both are
statistically different from zero at the 1 percent level.
The results for 1998 lend support to the hypothesis
that the EVM contains information about interest
rate sensitivity that examiners affirm when they
are on site at a particular bank.

CONCLUSION
Regression analysis, matched pairs, and correlation analysis demonstrate that the Fed’s EVM is a
useful supervisory tool to assess the relative interest
rate risk at community banks. Bank supervisors can
confidently use the model’s output to rank banks
by interest rate sensitivity. The model appears to
be quite stable and robust.


Table 5
The Relationship Between the EVM and the "S" Ratings

Decile by absolute value of EVE    Mean S rating by decile    Mean EVE of decile    Observations
10                                          1.83                      3.44              3,716
9                                           1.73                      2.22              3,715
8                                           1.67                      1.68              3,715
7                                           1.64                      1.28              3,715
6                                           1.62                     –1.12              3,715
5                                           1.61                      0.95              3,715
4                                           1.56                      0.64              3,715
3                                           1.58                     –0.37              3,715
2                                           1.56                      0.34              3,715
1                                           1.56                      0.02              3,715

Correlation coefficient of absolute value of EVE and S ratings

               By decile    Bank level
Full sample     0.99***       0.14***
1998            0.82***       0.10***

NOTE: *** Indicates significance at the 1 percent level or better. We measure the correlation between the absolute value of the EVE measure and a bank's S rating. The EVE measure and S ratings are positively correlated at a decile and bank level. The results show that either the Fed's EVM identifies interest rate risk patterns that examiners confirm on site or that examiners use the EVM to assign the S rating.

Although the EVM was constructed assuming a parallel yield curve shift
upward of 200 basis points, our results demonstrate
that the model is useful in both rising and falling
interest rate eras and in time periods in which the
slope of the yield curve changes.
Another conclusion that emerges from these
results is that the average interest rate risk at community banks appears to be modest. Even relatively
big changes in interest rates such as the drop that
occurred between December 2000 and December
2002 had relatively small effects on income and
capital at community banks, both in absolute and
relative terms. For example, regression analysis
predicts that the average bank with an EVE score of
0.99 experienced an increase in NIM of about 15
basis points, an increase in ROA of 9 basis points,
and an increase in BVE of 4 basis points over the two-year period of falling rates. Although nontrivial, none
of these changes by themselves are of sufficient
magnitude to affect bank performance significantly.
Consequently, interest rate risk does not appear to
be a significant threat to bank safety and soundness

at the present time, a conclusion that should provide
some comfort to monetary policymakers when
they influence interest rates.
One caveat to this conclusion is that our analysis
fails to consider the interaction between interest
rate risk and other risks, such as credit risk. A large
change in the level of interest rates may affect community banks more severely than our analysis suggests because the default rates of marginal borrowers
with variable rate payments may increase.

REFERENCES
Aharony, Joseph and Saunders, Anthony. “The Effects of a
Shift in Monetary Policy Regime on the Profitability and
Risk of Commercial Banks.” Journal of Monetary
Economics, May 1986, 17(3), pp. 363-77.
Curry, Timothy and Shibut, Lynn. “The Cost of the Savings
and Loan Crisis: Truth and Consequences.” FDIC Banking
Review, December 2000, 13(2), pp. 26-35.
Flannery, Mark J. “Market Interest Rates and Commercial


Bank Profitability: An Empirical Investigation.” Journal
of Finance, December 1981, 36(6), pp. 1085-101.
Flannery, Mark J. “Interest Rates and Bank Profitability:
Additional Evidence.” Journal of Money, Credit and Banking,
August 1983, 15(3), pp. 355-62.
Flannery, Mark J. and James, Christopher M. “Market
Evidence of the Effective Maturity of Bank Assets and
Liabilities.” Journal of Money, Credit, and Banking,
November 1984, 16(4, Part 1), pp. 435-45.
Gilkeson, James H.; Hudgins, Sylvia C. and Ruff, Craig K.
“Testing the Effectiveness of Regulatory Interest Rate
Risk Measurement.” Journal of Economics and Finance,
Summer 1997, 21(2), pp. 27-37.
Houpt, James V. and Embersit, James A. “A Method for
Evaluating Interest Rate Risk in U.S. Commercial Banks.”
Federal Reserve Bulletin, August 1991, pp. 625-37.
Lumpkin, Stephen A. and O’Brien, James M. “Thrift Stock
Returns and Portfolio Interest Rate Sensitivity.” Journal
of Monetary Economics, July 1997, 39(2), pp. 341-57.
Robinson, Kenneth J. “Interesting Times for Banks Since
Basle.” Financial Industry Studies, July 1995, pp. 9-16.

Robinson, Kenneth J. and Klemme, Kelly. “Does Greater
Mortgage Activity Lead to Greater Interest Rate Risk?
Evidence from Bank Holding Companies.” Financial
Industry Studies, August 1996, pp. 13-24.
Saunders, Anthony and Yourougou, Pierre. “Are Banks
Special? The Separation of Banking from Commerce and
Interest Rate Risk.” Journal of Economics and Business,
May 1990, 42(2), pp. 171-82.
Saunders, Anthony and Cornett, Marcia. Financial
Institutions Management. Fourth Edition. McGraw–Hill
Irwin, 2003.
Sierra, Gregory E. “Can an Accounting-Based Duration
Model Effectively Measure Interest Rate Sensitivity?”
Ph.D. Dissertation, Washington University, April 2004.
Wright, David M. and Houpt, James V. “An Analysis of
Commercial Bank Exposure to Interest Rate Risk.” Federal
Reserve Bulletin, February 1996, 82(2), pp. 115-28.
Yourougou, Pierre. “Interest Rate Risk and the Pricing of
Depository Financial Intermediary Common Stock.”
Journal of Banking and Finance, October 1990, 14(4), pp.
803-20.

Discrete Policy Changes and Empirical Models
of the Federal Funds Rate
Michael J. Dueker and Robert H. Rasche

Michael J. Dueker is an assistant vice president and Robert H. Rasche is a senior vice president and director of research at the Federal Reserve Bank of St. Louis. Andrew Alberts provided research assistance.
Federal Reserve Bank of St. Louis Review, November/December 2004, 86(6), pp. 61-72.
© 2004, The Federal Reserve Bank of St. Louis.

In macroeconomic models with monthly or
quarterly data, it is common to assume that
variables—such as output, investment, and
inflation—respond to the monthly or quarterly
average of the daily federal funds rate. The idea is
that the cumulative flow of investment spending
within a quarter, for example, does not depend on
the value of the federal funds rate at a point in time
but, instead, on its average level throughout the
quarter. In fact, the use of the monthly or quarterly
average of the federal funds rate is common practice
in a variety of empirical macroeconomic models,
from vector autoregressions (e.g., Bernanke and
Blinder, 1992) to estimated versions of stochastic
dynamic general equilibrium models (e.g., Lubik
and Schorfheide, 2004). In addition, the daily effective federal funds rate contains noise in the form
of departures from the target level set by monetary
policymakers, as a result of idiosyncratic conditions
in the interbank loan market on a given day. Averaging the daily rates across a month or quarter is
one way to cancel most of this noise—and is yet
another reason why monthly and quarterly averages have become widely used measures of monetary policy. Hence, regardless of the direction of the evolution of empirical macroeconomics,
the use of the monthly or quarterly average of the
daily federal funds rate will likely remain common
practice.
Another feature common to otherwise disparate
approaches to macroeconomic modeling is that the
federal funds rate has its own equation called the
policy equation. Because Federal Reserve policymakers use the federal funds rate as their policy
instrument, one equation in the model describes
the way that policymakers adjust the policy instrument in response to the current state of the economy.
In practice, monetary policymakers adjust a target
level for the federal funds rate by discrete increments

at their regularly scheduled meetings or in conference calls. One often-neglected consequence of
quarterly averaging is that any change in the target
federal funds rate will affect the quarterly average in
two different quarters. For example, if policymakers
raise the target by 50 basis points precisely halfway
through this quarter, then the current quarter’s average will rise by 25 basis points relative to last quarter,
and next quarter’s average will also exceed this quarter’s average by 25 basis points, all else equal. Note
that this calculation relies on a key feature of target
changes: They are in effect until further notice and
not for a specified time period. In other words, monetary policymakers could announce a 25-basis-point
increase in the target federal funds rate that would
be in effect for the next 60 days, but this is not what
they do. Instead, each target change is in effect until
further notice. Hence, a target change made now is
likely to persist into the following quarter.
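The example is easy to verify in code; the sketch below (ours) generalizes it to a change made after N_j of N business days:

    # A worked version of the example above: a target change made after n_j of
    # n business days is in effect until further notice.
    def average_effects(n: int, n_j: int, change_bp: float) -> tuple[float, float]:
        """Return (rise in this quarter's average, rise in next quarter's
        average relative to this quarter's), all else equal."""
        this_quarter = (n - n_j) / n * change_bp   # new rate holds for n - n_j days
        next_vs_this = change_bp - this_quarter    # equals n_j / n * change_bp
        return this_quarter, next_vs_this

    # A 50-basis-point increase exactly halfway through a 62-day quarter:
    print(average_effects(62, 31, 50.0))  # (25.0, 25.0)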
Despite this clear source of predictable change
in the quarterly average of the federal funds rate,
the vast bulk of the literature that estimates policy
equations ignores information concerning the timing and magnitude of discrete changes to the target
federal funds rate. As a result, such empirical models
end up trying to predict the effect on the monthly
or quarterly average of known, past policy actions
rather than include this piece of data in the forecast
information set. While this information about discrete target changes might seem like a second-order
issue in the estimation of policy equations, we
present estimates of a Taylor-type policy equation
(Taylor, 1993) that suggest otherwise. It turns out
that policy equations of the quarterly average of
the federal funds rate that take account of discrete
changes to the target federal funds rate fit the data
substantially better than those that omit this information. In addition, we show that empirical results
on important policy questions can be overturned,
depending on whether a discreteness-adjustment


term is included in the estimation of a policy equation. In particular, we focus in this paper on a debate
concerning the source of interest rate smoothing—
an issue discussed in greater detail in the next section. Given the results in this paper, we recommend
that such a discreteness-adjustment term be included
as a regular feature of empirical models of the quarterly or monthly average of the federal funds rate.

THE DEBATE ON INTEREST RATE
SMOOTHING IN ESTIMATED POLICY
EQUATIONS
One lively debate in empirical macroeconomics
is whether monetary policymakers adjust the federal
funds rate gradually in response to developments
in the economy or, alternatively, whether the determinants of the interest rate evolve gradually enough
to account for the sluggish pace of observed changes
in the interest rate. Sack (2000) and Clarida, Galí, and
Gertler (2000) argue for the former; Rudebusch
(2002) argues for the latter; English, Nelson, and
Sack (2003) find evidence of both. This question can
be summarized as follows: Do policymakers smooth
the interest rate by overtly choosing to adjust it
gradually? Three reasons have been put forth for
rate smoothing and partial adjustment. First, policymakers are uncertain about the true structure of the
economy and this source of possible policy mistakes
leads them to act less forcefully in the short run
(Sack, 2000). Second, and similarly, policymakers
are uncertain about the accuracy of initial data
releases—another source of possible policy mistakes
(Orphanides, 2001). Third, and perhaps most relevant, is the idea from Woodford (2003) that monetary
policymakers can influence market expectations if
they show a willingness to implement—even through
gradual actions—a large long-run interest rate
response if necessary. For example, suppose that
policymakers indicate that they are willing to raise
the federal funds rate by an eventual amount of 120
basis points if a 40-basis-point increase in inflation
persists. Policymakers demonstrate this willingness
by embarking on a path of raising the interest rate
gradually. If the public believes that this gradual
path will be implemented for as long as necessary
to reduce inflation, market expectations will adjust
quickly, with the beneficial effect of reducing inflation without requiring much actual increase in the
interest rate.
A dissenting voice to the interest rate smoothing
argument is Rudebusch (2002), who suggests that
episode-specific factors influence the setting of
monetary policy and are not captured by simple
empirical policy equations. For example, the credit
crunch in the early 1990s, the financial market upset
in 1998, and the terrorist attacks on September 11,
2001, all created uncertainties that had a persistent
influence on the level of the federal funds rate, yet
they are not incorporated in simple policy equations.
Rudebusch suggests that purported evidence of
interest rate smoothing is actually only a product
of omitted variable bias. Since it is not really possible
to include or even measure all of the relevant variables in an interest rate regression, Rudebusch (2002)
concludes that the lagged policy rate (a measure of
interest rate smoothing) can be included in regressions but should not be interpreted as a structural
feature of monetary policy practice.
To disentangle these two competing hypotheses,
English, Nelson, and Sack (2003) studied a specification for a policy rule that can nest these two interpretations of Federal Reserve policy. Their results
suggest a significant role for both interpretations,
although their analysis indicated that interest rate
smoothing was perhaps the most important factor
quantitatively. In this article, we demonstrate that
empirical tests concerning this debate can be overturned if the effects of discrete target changes are
taken into account. The next section describes such
a discreteness adjustment.

DISCRETE TARGET CHANGES AND
FORECASTS OF THE QUARTERLY
AVERAGE
The discreteness adjustment is the one used in
Dueker (2002) and assumes that any target change
made to the federal funds rate during the quarter is
likely to remain in force through the next quarter.
In this case, the starting point for this quarter’s funds
rate is not the previous quarter’s average but the
target rate that held at the end of the previous quarter.
Another way to look at this issue is to note that, if a quarter has N business days and a 50-basis-point increase in the target federal funds rate occurs after N_j business days have elapsed in the quarter, then, other things equal, the effect of such a target change on the next quarter's average, relative to this quarter's average, i_t, would be to increase it by (N_j /N) × 50 basis points. If more than one discrete change takes place within a quarter, then the effect of the target changes on the quarterly average would be

(1)    DDV_{t−1} = Σ_j (N_j /N) × Δi^T_{j,t−1},

where the discrete target changes are denoted Δi^T and DDV is the discreteness variable first used in Dueker (2002). An equivalent way to present this discreteness adjustment is that it serves as a link between last quarter's average and the target rate at the end of the quarter:

(2)    i_{t−1} + DDV_{t−1} = i^T_{t−1},

where i^T is the end-of-period value of the target federal funds rate and i is the quarterly average.

[Figure 1: Federal Funds Target and Quarterly Average of the Federal Funds Effective Rate, 1984–2004 (percent)]

[Figure 2: Change in Quarterly Average of the Federal Funds Rate and the Discreteness Adjustment Based on Target Changes in the Previous Quarter, 1984–2004 (percent)]
Figure 1 plots the target federal funds rate along
with the quarterly average of the federal funds rate.
In the chart, one can see that target changes can
precede some upward and downward shifts in the
quarterly average. Figure 2 makes this pattern even
more apparent: It plots the changes in the quarterly
average, i_t − i_{t−1}, with the discreteness adjustment, DDV_{t−1} = i^T_{t−1} − i_{t−1}. It is clear from the close correspondence in Figure 2 that DDV_{t−1} will be a relatively
powerful predictor of changes in the quarterly
average of the federal funds rate, based on target
changes that took place in the previous quarter.
Indeed this conjecture proves to be the case in the
empirical results presented in the next section.
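To make the timing convention behind DDV concrete, the following sketch (a minimal illustration of equation (1), not the authors' code; the day counts are the 1994:Q2 entries from Table 1 below) computes the adjustment from the business days on which target changes occurred:

    def discreteness_adjustment(n_days, changes):
        """DDV of equation (1): n_days is N, the number of business days in
        the quarter; changes is a list of (day, delta) pairs, meaning the
        target moved by delta on business day `day`, i.e., after
        N_j = day - 1 business days had elapsed."""
        return sum((day - 1) / n_days * delta for day, delta in changes)

    # 1994:Q2 (65 business days): +25bp on day 12, +50bp on day 33
    print(round(discreteness_adjustment(65, [(12, 0.25), (33, 0.50)]), 4))
    # prints 0.2885, the Table 1 value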

ESTIMATION RESULTS FOR LINEAR TAYLOR RULES WITH RATE SMOOTHING
English, Nelson, and Sack (2003) observed that
the hypotheses of interest rate smoothing and persistent omitted factors can be nested in one specification of a policy equation. Using their notation,
let i, π, and y denote, respectively, the interest rate,
the inflation rate, and the output gap. As discussed
above, the interest rate used as the monetary policy
instrument is the quarterly average of the federal
funds rate; the inflation rate is the most recent four-quarter change in the chain-weighted personal consumption expenditures price index; the output gap is the percentage difference between chain-weighted real gross domestic product (GDP) and
the potential GDP measure from the Congressional
Budget Office. The sample period is from 1984:Q2
to 2004:Q2 and coincides with the period for which
a well-accepted series for the target federal funds
rate exists.
For this illustration of the effects of discrete target changes on the quarterly average of the federal funds rate, the basic policy equation is a contemporaneous Taylor rule, whereby monetary policy responds to the current values of the inflation rate and the output gap. (Alternative forward-looking Taylor rules can make use of measures of expected inflation.) With the two behavioral assumptions appended, the following three equations describe the policy rule in its entirety:

(3)    î_t = b_0 + b_π π_t + b_y y_t                 Taylor rule
       i_t = (1 − λ)î_t + λi_{t−1} + v_t             Rate smoothing
       v_t = ρv_{t−1} + ε_t                          Autoregressive errors

where î represents the Taylor rule–implied level of the federal funds rate in the absence of interest rate smoothing and policy concerns other than inflation and the output gap; λ is the interest rate smoothing parameter under the assumption that the rate policymakers inherit from the past is i_{t−1} (which we can also call the reference rate for the purposes of interest rate smoothing); and ρ measures the persistence of omitted factors that also concern monetary
policymakers. For the side of the debate that believes
that monetary policymakers smooth interest rates
purposefully, λ would account for the gradual adjustment of the federal funds rate and ρ would be zero.
For the opposite side of the debate, λ would be zero
and the gradual adjustment would be explained by
errors due to omitted ancillary policy concerns, such
as financial market disturbances, in the form of
autoregressive model errors.
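A short simulation sketch (with made-up coefficients, purely illustrative) shows how either channel in equation (3) can generate gradual funds rate movements: set ρ = 0 and λ > 0 for the smoothing story, or λ = 0 and ρ > 0 for the omitted-factors story.

    import numpy as np

    rng = np.random.default_rng(0)
    T, b0, bpi, by = 80, 2.0, 1.5, 0.8      # assumed coefficients
    lam, rho = 0.8, 0.0                      # smoothing story; try 0.0, 0.8
    infl = 2 + 0.1 * rng.normal(0, 1, T).cumsum()
    gap = rng.normal(0, 1, T)
    ihat = b0 + bpi * infl + by * gap        # Taylor rule
    i, v = np.zeros(T), np.zeros(T)
    i[0] = ihat[0]
    for t in range(1, T):
        v[t] = rho * v[t - 1] + rng.normal(0, 0.2)          # AR errors
        i[t] = (1 - lam) * ihat[t] + lam * i[t - 1] + v[t]  # rate smoothing

Either parameterization produces a funds rate series that adjusts sluggishly toward the Taylor rule rate, which is precisely why the two hypotheses are hard to tell apart in the data.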
The interest rate smoothing equation assumes that monetary policymakers set the rate this period equal to a weighted average of the rate implied by the Taylor rule and the rate inherited from the previous quarter. Following the previous period's target changes, however, the rate inherited from the past ought to be the end-of-period target level, i^T_{t−1}, and not i_{t−1}, the previous quarter's average. This hypothesis is testable, as we can nest the two specifications as follows. With the discreteness adjustment, the expression in equation (3) becomes
(4)    î_t = b_0 + b_π π_t + b_y y_t                                   Taylor rule
       i_t = (1 − λ)î_t + λ(i_{t−1} + DDV_{t−1}) + δDDV_{t−1} + v_t    Discreteness-adjusted rate smoothing
       v_t = ρv_{t−1} + ε_t                                            Autoregressive errors

where the rate inherited from the previous quarter is the end-of-period target level, i^T_{t−1} = i_{t−1} + DDV_{t−1}, if δ = 0, and it equals the previous quarter's average if δ = −λ.

Table 1
Calculation of the Discreteness Adjustment

                                            1990:Q4                       1994:Q2        2001:Q1
Days in quarter                             66                            65             65
Days within quarter when target changed     21, 33, 50, 58                12, 33         3, 23, 57
Rate change                                 –0.25, –0.25, –0.25, –0.25    0.25, 0.50     –0.50, –0.50, –0.50
Discreteness adjustment for following quarter:
  1990:Q4:  (20/66)*(–0.25) + (32/66)*(–0.25) + (49/66)*(–0.25) + (57/66)*(–0.25) = –0.5985
  1994:Q2:  (11/65)*(0.25) + (32/65)*(0.50) = 0.2885
  2001:Q1:  (2/65)*(–0.50) + (22/65)*(–0.50) + (56/65)*(–0.50) = –0.6154

English, Nelson, and Sack (2003) combine the three expressions from equation (3) into one equation that describes the changes in the federal funds rate:

(5)    Δi_t = (1 − λ)Δî_t + λΔi_{t−1} + (ρ − 1)[i_{t−1} − (1 − λ)î_{t−1} − λi_{t−2}] + ε_t,    ε_t ∼ N(0, σ²),

where î is the Taylor rule–implied level of the federal funds rate absent any interest rate smoothing or autoregressive errors due to ancillary policy concerns.
With the discreteness adjustment, the combined expression is as follows:

(6)    Δi_t = (1 − λ)Δî_t + λΔi_{t−1} + (λ + δ)ΔDDV_{t−1} + (ρ − 1)[i_{t−1} − (1 − λ)î_{t−1} − λi_{t−2} − (λ + δ)DDV_{t−2}] + ε_t.
A key feature of the specification in equations (5) and (6) is that it does not impose either hypothesis (λ = 0 or ρ = 0). Judd and Rudebusch (1998) called Δi_{t−1} a term that captured "momentum" from the previous period's funds rate change. The purpose of the discreteness adjustment, however, is to provide an accurate reflection of the momentum implied by the previous period's discrete target changes, which frees Δi_{t−1} from having to play this role.
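Before turning to the estimates, here is a sketch of how the nonlinear least-squares problem in equation (6) might be set up (schematic only, with synthetic stand-in series; the arrays i, infl, gap, and ddv would be the actual quarterly data, and scipy is our tool choice, not something specified in the article):

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    T = 81                              # 1984:Q2-2004:Q2 spans 81 quarters
    infl = 2.5 + rng.normal(0, 1, T)    # stand-ins for the real series
    gap = rng.normal(0, 1, T)
    ddv = rng.normal(0, 0.25, T)
    i = 2 + 1.5 * infl + 1.0 * gap + rng.normal(0, 0.3, T)

    def residuals(theta):
        """Residuals eps_t of equation (6), stacked over t."""
        b0, bpi, by, lam, rho, delta = theta
        ihat = b0 + bpi * infl + by * gap
        eps = []
        for t in range(2, T):
            pred = ((1 - lam) * (ihat[t] - ihat[t - 1])
                    + lam * (i[t - 1] - i[t - 2])
                    + (lam + delta) * (ddv[t - 1] - ddv[t - 2])
                    + (rho - 1) * (i[t - 1] - (1 - lam) * ihat[t - 1]
                                   - lam * i[t - 2]
                                   - (lam + delta) * ddv[t - 2]))
            eps.append((i[t] - i[t - 1]) - pred)
        return np.array(eps)

    fit = least_squares(residuals, x0=[1.0, 1.5, 0.5, 0.8, 0.3, 0.0])
    print(dict(zip(["b0", "bpi", "by", "lam", "rho", "delta"], fit.x.round(3))))

Setting delta to zero (or to -lam) in this setup reproduces the two nested hypotheses about the reference rate for smoothing.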
Nonlinear least-squares estimates for equation
(5) are shown in the first column of Table 2. The
results without the discreteness adjustment, DDV,
concur with English, Nelson, and Sack (2003) in that


both λ and ρ are significantly greater than zero, and
the Taylor rule still seems operative, given significant
coefficients on bπ and by. A well-known stability
property of Taylor rules is that the coefficient on
inflation must be greater than 1 and the estimate of
bπ=1.25 from equation (5) meets this criterion even
if its standard error is 0.50. In most cases, the analysis stops at this point, with a standard error of the regression of 33 basis points per quarter.
Our discussion of the discreteness adjustment
leads us to believe, however, that these results might
change if the empirical model incorporated information regarding target changes in the previous
quarter. Equation (4) suggests that when δ=0, the
starting point for interest rate smoothing is the endof-period target rate, rather than the most recent
quarterly average. If it is the quarterly average, on
the other hand, then a value of δ=–λ would remove
the discreteness adjustment, DDV, from equations
(4) and (6). The second column of Table 2 shows
estimates of equation (6) with an estimate of δ. Put
in the context of equation (2), this value of δ implies
that the reference rate for interest rate smoothing is i_{t−1} + 1.24(i^T_{t−1} − i_{t−1}). This coefficient is very close to the value of 1.21 that Dueker (2002) found on the same discreteness-adjustment variable in a vector autoregression. In both cases, however, the coefficient δ is not significantly different from zero, which suggests that the reference rate for interest rate smoothing is the end-of-period target rate, i^T_{t−1}, and not the most recent quarterly average, i_{t−1}. Accordingly, the last column in Table 2 shows the estimates
for equation (6) with δ set to zero and finds that the standard error of the regression is essentially unchanged from when δ = 0.24 (middle column of Table 2). In contrast, the standard error of the regression in Table 2 is 21 percent higher when the discreteness adjustment is omitted. Based on this estimate, we set δ = 0 in all subsequent model specifications.

Table 2
Taylor Rule Policy Equations with and without Discreteness Adjustment, 1984:Q2–2004:Q2

Variable                 Coefficient    No discreteness    Discreteness     Discreteness (δ = 0)
Intercept                b_0            2.28 (1.24)        1.12 (2.42)      1.24 (1.85)
Inflation                b_π            1.248 (0.498)      1.971 (0.937)    1.846 (0.706)
Output gap               b_y            0.853 (0.354)      0.948 (0.610)    1.073 (0.486)
Rate smoothing           λ              0.719 (0.108)      0.951 (0.035)    0.919 (0.041)
Autoregressive errors    ρ              0.769 (0.134)      0.281 (0.144)    0.438 (0.118)
Inherited rate           δ              —                  0.241 (0.182)    —
S.E.E.                   σ              0.333              0.275            0.275
R̄²                                     0.502              0.660            0.660

NOTE: Standard errors are in parentheses.
In terms of Taylor rule coefficients, interest rate
smoothing parameters, and autoregressive errors, the
estimates with and without the discreteness adjustment are also different. The point estimates of the
inflation and output gap response coefficients are
higher with the discreteness adjustment, although
their standard errors are relatively large. In the last
column of Table 2, the estimated values of bπ and
by are 1.85 and 1.07, respectively. In addition, the
estimated value of λ goes up and ρ goes down with
the discreteness adjustment. Instead of being roughly
equal at about 0.75 without the discreteness adjustment, λ=0.92 and ρ=0.44 with the discreteness
adjustment (last column of Table 2).
Two caveats, however, hinder us from interpreting
λ=0.92 as direct evidence of interest rate smoothing.
First, this estimate covers all quarters and thereby
mixes the roughly 40 percent of all quarters when
the target federal funds rate did not change with the
60 percent when the target did change. Second, we
need to consider the fact that monetary policymakers
would not move the target funds rate to the Taylor
rule–implied level at the beginning of each quarter;
instead they could act slowly but, by the end of the
period, set the target rate, i^T, equal to the Taylor rate,
î. As an extreme example, suppose that the target is always unchanged until two-thirds of the way through each quarter, whereupon it is set equal to the Taylor rate, î. This timing alone would result in a value of λ = 0.66 to match the quarterly average:

       i_t = 0.333î_t + 0.667i^T_{t−1}.
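A quick back-of-the-envelope check of this timing claim, with hypothetical rate levels:

    # If the target sits at last quarter's end-of-period level for the first
    # two-thirds of the quarter and then jumps to the Taylor rate, the
    # quarterly average mechanically mimics a lambda of about 0.667.
    n, target_prev, taylor = 63, 4.00, 5.00   # hypothetical values
    daily = [target_prev] * (2 * n // 3) + [taylor] * (n - 2 * n // 3)
    avg = sum(daily) / n
    print(round(avg, 3), round(0.667 * target_prev + 0.333 * taylor, 3))
    # both print 4.333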
Firm evidence of interest rate smoothing requires that î_t − i^T_t show persistence beyond any found in the autoregressive errors. To study the persistence of î_t − i^T_t, however, we would like the estimate of î_t to come from a model that recognizes that policymakers do not change the target funds rate every quarter and, hence, the standard deviation of the residual, denoted σ, will sometimes be much lower than the 28 basis points shown in Table 2.
One way to separate the target change/no target
change regimes in a predictive model is to make λ
and the standard deviation, σ, subject to regime
switching. We explore such a nonlinear Taylor rule
model of monetary policy in the next section.

ESTIMATION RESULTS FOR NONLINEAR TAYLOR RULES WITH RATE SMOOTHING
A clean test for interest rate smoothing—in the form of persistence in the gap between the Taylor rule rate and the end-of-period target, î_t − i^T_t—would

Table 3
Markov-Switching Model with Switching in Taylor Rule Intercept and Smoothing, 1984:Q2–2004:Q2

Variable                    Coefficient    No discreteness    Discreteness
Intercepts                  b_0, S1=1      –0.898 (0.819)     0.581 (0.421)
Intercepts                  b_0, S1=2      2.48 (0.657)       4.35 (0.480)
Inflation                   b_π            1.927 (0.239)      1.721 (0.167)
Output gap                  b_y            0.969 (0.164)      1.11 (0.094)
Rate smoothing              λ, S2=1        1.0                0.980 (0.003)
Rate smoothing              λ, S2=2        0.717 (0.045)      0.888 (0.033)
Autoregressive errors       ρ              0.280 (0.030)      0.00 (0.012)
Transition probabilities    p1             0.955 (0.034)      0.947 (0.043)
Transition probabilities    q1             0.951 (0.038)      0.943 (0.049)
Transition probabilities    p2             0.473 (0.137)      0.636 (0.122)
Transition probabilities    q2             0.711 (0.087)      0.763 (0.077)
S.E.E.                      σ, S2=1        0.057 (0.010)      0.014 (0.004)
S.E.E.                      σ, S2=2        0.271 (0.030)      0.303 (0.042)

NOTE: Standard errors are in parentheses.

use an estimate of the Taylor rule rate from a model
that had two key attributes: First, the model would
not use data from quarters when policymakers did
not change the target federal funds rate to estimate
Taylor rule coefficients; second, the model would
not have autocorrelated errors. The first attribute is
important because we want a model that fits the data in quarters where the target does not change with the common-sense specification, Δi_t = DDV_{t−1} = i^T_{t−1} − i_{t−1}, which requires in equation (6) that λ = 1, ρ = 0, and δ = 0 for those non-target-change observations. If the second attribute, ρ = 0, holds for all
observations, then the end-of-period target ought
to equal the Taylor rule rate under the hypothesis
of no rate smoothing.

To find a model specification that has these two
desirable attributes, we introduce Markov switching
to two of the model parameters, λ and b0, the Taylor
rule intercept. The intent is to allow complete
smoothing (λ₁ = 1) in one of the states, which ought
to coincide fairly well with the periods when the
target does not change. The value of λ in the other
state, λ2, could take on a lower value. The objective
behind regime switching in the Taylor rule intercept,
b0, is to lower or eliminate autocorrelation in the
model errors. In a Taylor rule, the intercept b_0 = r* + (1 − b_π)π*, where r* is the equilibrium short-term
real interest rate and π* is the inflation target. Because
temporary changes in some combination of r* and
π* could occur across the business cycle or in periods
of financial market upset, it seems natural to investigate whether variation in b_0 could remove some or all of the autocorrelation in the errors.

[Figure 3: Smoothed Probability of the Low Taylor Rule Intercept, 1985–2003. NOTE: Shaded areas are recessions (as determined by the National Bureau of Economic Research).]
We allow the regime switching in these two
parameters to take place independently through
two separate state variables, S1 and S2. Thus, the
Taylor rule rate with regime switching is

(7)    î_t = b_{0,S1_t} + b_π π_t + b_y y_t,    Taylor rule

where S1_t = 1, 2 is an unobserved state variable and the transition probabilities are denoted p1 = Pr(S1_t = 1 | S1_{t−1} = 1) and q1 = Pr(S1_t = 2 | S1_{t−1} = 2).
The rate-smoothing equation with regime switching is

(8)    i_t = (1 − λ_{S2_t})î_t + λ_{S2_t}(i_{t−1} + DDV_{t−1}) + σ_{S2_t}e_t,

where we also allow the variance to depend on the state variable because we expect much lower variance in the state where λ ≈ 1. For the second state variable, S2, we report parameter estimates for both fixed transition probabilities and time-varying transition probabilities. The fixed transition probabilities are denoted p2 = Pr(S2_t = 1 | S2_{t−1} = 1) and q2 = Pr(S2_t = 2 | S2_{t−1} = 2).
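As an illustration of how the smoothing equation (8) behaves across regimes, here is a simulation sketch with parameter values loosely based on Table 3 (the data-generating choices are ours, not the article's):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 80
    lam = {1: 1.0, 2: 0.89}       # S2=1: complete smoothing; S2=2: feedback
    sigma = {1: 0.01, 2: 0.30}    # residual variance is regime dependent
    p2, q2 = 0.64, 0.76           # Pr(stay in state 1), Pr(stay in state 2)
    ihat = 4 + 0.1 * rng.normal(0, 1, T).cumsum()  # stand-in Taylor rule rate
    ddv = rng.normal(0, 0.2, T)                    # stand-in discreteness terms
    i, s = np.zeros(T), 1
    i[0] = ihat[0]
    for t in range(1, T):
        stay = p2 if s == 1 else q2
        s = s if rng.random() < stay else 3 - s    # Markov regime switch
        i[t] = ((1 - lam[s]) * ihat[t]
                + lam[s] * (i[t - 1] + ddv[t - 1])
                + sigma[s] * rng.normal())

In the S2 = 1 state the rate simply carries forward the end-of-period target, with almost no noise; in the S2 = 2 state it accepts feedback from the Taylor rule rate.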
Parameter estimates for the model with fixed
transition probabilities are in Table 3. Figure 3 shows
the smoothed probability of the low Taylor rule
intercept (S1=1) from the model with the discreteness adjustment. The periods when the Taylor rule
intercept is low coincide roughly with periods when
interest rates experience cyclical fluctuations around
recessions, which are shaded in the chart. Importantly, the model with the discreteness adjustment
is able to separate periods when the target changed
from periods when it did not, because the variance
parameters are farther apart for that model. The
model with the discreteness adjustment is also able
to eliminate the autocorrelation in the errors, whereas
the model without the discreteness adjustment still
has significantly autocorrelated errors, with ρ=0.28.
Without constraining λ, it takes a value very close
to 1.0 when S2=1. Even when it equals 0.98 in the
model with the discreteness adjustment, the economic effect of this difference in terms of basis points
is negligible. In other words, the model imputes
essentially zero input from the Taylor rule rate when
λ =0.98.
Figure 4 shows how well the smoothed probability of S2=1 matches periods when the target federal
funds rate did not change for the model with the
discreteness adjustment and the parameter values

FEDERAL R ESERVE BANK OF ST. LOUIS

Dueker and Rasche

from Table 3.

[Figure 4: Smoothed Probability of S2 = 1 and Periods of Unchanged Target, 1985–2003]

The correspondence is quite close,
suggesting that the Taylor response coefficients are
not attempting to explain much about the quarters
when policymakers left the target rate unchanged,
because λ is very close to 1.0 in that state.
Despite these apparently strong results, constant
transition probabilities are not completely satisfactory for switching in the parameter λ. We would
expect that policymakers would respond systematically to economic developments when deciding in
which periods to leave the target unchanged and set
λ to 1. In reality, these no-change periods have an
endogenous component and are not solely the result
of coin flips. One natural variable to use to predict
whether a target change will occur is Z_t = abs(î_t − i^T_{t−1}),
the gap between the Taylor rule rate and the most
recent end-of-period target federal funds rate. If the
size of this gap is large in absolute value, then we
would expect that a target change and the regime
where λ <1 are more likely. With this explanatory
variable, we parameterize the time-varying transition
probabilities for S2 as
(9)    Pr(S2_t = 1 | S2_{t−1} = 1) = exp(c_0 + c_1Z_t) / [1 + exp(c_0 + c_1Z_t)]
       Pr(S2_t = 2 | S2_{t−1} = 2) = exp(d_0 + d_1Z_t) / [1 + exp(d_0 + d_1Z_t)].
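In code, the logistic mapping in equation (9) is simply the following (the c and d values are the discreteness-column estimates from Table 4 below; the gap Z is a hypothetical number):

    import math

    def logit_prob(const, slope, z):
        """exp(const + slope*z) / (1 + exp(const + slope*z)), as in (9)."""
        x = const + slope * z
        return math.exp(x) / (1.0 + math.exp(x))

    z = 0.75  # hypothetical gap abs(ihat_t - target_{t-1}), percentage points
    print(logit_prob(0.261, 0.355, z))   # Pr(stay in no-change state S2=1)
    print(logit_prob(0.427, 1.236, z))   # Pr(stay in change state S2=2)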

Because S2 = 1 is the state where λ ≈ 1 and the target is less likely to change, we would expect to find
c1<0 and d1>0. These signs would mean that monetary policymakers are more likely to accept feedback
from the Taylor rule rate when the gap between
the Taylor rule rate and the prevailing target rate,
abs(î_t − i^T_{t−1}), is large.
Parameter estimates for the Markov-switching
model with time-varying transition probabilities are
in Table 4. The estimates are relatively unchanged
from Table 3. The only significant coefficient on Z_t is c_1 < 0 in the model without the discreteness adjustment. This coefficient implies that, if abs(î_t − i^T_{t−1}) is large, then policymakers are likely to switch out of the state where λ ≈ 1 if they had been in that state.
In the model with the discreteness adjustment, d1
has a point estimate above zero, as expected, but it
is not statistically significant. Thus, although we have
presented a framework for predicting when monetary policymakers are likely to keep the federal funds
target unchanged, we have not yet identified a significant explanatory variable for the time-varying
transition probabilities. For this reason, we report
estimates from the fixed transition probability
model to examine persistence in the gap between
the Taylor rule rate and the end-of-period federal
funds target rate, î_t − i^T_t. Because these regimes are
endogenous, further research on the process governing target rate changes is needed.

Table 4
Markov-Switching Model with Time-Varying Transition Probabilities on the Smoothing Parameter, 1984:Q2–2004:Q2

Variable                    Coefficient    No discreteness    Discreteness
Intercepts                  b_0, S1=1      –0.818 (1.17)      0.579 (0.393)
Intercepts                  b_0, S1=2      2.46 (0.488)       4.91 (0.697)
Inflation                   b_π            1.877 (0.276)      1.728 (0.161)
Output gap                  b_y            1.049 (0.244)      1.070 (0.092)
Rate smoothing              λ, S2=1        0.999 (0.008)      0.986 (0.003)
Rate smoothing              λ, S2=2        0.738 (0.070)      0.890 (0.023)
Autoregressive errors       ρ              0.294 (0.026)      0.006 (0.010)
Transition probabilities    c_0            433.8 (75.7)       0.261 (1.00)
Transition probabilities    c_1            –548.8 (145.6)     0.355 (0.633)
Transition probabilities    d_0            1.11 (1.00)        0.427 (0.819)
Transition probabilities    d_1            –0.26 (0.77)       1.236 (1.144)
Transition probabilities    p1             0.974 (0.023)      0.943 (0.035)
Transition probabilities    q1             0.933 (0.025)      0.835 (0.107)
S.E.E.                      σ              0.195 (0.018)      0.245 (0.070)
S.E.E.                      σ, S2=1        0.052 (0.009)      0.267 (0.029)

NOTE: Standard errors are in parentheses.

In this vein,

Hamilton and Jorda (2002) present an autoregressive conditional hazard model of the target federal
funds rate. Similarly, a dynamic ordered probit model of changes in the bank prime rate, of the type that Dueker (1999) estimated, could be applied to target changes.

A MEASURE OF INTEREST RATE
SMOOTHING
As discussed above, only models in which the
errors are not autocorrelated, ρ=0, imply that the
end-of-period target ought to equal the Taylor rule
rate under the hypothesis of no rate smoothing. A
comparison of Tables 2, 3, and 4 shows that only the
Markov-switching models with the discreteness
adjustment eliminate the autocorrelation in the
model errors. Consequently, a direct measure of the
degree of interest rate smoothing is the correlogram
of î_t − i^T_t from these two models.
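A correlogram like the one reported in Table 5 below can be computed directly (a generic sketch; `gap` stands in for the model-implied series î_t − i^T_t):

    import numpy as np

    def correlogram(x, max_lag=10):
        """Autocorrelations of x at lags 1..max_lag."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        denom = np.dot(x, x)
        return [float(np.dot(x[k:], x[:-k]) / denom)
                for k in range(1, max_lag + 1)]

    gap = 0.1 * np.random.default_rng(1).normal(size=81).cumsum()  # stand-in
    for lag, ac in enumerate(correlogram(gap), start=1):
        print(lag, round(ac, 3))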
However, because the value of the likelihood
function barely changes between Tables 3 and 4
for the model with the discreteness adjustment, we

concentrate on the results from Table 3, the specification with fixed transition probabilities. Figure 5 shows the Taylor rule rate implied by the Table 4 estimates with the end-of-period target.

[Figure 5: Model-Implied Taylor Rule Rate and the Federal Funds Target, 1985–2003 (percent)]

In general,
the Taylor rule rate leads the target when the target
rises and falls. It is remarkable, therefore, that the
target, on its descent between 2001 and 2003, did
not lag the Taylor rule rate. Monetary policymakers
apparently were not smoothing the interest rate as
the economy went into recession in 2001. Table 5’s
correlogram of the difference between the modelimplied Taylor rule rate and the target federal funds
rate shows that Federal Reserve policymakers close
the gap within about six quarters on average. Thus,
the degree of interest rate smoothing is considerable
but has a relatively short horizon.

Table 5
Correlogram of the Gap Between the Federal Funds Target Rate and the Markov-Switching Taylor Rule Rate with Discreteness Adjustment, 1984:Q2–2004:Q2

Lag    Autocorrelation
1       0.826
2       0.655
3       0.447
4       0.269
5       0.167
6       0.079
7       0.054
8      –0.036
9      –0.114
10     –0.188

SUMMARY AND CONCLUSIONS
This article points out that discrete changes to the target federal funds rate are a clear source of predictable change in the monthly or quarterly average of the daily federal funds rates. Figure 2 suggests that the adjustment for the discrete target changes accounts for what is perhaps a surprising amount of the sample variance of the changes in the quarterly average. Thus, the discreteness adjustment carries the potential to overturn estimation results that involve the monthly or quarterly average of the federal funds rate. We present such an example by examining the debate concerning interest rate smoothing in policy rules. Without the discreteness
adjustment, estimation results suggest that interest
rate smoothing is not the only source of gradualism
in interest rate changes. With the discreteness
adjustment, however, the empirical results strongly
favor interest rate smoothing as the source of gradualism in federal funds rate changes. We also show
that the discreteness adjustment affects empirical
results concerning the policy equations even in relatively rich models that include regime switching.
The Markov-switching framework we present
is adept at separating the regime where policymakers
change the target from the regime where they do not.
This framework can employ explanatory variables
in the transition probabilities to predict these regimes
ahead of time. Future work can concentrate on studying the determinants of the target change decisions
of policymakers.

REFERENCES
Bernanke, Ben S. and Blinder, Alan S. “The Federal Funds
Rate and the Channels of Monetary Transmission.”
American Economic Review, September 1992, 82(4), pp.
901-21.
Clarida, Richard; Galí, Jordi and Gertler, Mark. “Monetary
Policy Rules and Macroeconomic Stability: Evidence and
Some Theory.” Quarterly Journal of Economics, February
2000, 115(1), pp. 147-80.
Dueker, Michael J. “The Monetary Policy Innovation
Paradox in VARs: A ‘Discrete’ Explanation.” Federal
Reserve Bank of St. Louis Review, March/April 2002,
84(2), pp. 43-49.
Dueker, Michael J. “Conditional Heteroscedasticity in
Qualitative Response Models of Time Series: A Gibbs-Sampling Approach to the Bank Prime Rate.” Journal of
Business and Economic Statistics, October 1999, 17(4),
pp. 466-72.

English, William B.; Nelson, William R. and Sack, Brian P.
“Interpreting the Significance of the Lagged Interest Rate
in Estimated Monetary Policy Rules.” Contributions to
Macroeconomics, 3(1), 2003, http://www.bepress.com/
bejm/contributions/vol3/iss1/art5.
Hamilton, James D. and Jorda, Oscar. “A Model of the
Federal Funds Rate Target.” Journal of Political Economy,
October 2002, 110(5), pp. 1135-67.
Judd, John P. and Rudebusch, Glenn D. “Taylor’s Rule and
the Fed: 1970-1997.” Federal Reserve Bank of San
Francisco Review, 1998, (3), pp. 3-16.
Lubik, Thomas A. and Schorfheide, Frank. “Testing for
Indeterminacy: An Application to U.S. Monetary Policy.”
American Economic Review, March 2004, 94(1), pp. 190-217.
Orphanides, Athanasios. “Monetary Policy Rules Based on
Real-Time Data.” American Economic Review, September
2001, 91(4), pp. 964-85.
Rudebusch, Glenn D. “Term Structure Evidence on Interest
Rate Smoothing and Monetary Policy Inertia.” Journal of
Monetary Economics, September 2002, 49(6), pp. 1161-87.
Sack, Brian. “Does the Fed Act Gradually? A VAR Analysis.”
Journal of Monetary Economics, August 2000, 46(1), pp.
229-56.
Taylor, John B. “Discretion versus Policy Rules in Practice.”
Carnegie-Rochester Conference Series on Public Policy,
December 1993, 39, pp. 195-214.
Woodford, Michael. “Optimal Interest-Rate Smoothing.”
Review of Economic Studies, October 2003, 70(4), pp.
861-86.