
Federal Reserve Bank of Chicago

Economic Perspectives


Economic Perspectives

President
Michael H. Moskow
Senior Vice President and Director of Research
William C. Hunter

Research Department
Financial Studies
Douglas Evanoff, Vice President

Macroeconomic Policy
Charles Evans, Vice President

Microeconomic Policy
Daniel Sullivan, Vice President
Regional Programs
William A. Testa, Vice President
Economics Editor
David Marshall

Editor
Helen O’D. Koshy
Associate Editor
Kathryn Moran

Production
Julia Baker, Rita Molloy,
Yvonne Peeples, Nancy Wellman
Economic Perspectives is published by the Research Department of the Federal Reserve Bank of Chicago. The views expressed are the authors’ and do not necessarily reflect the views of the Federal Reserve Bank of Chicago or the Federal Reserve System.
Single-copy subscriptions are available free of charge. Please send requests for single- and multiple-copy subscriptions, back issues, and address changes to the Public Information Center, Federal Reserve Bank of Chicago, P.O. Box 834, Chicago, Illinois 60690-0834, telephone 312-322-5111 or fax 312-322-5515.
Economic Perspectives and other Bank publications are available on the World Wide Web at http://www.chicagofed.org.

Articles may be reprinted provided the source is credited
and the Public Information Center is sent a copy of the
published material. Citations should include the following
information: author, year, title of article, Federal Reserve
Bank of Chicago, Economic Perspectives, quarter, and
page numbers.
ISSN 0164-0682

Contents
First Quarter 2003, Volume XXVII, Issue 1

2   An evaluation of real GDP forecasts: 1996-2001
Spencer Krane
During the second half of the 1990s, forecasters made large and persistent underpredictions of GDP growth; subsequently, they missed the drop-off into the recession of 2001. Forecasters do not appear to have behaved unusually during this period: Their out-period forecasts were not far from their perceptions of longer-run trends. This suggests that the forecast errors in 1996-2001 likely reflected some unusual behavior in the economy.

22  Inflation and monetary policy in the twentieth century
Lawrence J. Christiano and Terry J. Fitzgerald
This article characterizes the change in the nature of the money growth-inflation and unemployment-inflation relationships between the first and second halves of the twentieth century. The changes are substantial, and the authors discuss some of the implications for modeling inflation dynamics, notably for models of inflation that say that bad inflation outcomes result from poorly designed monetary policy institutions.

Conference on Bank Structure and Competition announcement

48  Bankruptcy law and large complex financial organizations: A primer
Robert R. Bliss
Large complex financial organizations (LCFOs) are exposed to multiple problems when they become insolvent. They operate in countries with different approaches to bankruptcy and, within the U.S., multiple insolvency administrators. The special financial instruments that comprise a substantial portion of LCFO assets are exempted from the usual “time out” that permits the orderly resolution of creditor claims. This situation is complicated by the opacity of LCFOs’ positions, which may make them difficult to sell or unwind in times of financial crisis. This article discusses these issues and their origins.

59  Economic perspective on the political history of the Second Bank of the United States
Edward J. Green
The Second Bank of the United States was an institution of first-rank importance, both politically and economically, during the early nineteenth century. This article uses recent contributions to theory on industrial organization and monetary economics to argue tentatively that conflict between debtors and creditors may have played a larger role in the bank’s fortunes than previously thought.

An evaluation of real GDP forecasts: 1996-2001
Spencer Krane

Introduction and summary
Increases in real U.S. gross domestic product (GDP)
averaged an annual rate of 3.2 percent between the
fourth quarters of 1991 and 1995 (the solid line in
panel A of figure 1), a relatively slow pace of growth
considering that the economy was emerging from the
1990-91 recession. Output then surged in the second
half of the decade, with current estimates showing real
GDP rising at an average annual rate of 4.4 percent
over the 1996-99 period. At the same time, inflation fell,
with the rate of increase in consumer prices (measured
by the Consumer Price Index, or CPI) moving from
5.4 percent in 1990 to an average of just 2.4 percent
in the second half of the decade (solid line in panel B).
The bars in the graphs show average forecasts of real
GDP growth and CPI inflation made at the beginning
of each year.1 Between 1996 and 1999, average real
GDP forecasts were in the range of 2.1 percent to 2.3
percent, while the CPI forecasts were in the range of
2.2 percent to 3 percent. Clearly, forecasters failed to
predict the outstanding performance of the economy—
they consistently underpredicted GDP growth and,
though to a lesser degree, they overpredicted inflation.
At the turn of the millennium, forecasts for real GDP growth were in the range of about 3 percent to 3.5 percent. While not quite as robust as the actual rates of growth recorded during the second half of the decade, this still represented a solid gain in output and a step up from the projections made in that earlier period. Instead, in the second half of 2000, the expansion began to falter. The weakness intensified in early 2001, with the economy falling into recession in March. So again, forecasters failed to predict a major development in the economy.
How should we interpret these forecast errors? The economy is always being hit by shocks, and real GDP growth naturally fluctuates a great deal. Furthermore, recessions are irregular occurrences that can be generated by a variety of unforeseeable events. So, were the forecast errors during the 1996-2001 period unusual, or did they simply reflect the inherent difficulties in forecasting? If the errors were unusual, then why is this so? In particular, did forecasters change the way that they were constructing projections, or did the economy behave in an unusual manner? This article addresses these questions.
To do so, I first present a narrative account of the evolution of real GDP forecasts made during the 1996-2001 period. This narrative shows, qualitatively, that forecasters appeared to view most of the errors they were experiencing during the 1996-99 period as transitory and left GDP projections at a pace just somewhat below their benchmarks for longer-run growth. However, around the turn of the millennium, they boosted their projections for GDP growth, both for the long run and the nearer term. Indeed, they did so just around the time that the economy began to weaken.
This strategy clearly resulted in some large and, during 1996-99, persistent forecast errors for real GDP. I next show that, statistically, the 1996-99 errors were unusual—based on forecasters’ track records, the odds of seeing such a string of underpredictions were quite small. The forecast errors in 2000 and 2001, though large in an absolute sense, were not so significant relative to the performance around earlier turning points in the economy.
Next, I examine whether the errors were influenced by some change in the way forecasters were making their projections. I use semiannual data back to the early 1980s to characterize the “typical” way that forecasters adjust projections for growth at various forecast horizons. I find that forecasters appear to view most shocks as being transitory—they may alter their near-term outlook in response to incoming data, but they generally do not change medium- and longer-term forecasts very much. This means that perceptions of longer-run trends—or potential GDP growth—provide an important anchor for projections more than a couple quarters out. As just noted, this characterization seems to describe the forecasts made between 1996 and 1999. Some other identifiable factors, such as recessions or shifts in economic policy, also have had a regular statistical influence on medium-term forecasts. However, such factors did not seem to be in play during the second half of the 1990s, while in 2001, forecasters appeared to react in a fairly typical fashion to the signals that the economy was weakening. Accordingly, forecasters probably did not behave unusually during the 1996-2001 period.
These results suggest that the forecast errors during this time likely reflect some unusual behavior in the economy. The final portion of this article discusses a couple of important candidates. First, during the second half of the 1990s, there was a marked and persistent pick-up in productivity growth, a rare development given the mature stage of the business cycle. Thus, the surprising step-up in actual GDP growth around mid-decade may have reflected the response of households and businesses to more robust underlying trends in productivity. Second, much of the downshift in overall economic activity in 2000 and 2001 reflected a surprisingly abrupt swing from boom to bust in business fixed investment. This swing seemed to accompany a rather sharp reassessment by financial markets and businesses of the earnings potential of certain investment projects, particularly in the high-technology area. To be sure, claims were made in the late 1990s that a high tech “bubble” had developed. But not only are such phenomena problematic to identify ex ante, predicting the timing and magnitude of any “bursting of the bubble” is virtually impossible. Indeed, at the turn of the millennium, even the more pessimistic forecasters thought that real GDP would rise at more than a 2 percent pace in 2000 and 2001.

Of course, the benefit of hindsight allows us to analyze history with some knowledge of the important shocks that hit the economy and of the responses of households and businesses to those events. Forecasters do not have this luxury. By their very nature, shocks are unknowable in advance. And once shocks begin to unfold, forecasters must make numerous judgment calls regarding their magnitude and persistence. If the surprises are unusual—such as those during the 1996-2001 period—history provides little guidance on how to make such judgments. Forecasting is further complicated by the fact that incoming data rarely provide a clear-cut reading on the course of events and because a good deal of time must pass before any persistent change in the economy can be identified with much statistical confidence. As a result, real-time forecasting is a much more difficult exercise than dissecting the performance of projections after the fact.

Spencer Krane is a vice president and economic advisor at the Federal Reserve Bank of Chicago. The author would like to thank Charlie Evans, Helen Koshy, Tina Lam, David Marshall, seminar participants at the Chicago Fed, and, especially, Michael Munley for helpful comments and assistance. He also would like to acknowledge Aspen Publishers for allowing use of the Blue Chip data.

The data

The forecasters
For the sake of generality, I consider five widely cited public and private sector forecasts. The forecasts are best described as judgmental, although many are informed to varying degrees by econometric models. Three important public agencies publish forecasts twice a year: The members of the Federal Open Market Committee (FOMC) and other District Federal Reserve Bank presidents present projections in their semiannual Monetary Policy Reports to Congress; and the Administration and the Congressional Budget Office (CBO) publish forecasts in conjunction with the submission and mid-session reviews of the President’s Budget.2 Many private-sector economists publish macroeconomic forecasts. I use two commonly cited averages—the consensus outlook published by Blue Chip Economic Indicators and the median projections from the Federal Reserve Bank of Philadelphia’s Survey of Professional Forecasters (SPF) (see Croushore, 1993). Blue Chip forecasts are made each month, while the SPF is published quarterly. The current Blue Chip sample covers 52 forecasters, while the SPF covers about 35; the samples share about 15 respondents.
The forecasts
The variables projected, forecast periodicity, forecast horizon, and conditioning information vary among these forecasts. Notably, projections for the current year are available from each of these sources, but the FOMC projects the following year only in its mid-year report. Forecasts for quarterly data are available only for the Blue Chip and the SPF. All the forecasts include projections of real GDP, an inflation measure, the unemployment rate, and, with the exception of the FOMC, some interest rate. Wide sectoral detail, however, is available only for the SPF. All of the forecasters except the FOMC publish “long-run” projections, although the exact definition of “long-run” and the availability of these forecasts vary somewhat across forecasters and over time.
I often refer to “early year” and “mid-year” projections for real GDP growth. The early year forecasts all are published in February, though some (notably the Administration’s) often are completed a couple of months earlier. The mid-year FOMC and Blue Chip forecasts are released in early July, the SPF in August, while the exact month that the Administration and CBO mid-session reviews are released varies through the summer. I also make use of Blue Chip forecasts made in March and October, the two months when long-term forecasts are collected. Current-year forecasts refer to projections made for the increase in real GDP between the fourth quarter of the previous year and the fourth quarter of the current calendar year. Half-year forecasts refer to annualized growth between the fourth quarter of the previous year and the second quarter of a year or from the second to fourth quarters of the same year. In addition, in December 1991 the U.S. Bureau of Economic Analysis (BEA) moved from using gross national product (GNP) to using GDP as the featured measure of aggregate output; I use the forecasts for GNP prior to 1991.
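To make these growth conventions concrete, the sketch below computes a Q4-to-Q4 percent change and annualized half-year growth rates from a hypothetical path of quarterly real GDP levels. The function names and numbers are illustrative only; they are not from the article’s data.

# Sketch of the growth conventions used in the text (illustrative numbers only).

def pct_change(new, old):
    """Percent change between two levels."""
    return 100 * (new / old - 1)

def annualized(new, old, quarters):
    """Annualized percent growth between two quarterly levels."""
    return 100 * ((new / old) ** (4 / quarters) - 1)

# Hypothetical quarterly real GDP levels: [prior Q4, Q1, Q2, Q3, Q4].
gdp = [100.0, 100.8, 101.7, 102.5, 103.4]

q4_to_q4 = pct_change(gdp[4], gdp[0])        # current-year forecast concept
h1 = annualized(gdp[2], gdp[0], quarters=2)  # first-half growth, annual rate
h2 = annualized(gdp[4], gdp[2], quarters=2)  # second-half growth, annual rate

print(f"Q4-to-Q4 growth: {q4_to_q4:.1f} percent")
print(f"H1 annualized:   {h1:.1f} percent")
print(f"H2 annualized:   {h2:.1f} percent")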
Reference data
When comparing forecasts to outcomes, one must decide which vintage of the National Income and Product Accounts (NIPA) to use for the “actual” values of GDP and its components. At various times, I present calculations based on different vintages of the NIPA in order to compare forecasts with the historical data in hand when a particular projection was made or to highlight other features of the data. For the most part, I construct forecast errors by comparing projections with the “third” or “final” estimates of the NIPA. When a comprehensive revision has occurred between the time a forecast was made and the third estimate is released, I adjust the forecast error or other data presentations for the average revision to GDP growth over the previous several years. This purges the analysis of the influence of the rebasing of GDP or major definitional changes that occur with comprehensive revisions but most likely were not incorporated in earlier forecasts.

Forecasting experience of the late 1990s and the 2001 recession
Below, I present a narrative account of the evolution of real GDP forecasts made during the high-growth period of the second half of the 1990s and around the 2001 recession. The discussion highlights the errors experienced during these periods and some apparent regularities in forecasting procedures that might help explain these errors. Table 1 presents the early year and mid-year forecasts for GDP growth over the 1996-2001 period. Table 2 shows forecasters’ assumptions for the longer-run trends in GDP and productivity.


TABLE 1
Current-year real GDP forecasts, 1996-2001
(Q4-to-Q4 percent change in real GDP)

                         1996   1997   1998   1999   2000   2001
Early year
FOMC                      2.1    2.1    2.4    2.8    3.6    2.3
Administration            2.2    2.0    2.0    2.0    2.9    3.2
CBO                       2.1    2.1    2.3    1.8    2.9    2.6
Blue Chip                 2.0    1.9    2.1    2.4    3.2    2.3
SPF                       2.0    2.3    2.2    2.5    3.1    2.5

Mid-year
FOMC                      2.6    3.1    3.1    3.6    4.3    1.6
Administration            2.1    3.0    2.4    3.2    3.9    1.7
CBO                       2.1    3.0    2.9    3.6    4.0    1.7
Blue Chip                 2.6    3.2    3.0    3.5    3.9    1.8
SPF                       2.8    3.1    3.0    3.2    4.2    1.5

Actual
Third NIPA estimate       3.1    3.7    4.3    4.2    3.4    0.5
Currently published       4.1    4.3    4.8    4.3    2.3    0.1

H1 error
Blue Chip                 1.1    2.0    1.5    1.6    1.4   -0.6
SPF                       1.4    1.2    1.3    0.5    1.9   -0.5

H2 revision
Blue Chip                 0.1    0.6    0.3    0.7    0.1   -0.8
SPF                       0.1    0.5    0.3    1.0    0.2   -1.5

Notes: The National Income and Product Accounts (NIPA) estimate for the Q4-to-Q4 increase in real GDP in 1999 (published in March 2000) was 4.6 percent; the figure in the table is adjusted for the comprehensive revisions to the NIPA that occurred in December 1999. Currently published are the data published in the 2002 annual NIPA revision. H1 error and H2 revisions are percentage points, annual rate. Since the mid-year Blue Chip forecast is from July, second-quarter data are not yet available; its first-half error is based on actual data for Q1 and the July forecast for Q2.
Sources: Federal Open Market Committee (FOMC), 1979-2001, Federal Reserve Board Monetary Policy Reports to Congress; Administration, 1979-2001, The Budget of the United States Government, submissions and mid-session reviews, and 1979-2001, Economic Report of the President; Congressional Budget Office (CBO), 1979-2001, The Economic and Budget Outlook, submissions and mid-year updates; Blue Chip, 1978-2001, Blue Chip Economic Indicators, various issues; Federal Reserve Bank of Philadelphia, Survey of Professional Forecasters (SPF); and Actual: U.S. Bureau of Economic Analysis, National Income and Product Accounts.

Background—Forecasts during the early 1990s
The recovery from the 1990-91 recession was weak. Typically, the economy experiences a period of above-trend growth following a recession, as households and businesses catch up on postponed spending and inventories adjust to increases in demand. But, based on data in hand in mid-1992, output rose just 1.6 percent between 1991:Q1 and 1992:Q1, well below the average gain of roughly 5 percent recorded during the first year of the previous five expansions.
Many observers thought that “headwinds”—such as banks’ efforts to meet capital standards and dislocations from the downsizing of the defense industry—were holding back the recovery. But even once these headwinds subsided, forecasters were not expecting much make-up for the lost growth. Instead, at 2.7 percent, the average of the early year forecasts for real GDP growth between 1992 and 1995 was just a bit above the generally prevailing views of the economy’s long-run growth potential. And these forecasts were fairly accurate: Actual growth averaged 2.6 percent. At about 1.0 percent, the root mean squared forecast errors (RMSE) of the forecasts were well below their longer-run averages (see table 3).

Forecasts during the second half of the 1990s
Given the relatively lackluster performance of the economy over the previous five years, forecasters entered the second half of the decade with modest expectations. In early 1996, real GDP was estimated to have increased less than 1.4 percent (annual rate) over the first three quarters of 1995.3 Forecasters thought that some of this weakness would persist, and the early year projections for growth in 1996 were all close to 2 percent. Instead, according to the third NIPA estimates, real GDP rose 3.1 percent that year. Forecasters’ early year projections for 1997 and 1998 were not much different from those in 1996—all looked for real GDP to rise between 1.9 percent and 2.4 percent. Some upped their projections three-tenths or four-tenths of a percentage point in 1999. But, in each year, output came in much stronger than expected: Real GDP rose 3.7 percent in 1997, 4.3 percent in 1998, and 4.2 percent in 1999.4

TABLE 2
Evolution of long-run forecasts
(percent change, annual rate)

                    1996-98    1999    2000    2001    2002
Real GDP
Administration      2.3-2.4    2.4     3.0     2.9     3.1
CBO                 2.0-2.2    2.3     2.8     3.1     3.1
Blue Chip           2.3-2.5    2.5     3.1     3.4     3.2
SPF                 2.3-2.5    2.5     3.1     3.3     3.0

Productivity
Administration      1.2-1.3    1.3     2.0     2.3     2.1
CBO                 1.1-1.5    1.8     2.3     2.7     2.2
Blue Chip           N.A.       N.A.    N.A.    N.A.    N.A.
SPF                 1.3-1.5    1.6     2.4     2.5     2.1

Actual              1991:Q4-   1995:Q4-
                    1995:Q4    2000:Q4
GDP                    2.6        3.9
Productivity           1.1        2.5

Notes: Long-run forecasts are from early year Administration, Congressional Budget Office (CBO), and Survey of Professional Forecasters (SPF) forecasts and the March Blue Chip. Due to changes in reporting, the horizons used to determine the long run for the Administration and CBO forecasts vary somewhat over time. Actual data for 1991:Q4-95:Q4 are as published in March 1996; actual data for 1995:Q4-2000:Q4 are as published in the 2002 annual NIPA revision. N.A. indicates not available.
Sources: Federal Open Market Committee (FOMC), 1979-2001, Federal Reserve Board Monetary Policy Reports to Congress; Administration, 1979-2001, The Budget of the United States Government, submissions and mid-session reviews, and 1979-2001, Economic Report of the President; Congressional Budget Office (CBO), 1979-2001, The Economic and Budget Outlook, submissions and mid-year updates; Blue Chip, 1978-2001, Blue Chip Economic Indicators, various issues; Federal Reserve Bank of Philadelphia, Survey of Professional Forecasters; Actual: U.S. Bureau of Economic Analysis, National Income and Product Accounts; and U.S. Department of Labor, Bureau of Labor Statistics.

All told, the early year forecasts shown in table 1 underpredicted real GDP growth by between 0.9 and 2.4 percentage points during the 1996-99 period. Thus, the most obvious characteristic of these forecasts is that, in contrast to the 1992-95 period, the errors made during the second half of the decade were persistently positive and they were large.
These forecasts exhibit another interesting feature. The fact that forecasters did not make substantial changes to their GDP projections suggests that they thought the errors they were experiencing largely reflected transitory shocks or factors that would be offset by other developments. This view is supported by the mid-year forecasts shown in table 1. While these all generally looked for stronger growth than the early year projections, the differences largely reflect the incorporation of data in hand for the first half of the year. This can be seen using the quarterly forecasts made by Blue Chip and SPF. Table 1 presents the errors in the early year forecasts for real GDP growth in the first half of the year and the revisions made at mid-year to forecasts of second-half growth.5 In 1996, 1998, and 2000, forecasters made large errors in the first half of the year but did not revise their second-half projections very much. Modest upward adjustments were made in 1997, but these still left the second-half forecasts below 2.7 percent. In 1999, the forecasters made more substantial upward revisions to their projections for growth in the second half of the year, pushing them above the 3 percent mark.
If most variations in GDP growth are viewed as transitory, then perceptions of longer-run trends in growth must be an important factor anchoring the annual GDP forecasts. Indeed, between 1996 and early 1999, the published assumptions for long-run growth were all in the range of 2 percent to 2.5 percent (table 2). And in each year, the early year forecasts for annual growth were generally just somewhat below these assumed longer-run trends. However, after four years of persistently strong growth and low inflation, in late 1999 and early 2000 forecasters began to boost their assumptions for long-run real GDP growth to around 3 percent. Thus, it probably is not a coincidence that around this time forecasters’ mid-year projections also included a substantial upward revision to the projection of growth in the second half of the year.

Forecasts for 2000 and 2001
Forecasts made early in 2000 were looking for real GDP to rise between 2.9 percent and 3.6 percent that year—close to forecasters’ updated perception of potential growth. In the event, growth in the first half of the year was quite robust. According to the estimates in hand at mid-year, real GDP advanced at an annual rate of 5.5 percent in the first quarter of the year and likely posted another healthy gain in the second quarter. Forecasters again did not think this “extra” strength would persist. For example, the Blue Chip and SPF mid-year forecasts for growth in 2000:H2 were both about 3.3 percent, and forecasts made at this time for real GDP growth in 2001 averaged about that pace. But instead of simply settling down to trend, GDP growth surprisingly collapsed during the second half of 2000. According to the NIPA estimates available in March 2001, real GDP growth slowed to a 2.2 percent rate in 2000:Q3 and a 1 percent pace in 2000:Q4.6
In response, forecasters began to project slower growth, with most early-year forecasts for the increase in real GDP in 2001 running between 2.3 percent and 2.6 percent. By mid-2001, the economic picture had soured further, and forecasters marked their projections for growth down substantially. That said, the changes were not large enough. The mid-year forecasts were clustered in the range of 1.5 percent to 1.8 percent. Instead, according to revised estimates published in July 2002, real GDP changed little over the four quarters of 2001—and it fell at an average annual rate of 0.8 percent over the first three quarters of the year. Thus, despite the downward revisions, forecasters failed to predict the 2001 recession.
But, again, forecast errors are not the complete story. Notably, relative to the 1996-99 period, the projections for growth in 2001 were adjusted quickly. For example, the early year Administration forecast for 2001 was based on the data on hand as of the middle of 2000:Q4. It projected real GDP growth in 2001 would be 3.2 percent—about the same as the SPF and Blue Chip forecasts released in November 2000. However, over the next couple of months, the extent of the slowdown in the economy showed through more clearly in the monthly indicators of activity. As a result, the 2001 early-year FOMC, CBO, Blue Chip, and SPF forecasts—which were based on data available in late January or early February—all had been marked down to between 2.3 percent and 2.6 percent.

How unusual were the forecast errors during 1996-2001?
Clearly, forecasters made larger errors during the second half of the 1990s than they did during the first half of the decade. And while they reacted quickly to incoming information, they missed the sharp deceleration in activity in 2000 and 2001. But economic growth varies substantially over time, and the fluctuations are difficult to predict. Thus, one must ask whether these forecast errors were unusual or simply reflect inherent difficulties in forecasting.
The first two columns of table 3 show some sample statistics for the errors in the various forecasts calculated using data between 1980 and 1995. The mean errors for the early year forecasts are near zero, while their root mean square errors (RMSE) range between 1.3 percentage points and 1.7 percentage points. For reference, the standard deviation of real GDP growth over that period was about 2 percent. Furthermore, based on a simple regression of the current error on its lagged value, one cannot reject the hypothesis that the errors are uncorrelated across years. The mean errors for the mid-year forecasts also are near zero, and their RMSEs are between 0.9 and 1.3 percentage points.7,8
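As a concrete illustration of these summary statistics, here is a minimal sketch using numpy. The error series below is hypothetical, and the lagged-error regression simply mirrors, in spirit, the persistence check described in the text.

import numpy as np

# Hypothetical current-year forecast errors (actual minus forecast), one per year.
errors = np.array([0.4, -1.2, 0.9, -0.3, 1.5, -0.8, 0.2, -1.0, 0.6, 0.1])

mean_error = errors.mean()
rmse = np.sqrt((errors ** 2).mean())

# Regress the current error on its lagged value to gauge persistence,
# as in the text's check that errors are uncorrelated across years.
y, x = errors[1:], errors[:-1]
x_mat = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(x_mat, y, rcond=None)

print(f"mean error: {mean_error:.2f}, RMSE: {rmse:.2f}")
print(f"AR(1) coefficient on lagged error: {beta[1]:.2f}")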
In contrast, for every forecaster, all four early year forecasts made between 1996 and 1999 underpredicted real GDP growth. Furthermore, the errors were large: The average errors varied between 1.5 and 1.8 percentage points (table 3, column 3). For every forecaster, this average was greater than one RMSE of the forecast errors experienced during the 1980-95 period. The mid-year forecasts were only slightly better—they, too, all underpredicted growth, with average errors between 0.7 and 1.2 percentage points.
How unusual were these errors in a statistical sense? Suppose that each year’s forecast errors were drawn from independent t-distributions with means and variances as estimated using the 1980-95 data. (That is, t-distributions with means and standard errors as shown in the first two columns of table 3 and 16 degrees of freedom.) Because there is only about a one-in-six chance of experiencing a single draw greater than one standard deviation from these t-distributions, the odds of drawing four consecutive errors of this size from independent distributions are miniscule.9 Indeed, none of the five forecasters ever made four consecutive same-signed errors in their early year forecasts during the 1980-95 period. And, on average, each forecaster experienced only two strings of three consecutive same-signed errors—and half of these occurred during recessions.
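To see the probability calculation concretely, consider a small sketch with scipy. It standardizes a t(16) distribution, computes the one-sided chance of a draw beyond one standard deviation, and raises it to the fourth power under the text’s independence assumption; the exact means and variances from table 3 are abstracted away here.

import numpy as np
from scipy.stats import t

df = 16                       # degrees of freedom used in the text
sd = np.sqrt(df / (df - 2))   # standard deviation of a t(df) variate

# Probability that one draw lands more than one standard deviation
# above the mean (roughly the text's one-in-six figure).
p_one = t.sf(sd, df)

# Odds of four consecutive independent draws, all on the same side
# and all beyond one standard deviation.
p_four = p_one ** 4

print(f"single-draw probability: {p_one:.3f}")   # about 0.16
print(f"four in a row:           {p_four:.5f}")  # well under 1 percent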
How unusual were the errors in 2000 and 2001? The far right column of table 3 indicates that, on average during 2000 and 2001, both the early year and mid-year forecasts overpredicted real GDP growth by nearly 1 percentage point. In addition, as noted earlier, the errors in the early year prediction of real GDP growth in 2001 were quite large, between 1.8 and 2.7 percentage points. However, the dynamics of an economy dipping into recession are quite different from those of one in expansion. Indeed, as we see in the fourth column, the average errors in 2000 and 2001 are not much different from those observed during the 1980, 1981-82, and 1990-91 recessions.

TABLE 3
Forecast statistics for errors in current-year real GDP growth forecasts
(percentage points)

                        1980-1995           Mean errors for:
                      Mean                          1980-82,
                      error   RMSE    1996-99       1990-91    2000-01
Early year
FOMC                 -0.02    1.30      1.48        -0.69       -0.99
Administration       -0.29    1.67      1.78        -1.39       -1.11
CBO                  -0.19    1.45      1.76        -1.22       -0.81
Blue Chip            -0.18    1.39      1.73        -1.29       -0.81
SPF                  -0.11    1.44      1.58        -1.27       -0.86

Mid-year
FOMC                  0.02    1.22      0.71        -0.37       -0.99
Administration       -0.20    1.26      1.16        -0.78       -0.86
CBO                  -0.26    0.92      0.93        -1.41       -0.91
Blue Chip            -0.07    1.30      0.76        -0.74       -0.91
SPF                   0.14    1.01      0.80        -0.09       -0.89

Standard deviation
of GDP growth         2.05              0.51                     0.77

Notes: RMSE are root mean square forecast errors. Errors and standard deviations of GDP growth are calculated using the third estimates of Q4-to-Q4 real GDP growth (adjusted for comprehensive NIPA revisions).
Sources: Federal Open Market Committee (FOMC), 1979-2001, Federal Reserve Board Monetary Policy Reports to Congress; Administration, 1979-2001, The Budget of the United States Government, submissions and mid-session reviews, and 1979-2001, Economic Report of the President; Congressional Budget Office (CBO), 1979-2001, The Economic and Budget Outlook, submissions and mid-year updates; Blue Chip, 1978-2001, Blue Chip Economic Indicators, various issues; Federal Reserve Bank of Philadelphia, Survey of Professional Forecasters; and U.S. Bureau of Economic Analysis, National Income and Product Accounts.


How unusual were the forecast procedures?
The results in the previous section suggest that
the forecast errors during 1996-99 were drawn from
a different distribution than they were, on average,
during 1980-95. The question then arises whether
this disparity reflects unusual behavior on the part of the
forecasters or an unusual performance by the economy.
This section addresses the first part of this question.
Typical evolution of GDP forecasts
How do GDP forecasts “typically” evolve over time? Given the qualitative descriptions above, restricting analysis to annual forecasts might hide some interesting reactions—or non-reactions—of higher-frequency forecasts to incoming data. Furthermore, longer-term projections appear to be an important part of the story. Only the private sector forecasts publish both quarterly and long-term forecasts. Accordingly, I analyze the Blue Chip consensus numbers released each March and October, the two months when respondents also are surveyed for long-term forecasts.10 Note that the time gap between these months corresponds roughly with the interval between the early year and mid-year forecasts used above. And since the different annual forecasts track each other relatively closely, the patterns in these data likely generalize fairly well to the behavior of other forecasters. The appendix describes these data in more detail.
Given the periodicity of these forecasts, I consider semiannual time series of growth projections for half-year periods. Let f_t^gdp(t + k) be the forecast made in period t for (annualized) real GDP growth in period t + k. For example, if t falls in the first half of the year (that is, the March forecasts) and k = 1, then f_t^gdp(t + k) is the forecast for growth between the second quarter and the fourth quarter of the year. The available forecast horizons are k = 0, 1, 2. Let f_t^gdp(lr) be the forecast of long-run growth made at time t. Alternatively, for any half-year period t, I have a sequence of three forecasts made in half-year periods t - 2, t - 1, and t: f_{t-2}^gdp(t), f_{t-1}^gdp(t), and f_t^gdp(t), respectively. These latter forecasts are the bars plotted in the three panels of figure 2 (with the time grid identifying period t, the half-year being forecast). The solid line in each panel is the forecast of long-run growth, f_t^gdp(lr), and the dashed line is actual half-year GDP growth (see appendix).
As we can see, in general, the one-year and one-half-year ahead forecasts do not differ substantially from the longer-run outlook (panels A and B). The standard deviations of the differences between these forecasts and the long-run projections are 0.7 percentage point and 0.8 percentage point, respectively; for reference, these standard deviations are just about one-quarter the size of the average half-year growth forecast. However, at times, some large differences do open up. Some occur in the first half of the 1980s, when activity was projected to bounce back from the deep recessions in 1980 and 1981-82. Others are found during 1989 and 1990, when real GDP was correctly projected to grow well below trend. In contrast, forecasters’ projections for growth in the current half-year period (panel C) often differ substantially from their long-term outlook. The standard deviation of the difference between f_t^gdp(t) and f_t^gdp(lr) is 1.5 percentage points, with differentials running as large as 3 to 4 percentage points during recessions and the recovery in 1983.
Figure 3 presents a couple of factors that may help explain the patterns in figure 2. Panel A plots f_{t-2}^gdp(t) (bars), f_{t-2}^gdp(lr) (solid line), and f_{t-2}^tbr(t - 2) - f_{t-2}^tbr(lr) (dashed line), the expected deviation of the real Treasury bill rate from its long-run average the year before the end of the forecast period. The figure suggests that high interest rates may have led forecasters to lower their year-ahead growth projections in the mid- and late 1980s. The converse appears to be true in 1993 and 1994. Panel B plots f_{t-2}^gdp(t) and f_{t-2}^gdp(lr) along with the most recent value of the Chicago Fed National Activity Index, or CFNAI (the dashed line), that would have been observed at the time the forecast was made. The CFNAI is a convenient way to summarize a large number of the regular monthly indicators that forecasters use to gauge the current pace of economic activity.11 (Note that a CFNAI value of zero corresponds with the indicators growing at their long-run averages.) In general, there does not appear to be much correlation between the CFNAI and the longer-run forecasts, with the possible exception of a negative correlation when projecting a recovery from recession. In contrast, forecasts for the current semiannual period do appear to change substantially in conjunction with such data. As we see in panel C, f_t^gdp(t) often deviates from f_t^gdp(lr) in the direction indicated by the movements in the CFNAI. The largest deviations are found in and around recessions.
Quantifying the forecast processes
This section estimates a couple of simple regression models in order to provide some rough quantification of the patterns exhibited in figures 2 and 3.
The first model considers how forecasts for growth over half-year periods differ from the outlook for longer-term GDP growth. The regression is:

f_t^gdp(t + k) - f_t^gdp(lr) = a + b_1[f_t^tbr(t + k - 2) - f_t^tbr(lr)] + b_2 CFNAI^nr(t - 1) + b_3 CFNAI^r(t - 1) + u_t^gdp(t + k),

where k = 0, 1, 2 and the regressors are:

1) f_t^tbr(t + k - 2) - f_t^tbr(lr): the difference between the real Treasury bill rate and the long-run value expected to be in place one year before the end of the forecast period;

2) CFNAI^nr(t - 1): the most recent value of the CFNAI known at the time the forecast was made if it is greater than -0.7; and

3) CFNAI^r(t - 1): the most recent value of the CFNAI known at the time the forecast was made if it is less than -0.7.

The nr and r superscripts refer to “no recession” and “recession” CFNAI values. This dichotomy is to address the observation that forecasters may react differently to incoming data in and around recessions. The boundary point is taken from Evans, Liu, and Pham-Kanter (2002); as they discuss, historically, when the CFNAI falls below -0.7, there is about a 70 percent chance that the economy is in recession.
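As a sketch of how a regression of this form could be estimated, the snippet below uses statsmodels. The series are randomly generated stand-ins for the Blue Chip and CFNAI data (the actual series are not reproduced here), but the -0.7 split of the CFNAI into “no recession” and “recession” readings follows the text.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40  # roughly 20 years of semiannual observations

# Stand-ins for the series described in the text (not the actual data).
tbr_dev = rng.normal(0.0, 1.0, n)   # f_tbr(t+k-2) - f_tbr(lr)
cfnai = rng.normal(0.0, 0.7, n)     # latest CFNAI reading at forecast time
dep = 0.3 * tbr_dev + 1.5 * np.minimum(cfnai, 0) + rng.normal(0, 0.5, n)

# Split the CFNAI at -0.7: "no recession" vs. "recession" readings.
cfnai_nr = np.where(cfnai > -0.7, cfnai, 0.0)
cfnai_r = np.where(cfnai <= -0.7, cfnai, 0.0)

X = sm.add_constant(np.column_stack([tbr_dev, cfnai_nr, cfnai_r]))
res = sm.OLS(dep, X).fit()
print(res.summary(xname=["const", "tbr_dev", "cfnai_nr", "cfnai_r"]))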
The second model considers forecast revisions; that is, how forecasters change their projection for a particular semiannual period in light of recent forecast errors or other information that they learn between time t - 1 and time t. For the change in the forecast for real GDP growth in the current half-year period t, the model is:

f_t^gdp(t) - f_{t-1}^gdp(t) = a + b_1 rev^gdp(t - 2) + b_2 err^gdp(t - 1) + b_3 err^tbr(t - 1) + b_4 err^CFNAI(t - 1) + u_t^gdp(t),

where

1) rev^gdp(t - 2): the revision made between period t - 1 and t in the published estimate of real GDP growth over half-year t - 2;

2) err^gdp(t - 1): the error in the forecast made at time t - 1 for real GDP growth over half-year t - 1, based on actual GDP data available in period t;

3) err^tbr(t - 1): the error in the forecast made at time t - 1 for the (quarterly) real T-bill rate at the end of half-year t - 1; and
FIGURE 2
Evolution of Blue Chip half-year real GDP growth forecasts
[Panels A-C plot forecasts made one year ahead, one half-year ahead, and in the current half-year; percent change, annual rate.]
Sources: Blue Chip, 1978-2002, Blue Chip Economic Indicators, various issues; and U.S. Bureau of Economic Analysis, National Income and Product Accounts.

FIGURE 3
Interest rates, current activity, and Blue Chip half-year real GDP growth forecasts
A. One year earlier GDP forecasts and real interest rates
B. One year earlier GDP forecasts and the CFNAI
[percent change, annual rate]
Note: To smooth inherent volatility, the three-month moving average of the CFNAI, which is designated CFNAI-MA3, is plotted in the figure. The real T-bill rate differential is the difference between the real T-bill rate and its long-run expectations (see appendix).
Sources: Federal Reserve Bank of Chicago, CFNAI; and Blue Chip, 1978-2002, Blue Chip Economic Indicators, various issues.

4) err^CFNAI(t - 1): the “shock” in the CFNAI learned at time t. This is the residual from a simple AR(2) model predicting the most recent value of the CFNAI that would be known at the time the period-t forecast of GDP is made.

I ran a similar equation for the real T-bill forecast. The equations for the period-t revisions in the longer-horizon forecasts (k = 1, 2, lr) are:

f_t^gdp(t + k) - f_{t-1}^gdp(t + k) = a + b_1 rev^gdp(t - 2) + b_2 err^gdp(t - 1) + b_3 err^tbr(t - 1) + b_4 err^CFNAI(t - 1) + Σ_{j<k} b_{5j} u_t^gdp(t + j) + Σ_{j<k} b_{6j} u_t^tbr(t + j) + u_t^gdp(t + k).

The extra terms in these regressions—the residuals from the shorter-horizon equations—test whether unaccounted for factors that generate revisions in forecasts for earlier time periods are expected to persist and affect growth in the farther out quarters. This is similar to tracking impulse responses in a vector autoregression.
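To make the CFNAI “shock” construction in item 4 concrete, here is a minimal sketch: fit a simple AR(2) to the activity index and treat the residual at the forecast date as the news learned since the previous forecast. The series below is simulated stand-in data, not the published CFNAI.

import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)

# Simulated stand-in for the CFNAI (the real index is published by the
# Federal Reserve Bank of Chicago); mean zero, mildly persistent.
cfnai = np.zeros(200)
for i in range(2, 200):
    cfnai[i] = 0.5 * cfnai[i - 1] + 0.2 * cfnai[i - 2] + rng.normal(0, 0.4)

# Fit an AR(2); the residual at each date is the part of the latest CFNAI
# reading that could not be predicted from its own past -- the "shock"
# err_CFNAI(t-1) that enters the revision regressions.
ar2_fit = AutoReg(cfnai, lags=2).fit()
shocks = ar2_fit.resid

print(f"std. dev. of simulated CFNAI shocks: {shocks.std():.3f}")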

The results for the first model are shown in table 4. As indicated by the R² values, the interest rate deviations and the CFNAI explain more than 60 percent of the variation in the difference between f_t^gdp(t) and f_t^gdp(lr), but only about 20 percent of that in f_t^gdp(t + 2) - f_t^gdp(lr) and none in f_t^gdp(t + 1) - f_t^gdp(lr). As seen in the top row, a positive interest rate differential appears to be taken as a signal of strong activity in the near term, but causes forecasters to lower their one-year-ahead forecasts below f_t^gdp(lr). The CFNAI terms indicate that current half-year forecasts are significantly raised or lowered relative to the long-term outlook in reaction to good or bad readings on incoming high-frequency indicators of activity. And the larger coefficient on CFNAI^r(t - 1) than CFNAI^nr(t - 1) indicates that the responses are bigger when the economy appears to be falling into recession. But the medium-term forecasts react little to the incoming data, the exception being that if the economy currently is in a recession, then forecasters will tend to predict a period of above-trend growth at the one-year-ahead horizon.
TABLE 4
Explaining deviations in half-year forecasts of real GDP growth from the long-run forecast

                                Forecasts for growth over the two quarters ending:
                                Current       Half year     One year
                                half year     ahead         ahead
Regression on:
f_t^tbr(t + k - 2) - f_t^tbr(lr)  0.24         -0.12         -0.20
                                 (2.21)       (-1.22)       (-2.39)
CFNAI^nr(t - 1)                   1.46          0.30          0.12
                                 (4.37)        (1.04)        (0.54)
CFNAI^r(t - 1)                    2.01         -0.03         -0.42
                                 (6.32)       (-0.10)       (-2.26)

R²                                0.61          0.00          0.18
Std. dev. of
f_t^gdp(t + k) - f_t^gdp(lr)      1.30          0.70          0.58

1982-95: Mean error               0.08          0.08          0.04
         RMSE                     0.70          0.75          0.57
1996-99: Mean error              -0.06         -0.21         -0.14
         RMSE                     0.60          0.31          0.31
2000-02: Mean error              -0.34         -0.11          0.02
         RMSE                     1.37          0.63          0.32

Notes: T-statistics in parentheses. Semiannual Blue Chip data, 1982:H2 to 2002:H1. RMSE are root mean square forecast errors.
Sources: Federal Reserve Bank of Chicago, CFNAI; Blue Chip, 1978-2002, Blue Chip Economic Indicators, various issues; and U.S. Bureau of Economic Analysis, National Income and Product Accounts.

The results from the second model are shown in table 5. As shown by the R² values, these factors explain about 40 percent of the variation in revisions to current and one-half-year-ahead forecasts but little of the changes to longer-run forecasts. Consistent with the first model, much of the explanatory power for the one-quarter-ahead revision comes from the shock to the CFNAI, but this shock has little predictive power for revisions to the out-quarter forecasts. None of the projections are revised much in response to the most recent GDP forecast error. And with the possible exception of the half-year-ahead forecast, the reactions to the portion of earlier GDP revisions not explained by the model are small. Errors and revisions in the outlook for the T-bill rate have at most a small influence on the GDP forecast revisions.

TABLE 5
Explaining revisions to forecasts of real GDP growth

                              Forecast for growth over the two quarters ending:
                              Current       Half year     One year
                              half year     ahead         ahead        Long-run
Regression on:
rev^gdp(t - 2)                 0.48          0.60         -0.24         0.07
                              (1.78)        (4.41)       (-1.58)       (2.37)
err^gdp(t - 1)                 0.08         -0.01         -0.10        -0.01
                              (0.71)       (-0.23)       (-1.70)      (-0.93)
err^tbr(t - 1)                -0.21          0.23          0.04         0.17
                             (-0.64)        (1.55)        (0.26)       (2.77)
Σ u^gdp(t + j)                  —            0.16          0.07        -0.06
                                            (1.92)        (1.20)      (-0.12)
Σ u^tbr(t + j)                  —            0.11          0.02         0.08
                                            (0.46)        (0.14)       (0.58)
err^CFNAI(t - 1)               1.48          0.18         -0.12        -0.04
                              (4.67)        (1.29)       (-0.72)      (-1.20)

R²                             0.38          0.38          0.14         0.24
Std. dev. of
f_t^gdp(t + k) -
f_{t-1}^gdp(t + k)             1.29          0.57          0.55         0.10

1982-95: Mean error           -0.03         -0.03          0.05        -0.02
         RMSE                  0.83          0.45          0.47         0.08
1996-99: Mean error            0.36          0.22         -0.19         0.04
         RMSE                  0.61          0.34          0.44         0.07
2000-02: Mean error           -0.38         -0.15          0.13         0.04
         RMSE                  1.85          0.21          0.39         0.07

Notes: T-statistics in parentheses. Semiannual Blue Chip data, 1982:H2 to 2002:H1. RMSE are root mean square forecast errors.
Sources: Federal Reserve Bank of Chicago, CFNAI; Blue Chip, 1978-2002, Blue Chip Economic Indicators, various issues; and U.S. Bureau of Economic Analysis, National Income and Product Accounts.

Together, these models suggest that projections of real GDP growth beyond the next couple of quarters usually do not vary far from forecasters’ long-run growth outlook; the exceptions are when events such as recessions or changes in monetary policy come into play. Forecasters may make large revisions to near-term projections for real GDP growth in response to incoming high-frequency data, but the average responses to past GDP and interest rate forecast errors and revisions are small. These results suggest that forecasters think that most of the “shocks” revealed in incoming monthly data or recent errors have only a transitory influence on real GDP growth or will be offset by other factors. Those shocks that are more persistent could be expected to elicit a policy response that would have an influence on output at a longer horizon. Indeed, in qualitative terms, the characterization of the GDP forecast process provided by these two simple models is consistent with the time-series evidence—such as that generated by structural vector autoregression (VAR) models—regarding the response of real GDP to various shocks (see appendix).
Were the forecasts in 1996-2001 unusual?
The above statistical description appears consistent with our earlier qualitative characterization of forecasts during the 1996-2001 period. Notably, as seen in figure 3, the Blue Chip one-year-ahead forecasts for real GDP growth in 1996-2000:H1 were a bit lower than the long-run projections. Thus, forecasters were not carrying earlier underpredictions or forecast revisions forward into higher projections for GDP growth in the out quarters. Indeed, forecasters were expecting other factors—such as external shocks from the Asian crisis in 1997 and the Russian default in 1998—to hold back growth. Not until 2000, when long-term forecasts were increased, do we see a boost in f_{t-2}^gdp(t) and f_{t-1}^gdp(t). Furthermore, the substantial downward revisions in f_t^gdp(t) in 2000 and 2001 appear consistent with the declines in the CFNAI.
Supporting these qualitative descriptions, the errors in our simple equations describing the forecast process were not that different from those experienced prior to 1996. (Though given the quite weak explanatory power of these models, the analysis of errors only provides suggestive evidence.) As indicated by the average errors in the bottom portion of table 4, forecasts during 1996-99 were a bit lower than the first model predicts. Similarly, near-term forecasts were revised up a bit more than was typical (as shown in the bottom of table 5). However, in both cases, the differences are at most a few tenths of a percentage point on GDP growth and are not statistically significant. For the 2000-02:H1 period, the near-term forecasts are 0.3 to 0.4 percentage point lower than predicted by the models, but these errors are small relative to the revisions between f_{t-1}^gdp(t) and f_t^gdp(t) in 2000 and 2001.12

Unusual behavior of the economy
Given that forecasters seemed to be conducting business as usual, the question is what economic developments made forecasting so difficult? It is beyond the scope of this article to catalog the vast number of factors—and forecasters’ perceptions of them—that influenced the economy over 1996-2001. Instead, I focus on two related developments: the step-up in productivity growth and the boom and bust in business investment. Both of these were inherently difficult to predict. And both had important implications for GDP forecast errors during this period.
Acceleration in productivity
The trend in labor productivity is one of the fundamental determinants of long-run growth. As we see in table 2, in the mid-1990s, productivity growth was expected to run in the 1 percent to 1.5 percent range, about the same as the pace that had prevailed since the early 1970s. Demographic projections (not shown) showed the working age population rising about 1 percent per year, which was thought to translate into like-sized increases in hours worked. This left the projections for long-run GDP growth in the range of 2 percent to 2.5 percent.
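In growth-accounting terms, these assumptions fit together as a simple sum, shown here as a stylized identity using only the figures cited above:

    long-run GDP growth ≈ trend productivity growth + trend growth in hours
    2 to 2.5 percent ≈ (1 to 1.5 percent) + (about 1 percent)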
Because long-run forecasts anchor the medium-term outlook, changes in productivity trends have important implications for the forecasting exercise. The colored line in panel A of figure 4 plots the level of productivity (output-per-hour) in the nonfarm business sector. The black line is the simple trend of productivity between business cycle peaks.13 As we can see, productivity is quite cyclical—it typically falls during a recession (or period of weak growth) and rises sharply early in a recovery. But productivity rarely accelerates persistently during a mature business cycle. The vertical black lines in the figure denote the four-year mark after the end of the previous recession, while panel B plots the (percent) deviation in actual productivity from the peak-to-peak trend. As we can see, the only previous time that productivity remained well above trend four years into the expansion was during the late 1960s. But even then, the gap between actual and trend productivity was not increasing—that is, actual productivity growth was proceeding at its peak-to-peak trend. In contrast, in the mid-1990s, productivity growth picked up markedly and persistently outstripped earlier trends. The four-quarter increase in output-per-hour exceeded the 1.4 percent peak-to-peak trend that prevailed between 1980 and 1990 in every quarter between 1996:Q1 and the cyclical peak in 2001:Q1. The average growth rate of productivity over this period was 2.5 percent.
Even now, determining how much of the pick-up was transitory, though long-lived, and how much of it represented a permanently higher trend is a difficult task. Almost by definition, a change in the trend cannot be identified until we have observed a substantial amount of data following the break. Indeed, as late as 1999, forecasters’ estimates of the economy’s longer-run trends in productivity growth remained between 1.3 percent and 1.8 percent.
Eventually, however, a confluence of corroborating evidence led forecasters to change their expectations. The fact that the high GDP growth was associated with low unemployment and subdued inflation indicated that the economy’s productive resources were not being strained. The economy also had proved unexpectedly resilient to external shocks. Furthermore, forecasters found themselves underpredicting every major component of domestic private demand, suggesting that the source of strength was some broad-based phenomenon as opposed to a sector-specific shock.14 Finally, as discussed below, a good deal of the increase in productivity growth appeared to reflect sources that could prove to be persistent. To be sure, a great deal of uncertainty remained regarding how much of the pick-up in productivity reflected a permanently higher trend (see Gordon, 2000). But, by 2001, most of the forecasters had raised their assumptions for the trend growth in productivity to the 2.3 percent to 2.7 percent range. Correspondingly, they boosted long-run growth forecasts for real GDP growth to the 3 percent to 3.5 percent range. These long-run assumptions became a new anchor for nearer-term forecasts.

Increases in capital and information technology
One reason that forecasters changed their views of the trends in productivity is that some of the important factors underlying the gains were thought likely to be long-lived. In particular, a good deal of the step-up in growth that occurred in the second half of the 1990s reflected intensified capital deepening and developments in the information technology (IT) sector. Once in place, capital does not disappear, and the longer-run prospects for IT were quite optimistic.
Jorgenson and Stiroh (2000) and Oliner and Sichel (2000) both estimated that about half of the acceleration in productivity between the first and second halves of the 1990s was due to capital deepening—or an increase in the quality and quantity of capital used per hour worked. Surges in capital deepening often reflect cyclical weakness in hours. But this time the gains were due to a sustained pick-up in capital services, a measure of the productive input provided by the total business capital stock in the economy. Panel A of figure 5 shows the growth rates of aggregate capital services (colored line) along with business fixed investment (black line). Growth in capital services had edged down from 4.7 percent in 1985 to 2.1 percent by 1992, but a surge in investment in the 1990s boosted its growth to about 6 percent by the end of the decade.15
Indeed, the large gains in investment depicted in the figure account for a good deal of the pick-up in overall real GDP growth during the 1996-99 period. According to the July 2002 revised NIPA data, after increasing at an average annual rate of about 5 percent between the cyclical peak in 1990:Q3 and 1995:Q4, real business fixed investment (BFI) rose at about an 11 percent annual rate between 1995:Q4 and 2000:Q2. As a result, BFI moved from boosting real GDP growth by an average of about 0.5 percentage points per year during the first half of the 1990s to raising it between 0.8 and 1.5 percentage points per year during the second half of the decade.
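The contribution arithmetic behind figures like these can be sketched as share times growth rate. In the snippet below, the 11 percent BFI growth rate comes from the text; the 12 percent GDP share is a hypothetical round number used only for illustration.

# Back-of-the-envelope: a component's contribution to GDP growth is
# approximately its share of GDP times its own real growth rate.
# The 11 percent BFI growth figure is from the text; the 12 percent
# GDP share is a hypothetical round number for illustration.

bfi_share = 0.12    # hypothetical share of GDP
bfi_growth = 11.0   # percent, annual rate (from the text, 1995:Q4-2000:Q2)

contribution = bfi_share * bfi_growth
print(f"approx. BFI contribution to GDP growth: {contribution:.1f} percentage points")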
Technology also was an important factor in the productivity acceleration. The studies cited above also estimate that between 60 percent and 100 percent of the increase in capital deepening reflected increases in the quantity and quality of high technology capital used by labor. Changes in technology also influence output per hour through other channels. Multifactor productivity refers to increases in output per hour that cannot be attributed to capital deepening or changes in labor quality. As we see in panel B of figure 5, multifactor productivity also exhibited an unusually sharp acceleration in the second half of the 1990s. Both Jorgenson and Stiroh and Oliner and Sichel calculated that improvements in the production of IT products made substantial contributions to this acceleration in multifactor productivity. And more recent estimates by these authors using up-to-date data point to even larger IT contributions to the acceleration in overall productivity in the second half of the 1990s.

Collapse of investment and decline in activity in 2000 and 2001
Even though forecasters boosted their views regarding the longer-run prospects for the economy, they expected several factors to moderate GDP growth in 2000 and 2001.16 All told, forecasters believed that these factors would bring GDP growth down to its longer-run potential, but would not be sufficient to tip the economy into a recession.
However, as already noted, by their very nature, recessions are periods of unusual economic activity and are therefore hard to predict. This time, as shown in panel A of figure 5, the demand for capital equipment suddenly and surprisingly collapsed in the second half of 2000. In particular, in the high-tech area, bookings for capital equipment fell sharply, inventory-sales ratios backed up, and industrial production began to drop. Instead of the solid 10.5 percent annual rate increase projected by the SPF in August, BFI barely changed in the second half of 2000. In February 2001, the SPF forecast real BFI to increase 4.5 percent over the four quarters of the year; instead, according to the third NIPA estimates, it fell 9.4 percent. Similarly, the pace of inventory investment did more than just moderate; by 2001 firms were liquidating inventories at a sharp rate.

According to the July 2002 revised NIPA data,
real BFI swung from double-digit gains to dropping at
an average annual rate of 6.3 percent between 2000:Q2
and 2001:Q4. As a result, BFI reduced real GDP growth
by 1.2 percentage points in 2001—a negative swing
of 2 to 2.6 percentage points relative to its contribu­
tions to growth during the second half of the 1990s.
Spending on high-technology equipment, which rep­
resents about one-third of total BFI, accounted for a
good deal of this swing. Changes in inventory invest­
ment went from being, on balance, a neutral influence
on GDP growth in 1999 and the first half of 2000 to
reducing it by nearly 1.5 percentage points in 2001.
In contrast, slower growth in all other sectors of the
economy—with a share of about 85 percent—reduced
real GDP growth by just about 1 percentage point be­
tween 2000 and 2001.
Investment and the adjustment of capital stocks
Thus, the forecast miss in GDP seemed to have
been precipitated by a sudden swing in investment,
followed by a sharp correction in inventories. Even
though some stock adjustment had been anticipated, the
extent of the drop-off clearly was underestimated. Why
are such swings in investment so hard to forecast?
Some simple arithmetic regarding capital stocks and
flows provides a useful way to frame the discussion.
For any particular type of capital, call it the $i$th type,

$$K^i_t = (1 - \delta^i) K^i_{t-1} + I^i_t,$$

or

$$\frac{I^i_t}{K^i_{t-1}} = g^i_t + \delta^i,$$

where $I^i_t$ is investment, $K^i_t$ is the end-of-period capital stock, $\delta^i$ is the depreciation rate, and $g^i_t$ is the growth rate of this component of the capital stock. The simple arithmetic of this equation is: 1) if $g^i$ and $\delta^i$ are relatively stable in the long run, then so will be $I^i_t / K^i_{t-1}$, meaning that investment and the capital stock will be growing at the same rate, $g^i$; and 2) to increase $g^i_t$, investment must grow faster than the capital stock for some period in order to boost $I^i_t / K^i_{t-1}$. Conversely, to lower $g^i_t$, investment will have to grow slower than capital for some time.
Suppose technological innovation makes some
type of capital more productive, for example, a new
chip makes computers more powerful. Businesses will
want to raise the growth rate of computer capital to
take advantage of the higher marginal value of the new
computers. In order to do so, for some time investment
in computers would have to increase at a higher rate
than that of the computer capital stock. As the higher
desired capital is achieved, growth in investment will
fall. But to what rate? To the degree the innovation
reflects a permanent change in the growth rate of
technology, growth in both capital and investment
will settle at a new higher g'. To the extent that it is
a one-time step-up in technology, growth will fall back
to the original $g^i$.17 The basic logic of this discussion
extends to describing the behavior of aggregate in­
vestment and capital.
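To make the transition dynamics concrete, the following sketch (in Python) simulates the identity above for a single type of capital. The depreciation rate, initial stock, and growth-rate path are all hypothetical values chosen for illustration, not estimates from the article.

```python
# Illustrative simulation of the capital-stock arithmetic above:
# K_t = (1 - delta) * K_{t-1} + I_t, so I_t / K_{t-1} = g_t + delta.
# All parameter values here are hypothetical.

delta = 0.10                                     # depreciation rate
g_path = [0.03] * 5 + [0.06] * 10 + [0.03] * 5   # capital growth steps up, then back down

K = 100.0                                        # initial capital stock
prev_I = (g_path[0] + delta) * K                 # investment on the initial steady path

for year, g in enumerate(g_path):
    K_next = (1 + g) * K                         # capital grows at the target rate g
    I = K_next - (1 - delta) * K                 # investment required: I / K = g + delta
    print(f"year {year:2d}: g = {g:.2f}  I/K = {I / K:.2f}  "
          f"investment growth = {I / prev_I - 1:+.1%}")
    K, prev_I = K_next, I
```

On this illustrative path, the step-up in desired capital growth produces a one-time surge in investment, and the later step back down produces an outright decline in the level of investment even though capital keeps growing. This is one way to see why even a moderation in desired capital growth can show up as a collapse in investment spending.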

Gauging growth in investment and capital stock
in 1996—2001
The arithmetic presented above indicates that in
order to pin down the path for investment, forecasters—
at least implicitly—have to make some judgment con­
cerning the persistence of any observed pick-up in
capital growth. Such decisions clearly were important
during the 1996-2001 period. As we noted earlier,
capital growth was spurred by the desire to incorpo­
rate advances in technology, boosting the growth in
capital services, $g_t$, to around 6 percent by the end of the
decade. The February 2000 SPF forecast projected that
real BFI would increase about 8 percent that year—and
some of this gain reflected spending that was thought
to have been deferred due to Y2K. Thus, this forecast
for the underlying rate of increase in real BFI was not
far from the pace of growth in capital services. Such
a projection produces a constant $I/K = g + \delta$, the
equilibrium condition for stable growth in invest­
ment and capital.
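To attach illustrative numbers to this condition (the depreciation rate here is hypothetical; the article reports only the roughly 6 percent growth in capital services):

$$\frac{I}{K} = g + \delta = 0.06 + 0.13 = 0.19,$$

so investment equal to about 19 percent of the existing capital stock each year would hold $I/K$ constant, with investment and the capital stock both growing at the 6 percent rate $g$.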
In other words, it appears forecasters had come to
believe that we had experienced a long-lived increase
in the rate of advance in technology that should gen­
erate a persistent increase in the rate of growth in capi­
tal and in the investment spending to support this growth.
But, given the magnitude of the swing in investment,
it seems that forecasters overestimated where the growth
rate of capital would settle over the medium term.18
What happened to the determinants of capital stock
growth that may have caused this miss?
Around this time, both the players in financial mar­
kets and the businesses making capital spending deci­
sions appear to have reevaluated the earnings potential
of certain investment projects. The deceleration in BFI
was preceded in early 2000 by a decline in stock prices.
In both the equity markets and investment, the retrench­
ments were particularly dramatic in the high-technol­
ogy sectors—just as these sectors had led the surge
on the upside. Whatever its root cause, such a reas­
sessment clearly was a negative for new investment
projects. And to the extent that expected payoffs to
capital projects already undertaken were revised
down, earlier investment may have pushed the capital
stock to a level that, in retrospect, was too high. This
would imply a period of below-trend growth in the cap­
ital stock and an even sharper retrenchment in invest­
ment in order to realign stocks with desired levels.
Could this reassessment have been predicted?
To be sure, by conventional historical standards, eq­
uity valuation metrics—such as price-earnings ratios
or dividend-price ratios—were at unprecedented levels
in early 2000. And the high rates of investment had
substantially pushed up growth in capital. Many com­
mentators argued that these facts meant that the stock
market was “overvalued” and that firms had over­
built productive capacity. Based on these observations,
one might have thought that a “bursting of the bub­
ble” would lead to weak activity in 2000 and 2001.
But actually forecasting such an event is problem­
atic. Throughout the second half of the 1990s, stock
market valuation metrics had been continuously attain­
ing new historical records, and some observers had
been continuously predicting market corrections (see,
for example, Campbell and Shiller, 2001). Yet equity
markets kept moving up and investment surged further.
Forecasters who may have lowered their earlier pro­
jections due to such reservations also would have un­
derestimated the strength of the economy to an even
greater degree than the consensus did in the late 1990s.
Indeed, when it came to writing down numbers, even
the more pessimistic forecasts did not predict outright
declines in GDP in 2000 and 2001. In February
2000, the average of the lowest ten Blue Chip fore­
casts still had real GDP rising 2.2 percent that year,
and this group even boosted their outlook to 3.3 per­
cent in July. Even after the stock market declines and
weak investment indicators during the second half of
2000, as of February 2001 the bottom-ten Blue Chip
average forecast that real GDP would increase 1.2
percent that year. And in July, the pessimists still
thought that output would rise at about a 1 percent
annual rate in the second half of the year.

Conclusion: Implications for future
forecasts
Because up-to-date estimates are not yet available
(see note 15), we cannot look at the decomposition of
productivity to see how growth in capital services or
multifactor productivity has performed in recent quar­
ters. However, as figure 4 shows, growth in total labor
productivity has been very well maintained. Between
the cyclical peak in 2001:Q1 and 2002:Q3, growth in
output per hour has averaged a strong 4 percent annual
rate.19 This performance more resembles the cyclical
patterns around the 1960 and 1969 recessions, when
productivity trends appeared to be nearly 3 percent,
than the behavior of output per hour around the
recessions between 1973 and 1990, when productivity
trends were closer to 1.25 percent to 1.5 percent.20
A number of researchers have made rough esti­
mates of what might be reasonable steady-state values
to expect for growth in output per hour. As summarized
in Oliner and Sichel (2002), the numerous scenarios
considered in these papers produce a range of values
between 1.3 percent and 3.2 percent, with point esti­
mates largely between 2 percent and 2.8 percent. Thus,
while a return to pre-1995 rates cannot be ruled out,
most analysts are guessing that the economy will ex­
perience higher productivity growth in the long run.

These estimates leave us with a relatively optimis­
tic view about productivity trends going forward. In
line with this perception, long-run forecasts for real
GDP growth have not changed much over the past cou­
ple of years. The most recent assumption for long-run
growth, made in October 2002 by the Blue Chip con­
sensus, was 3.2 percent. Accordingly, despite the re­
cession, and, to date, bumpy recovery, forecasters still
are anchoring their cyclical projections for real GDP
growth with solid trends in the underlying long-run
pace of growth in economic activity.

NOTES
1The average forecasts plotted in figure 1 are the averages of the
early year projections made by the Federal Open Market Committee
(FOMC) and other Federal Reserve Bank presidents, the Adminis­
tration, the Congressional Budget Office (CBO), the Blue Chip
Consensus, and the median forecast from the Federal Reserve
Bank of Philadelphia’s Survey of Professional Forecasters.

2The Federal Reserve publishes a range and central tendency of
forecasts made by the FOMC members and other Bank presidents.
I use the middle of the central tendency as the FOMC point fore­
cast. Other details regarding the data are available from the author
upon request.
3These figures are the growth estimates that were available in
mid-January 1996. Comprehensive revisions to the NIPA were
published that month, but they covered data only through 1995:Q3;
estimates for 1995:Q4 were delayed until March, after the early
year forecasts for 1996 were made.
4The 1996, 1997, and 1998 figures are from the third NIPA estimates
for growth in those years. The October 1999 comprehensive revi­
sions to the NIPA added business expenditures on software to the
estimates of business fixed investment. The BEA estimates this
added 0.41 percentage point to average real GDP growth between
1992 and 1998. The 1999 growth figure cited above is the third
estimate less 0.41 percentage point.
5First-half errors are estimated using the information available at
the time of the mid-year forecast. The mid-year SPF forecasts are
made in August, so the actual values used to calculate the first-half
error are the first estimates of growth in the second quarter. The
mid-year Blue Chip forecasts are made in July, before second-quarter data are available; the first-half Blue Chip “error” is thus
calculated using the actual value for GDP in the first quarter
and the revision made between early year and mid-year in the
forecast for second-quarter GDP growth.
6The revised NIPA estimates published in July 2002 paint a some­
what different picture of these developments. Real GDP growth
during the first half of the year was revised down from 5.2 percent
to 3.8 percent, and the increase in the third quarter is now estimated
to be just 0.6 percent (annual rate). The estimate of real GDP
growth in 2000:Q4 is still about 1 percent.


7A large literature exists that examines the performance of macroeconomic forecasts; see, for example, Berger and Krane (1984),
McNees (1992, 1995), Romer and Romer (2000), Schuh (2001),
and the references cited in these papers. Many papers conduct
formal statistical tests of forecast efficiency. One criterion for ef­
ficiency is that forecast errors should be independent of informa­
tion known at the time a forecast was made, which includes the
lagged forecast error. Schuh rejects the efficiency of annual SPF
forecasts of GDP, though the rejection is due to correlation with
variables other than the lagged GDP forecast error.
8The 1980-95 period includes three recessions, 1980, 1981-82,
and 1990-91. Excluding these years from the calculations, the
mean errors of the early year forecasts are between 0.2 and 0.4
percentage point and the RMSEs are between 1.1 and 1.5 percent­
age points. For the mid-year forecasts, the means are between 0.1
and 0.3 percentage point and the RMSEs in the 0.6 to 0.8 percent­
age point range.
9Cumulative sum (CUSUM) plots also suggest a structural break in
the distributions of the errors during this period. Recursive t-tests
(see Harvey, 1989) using the 1996-99 errors easily reject that the
errors have a zero mean when the tests are constructed using the
standard deviation of the errors over the four-year period. How­
ever, the recursive t-tests only reject at between the 6 percent and
9 percent level if the standard deviation of the errors over the
1980-95 period is used. Finally, Schuh also finds that the average
forecasts from the SPF, Blue Chip, and Wall Street Journal made
statistically significant underpredictions of real GDP growth dur­
ing the 1996-2000 period.

10The March and October Blue Chip surveys ask for forecasts of
averages for GDP growth, inflation, and a number of other variables
over two five-year intervals—one beginning two years from now
and one beginning seven years from now. These rarely differ by
more than one-tenth or two-tenths; I use their average as the long-run forecast. The Blue Chip is more useful than the SPF for this
exercise, mainly because the latter publishes long-term forecasts
just once a year and has been doing so for GDP only since 1992.


11 The CFNAI is a weighted average of 85 monthly indicators in
five broad categories: production and income, labor markets, con­
sumption and housing, manufacturing and trade sales, and inven­
tories and orders. The weights are chosen using principal component
analysis and reflect the series’ correlation with the (unobserved)
common movement in all of the indicators. (See Fisher, 2000, and
Evans, Liu, and Pham-Kanter, 2002.) To smooth through inherent
volatility, the three-month moving average of the index often is
used; this average is plotted in figure 3 and used elsewhere in this
article.
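For readers who want the mechanics, a minimal sketch of this kind of index construction follows (in Python, with random placeholder data standing in for the 85 indicators; the actual CFNAI methodology differs in its details):

```python
# A minimal sketch of building an activity index by principal
# components, in the spirit of the description above. The data are
# random placeholders, not the actual monthly indicators.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 85))               # 240 months x 85 indicators

Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize each indicator
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
w = eigvecs[:, -1]                           # weights: first principal component
index = Z @ w                                # raw activity index
# (the sign and scale of w are arbitrary and would be normalized in practice)

# Smooth through month-to-month volatility with a three-month moving
# average, as is done for the CFNAI plotted in figure 3.
index_ma3 = np.convolve(index, np.ones(3) / 3, mode="valid")
```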
12Similarly, Schuh concludes that the SPF forecasters were not be­
having unusually during the 1996-2000 period. In addition, Schuh
finds that the SPF forecasts fail to exploit certain statistical rela­
tionships among the forecast errors for different variables. He pos­
tulates that the large errors during this time may have in part reflected
a confluence of macroeconomic factors—perhaps intensified by
structural changes in the economy—that magnified the conse­
quences of forecasters’ failure to make efficient use of these rela­
tionships.

13Specifically, the trends connect the level of productivity between
the business cycle peaks in 1960 and 1969, 1969 and 1973, 1973
and 1980, and 1980 and 1990; this last trend line is then extended
through 2002:Q2.
14This statement is based on the SPF data, which include projections
of personal consumption expenditures, residential investment,
business fixed investment, government purchases (federal, state,
and local), net exports, and inventory investment.

15Annual capital services data are published by the U.S. Bureau of
Labor Statistics in conjunction with their multifactor productivity
estimates. Jorgenson and Stiroh provide a description of why capi­
tal services measure the productivity of the capital stock. Note
that the investment data in figure 5 are quarterly and are from the
NIPA data available in late 2002. At that time, capital services and
multifactor productivity data (shown in panel B of figure 5) were
available only through 2000; furthermore, these data do not re­
flect the influence of the July 2002 annual revisions to the NIPA.
16First, monetary policy had been tightened—the federal funds rate
had been raised 175 basis points between the spring of 1999 and
the spring of 2000—and forecasters were expecting further increases
in rates. Second, the price of imported oil had risen, which acts as
a tax on U.S. energy consumers. Third, equity markets—which
had been skyrocketing since late 1994—began to edge off in March
2000, so that the boost to spending from wealth effects was ex­
pected to wane. Furthermore, in order to adjust stocks to higher
desired levels, outlays for housing, consumer durable goods, in­
ventories, and business capital all had been increasing at high rates,
and growth in these expenditures was expected to cool as the stock
adjustment process ran its course. Finally, some drop in spending on
high-tech equipment also was anticipated, following the tempo­
rary boost to outlays for these items in 1999 and early 2000 by
firms addressing Y2K contingencies.


17Even if the advance is a permanent rise in the level, but not the
growth rate, of technology, the level of the desired capital stock
still is higher. Accordingly, the transition from the old to new
time path for the capital stock will require some period of el­
evated capital stock growth and even higher investment growth.
But once the new path is reached, $g^i_t$ needs to fall back to its old
value. Consequently, investment needs to grow less than the capital
stock for some period to bring $I^i_t / K^i_{t-1}$ back down to the
original $g^i + \delta^i$.
18Even in “normal” times, investment is difficult to predict because
of the large cyclical swings in its demand and the frictions caused
by the costs of planning, installing, and operating new capital (see
Oliner, Rudebusch, and Sichel, 1995). Y2K also complicated mat­
ters during the period, as firms first boosted high-tech investment
in order to deal with potential problems and then delayed spending
to avoid having to break in new equipment close to the January
2000 century date change. But the size of the errors noted above
suggests that other factors also were in play during this period.
19Indeed, the strong performance of productivity may be one reason
that the economy weathered the shock of the events of September
11, 2001, better than many predicted. Forecasters revised down
their projections for real GDP a good deal immediately following
the terrorist attacks, with the Blue Chip forecast from October 2001
projecting a 1 percent annual rate drop in real GDP in 2001:H2
and the SPF forecast made in November looking for a similar de-
cline. According to the latest NIPA estimates, real GDP fell at an
annual rate of 0.3 percent in 2001:Q3 but rose at a 2.7 percent pace
in 2001:Q4. In the fall of 2001 the Blue Chip and SPF forecasts
for growth in 2002 were in the 2.5 percent to 3 percent range; and
as of December 2002, projections for growth in 2002 are in the
2.7 percent to 2.9 percent range.

20For example, if we assume that the cyclical trough occurred dur­
ing 2001:Q4, then 2002:Q3 is three quarters after the trough. At
this time, the level of productivity was 6.1 percent above its level
at the 2001:Q1 peak. Three quarters after the troughs for the 1960
and 1969 recessions, productivity was 5.8 percent and 7.3 percent,
respectively, above its value in the peak quarters. But three quar­
ters after the troughs of the 1973, 1980, 1981, and 1990 recessions,
productivity was just 1 percent to 4 percent higher than at the
preceding cyclical peaks.


APPENDIX: DATA, TIMING CONVENTIONS, AND INTERPRETATIONS OF REGRESSION MODELS
EVALUATING THE BLUE CHIP FORECASTS

In March, t signifies the first half of the year and the
k = 0 forecast is for growth from the fourth quarter of the
previous year to the second quarter of the current year.
The k = 1 forecast is from the second quarter to the fourth
quarter of the current year; the k = 2 forecast is from the
fourth quarter of the current year to the second quarter
of the following year. For October, t corresponds to the
second half of the year; the k = 0 forecast is for second-to-fourth quarter growth, and so on. At the time the Oc-
tober Blue Chip is published, the most recent National
Income and Product Accounts (NIPA) data are the third
estimates for the second quarter of the current year; in
March, the most recent data are the second estimates for
the fourth quarter of the previous year. However, revi­
sions between the second and third estimates of the NIPA
usually are small, so that the statistical results probably
are not substantially influenced by differences in the in­
formation sets available to the forecasters in March and
October.
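The timing conventions in the preceding paragraph can be encoded compactly; the sketch below is purely illustrative (the function name and the (year offset, quarter) encoding are mine, not the article's):

```python
# Purely illustrative encoding of the Blue Chip timing conventions
# described above.

def forecast_interval(month, k):
    """Return (start, end) quarters for the k-step-ahead half-year
    growth forecast, as (year offset from survey year, quarter)."""
    if month == "March":       # t corresponds to the first half of the year
        endpoints = [(-1, 4), (0, 2), (0, 4), (1, 2)]
    elif month == "October":   # t corresponds to the second half of the year
        endpoints = [(0, 2), (0, 4), (1, 2), (1, 4)]
    else:
        raise ValueError("surveys used here are March and October")
    return endpoints[k], endpoints[k + 1]

# The March k = 0 forecast covers growth from Q4 of the previous
# year to Q2 of the current year, as described in the text.
assert forecast_interval("March", 0) == ((-1, 4), (0, 2))
```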

The actual data for gross domestic product (GDP)
during the first half of the year are taken from the third
NIPA estimates; the actuals for the second half of the
year are the estimates published with the annual revi­
sions made in the following summer. Both are adjusted
for the average influence of any NIPA comprehensive
revision that may have occurred between the time the
forecast was made and the actual data were published.
Given our timing conventions, the $t - 2$ term essentially reflects revisions to GDP that occur with compre-
hensive revisions to the NIPA that are larger than the
adjustments described above. The results in table 5
indicate that forecasters apparently carry forward these
influences in their medium-term forecasts.
The real Treasury bill forecast is constructed by tak­
ing the difference between the expected average nominal
Treasury bill (T-bill) rate for a quarter and the expecta­
tion of long-run inflation. If t is in March, the interest
rate differential is from the second quarter of the previ-
ous year; if t is in October, it is from the previous fourth
quarter. The $t - 2$ value is used to account for lags be-
tween changes in interest rates and their influence on the
real economy. The short-term Treasury bill forecasts
were first available in 1982. Long-run T-bill forecasts
were first made in 1983; I constructed a 1982:H2 value
using other Blue Chip long-run interest rates.
The Chicago Fed National Activity Index (CFNAI)
has been published only since 2000, so a real-time series
is not available. Instead, I use the index as currently
published. To account for publication lag, for March,
I assume the forecasters knew the January value of the
CFNAI; for October, I assume the latest available index
was from August.
As noted in the text, in qualitative terms, the results
from the regression models are consistent with the time-series evidence—such as that generated by structural
vector autoregression (VAR) models—regarding the re­
sponse of real GDP to various shocks. As an example,
consider the results from Gali (1992). These show that
a favorable one-standard-deviation supply shock increases
real GDP by 2.8 percentage points (annual rate) in one
quarter, and that a real-side demand shock boosts growth
by 2 percentage points. But over the following four quar­
ters, the supply shock raises average growth by just 0.4
percentage point, while the demand shock has little fur­
ther effect. In contrast, a money supply shock has a 0.6
percentage point impact over the one to five quarter-ahead period. In addition, the short-lived shocks explain
larger fractions of the GDP forecast error variance—at
the one to five quarter horizon, the supply shock explains
about two-thirds, demand shocks one-fifth, and the mon­
ey supply shock about one-eighth of the variance. Of
course, these figures are only illustrative, as such calcu­
lations are model-specific, notably with regard to the
restrictions used to identify shocks.


REFERENCES

Berger, A., and S. Krane, 1984, “The informational efficiency of econometric model forecasts,” The Review of Economics and Statistics, Vol. 67, No. 1, pp. 128-134.

Campbell, J., and R. Shiller, 2001, “Valuation ratios and the long-run stock market outlook: An update,” Cowles Foundation, discussion paper, No. 1295.

Croushore, D., 1993, “Introducing: The Survey of Professional Forecasters,” Business Review, Federal Reserve Bank of Philadelphia, November/December, pp. 3-13.

Evans, C., C. Liu, and G. Pham-Kanter, 2002, “The 2001 recession and the Chicago Fed National Activity Index: Identifying business cycle turning points,” Economic Perspectives, Federal Reserve Bank of Chicago, Vol. 26, No. 3, pp. 26-43.

Fisher, J., 2000, “Forecasting inflation with a lot of data,” Chicago Fed Letter, Federal Reserve Bank of Chicago, No. 151.

Gali, J., 1992, “How well does the IS-LM model fit postwar U.S. data?,” Quarterly Journal of Economics, pp. 709-738.

Gordon, R., 2000, “Does the ‘new economy’ measure up to the great inventions of the past?,” Journal of Economic Perspectives, Vol. 14, No. 4, pp. 49-74.

Harvey, A., 1989, The Econometric Analysis of Time Series, Cambridge, MA: The MIT Press.

Jorgenson, D., M. Ho, and K. Stiroh, 2002, “Projecting productivity growth: Lessons from the U.S. growth resurgence,” Economic Review, Federal Reserve Bank of Atlanta, Third Quarter, pp. 1-13.

Jorgenson, D., and K. Stiroh, 2000, “U.S. economic growth in the new millennium,” Brookings Papers on Economic Activity, Vol. 1, pp. 125-211.

McNees, S., 1995, “An assessment of ‘official’ economic forecasts,” New England Economic Review, Federal Reserve Bank of Boston, July/August, pp. 13-23.

__________, 1992, “How large are economic forecast errors?,” New England Economic Review, Federal Reserve Bank of Boston, July/August, pp. 26-42.

Oliner, S., G. Rudebusch, and D. Sichel, 1995, “New and old models of business investment: A comparison of forecasting performance,” Journal of Money, Credit, and Banking, Vol. 27, pp. 806-826.

Oliner, S., and D. Sichel, 2002, “Information technology and productivity: Where are we now and where are we going?,” Federal Reserve Board, Finance and Economics Discussion Series, No. 2002-29.

__________, 2000, “The resurgence of growth in the late 1990s: Is information technology the story?,” Journal of Economic Perspectives, Vol. 14, No. 4, pp. 3-22.

Romer, C., and D. Romer, 2000, “Federal Reserve information and the behavior of interest rates,” American Economic Review, Vol. 90, No. 3, pp. 429-457.

Schuh, S., 2001, “An evaluation of recent macroeconomic forecast errors,” New England Economic Review, Federal Reserve Bank of Boston, January/February, pp. 35-56.


Inflation and monetary policy in the twentieth century

Lawrence J. Christiano and Terry J. Fitzgerald

Introduction and summary
Economists continue to debate the causes of inflation.
One reason for this is that bad economic outcomes are
frequently accompanied by anomalous inflation behav­
ior. The worst economic performance in the U.S. in the
twentieth century occurred during the Great Depression
of the 1930s, and there was a pronounced deflation at
that time. Economic performance in the U.S. in the
1970s was also weak, and that was associated with a
pronounced inflation.
So, what is it that makes inflation sometimes high
and sometimes low? In one sense, there is widespread
agreement. Most economists think that inflation cannot
be unusually high or low for long, without the fuel of
high or low money growth.1 But, this just shifts the ques­
tion back one level. What accounts for the anomalous
behavior of money growth?
Academic economists attempting to understand
the dynamics of inflation pursue a particular strategy.
They start by studying the dynamic characteristics of
inflation data, as well as of related variables. These char­
acteristics represent a key input into building and re­
fining a model of the macroeconomy. The economist’s
model must not only do a good job in capturing the be-
havior of the private economy, but it must also explain
the behavior of monetary authorities. The hoped-for final
product of this research is a model that fits the facts
well. Implicit in such a model is an “explanation” of
the behavior of inflation, as well as a prescription for
what is to be done to produce better outcomes.2
To date, much research has focused on data from
the period since World War II. For example, considerable
attention and controversy have been focused on the
apparent inflation “inertia” in these data: the fact that
inflation seems to respond only with an extensive delay
to exogenous shifts in monetary policy.3 We argue that
much can be learned by incorporating data from the
first half of the century into the analysis. The data from


the early part of the century behave quite differently
in many ways from the data we are accustomed to study­
ing. In particular, we emphasize four differences be­
tween the pre- and post-war data:4
■ Inflation is much more volatile, and less persistent,
in the first half of the twentieth century.
■ Average inflation is lower in the first half of the
century.
■ Money growth and inflation are coincident in the
first half of the century, while inflation lags money
by about two years in the second half.
■ Finally, inflation and unemployment are strongly
negatively related in the first half of the century,
while in the second half a positive relationship
emerges, at least in the lower frequency components
of the data.

These shifts in the behavior of inflation constitute
potentially valuable input in the quest for a good model.
The outline of our article is as follows. To set the
background, we begin with a brief, very selective, over­
view of existing theories about inflation. We divide the
set of theories into two groups: those that focus on
“people” and those that focus on “institutions.” We
describe the very different implications that each group
of theories has for policy. We then turn to documenting
the facts listed above. After that, we review the implica­
tions of the facts for theories. We focus in particular
on the institution view. According to this view, what

Lawrence J. Christiano is a professor of economics at
Northwestern University, a research fellow at the National
Bureau of Economic Research (NBER), and a consultant
to the Federal Reserve Bank of Chicago. He acknowledges
support from a National Science Foundation grant to the
NBER. Terry J. Fitzgerald is a professor of economics at
St. Olaf College and a consultant to the Federal Reserve
Bank of Minneapolis.


is crucial to achieving good inflation outcomes is the
proper design of monetary policy institutions. Our dis­
cussion reviews ideas initially advanced by Kydland
and Prescott (1977) and later developed further by Barro
and Gordon (1983a, b), who constructed a beautifully
simple model for expositing the ideas. We show that
the Barro-Gordon model does very well at understand­
ing the second and fourth facts above concerning in­
flation in the twentieth century.5 We also discuss the
well-known fact that that model has some difficulty
in addressing the disinflation that occurred in the U.S.
in the 1980s. This and other considerations motivate
us to turn to modern representations of the ideas of
Kydland-Prescott and Barro-Gordon. While this work
is at an early stage, it does contain some surprises and
may lead to improved theories that provide a better
explanation of the inflation facts.

Ideas about inflation: People versus
institutions
Economists are currently pursuing several theories
for understanding inflation behavior. However, the
theories are still in their infancy and are best thought
of as “prototypes”: They are too simple to be credibly
thought of as fitting the facts well. Although these re­
search programs are still at an early stage, it is possible to
see two visions emerging. Each has different implications
for what needs to be done to achieve better inflation out­
comes. To understand what is at stake in this research,
it is interesting to sketch the different visions. Our loose
names for the competing visions are the people vision
on the one hand and the institution vision on the other.
Although it is not the case that all research neatly falls
into one or the other of these categories, they are never­
theless useful for spelling out the issues.
Under the people vision, bad inflation outcomes
of the past reflect the honest mistakes of well-mean­
ing central bankers trying to do what is inherently a
very difficult job. For example, Orphanides (1999)
has argued that the high inflation of the 1970s reflects
that policymakers viewed the low output of the time
as a cyclical phenomenon, something monetary poli­
cy could and should correct. However, in retrospect
we now know that the poor economic performance of
the time reflected a basic productivity slowdown that
was beyond the power of the central bank to control.
According to Orphanides, real-time policymakers under
a mistaken impression about the sources of the slow­
down did their best to heat up the economy with high
money growth. To their chagrin, they got only high
inflation and no particular improvement to the econ­
omy. From this perspective, the high inflation of the
1970s was a blunder.


Another explanation of the high inflation of the
1970s that falls into what we call the people category
appears in Clarida, Gali, and Gertler (1998). They char­
acterize monetary policy using a framework advocated
by Taylor (1993): Fed policy implements a “Taylor rule”
under which it raises interest rates when expected in­
flation is high, and lowers them when expected infla­
tion is low. According to Clarida, Gali, and Gertler,
the Fed’s mistake in the 1970s was to implement a ver­
sion of the Taylor rule in which interest rates were
moved too little in response to movements in expected
inflation. They argue that this type of mistake can ac­
count for the inflation take-off that occurred in the U.S.
in the 1970s.6 In effect, the root of the problem in the
1970s lay in a bad Taylor rule. According to the insti­
tution view, limitations on central bankers’ technical
knowledge about the mechanics of avoiding high in­
flation are not the key reason for the bad inflation out­
comes that have occurred in the past. This view
implicitly assumes that achieving a given inflation
target over the medium run is not a problem from a
technical standpoint. The problem, according to this
view, has to do with central bankers’ incentives to keep
inflation on track and the role of government institu­
tions in shaping those incentives.
The institution view—initiated by Kydland and
Prescott (1977) and further developed by Barro and
Gordon (1983a, b)—focuses on a particular vulnera­
bility of central banks in democratic societies (see
figure 1). If people expect inflation to be high (A), they
may take protective actions (B), which have the effect
of placing the central bank in a dilemma. On the one
hand, it can accommodate the inflationary expectations
with high money growth (C). This has the cost of pro­
ducing inflation, but the advantage of avoiding a re­
cession. On the other hand, the central bank can keep
money growth low and prevent the inflation that people
expect from occurring (D). This has the cost of pro­
ducing a recession, but the benefit that inflation does
not increase. Central bankers in a democratic society
will be tempted to accommodate (that is, choose C)
when confronted with this dilemma. If people think
this is the sort of central bank they have, this increas­
es the likelihood that A will occur in the first place.
So, what is at stake in these two visions, the people
vision versus the institution vision? Each has different
implications for what should or should not be done to
prevent bad inflation outcomes in the future. The people
vision implies that more and better research is need­
ed to reduce the likelihood of repeating past mistakes.
This research focuses more on the technical, opera­
tional aspect of monetary policy. For example, research
motivated by the Clarida, Gali, and Gertler argument


FIGURE 1

Central banker in a democratic society

focuses on improvements in the design of the Taylor rule
to ensure that it does not become part of the problem. The
institutional perspective, not surprisingly, asks how better
to design the institutions of monetary policy to achieve
better outcomes. This type of work contemplates the con­
sequences of, say, a legal change that makes low infla­
tion the sole responsibility of the Federal Reserve. Other
possibilities are the type of employment contracts tried in
New Zealand, which penalize the central bank governor
for poor inflation outcomes. The basic idea of this liter­
ature is to prevent scenarios like A in figure 1 from occur-
ring, by convincing private individuals that the central
bank would not choose C in the event that A did occur.
In this article, we start by presenting data on infla­
tion and unemployment and documenting how those
data changed before and after the 1960s. We argue that
these data are tough for standard versions of theories
that posit a time consistency problem in monetary
policy. We then discuss whether there may be other ver-
sions of these theories that do a better job at explain­
ing the facts.

The data
This section describes the basic data on inflation
and related variables and documents the observations
listed in the introduction. First, we study the relation­
ship between unemployment and inflation; then we
turn to money growth and inflation.
Unemployment and inflation
To show the difference between data in the first
and second parts of the twentieth century, we divide


the dataset into the periods before and after 1960. To
better characterize the movements in the data, we break
the data down into different frequency components.
The techniques for doing this, reviewed in Christiano
and Fitzgerald (1998), build on the observation that
any data series of length, say T, can be represented
exactly as the sum of T/2 artificial data series exhibiting
different frequencies of oscillation. Each data series
has two parameters: One controls the amplitude of
fluctuation and the other, phase. The parameters are
chosen so that the sum over all the artificial data series
precisely reproduces the original data. Adding over
just the data series whose frequencies lie inside the
business cycle range of frequencies yields the business
cycle component of the original data. We define the
business cycle frequencies as those that correspond to
fluctuations with period between two and eight years.
We also consider a lower frequency component of the
data, corresponding to fluctuations with period between
eight and 20 years. We consider a very low frequency
component of the data, which corresponds to fluctua­
tions with period of oscillation between 20 and 40 years.
Finally, for the post-1960 data when quarterly and
monthly observations are available, we also consider the
high frequency component of the data, which is com­
posed of fluctuations with period less than two years.7
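As a rough illustration of this kind of decomposition, the sketch below projects a series onto a band of frequencies with a discrete Fourier transform. This is a simplified stand-in for the band-pass filtering methods of Christiano and Fitzgerald (1998), not their filter, and the data are random placeholders:

```python
# A crude frequency-domain decomposition: keep only the fluctuations
# whose period (in years) lies between lo and hi, by zeroing all other
# frequencies in the discrete Fourier transform.
import numpy as np

def bandpass(x, lo, hi):
    """Return the component of x with periods between lo and hi years."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0)          # cycles per year
    X = np.fft.rfft(x - x.mean())
    keep = (freqs >= 1.0 / hi) & (freqs <= 1.0 / lo)
    X[~keep] = 0.0
    return np.fft.irfft(X, n)

rng = np.random.default_rng(1)
inflation = rng.normal(size=61)                # placeholder annual series
unemployment = rng.normal(size=61)

# Business cycle components (2- to 8-year periods) and their
# contemporaneous correlation, the statistic reported in table 1.
infl_bc = bandpass(inflation, 2, 8)
unem_bc = bandpass(unemployment, 2, 8)
print(np.corrcoef(infl_bc, unem_bc)[0, 1])
```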
We begin by analyzing the data from the first part
of the century. The raw data are displayed in figure 2,
panel A. That figure indicates that there is a negative
relationship between inflation and unemployment. This
is confirmed by examining the scatter plot of inflation
and unemployment in figure 2, panel B, which also

1Q/2003, Economic Perspectives

shows a negative relationship (that is, a Phillips curve).8
The regression line displayed in figure 2, panel B high­
lights this negative relationship.9 Figure 2, panels C,
D, and E exhibit the different frequency components
of the data. Note that a negative relationship is appar­
ent at all frequency components. The contemporaneous
correlations between different frequency components
of the inflation and unemployment data are reported
in table 1. In each case, the number in parentheses is
a p-value for measuring whether the indicated corre-
lation is statistically different from zero. For example,
a p-value less than 0.05 indicates that the indicated
correlation is statistically different from zero at the 5
percent level.10 The negative correlation in the business
cycle frequencies is particularly significant.
We analyze the post-1960 monthly inflation and
unemployment data in figure 3, panels A-F.11 There is
a sense in which these data look similar to what we
saw for the early period, but there is another sense in
which their behavior is quite different. To see the simi­
larity, note from the raw data in figure 3, panel A that
for frequencies in the neighborhood of the business
cycle, inflation and unemployment covary negatively.
That is, the Phillips curve seems to be a pronounced
feature of the higher frequency component of the data.
At the same time, the Phillips curve appears to have
vanished in the very lowest frequencies. The data in
figure 3, panel A show a slow trend rise in unemploy­
ment throughout the 1960s and 1970s, which is reversed
starting in early 1983. A similar pattern occurs in in­
flation, though the turnaround in inflation begins in
April 1980, roughly three years before the turnaround
in unemployment. The low frequency component of
the data dominates in the scatter plot of inflation versus
unemployment, exhibited in figure 3, panel B. That
figure suggests that the relationship between inflation
and unemployment is positive, in contrast with the
pre-1960s data, which suggest otherwise (see figure 2,
panel B).12

We can formalize and quantify our impressions
based on casual inspection of the raw data using fre­
quency components of the data, as reported in figure 3,
panels C-F. Thus, the frequency ranges corresponding
to periods of oscillation between two months and 20
years (see figure 3, panels C-E) are characterized by a
noticeable Phillips curve. Table 1 shows that the corre­
lation in the range of high frequencies (when available)
and in the business cycle frequencies is significantly
negative. The correlation between inflation and unem­
ployment is also negative in the 8-20 year range, but
it is not statistically significantly different from zero
in this case. Presumably, this reflects the relative paucity
of information about these frequencies in the post-1960s
data. Finally, figure 3, panel F indicates that the cor­
relation between 20 and 40 year components is now
positive, with unemployment lagging inflation. These
results are consistent with the hypothesis that the Phillips
curve changed relatively little in the 2-20 year frequency
range, and that the changes that did occur are primarily
concentrated in the very low frequencies. Formal tests
of this hypothesis, shown in table BI in box 1, fail to
reject it.
Some of the observations reported above have been
reported previously. For example, the low-frequency
observations on unemployment have been document­
ed using other methods in Barro (1987, Chapter 16).
Also, similar frequency extraction methods have been
used to detect the presence of the Phillips curve in the
business cycle frequency range.13 What has not been doc­
umented is how far the Phillips curve extends into the
lowest frequencies. In addition, we show that inflation
leads unemployment in the lowest frequency range.
Finally, we noted in the introduction that inflation
in the early part of the century was more volatile and
less persistent than in the second part. We can see this
by comparing figure 2, panel A with figure 3, panel A.
We can see the observation on volatility by compar­
ing the scales on the inflation portion of the graphs.

TABLE 1
CPI inflation and unemployment correlations

Sample                           High           Business cycle   8-20            20-40
                                 frequency      frequency        years           years

1900-60 (annual)                                -0.57 (0.00)     -0.32 (0.19)    -0.51 (0.23)
1961-97 (annual)                                -0.38 (0.11)     -0.16 (0.41)     0.45 (0.32)
1961:Q2-97:Q4 (quarterly)        -0.37 (0.00)   -0.65 (0.00)     -0.30 (0.29)     0.25 (0.34)
1961, Jan.-97, Dec. (monthly)    -0.24 (0.00)   -0.69 (0.00)     -0.27 (0.30)     0.23 (0.40)

Notes: Contemporaneous correlation over indicated sample periods and frequencies. Numbers in parentheses are p-values, in decimals, against the null hypothesis of zero correlation at all frequencies. For further details, see the text and notes 7 and 10.


FIGURE 2

Unemployment and inflation, 1900-60

A. The unemployment rate and the inflation rate
B. Unemployment versus inflation
C. Frequency of 2 to 8 years
D. Frequency of 8 to 20 years
E. Frequency of 20 to 40 years

Note: Shaded areas indicate recessions as defined by the National Bureau of Economic Research. The black line indicates inflation and the green line indicates unemployment.
Source: Authors' calculations based upon data from the U.S. Department of Labor, Bureau of Labor Statistics.


FIGURE 3

Unemployment and inflation, 1960-99

A. The unemployment rate and the inflation rate
B. Unemployment versus inflation
C. Frequency of 2 months to 1.5 years
D. Frequency of 1.5 to 8 years
E. Frequency of 8 to 20 years
F. Frequency of 20 to 40 years

Note: Shaded areas indicate recessions as defined by the National Bureau of Economic Research. The black line indicates inflation and the green line indicates unemployment.
Source: Authors' calculations based upon data from the U.S. Department of Labor, Bureau of Labor Statistics.


In the early period, the scale extends from -12 percent
to +18 percent, at an annual rate. In the later sample,
the scale extends over a smaller range, from 0 percent
to 14 percent. In addition, the inflation data in the early
period are characterized by sharp movements followed
almost immediately by reversals in the other direction.
By contrast, in the later dataset, movements in infla­
tion in one direction are less likely to be reversed im­
mediately by movements in the other direction.

Money growth and inflation
We report our results for money growth and infla­
tion in detail in Christiano and Fitzgerald (2003), so here
we just summarize the findings. We display these re­
sults in figure 4, panels A-E and figure 5, panels A-F.
The style of analysis is much the same as for the un­
employment and inflation data.
Consider the data from the early part of the cen­
tury first. Figure 4, panel A shows that money growth
(M2) and inflation move together very closely. The re­
lationship appears to be essentially contemporaneous.
This impression of a positive relationship is confirmed
by the scatter plot between inflation and money growth
in figure 4, panel B. To the eye, the positive relation­
ship in figure 4, panel A appears to be a feature of all
the frequency components of the data. This is confirmed
in figure 4, panels C-E. Here we see the various fre­
quency components of the data and how closely the
data move together in each of them.
Now consider the data from the later part of the cen­
tury. The raw data are reported in figure 5, panel A. The
differences between these data in the early and late parts
of the century are dramatic. At first glance, it may
appear that the two variables, which moved together
so closely in the early sample, are totally unrelated in
the late sample. On closer inspection, the differences
do not seem so great after all. Thus, in the very low
frequencies there does still appear to be a positive re­
lationship. Note how money growth generally rises in
the first part of the late sample, and then falls in the
second part. Inflation follows a similar pattern. It is
in the higher frequencies that the relationship seems
to have changed the most. Whereas in the early sample,
the relationship between the two variables appeared
to be contemporaneous, now there seems to be a sig­
nificant lag. High money growth is not associated im­
mediately with high inflation, but instead is associated
with high inflation several years later. These observa­
tions, which are evident in the raw data, are confirmed
by figure 5, panels B-F. Thus, panel B shows the
scatter plot between money growth and inflation, which
exhibits a positive relationship. Clearly, this positive
relationship is dominated by the low frequency behavior
of the data. It masks the very ditferent behavior that we

The differences in the time series behavior of in­
flation in the first and second parts of the last century
offer a potentially valuable source of information on
the underlying mechanisms that drive inflation. For
example, in the introduction, we talked about the re­
cent literature that focuses on explaining the apparent
inertia in inflation: the tendency for inflation to respond
slowly to shocks. These findings are based on analysis
of data from the second half of the century. We sus­
pect that similar analysis of data for the first part of
the century would find less inertia. This is because we
saw that inflation is less persistent in the early sample,
and its movements are more contemporaneous with
movements in money. These observations provide a
potentially important clue about how the private econ­
omy is put together: Whatever accounts for inflation
inertia in the second part of the century must be some­
thing that was absent in the first part. For example, some
have argued that frictions in the wage-setting process
and variability in the rate of utilization of capital have
the potential to account for the inflation inertia in post­
war data.14 If this is right, then wage-setting frictions
must be smaller in the early sample, or there must have
been greater limitations on the opportunities to achieve
short-term variation in the utilization rate of capital.
The remainder of this section focuses on the change
in the relationship between inflation and unemploy­
ment. At first glance, the change appears to lend sup­
port to the institutions view of inflation, as captured
in the work of Kydland and Prescott (1977) and Barro
and Gordon (1983a, b). A second glance suggests the
evidence is not so supportive after all. Therefore, we
begin with a brief review of the Barro-Gordon model.


Implications of the evidence for
macroeconomic models

Barro-Gordon model
The model comprises two basic relationships. The
first summarizes the private economy. The second sum­
marizes the behavior of the monetary authority. The
private economy is captured by the expectations-augmented Phillips curve, originally associated with
Friedman (1968) and Phelps (1967):
1) $u - u^N = -\alpha(\pi - \pi^e)$, $\alpha > 0$.


BOX 1

Formally testing our hypothesis about the Phillips curve
Formal tests of the hypothesis that the Phillips curve
changed relatively little in the 2-20 year frequency
range fail to reject it. Table B1 displays p-values for
the null hypothesis that the post-1960s data on infla­
tion and unemployment are generated by the bivariate
vector autoregression (VAR) that generated the pre-1960s data. We implement the test using 2,000 arti-
ficial post-1960s datasets obtained by simulating a
three-lag VAR and its fitted residuals estimated using
the pre-1960s unemployment and inflation data.1 In
each artificial dataset, we compute correlations be­
tween filtered inflation and unemployment just like we
did in the actual post-1960s data. Table B1 indicates
that 9 percent of correlations between the business
cycle component of inflation and unemployment ex­
ceed the -0.38 value reported in table 1 for the post-1960s data, so that the null hypothesis fails to be
rejected at the 5 percent level. The p-value for the
8-20 year correlation is quite large and is consistent
with the null hypothesis at any standard significance
level.
The statistical evidence against the null hypoth­
esis that there has been no change in the 20-40 year
component of the data is also not strong. This may in
part reflect a lack of power stemming from the rela­
tively small amount of information in the sample
about the 20-40 year frequency component of the data.
But, the p-value may also be overstated for bias rea-
sons. The table indicates that there is a small sample
bias in this correlation, since the small sample mean,
-0.35, is substantially larger than the corresponding
probability limit of -0.45. A bias-adjustment proce-
dure would adjust the coefficients of the estimated

pre-1960s VAR so that the implied small sample mean
lines up better with the pre-1960s empirical estimate of
-0.51. Presumably, such an adjustment procedure would
shift the simulated correlations to the left, reducing
the p-value. It is beyond the scope of our analysis to
develop a suitable bias adjustment method.2 However,
we suspect that, given the large magnitude of the bias,
the bias-corrected p-value would be substantially small-
er than the 14 percent value reported in the table.3
1We redid the calculations in table B1 using a five-lag VAR and
found that the results were essentially unchanged. The only no­
table differences in the results are that the p-value for the busi­
ness cycle correlations between inflation and unemployment is
0.06 and the p-value for these correlations in the 20-40 year
range is 0.11.
2One could be developed along the lines pursued by Kilian (1998).

3To get a feel for the likely quantitative magnitude of the ef-
fects of bias adjustment, we redid the bootstrap simulations by
adjusting the variance-covariance matrix of the VAR distur­
bances used in the bootstrap simulations. Let $V = [V_{ij}]$ denote
the variance-covariance matrix. In the pre-1960s estimation
results, $V_{12} = -0.1024$, $V_{11} = 0.0018$, $V_{22} = 6.0653$. When we
set the value of $V_{12}$ to -0.0588 and recomputed the entries in
table B1 in box 1, we found that the mean correlations were
as follows: business cycle, -0.75 (0.01); 8-20 year, -0.54 (0.09);
and 20-40 year, -0.51 (0.06). The numbers in parentheses are
the analogs of the p-values in table B1. Note how the mean cor­
relation in the 20-40 year frequency coincides with the empiri­
cal estimate reported in the first row of table 1, and that the
p-value has dropped substantially, from 0.23 to 0.06. This is
consistent with our conjecture that bias adjustment may have
an important impact on the p-value for the 20-40 year correla-
tion. However, the other numbers indicate that the bias adjust­
ment procedure that we applied, by varying $V_{12}$ only, is not a
good one. Developing a superior bias adjustment method is
clearly beyond the scope of this article.

TABLE B1
Testing null hypothesis that post-1960s equal pre-1960s correlations

Frequency      Plim      Small sample mean      Standard deviation, small sample mean      p-value

2-8 year       -0.66     -0.61                  0.0036 × √2000                             0.09
8-20 year      -0.36     -0.38                  0.0079 × √2000                             0.25
20-40 year     -0.45     -0.35                  0.0129 × √2000                             0.14

Notes: Data-generating mechanism in all cases is a three-lag, bivariate VAR fit to pre-1960s data. p-value: frequency, in 2,000 artificial post-1960s datasets, that the contemporaneous correlation between the indicated frequency components of x and y exceeds, in absolute value, the corresponding post-1960s estimate. Plim: mean, over 1,000 artificial samples of length 2,000 observations each, of the correlation. Small sample mean: mean of the correlation across 2,000 artificial post-1960s datasets. Standard deviation, small sample (product of the Monte Carlo standard error for the mean and √2000): standard deviation of the correlations across 2,000 artificial post-1960s datasets.
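In outline, the bootstrap behind table B1 can be sketched as follows (Python, with placeholder data; the filter is the simplified frequency-domain projection from the earlier sketch, and the article's actual implementation may differ in details):

```python
# Schematic bootstrap: fit a three-lag bivariate VAR to pre-1960s
# inflation/unemployment, resample its residuals to build 2,000
# artificial post-1960s samples, and count how often the filtered
# correlation is more negative than the post-1960s estimate (-0.38
# for the business cycle component). Data here are placeholders.
import numpy as np

def bandpass(x, lo, hi):
    n = len(x)
    f = np.fft.rfftfreq(n)
    X = np.fft.rfft(x - x.mean())
    X[(f < 1.0 / hi) | (f > 1.0 / lo)] = 0.0
    return np.fft.irfft(X, n)

rng = np.random.default_rng(2)
pre = rng.normal(size=(61, 2))      # placeholder pre-1960s data (infl., unemp.)

# Fit the VAR(3) by least squares: y_t = c + A1 y_{t-1} + ... + A3 y_{t-3} + e_t.
p = 3
T = len(pre)
Y = pre[p:]
X = np.hstack([np.ones((T - p, 1))] + [pre[p - j:T - j] for j in range(1, p + 1)])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ B

count, n_post, n_sim = 0, 37, 2000  # 37 post-1960s years, 2,000 replications
for _ in range(n_sim):
    y = list(pre[-p:])              # start each simulation from the last lags
    for _ in range(n_post):
        lags = np.concatenate([[1.0]] + [y[-j] for j in range(1, p + 1)])
        y.append(lags @ B + resid[rng.integers(len(resid))])
    sim = np.array(y[p:])
    c = np.corrcoef(bandpass(sim[:, 0], 2, 8), bandpass(sim[:, 1], 2, 8))[0, 1]
    count += c <= -0.38
print(f"bootstrap p-value: {count / n_sim:.2f}")
```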


FIGURE 4

Measuring money growth and inflation, 1900-60

A. The M2 growth rate and the inflation rate
B. M2 growth versus inflation
C. Frequency of 2 to 8 years
D. Frequency of 8 to 20 years
E. Frequency of 20 to 40 years

Note: Shaded areas indicate recessions as defined by the National Bureau of Economic Research.
Source: Authors' calculations based upon data from the Federal Reserve System and the U.S. Department of Labor, Bureau of Labor Statistics.


FIGURE 5

Measuring money growth and inflation, 1960-99

A. The M2 growth rate and the inflation rate
B. M2 growth versus inflation
C. Frequency of 2 months to 1.5 years
D. Frequency of 1.5 to 8 years
E. Frequency of 8 to 20 years
F. Frequency of 20 to 40 years

Note: Shaded areas indicate recessions as defined by the National Bureau of Economic Research.
Source: Authors' calculations based upon data from the Federal Reserve System and the U.S. Department of Labor, Bureau of Labor Statistics.


Here, $u$ is the actual rate of unemployment, $u^N$ is
the natural rate of unemployment, $\pi$ is the actual rate
of inflation, and $\pi^e$ is the rate of inflation expected by
the private sector. The magnitude of $\alpha$ controls how
much the actual rate of unemployment falls below its
natural rate when inflation is higher than expected. The
natural rate of unemployment is the unemployment rate
that would occur if there were no surprise in inflation.
The natural rate of unemployment is exogenous to the
model, evolving in response to developments in unem­
ployment insurance, social attitudes toward the unem­
ployed, and other factors.
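For reference, the expectations-augmented Phillips curve being described here (equation 1, which appears earlier in the article) has, in its conventional rendering, the form

u = u^N - \alpha\left(\pi - \pi^e\right), \qquad \alpha > 0,

so that unemployment falls below u^N exactly when inflation exceeds what was expected.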
Note that according to the expectations-augmented Phillips curve, if the monetary authority raises inflation above what people expected, then unemployment is below its natural rate. The mechanism by which this occurs is not explicit in the model, but one can easily imagine how it might work. For example, π^e might be the inflation rate that is expected at the time wage contracts are set. Suppose that expectations of inflation are low, so that firms and workers agree to low nominal wages. Suppose that the monetary authority decides—contrary to expectations at the time wage contracts are written—to increase inflation by raising money growth. Given that wages in the economy have been pre-set at a low level, this translates into a low real wage, which encourages firms to expand employment and thereby reduce unemployment.15
The second part of the Barro-Gordon model summarizes the behavior of the monetary authority, which chooses π. Although the model does not specify the details of how this control is implemented, we should think of it happening via the monetary authority's control over the money supply. At the time that the monetary authority chooses π, the value of π^e is predetermined. If the monetary authority can move π above π^e, then, according to the expectations-augmented Phillips curve, unemployment would dip below the natural rate. It is assumed that the monetary authority wishes to push the unemployment rate below its natural rate, and this is captured by the notion that it would like to minimize:

2)

\frac{1}{2}\left[\left(u - ku^N\right)^2 + \gamma\pi^2\right], \qquad \gamma > 0, \; k < 1.

The first term in parentheses indicates that, ideally,
the monetary authority would like u = ku^N < u^N. The
model does not specify exactly why the monetary author­
ity wants unemployment below the natural rate. In prin­
ciple, there are various factors that could rationalize this.
For example, the presence of distortionary taxes or mo­
nopoly power could make the level of economic activity
inefficiently low, and this might translate into a natu­
ral rate of unemployment that is suboptimally high.


In practice, the monetary authority would not necessarily go for the ideal level of unemployment, because the increase in π that this requires entails costs. These are captured by the γπ² term in the objective. According to this term, the ideal level of inflation is zero.16 The higher the level of inflation, the higher the marginal cost.
The Barro-Gordon model views the monetary authority as choosing π to optimize its objective, subject to the expectations-augmented Phillips curve and to the given value of π^e. The optimal choice of π reflects a balancing of the benefits and costs summarized in the monetary authority's objective. A graph of the best response function appears in figure 6, where π^e appears on the horizontal axis, and π appears on the vertical. The 45-degree line in the figure conveniently shows the level of inflation that the policymaker would select if it chose to validate private expectations of inflation.
Note how the best response function is flatter than the 45-degree line. This reflects the increasing marginal cost of inflation at higher levels of inflation. At low levels of expected inflation, the marginal cost of inflation is low, so the benefits outweigh the costs. At such an inflation rate, the monetary authority would try to surprise the economy by moving to a higher level. On the other hand, if expected inflation were very high, then the marginal cost of going even higher would outweigh the benefits, and the monetary authority would choose to violate expectations by choosing a lower inflation rate. Not surprisingly, there is an inflation rate in the middle, π*, where the monetary authority chooses not to surprise the economy at all. This is the inflation rate where the best response function crosses the 45-degree line. Because of the linear nature of the expectations-augmented Phillips curve and the quadratic form of monetary authority preferences, the best response function is linear, guaranteeing that there is a single crossing.
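For readers who want the algebra behind figure 6: substituting the Phillips curve into the objective in equation 2 and minimizing over π gives the best response function. This short derivation is consistent with the equations as given, though it is not spelled out in the text:

\min_{\pi}\ \tfrac{1}{2}\Big[\big((1-k)u^N - \alpha(\pi - \pi^e)\big)^2 + \gamma\pi^2\Big]
\quad\Longrightarrow\quad
\pi = \frac{\alpha^2}{\gamma + \alpha^2}\,\pi^e + \frac{\alpha(1-k)}{\gamma + \alpha^2}\,u^N .

The slope α²/(γ + α²) is strictly less than one, which is why the best response function is flatter than the 45-degree line; imposing π = π^e and solving gives the crossing point π* = α(1 − k)u^N/γ = ψu^N, the equilibrium inflation rate of equation 3 below.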
What is equilibrium in the model? We assume everyone—the monetary authority and the private economy—is rational. In particular, the private economy understands the monetary authority's policymaking process. It knows that if it were to have expectations, π^e < π*, then actual inflation would be higher than π^e. So, it cannot be rational to have an expectation like this. It also understands that if it were to have expectations, π^e > π*, the monetary authority would choose an inflation rate lower than π^e. So, this expectation cannot be rational either. The only rational thing for the private economy to expect is π*. So, this is equilibrium in the model. The formula for this is

3)

\pi^* = \psi u^N, \qquad \psi = \frac{\alpha(1-k)}{\gamma} > 0.


FIGURE 6

Effect on macroeconomic models

According to the model, inflation is predicted to
be proportional to the actual level of unemployment.
There are several crucial things to note here. First, the
actual level of unemployment is equal to the natural rate,
because in equilibrium the monetary authority cannot
surprise the private economy. So, monetary policy in
practice does not succeed in driving unemployment be­
low the natural rate at all. Second, inflation is positive,
being proportional to unemployment. This is higher
than its ideal level, here presumed to be zero. These two
observations imply that in equilibrium, all the monetary
authority succeeds in doing is producing an inflation
rate above its ideal level. It makes no headway on unem­
ployment. That is, this optimizing monetary authority
simply succeeds in producing suboptimal outcomes.
How is this possible?
The problem is that the monetary authority lacks
the ability to commit to low inflation. At the time the
monetary authority makes its decision, the private econ­
omy has already formed its expectation about inflation.
The private economy knows that if it expects inflation
to occur at the socially optimal level, π^e = 0, then the
monetary authority has an incentive to deviate to a
higher level of inflation (see figure 6).17
Eggertsson (2001) has recently drawn attention
to one of Aesop’s fables, which captures aspects of the
situation nicely. Imagine a lion that has fallen into a
deep pit. Unless it gets out soon, it will starve to death.
A rabbit shows up and the lion implores the rabbit to
push a stick lying nearby into the hole, so that the


lion can climb out. The lion cries out from
the depths of its soul, with a most solemn
commitment not to eat the (juicy-looking)
rabbit once it gets out. But, the rabbit is
skeptical. It understands that the intentions
announced by the lion while in the hole
are not time consistent. While in the hole,
the lion has the incentive to declare, with
complete sincerity, that it will not eat the
rabbit when it gets out. However, that plan
is no longer optimal for the lion when it
is out of the hole. At this point, the lion’s
optimal plan is to eat the rabbit after all.
The rational rabbit, who understands the
time inconsistency of the lion’s optimal
plan, would do well to leave the lion where
it is. What the lion would like while it is
in the hole is a commitment technology:
something that convinces the rabbit that
the lion will have no incentive or ability
to change the plan it announces from the
hole after it is out.
In some respects, the rabbit and the
lion resemble the private economy and the monetary
authority in the Barro-Gordon model. Before π^e is chosen, the monetary authority would like people to believe that it will choose π = 0. The problem is that after the private economy sets π^e = 0, the monetary authority has an incentive to choose π > π^e (see figure 6). As in the fable, what the monetary authority needs is some sort of commitment technology, something that convinces private agents that if they set π^e = 0, the monetary authority has no incentive or ability to deviate to π > 0. Rational agents in an economy where the monetary authority has no such commitment technology do well to set π^e = π* > 0. This puts the monetary authority in the dilemma discussed in the introduction. Its optimal choice in this case is to validate expectations by setting π = π* (that is, it chooses C in figure 1).
The crucial point of Kydland-Prescott and Barro-Gordon is that if the monetary authority has a credible
commitment to low inflation, then better outcomes
would occur than if it has no such ability to commit.
In both cases, the same level of unemployment occurs
(that is, the natural rate), but the authority with com­
mitment achieves the ideal inflation rate, while the mone­
tary authority without commitment achieves a socially
suboptimal higher inflation rate. The problem, as with
the lion in the fable, is coming up with a credible com­
mitment technology. The commitment technology must
be such that the monetary authority actually has no in­
centive to select a high inflation rate after the private
economy selects π^e.


What makes adopting a commitment technology
particularly difficult is that the monetary authority’s
preferences in Barro-Gordon (unlike the lion’s pref­
erences in the fable) are fundamentally democratic pref­
erences: They reflect actual social costs and benefits.
Credible commitment technologies must involve basic
changes in monetary institutions, which make them,
in effect, less democratic. Changes that have been adopt­
ed in practice are the legal and other mechanisms that
make central banks independent from the administra­
tive and legislative branches of government. The classic
institutional arrangement used to achieve commitment
has been the gold standard. Tying the money supply
to the quantity of gold greatly limits the ability of the
central bank to manipulate π.

Barro-Gordon and the data
The Barro-Gordon model is surprisingly effective
at explaining key features of the inflation-unemploy­
ment relationship during the twentieth century. It is
perhaps reasonable to suppose that the U.S. monetary
authorities more closely resembled the monetary authori­
ty with commitment in the Barro-Gordon model in the
early part of the last century and more closely resem­
bled the monetary authority without commitment in
the last part of the century. After World War II, the U.S.
government resolved that all branches of government—
including the Federal Reserve—should be committed
to the objective of full employment. This commitment
reflected two views. The first view, apparently validat­
ed by the experience of the Great Depression, is that
activist stabilization policy is desirable. It was codified
into law by the Full Employment Act of 1946. The
second view, associated with the intellectual revolution
of John Maynard Keynes, is that successful activist
stabilization policy is feasible. This view was firmly
entrenched in Washington, DC, by the time of the ar­
rival of the Kennedy administration in 1961. Kennedy’s
Council of Economic Advisors resembles a “who’s
who” of Keynesian economics.18
The notion that policymakers were committed to
low inflation in the early part of the century and rela­
tively more concerned with economic stabilization later
implies, via the Barro-Gordon model, that inflation
in the late period should have been higher than it was
in the early period. Comparison of figure 4, panel A
and figure 5, panel A shows that this is indeed the case.
Another implication of the model is that inflation should
have been constant at zero in the early period, and this
most definitely was not the case (see figure 4, panel A).19
But, this is not a fundamental problem for the model.
There is a simple, natural timing change in the model
that eliminates this implication, without changing the


central message of the analysis in the previous section.
In particular, suppose that the actions of the central
bank have an impact on inflation only with a p-period
delay, with p > 0. In this way, the monetary authority
is not able to eliminate the immediate impact of shocks
to the inflation rate. The policymaker with commitment
sets the p-period-ahead expected inflation rate to zero.
Suppose that the analogous timing assumption applies
to the private sector, so that there are movements in
inflation that are not expected at the time it sets π^e. Un­
der the expectations-augmented Phillips curve, this
introduces a source of negative correlation between
inflation and unemployment. This sort of delay in the
private sector could be rationalized if wage contracts
extended over p periods of time. Under these timing
assumptions, the prediction of the model under com­
mitment is that the actual inflation rate fluctuates, and
inflation and unemployment covary negatively, as
was actually observed over the early part of the twen­
tieth century. (The appendix analyzes the model with
time delays.) When the monetary authorities drop their
commitment to low inflation in the later part of the
century, the model predicts that unemployment and
inflation move together more closely and that the re­
lationship will actually be positive in the lowest fre­
quencies. In the higher frequencies, the correlation might
still be negative, for the reason that it is negative in
all frequencies when there is commitment: Inflation
in the higher frequencies is hard to control when there
are implementation delays.20 In this sense, the Barro-Gordon model seems at least qualitatively consistent
with the basic facts about what happened to the infla­
tion-unemployment relationship between the first and
second parts of the past century. It is hard not to be
impressed by this.21
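In the notation of the appendix, the delay version under commitment delivers equilibrium laws of motion of the form (a forward pointer to the appendix, where these are derived; ψ is set to zero under commitment):

\pi_t = \theta(L)\eta_t, \qquad u_t = u_t^N - \alpha\,\theta(L)\eta_t .

The unforecastable inflation shocks θ(L)η_t move inflation and unemployment in opposite directions period by period, which is the source of the negative correlation at all frequencies under commitment.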
But, there is one shortcoming of the model that
may be of some concern. Recall from figure 3, panel A
that inflation in the early 1980s dropped precipitously,
just as unemployment soared to a postwar high. This
behavior in inflation and unemployment is so pro­
nounced that it has a substantial impact on the very
low frequency component of the data. According to
figure 3, panel F, the 20-40 year component of unem­
ployment lags the corresponding component of infla­
tion by several years. As a technical matter, it is possible
to square this with the model. The version of the model
discussed in the previous paragraph allows for the
possibility that a big negative shock to the price level—
one that was beyond the control of the monetary au­
thority—occurred that drove actual unemployment
up above the natural rate of unemployment. But the
explanation rings hollow. The model itself implies
that, on average, the low frequency component of


unemployment leads inflation, not the other way around
(see the appendix for an elaboration). This is because
unemployment is related to the incentives to inflate, so
when unemployment rises, one expects inflation to
rise in response. In fact, with the implementation and
observation delays, one expects the rise in inflation
to occur with a delay after a rise in unemployment.
In sum, the Barro-Gordon model seems to provide
a way to understand the change in inflation-unemploy­
ment dynamics between the first and second parts of
the last century. However, the disinflation of the early
1980s raises some problems for the model. That ex­
perience appears to require thinking about the disinflation of the early 1980s as an accident. Yet, to all direct
appearances it was no accident at all. Conventional
wisdom takes it for granted that the disinflation was
a direct outcome of intentional efforts taken by the
Federal Reserve, beginning with the appointment of
Paul Volcker as chairman in 1979. Many observers
interpret this experience as a fundamental embarrass­
ment to the Barro-Gordon model. Some would go
further and interpret this as an embarrassment to the
ideas behind it: the notion that time inconsistency is
important for understanding the dynamics of U.S. in­
flation. They argue that, according to the model, the
only way inflation could fall precipitously absent a
drop in unemployment is with substantial institutional
reform to implement commitment. There was no in­
stitutional reform in the early 1980s, so the institu­
tional perspective must, at best, be of second-order
importance for understanding U.S. inflation.
Alternative representation of the notion that
commitment matters
By the standards of our times, the Barro-Gordon
model must be counted a massive success. Its two
simple equations convey some of the most profound
ideas in macroeconomics. In addition, it accounts nicely
for broad patterns in twentieth century data: the fact
that inflation on average was higher in the second half,
and the changed nature of the unemployment-infla­
tion relationship.
Yet, the model encounters problems understand­
ing the disinflation of the 1980s. Perhaps this is a prob­
lem for the specific equations of the model. But, is it
a problem for the ideas behind the model? We just do
not know yet, because the ideas have not been stud­
ied in a sufficiently wide range of economic models.
Efforts to incorporate the basic ideas of Kydland-Prescott and Barro-Gordon into modern models have
only just begun. This process has been slow, in part
because the computational challenge of this task is
enormous. Indeed, the computational difficulties of


these models serve as another reminder of the power
of the original Barro-Gordon model: With it, the read­
er can reach the core ideas armed simply with a sheet
of paper and a pencil.
Why should we incorporate the ideas into modern
models? First, the ideas have proved enormously pro­
ductive in helping us understand the broad features of
inflation in the twentieth century. This suggests that
they deserve further attention. Second, as we will see
below, when we do incorporate the ideas into modern
models, unexpected results occur. They may provide
additional possibilities for understanding the data. Third,
because modern models are explicitly based on micro­
foundations, they offer opportunities for econometric
estimation and testing that go well beyond what is
possible with the original Barro-Gordon model. In
modern models, crucial parameters like α, k, and γ
are related explicitly to production functions, to fea­
tures of labor and product markets, to properties of
utility functions, and to the nature of information trans­
mission among agents. These linkages make it possi­
ble to bring a wealth of data to bear, beyond data on
just inflation and unemployment. In the original Barro-Gordon model, α, k, and γ are primitive parameters,
so the only way to obtain information on them is us­
ing the data on inflation and unemployment itself.
To see the sort of things that can happen when the
ideas of Kydland-Prescott and Barro-Gordon are in­
corporated into modern models, we briefly summarize
some recent work of Albanesi, Chari, and Christiano
(2002).22 They adapt a version of the classic monetary
model of Lucas and Stokey (1983), so that it incorpo­
rates benefits of unexpected inflation and costs of in­
flation that resemble the factors Barro and Gordon
appeal to informally to justify the specification of their
model. However, because the model is derived using
standard specifications of preferences and technology,
there is no reason to expect that the monetary author­
ity’s best response function is linear, as in the Barro-Gordon model (recall figure 6). Indeed, Albanesi, Chari, and Christiano find that, for almost all parameterizations of the model, if there is any equilibrium at all
there must be two. That is, the best response function
is nonlinear, and has the shape indicated in figure 7.
In one respect, it should not be a surprise that there
might be multiple equilibriums in a Barro-Gordon
type model. Recall that an equilibrium is a level of in­
flation where benefits of additional unexpected infla­
tion just balance the associated costs. But we can expect
that these costs and benefits change nonlinearly for
higher and higher levels of inflation. If so, then there
could be multiple levels of inflation where equilibrium
occurs, as in figure 7.


There is one version of the Albanesi-Chari-Christiano model in which the in­
tuition for the multiplicity is particularly
simple. In that version, private agents
can, at a fixed cost, undertake actions to
protect themselves against inflation. In
principle, such actions may involve ac­
quiring foreign currency deposits for use
in transactions. Or, they may involve
fixed costs of retaining professional assis­
tance in minimizing cash balances when
inflation is high. Although these efforts
are costly for individuals, they do mean
that on the margin, the costs of inflation
are reduced from the perspective of a be­
nevolent monetary authority. Turning to
figure 7, one might imagine that at low
levels of inflation, the basic Barro-Gordon
model applies. People do not undertake
fixed costs to protect themselves against
inflation, and the best response function
looks roughly linear, cutting the 45-degree line at the lower level of inflation in­
dicated in the figure. At higher levels of inflation,
however, people do start to undertake expensive fixed
costs to insulate themselves. By reducing the marginal
cost of inflation, this has the effect of increasing the in­
centive for the monetary authority to raise inflation. Of
course, this assumes that the benefits of inflation do
not simultaneously decline. In the Albanesi-Chari-Christiano model, in fact they do not decline. This is why
in this version of their model, the best response func­
tion eventually begins to slope up again and, therefore,
to cross the 45-degree line at a higher level of inflation.
The previous example is designed to just present
a flavor of the Albanesi-Chari-Christiano results. In
fact, the shape of the best response function resembles
qualitatively the picture in figure 7, even in the ab­
sence of opportunities for households to protect them­
selves from inflation.
What are the implications of this result? Essen­
tially, there are new ways to understand the fact that
inflation is sometimes persistently high and at other
times (like now) persistently low. In the Barro-Gordon
model, this can only be explained by appealing to a
fundamental variable that shifts the best response func­
tion. The disinflation of the early 1980s suggests that
it may be hard to find such a variable in practice.
But, is a model with multiple equilibriums testable?
Perhaps. Inspection of figure 7 suggests one possibil­
ity. Shocks to the fundamental variables that determine
the costs and benefits of inflation from the perspective
of the monetary authority have the effect of shifting


the best response curve up and down. Notice how the
high-inflation equilibrium behaves differently from
the low-inflation equilibrium as the best response
function, say, shifts up. Inflation in the low-inflation
equilibrium rises, and in the high-inflation equilibri­
um it falls. Thus, these shocks have an opposite cor­
relation with inflation in the two equilibriums. This sign
switch in equilibriums is an implication of the model
that can, in principle, be tested. For example, Albanesi-Chari-Christiano explore the model’s implication that interest rates and output covary positively in the low-inflation equilibrium and negatively in the high-inflation
equilibrium. Using data drawn from over 100 coun­
tries, they find evidence in support of this hypothesis.
But, the Albanesi-Chari-Christiano model is still
too simple to draw final conclusions about the impli­
cations of lack of commitment for the dynamics of
inflation. The model has been kept very simple so that—
like the Barro-Gordon model—it can be analyzed with
a sheet of paper and a pencil (well, perhaps one would
need two sheets of paper!). We know from separate
work on problems with a similar logical structure that
when models are made truly dynamic, say with the
introduction of investment, the properties of equilib­
riums can change in fundamental ways (see, for ex­
ample, Krusell and Smith, 2002). It still remains to
explore the implications of lack of commitment in such
models. In particular, it is important to explore wheth­
er the disinflation experience of the early 1980s, which


appears to be a problem for the Barro-Gordon model,
can be reconciled with modern models.

Conclusion
We characterized the change in the nature of in­
flation dynamics before and after the 1960s. We re­
viewed various theories about inflation, but put special
focus on the institutions view: theories that focus on
lack of commitment in monetary policy as the culprit
behind bad inflation outcomes. We argued that this
view, as captured in the famous model of Barro and
Gordon (1983a, b), accounts well for the broad out­
lines of the data. Not only does it capture the fact that
inflation was, on average, lower in the early period
of the twentieth century than in the later period, but it
also accounts for the shift that occurred in the unem­
ployment-inflation dynamics. In the early period, in­
flation and unemployment exhibit a negative relationship
at all frequency bands. In the later period, the nega­
tive relationship persists in the higher frequency bands,
while a positive relationship emerges in the low fre­
quencies. We show how the Barro-Gordon model can account for this shift as reflecting the notion that
monetary policy was credibly committed to low in­
flation in the early period, while it abandoned that
commitment in the later period.
Although the model does well on these broad facts,
it has some well-known difficulties addressing the dis­
inflation in the U.S. in the 1980s. This, among other
considerations, motivates the recent research on the
implications of absence of commitment in monetary
policy. We show that that research uncovers some sur­
prising—relative to the original Barro-Gordon analy­
sis—implications of lack of commitment. These may
ultimately prove helpful for achieving a better model
of inflation dynamics. But that research has a long way
to go before we fully understand the implications of
absence of commitment in monetary policy.
What is at stake in this work? If absence of com­
mitment is in fact the primary reason for the poor in­
flation outcomes of the past, then research on ways
to improve inflation outcomes needs to focus on im­
proved design of monetary institutions.

NOTES

1 This belief is based in part on the evidence (see, for example, Barsky and Kilian, 2000, for a discussion of the role of money growth in the 1970s inflation). But, it is also based on the view that good economic theory implies a close connection—at least over horizons as long as a decade—between money growth and inflation. Recently, some economists’ confidence in the existence of a close connection between money growth and inflation has been shaken by the discovery, in seemingly well-specified economic models, that the connection can be surprisingly weak. For example, Loyo (1999) uses the “fiscal theory of the price level” to argue that it was a high nominal interest rate that initiated the rise in inflation in Brazil, and that this rise in the interest rate was in a meaningful sense not “caused” by high money growth. Loyo drives home his point that it was not high money growth that caused the high inflation by articulating it in a model in which there is no money. For a survey of the fiscal theory, and of Loyo’s argument in particular, see Christiano and Fitzgerald (2000). Others argue that standard economic theories imply a much weaker link than was once thought between inflation and money growth. For example, Benhabib, Schmitt-Grohe, and Uribe (2001a, b) and Krugman (1998) argue that it is possible for there to be a deflation even in the presence of positive money growth. Christiano and Rostagno (2001) and Christiano (2000) review these arguments, respectively. In each case, they argue that the deflation, high money growth scenario depends on implausible assumptions.

2 This description of economists’ research strategy is highly stylized. In some cases, the model is not made formally explicit. In other cases, the model is explicit, but the data plays only a small role in building confidence in the model.

3 Prominent recent papers that draw attention to the inertia puzzle include Chari, Kehoe, and McGrattan (2000), and Mankiw (2001). Christiano, Eichenbaum, and Evans (2001) describe variants of standard macroeconomic models that can account quantitatively for the inertia.

4 The first, second, and aspects of the fourth observations have been made before. To our knowledge, the third observation was first made in Christiano and Fitzgerald (2003). For a review of the first two observations, see Blanchard (2002). For a discussion of the fourth using data on the second half of the twentieth century, see King and Watson (1994), King, Stock, and Watson (1995), Sargent (1999), Staiger, Stock, and Watson (1997), and Stock and Watson (1998).

5 In this respect, our analysis resembles that of Ireland (1999), although his analysis focuses on data from the second half of the twentieth century only, while we analyze both halves.

6 For a critical review of the Clarida, Gali, and Gertler argument, see Christiano and Gust (2000). Other arguments that fall into what we are calling the people category include Sargent (1999). Sargent argues that, periodically, the data line up in such a way that there appears to be a Phillips curve with a favorable trade-off between inflation and unemployment. High inflation then results as the central bank attempts to exploit this to reduce unemployment. As emphasized in Sargent (1999, chapter 9), the high inflation of the 1970s represents a challenge for this argument. This is because the dominant fact about the early part of this decade was the apparent “death” of the Phillips curve: Policymakers and students of the macroeconomy were stunned by the fact that inflation and unemployment both increased at the time.

7 The different frequency components of the data are extracted using the band pass filter method summarized in Christiano and Fitzgerald (1998) and explained in detail in Christiano and Fitzgerald (2003).

8 It is worth emphasizing that, by “Phillips curve,” we mean a statistical relationship, and not necessarily a relationship exploitable by policy.

9 The slope of the regression line drawn through the scatter plot of points in figure 2, panel B is -0.42, with a t-statistic of 3.77 and an R² of 0.20.
10 Specifically, they are p-values for testing the null hypothesis that there is no relationship at any frequency between the two variables, against the alternative that the correlation is in fact the one reported in the table. These p-values are computed using the following bootstrap procedure. We fit separate q-lag scalar autoregressive representations to the level of inflation (first difference, log CPI) and to the level of the unemployment rate. We used random draws from the fitted disturbances and actual historical initial conditions to simulate 2,000 artificial datasets on inflation and unemployment. For annual data, q = 3; for monthly, q = 12; and for quarterly, q = 8. The datasets on unemployment and inflation are independent by construction. In each artificial dataset, we compute correlations between the various frequency components, as we did in the actual data. In the data and the simulations, we dropped the first and last three years of the filtered data before computing sample correlations. The numbers in parentheses in table 1 are the frequency of times that the simulated correlation is greater than the estimated correlation, if the estimated correlation is positive. If it is negative, we compute the frequency of times that the simulated correlation is less than the estimated value. These are p-values under the null hypothesis that there is no relationship between the inflation and unemployment data.
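A minimal Python sketch of this bootstrap, under stated assumptions: the AR orders and 2,000 replications follow the note, but the filtering step is represented by an idealized FFT-based band-pass function, a stand-in for the Christiano and Fitzgerald (2003) band pass filter actually used in the article.

import numpy as np

rng = np.random.default_rng(0)

def fit_ar(x, q):
    """OLS fit of a q-lag scalar autoregression; returns coefficients
    (intercept first) and residuals."""
    X = np.column_stack([x[q - j - 1:len(x) - j - 1] for j in range(q)])
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, x[q:], rcond=None)
    return beta, x[q:] - X @ beta

def simulate_ar(beta, resid, x0, T, rng):
    """Simulate an AR path using random draws from the fitted residuals
    and actual historical initial conditions x0 (length q)."""
    q = len(beta) - 1
    y = list(x0)
    shocks = rng.choice(resid, size=T)
    for t in range(T):
        y.append(beta[0] + sum(beta[1 + j] * y[-1 - j] for j in range(q)) + shocks[t])
    return np.array(y[q:])

def bandpass(x, lo, hi):
    """Idealized band-pass filter: keep Fourier components with periods
    between lo and hi observations (a stand-in, not the CF filter)."""
    f = np.fft.rfft(x - x.mean())
    periods = np.full(len(f), np.inf)
    periods[1:] = len(x) / np.arange(1, len(f))
    f[(periods < lo) | (periods > hi)] = 0.0
    return np.fft.irfft(f, n=len(x))

def boot_pvalue(infl, unem, q, lo, hi, nboot=2000, drop=3):
    """p-value for the band-component correlation under the null that
    inflation and unemployment are independent processes."""
    comp = lambda x: bandpass(x, lo, hi)[drop:-drop]  # drop filter edges
    est = np.corrcoef(comp(infl), comp(unem))[0, 1]
    b_i, r_i = fit_ar(infl, q)
    b_u, r_u = fit_ar(unem, q)
    sims = np.array([
        np.corrcoef(comp(simulate_ar(b_i, r_i, infl[:q], len(infl) - q, rng)),
                    comp(simulate_ar(b_u, r_u, unem[:q], len(unem) - q, rng)))[0, 1]
        for _ in range(nboot)])
    return np.mean(sims > est) if est > 0 else np.mean(sims < est)

With annual data, for instance, one would call boot_pvalue(infl, unem, q=3, lo=20, hi=40) for the 20-40 year band, where infl and unem are hypothetical arrays holding the raw series.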

11 Figure 3 exhibits monthly observations on inflation and unemployment. To reduce the high frequency fluctuations in inflation, figure 3, panel A exhibits the annual average of inflation, rather than the monthly inflation rate. The scatter plot in figure 3, panel B is based on the same data used in figure 3, panel A. Figure 3, panels C-F are based on monthly inflation, that is, 1,200 log(CPI_t/CPI_{t-1}), and unemployment. The line in figure 3, panel B represents a regression line drawn through the scatter plot. The slope of that line, based on monthly data covering the period 1959:Q2-98:Q1, is 0.47 with a t-statistic of 5.2.
12 Consistent with these observations, when inflation and unemployment are detrended using a linear trend with a break in slope (not level) in 1980:Q4 for inflation and 1983:Q1 for unemployment, the scatter plots of the detrended variables show a negative relationship. The regression of detrended inflation on detrended unemployment has a coefficient of -0.31, with a t-statistic of -4.24 and R² = 0.037. The slope coefficient is similar to what was obtained in note 9 for the pre-1960s period, but the R² is considerably smaller.
13 See King and Watson (1994), Stock and Watson (1998), and Sargent (1999, p. 12), who apply the band-pass filtering techniques proposed in Baxter and King (1999). The relationship between the Baxter-King band-pass filtering methods and the method used here is discussed in Christiano and Fitzgerald (2003).


14 See, for example, Christiano, Eichenbaum, and Evans (2001).

15 In the years since the expectations-augmented Phillips curve was first proposed, evidence has accumulated against it. For example, Christiano, Eichenbaum, and Evans (2001) display evidence that suggests that inflation surprises are not the mechanism by which shocks, including monetary policy shocks, are transmitted to the real economy. Although the details of the mechanism underlying the expectations-augmented Phillips curve seem rejected by the data, the basic idea is still very much a part of standard models. Namely, it is the unexpected component of monetary policy that impacts the economy via the presence of some sort of nominal rigidity.

16 Extending the analysis to the case where the socially optimal level of inflation, π°, is non-zero (even random) is straightforward.

17 In later work, Barro and Gordon (1983a) pointed out that there exist equilibriums in which reputational considerations play a role. In such equilibriums, a monetary authority might choose to validate π^e = 0 out of concern that if it does not do so, then in the next period π^e will be an extremely large number, with the consequence that whatever it does then, the social consequences will be bad. In this article, we do not consider these “trigger strategy” equilibriums, and instead limit ourselves to Markov equilibriums, in which decisions are limited to be functions only of the economy’s current state. In the present model, there are no state variables, and so decisions, π^e and π, are simply constants. A problem with allowing the presence of reputational considerations is that they support an extremely large set of equilibriums. Essentially, anything can happen and the theory becomes vacuous.

18 It would be interesting to understand why earlier monetary authorities were relatively less concerned with stabilizing the economy and more committed, for example, to the gold standard.

19 As mentioned in an earlier note, the model does not require that the optimal level of inflation is literally zero. Implicitly, what we are assuming is that the optimal level of inflation, π° in note 16, is much smoother than the inflation rate actually observed in the early sample.

20 These observations are established in the appendix.

21 The argument we have just made is similar in spirit to the one that appears in Ireland (1999).

22 This builds on previous work by Chari, Christiano, and Eichenbaum (1998).


APPENDIX: INFLATION-UNEMPLOYMENT COVARIANCE FUNCTION IN THE IMPLEMENTATION-DELAY
VERSION OF THE BARRO-GORDON MODEL

This appendix works out the covariance implications of a version of the Barro-Gordon model with implementation delays (implementation delays are discussed in Barro and Gordon [1983b, pp. 601-602]). The particular
version we consider is the one proposed in Ireland (1999). We work out the model’s implications for the type
of frequency-domain statistics analyzed in the text. In particular, we seek the covariance properties of inflation
and unemployment, when we consider only a specified subset of frequency components (high, business cycle,
low, and very low) of these variables.
We obtain two sets of results. One pertains to the commitment version of the model and the other to the
no-commitment version:
■ It is possible to parameterize the commitment version of the model so that the covariance between inflation
and unemployment is negative for all subsets of frequency components.
■ In the no-commitment version of the model, the covariance between inflation and unemployment can be
positive in the very low frequency components of the data and negative in the higher frequency components.
Unemployment does not lag inflation in the very low frequency data, and it may actually lead, depending on
parameter values.

The idea is that policymakers can only influence the p-period ahead forecast of inflation, not actual inflation. With this change, the objective of the policymaker is E[(u − ku^N)² + γπ²]/2. Actual inflation, π_t, is π_t = π̃_t + θ*η_t, where π̃_t is a variable chosen p > 0 periods in the past by the policymaker, and θ*η_t captures the shocks that impact π_t between the time π̃_t is set and π_t is realized. Here,

\theta * \eta_t = \theta(L)\eta_t, \qquad \theta(L) = \sum_{j=0}^{p-1} \theta_j L^j,

where η_t is white noise and L is the lag operator, L^j η_t = η_{t−j}. The policymaker’s problem is optimized by setting π̃_t = ψũ_t^N, where ũ_t^N is the forecast of the period t natural rate of unemployment, made p periods in the past, ũ_t^N = E_{t−p} u_t^N, computable at the time π̃_t is selected, and ψ is defined in the text. Following Ireland (1999), we suppose that u_t^N has a particular unit root time series representation:

(1 - L)u_t^N = \lambda(1 - L)u_{t-1}^N + v_t, \qquad -1 < \lambda < 1.

With this representation,

\tilde{u}_t^N = g * v_t = g(L)v_t,

where

g(L) = \sum_{j=p}^{\infty} \frac{1 - \lambda^{j+1}}{1 - \lambda} L^j = \frac{L^p\left[(1 - \lambda^{p+1}) + (\lambda^{p+1} - \lambda)L\right]}{(1 - \lambda)(1 - \lambda L)(1 - L)}.
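To see where g(L) comes from (the algebra, reconstructed here, is compressed in the original): iterating on the law of motion for u_t^N gives

u_t^N = \frac{v_t}{(1-\lambda L)(1-L)} = \sum_{j=0}^{\infty}\frac{1-\lambda^{j+1}}{1-\lambda}\,v_{t-j},
\qquad
\tilde{u}_t^N = E_{t-p}\,u_t^N = \sum_{j=p}^{\infty}\frac{1-\lambda^{j+1}}{1-\lambda}\,v_{t-j} = g(L)v_t,

since the conditional expectation simply zeroes out the innovations v_{t−p+1}, ..., v_t that are not in the time t − p information set.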

We suppose that π^e in the expectations-augmented Phillips curve is the p-period ahead forecast of inflation made by private agents. We impose rational expectations, π^e = E_{t−p}π_t = π̃_t. Then, it is easy to verify that when there is no commitment, inflation and unemployment evolve in equilibrium according to

\pi_t = \psi g(L)v_t + \theta(L)\eta_t,
\qquad
u_t = \frac{1}{(1-\lambda L)(1-L)}\,v_t - \alpha\,\theta(L)\eta_t,

respectively. We make the simplifying assumption that all shocks are uncorrelated with each other. Outcomes when there is commitment are found by replacing ψ in the above expression with 0. In this case, it is easy to see that the covariance between inflation and unemployment is unambiguously negative. Under no commitment, it is possible for this correlation to be positive.

It is convenient to express the joint representation of the variables as follows:

x_t \equiv \begin{bmatrix}\pi_t\\ u_t\end{bmatrix} = F(L)\varepsilon_t, \qquad \varepsilon_t \equiv \begin{bmatrix}v_t\\ \eta_t\end{bmatrix},

where

F(L) = \begin{bmatrix}\psi g(L) & \theta(L)\\[4pt] \dfrac{1}{(1-\lambda L)(1-L)} & -\alpha\,\theta(L)\end{bmatrix}.

Denote the covariance function of x_t by

C(k) = E\,x_t x_{t-k}' = \begin{bmatrix} E\pi_t\pi_{t-k} & E\pi_t u_{t-k}\\ E u_t\pi_{t-k} & E u_t u_{t-k}\end{bmatrix},

for k = 0, ±1, ±2, ... . We want to understand the properties of the covariance function, Eπ̂_t û_{t−k}, where π̂_t is the component of π_t in a subset of frequencies, and û_t is the component of u_t in the same subset of frequencies. For this, some results in spectral analysis are useful (see Sargent [1987, chapter 11], or, for a simple review, see Christiano and Fitzgerald [1998]). The spectral density of a stochastic process at frequency ω ∈ (−π, π) is the Fourier transform of its covariance function:

S(\omega) = \sum_{j=-\infty}^{\infty} C(j)e^{-i\omega j}.

The covariances can then be recovered applying the inverse Fourier transform to the spectral density:

C(k) = \frac{1}{2\pi}\int_{-\pi}^{\pi} S(\omega)e^{i\omega k}\,d\omega.

It is trivial to verify the latter relationship, using the definition of the spectral density and the fact

\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega l}\,d\omega = \begin{cases}1, & l = 0\\ 0, & l \neq 0.\end{cases}


The inverse Fourier transform result is convenient for us, because in practice there exists a very simple, direct way to compute Ŝ(ω). Let Ŝ(ω) denote the spectral density of x_t, after a band-pass filter has been applied to x_t to isolate a subset of frequencies; Ŝ(ω) coincides on that subset with

S(\omega) = F(e^{-i\omega})\,V\,F(e^{i\omega})',

and is zero elsewhere, where V is the variance-covariance matrix of (v_t, η_t)'. Here, V = [V_ij], and V_11 = Ev_t², V_12 = Ev_tη_t, V_22 = Eη_t². Evaluating the 2,1 element of S(ω), which we denote S_21(ω):

S_{21}(\omega) = \frac{\psi g(e^{i\omega})}{(1-\lambda e^{-i\omega})(1-e^{-i\omega})}V_{11}
+ \left[\frac{\theta(e^{i\omega})}{(1-\lambda e^{-i\omega})(1-e^{-i\omega})} - \alpha\psi\,\theta(e^{-i\omega})g(e^{i\omega})\right]V_{12}
- \alpha\,\theta(e^{-i\omega})\theta(e^{i\omega})\,V_{22}.

Then,

E\hat\pi_t\hat u_{t-l} = \frac{1}{2\pi}\int_{B} s(\omega, l)\,d\omega,

where B is the band of (positive) frequencies isolated by the filter and

s(\omega, l) = S_{21}(\omega)e^{-i\omega l} + S_{21}(-\omega)e^{i\omega l}, \qquad \omega \in (-\pi, \pi).
There are two features of the covariance function, Eπ̂_t û_{t−l}, that we wish to emphasize. First, in the case of commitment, when ψ is replaced by 0 in S_21(ω), it is possible to choose parameters so that Eπ̂_t û_{t−l} ≤ 0 for all l, over all possible subsets of frequencies. Consider, for example, V_12 = 0, p = 1, and θ(e^{−iω}) = θ > 0, so that

E\hat\pi_t\hat u_{t-l} = \frac{1}{2\pi}\int_{B}\left[-\alpha\theta^2 V_{22}\left(e^{-i\omega l}+e^{i\omega l}\right)\right]d\omega
= \begin{cases}-\alpha\theta^2 V_{22}, & l = 0\\ 0, & l \neq 0\end{cases}

when B = (0, π].
Second, when there is no commitment, so that ψ = α(1 − k)/γ > 0, the covariance in the very low frequency components of inflation and unemployment is positive over substantial leads and lags. Also, there unemployment may lead inflation, if only by a small amount. We establish these things by first noting that, for the very lowest frequency bands,

s(\omega, l) \approx \left[\frac{g(e^{i\omega})\,e^{-i\omega l}}{(1-\lambda e^{-i\omega})(1-e^{-i\omega})} + \frac{g(e^{-i\omega})\,e^{i\omega l}}{(1-\lambda e^{i\omega})(1-e^{i\omega})}\right]\psi V_{11}.

To see this, note that s(ω, l) can be broken into three parts, corresponding to the coefficients on V_11, V_12, and V_22, respectively. For ω in the neighborhood of zero, the coefficient on V_22 is obviously bounded, since θ(e^{−iω})θ(e^{iω}) is bounded for all ω ∈ (−π, π). The same is true for the coefficient on V_12, although this requires more algebra to establish. Finally, the coefficient on V_11 is not bounded. For ω close enough to zero, this expression is arbitrarily large. For this reason, for ω close enough to zero this expression dominates the whole covariance. To establish the remainder of the second result, we now examine more closely the expression in the previous equation. Substituting out for g from above:

\frac{s(\omega, l)}{\psi V_{11}} = \frac{\left[(1-\lambda^{p+1}) + (\lambda^{p+1}-\lambda)e^{i\omega}\right]e^{-i\omega(l-p)} + \left[(1-\lambda^{p+1}) + (\lambda^{p+1}-\lambda)e^{-i\omega}\right]e^{i\omega(l-p)}}{(1-\lambda)(1-\lambda e^{i\omega})(1-e^{i\omega})(1-\lambda e^{-i\omega})(1-e^{-i\omega})}

= \frac{2(1-\lambda^{p+1})\cos(\omega(l-p)) + 2(\lambda^{p+1}-\lambda)\cos(\omega(l-p-1))}{2(1-\lambda)\left[1+\lambda^{2}-2\lambda\cos(\omega)\right]\left[1-\cos(\omega)\right]}

= \frac{\left[1-\lambda^{p+1}\right]\cos(\omega(l-p)) + \left[\lambda^{p+1}-\lambda\right]\cos(\omega(l-p-1))}{(1-\lambda)\left[1+\lambda^{2}-2\lambda\cos(\omega)\right]\left[1-\cos(\omega)\right]}.


Since Eπ̂_t û_{t−l} is just the integral of s(ω, l), we can understand the former by studying the latter. Consider first the case λ = 0, when u_t^N is a pure random walk. In this case, s(ω, l), viewed as a function of l, is a cosine function that achieves its maximum value at l = p.¹ A rough estimate, based on the results in Christiano, Eichenbaum, and Evans (2001), of the time it takes for monetary policy to have its maximal impact on the price level is two years. This suggests that a value of p corresponding to two years is sensible. Our notion of very low frequencies corresponds to periods of fluctuation of 20-40 years, or 10p-20p. In terms of frequencies, this translates into ω ∈ [2π/(20p), 2π/(10p)]. If we suppose the data are quarterly, then p = 8. For this case, we find that s(ω, l) is positive for l ∈ (−10, 30) when ω = 2π/(10p) and positive for l ∈ (−30, 50) when ω = 2π/(20p). We can conclude that the covariance over the very low frequencies is positive for l ∈ (−10, 30), with unemployment leading inflation by eight periods.

When we repeated this exercise for λ = 0.999, we found that the covariance over the very low frequencies is maximized for l somewhere between l = 0 and l = 1, and it is positive in the entire range, l ∈ (−20, 20). The empirically relevant value of λ is smaller (Ireland, 1999, reports a value in the neighborhood of 0.6), and the results we obtained for this lie in between the results just reported for the λ = 0 and 0.999 cases. This establishes our second set of results.
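As a rough numerical check on these claims, one can scan the V_11 term of s(ω, l) using the closed form derived above. This is a sketch: the endpoints it reports need not match the quoted ranges exactly, since those depend on the article's precise filtering conventions.

import numpy as np

def s_v11(omega, l, lam, p):
    """V_11 component of s(omega, l), up to the positive factor psi*V_11,
    from the closed-form expression derived above."""
    num = ((1 - lam ** (p + 1)) * np.cos(omega * (l - p))
           + (lam ** (p + 1) - lam) * np.cos(omega * (l - p - 1)))
    den = (1 - lam) * (1 + lam ** 2 - 2 * lam * np.cos(omega)) * (1 - np.cos(omega))
    return num / den

p = 8  # two years, quarterly data
for lam in (0.0, 0.6, 0.999):
    for omega in (2 * np.pi / (10 * p), 2 * np.pi / (20 * p)):
        # Report the span of l over which the expression is positive.
        pos = [l for l in range(-40, 61) if s_v11(omega, l, lam, p) > 0]
        print(f"lambda={lam}, period={2 * np.pi / omega:.0f}q: "
              f"s > 0 for l in [{min(pos)}, {max(pos)}]")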

¹In general, it achieves its maximal value for any l such that ω(l − p) = 2πn, where n is an arbitrary integer. So, the full set of values for which it achieves its maximum is

l = p + \frac{2\pi n}{\omega}, \qquad n = 0, \pm 1, \pm 2, \ldots.

Since at the moment we are considering small values of ω, values of l not associated with n = 0 are not of interest.


REFERENCES

Albanesi, Stefania, V. V. Chari, and Lawrence
Christiano, 2002, “Expectation traps and monetary
policy,” National Bureau of Economic Research,
working paper, No. 8912, also available at http://
faculty.econ.nwu.edu/faculty/christiano/research.htm.

Barro, Robert J., 1987, Macroeconomics, second
edition, Toronto: John Wiley & Sons.
Barro, Robert J., and David B. Gordon, 1983a,
“Rules, discretion and reputation in a model of mon­
etary policy,” Journal of Monetary Economics, Vol. 12,
pp. 101-121.
__________ , 1983b, “A positive theory of monetary
policy in a natural rate model,” Journal of Political
Economy, Vol. 91, August, pp. 589-610.
Barsky, Robert, and Lutz Kilian, 2000, “A mone­
tary explanation of the great stagflation of the 1970s,”
University of Michigan, manuscript.

Baxter, Marianne, and Robert G. King, 1999, “Mea­
suring business cycles: Approximate band-pass filters
for economic time series,” Review of Economics and
Statistics, Vol. 81, No. 4, November, pp. 575-593.
Benhabib, Jess, Stephanie Schmitt-Grohe, and Martin Uribe, 2001a,
“Monetary policy and multiple equilibria,” American
Economic Review, Vol. 91, No. 1, March, pp. 167-186.
_________ , 2001b, “The perils of Taylor rules,” Jour­
nal of Economic Theory, Vol. 96, No. 1/2, pp. 40-69.

Blanchard, Olivier, 2002, Macroeconomics, third edi­
tion, Upper Saddle River, NJ: Prentice-Hall.

Chari, V. V., Lawrence J. Christiano, and Martin
Eichenbaum, 1998, “Expectation traps and discre­
tion,” Journal of Economic Theory, Vol. 81, No. 2,
pp. 462-492.

Chari, V. V., Patrick Kehoe, and Ellen McGrattan,
2000, “Sticky price models of the business cycle:
Can the contract multiplier solve the persistence
problem?,” Econometrica, Vol. 68, No. 5, September,
pp. 1151-1179.
Christiano, Lawrence J., 2000, “Comment on
‘Theoretical analysis regarding a zero lower bound
on nominal interest rates’,” Journal of Money, Cred­
it, and Banking, Vol. 32, No. 4, pp. 905-930.


Christiano, Lawrence J., and Wouter J. den Haan,
1996, “Small-sample properties of GMM for busi­
ness cycle analysis,” Journal of Business and Eco­
nomic Statistics, Vol. 14, No. 3, July, pp. 309-327.
Christiano, Lawrence J., Martin Eichenbaum,
and Charles Evans, 2001, “Nominal rigidities and
the dynamic effects of a shock to monetary policy,”
National Bureau of Economic Research, working pa­
per, No. 8403.
Christiano, Lawrence J., and Terry J. Fitzgerald,
2003, “The band pass filter,” International Economic
Review, Vol. 44, No. 2, also available at http://
faculty.econ.nwu.edu/faculty/christiano/research.htm.
__________ , 2000, “Understanding the fiscal theory
of the price level,” Economic Review, Federal Re­
serve Bank of Cleveland, Vol. 36, No. 2, Quarter 2.
__________, 1998, “The business cycle: It’s still a
puzzle,” Economic Perspectives, Federal Reserve Bank
of Chicago, Vol. 22, No. 4, Fourth Quarter, pp. 56-83.

Christiano, Lawrence J., and Christopher Gust,
2000, “The expectations trap hypothesis,” Economic
Perspectives, Federal Reserve Bank of Chicago,
Vol. 24, No. 2, Second Quarter, pp. 21-39.
Christiano, Lawrence J., and Massimo Rostagno,
2001, “Money growth monitoring and the Taylor
rule,” National Bureau of Economic Research, work­
ing paper, also available at http://faculty.econ.nwu.edu/
faculty/christiano/research.htm.
Clarida, Richard, Jordi Gali, and Mark Gertler,
1998, “Monetary policy rules and macroeconomic sta­
bility: Evidence and some theory,” Quarterly Journal
of Economics, Vol. 115, No. 1, February, pp. 147-180.

Cooley, Thomas F., and Lee Ohanian, 1991, “The
cyclical behavior of prices,” Journal of Monetary
Economics, Vol. 28, No. 1, August, pp. 25-60.
Eggertsson, Gauti B., 2001, “Committing to being
irresponsible: Deficit spending to escape a liquidity
trap,” International Monetary Fund, manuscript.

Friedman, Milton, 1968, “The role of monetary pol­
icy,” American Economic Review, Vol. 58, March,
pp. 1-17.


Ireland, Peter N., 1999, “Does the time-consistency
problem explain the behavior of inflation in the United
States?,” Journal of Monetary Economics, Vol. 44,
No. 2, October, pp. 279-291.

Lucas, Robert E., Jr., and Nancy Stokey, 1983,
“Optimal fiscal and monetary policy in an economy
without capital,” Journal of Monetary Economics,
Vol. 12, No. 1, July, pp. 55-93.

Kilian, Lutz, 1998, “Small-sample confidence inter­
vals for impulse response functions,” Review of Eco­
nomics and Statistics, Vol. 80, No. 2, May, pp. 218-230.

Mankiw, N. Gregory, 2001, “The inexorable and mys­
terious tradeoff between inflation and unemployment,”
Economic Journal, Vol. 111, No. 471, pp. C45-61.

King, Robert, James Stock, and Mark Watson,
1995, “Temporal instability of the unemployment-in­
flation relationship,” Economic Perspectives, Federal
Reserve Bank of Chicago, Vol. 19, No. 3, May/June.

Orphanides, Athanasios, 1999, “The quest for pros­
perity without inflation,” Board of Governors of the
Federal Reserve System, unpublished manuscript.

King, Robert, and Mark Watson, 1994, “The post-war
U.S. Phillips curve: A revisionist econometric histo­
ry,” Carnegie-Rochester Conference Series on Public
Policy, Vol. 41, No. 0, December, pp. 152-219.
Krugman, Paul, 1998, “It’s baaack! Japan’s slump
and the return of the liquidity trap,” Brookings Papers
on Economic Activity.
Krusell, Per, and Anthony Smith, 2002, “Consump­
tion-savings decisions with quasi-geometric discount­
ing,” Carnegie Mellon University, Graduate School
of Industrial Administration, working paper, avail­
able at http://fasttone.gsia.cmu.edu/tony/quasi.pdf.

Kydland, Finn, and Edward C. Prescott, 1977,
“Rules rather than discretion: The inconsistency of
optimal plans,” Journal ofPolitical Economy, Vol.
85, No. 3, June, pp. 473-491.

Loyo, Eduardo, 1999, “Tight money paradox on
the loose: A fiscalist hyperinflation,” Harvard Univer­
sity, Kennedy School of Government, unpublished
manuscript.


Phelps, Edmund S., 1967, “Phillips curves, expecta­
tions of inflation, and optimal employment over time,”
Economica NS, Vol. 34, No. 3, pp. 254-281.
Sargent, Thomas J., 1999, The Conquest of Ameri­
can Inflation, Princeton, NJ: Princeton University Press.

_________ , 1987, Dynamic Macroeconomic Theory,
Cambridge, MA: Harvard University Press.
Staiger, Douglas, James H. Stock, and Mark W.
Watson, 1997, “The NAIRU, unemployment, and
monetary policy,” Journal of Economic Perspectives,
Vol. 11, No. 1, Winter, pp. 33-49.

Stock, James, and Mark Watson, 1998, “Business
cycle fluctuations in U.S. macroeconomic time series,”
National Bureau of Economic Research, working pa­
per, No. 6528, forthcoming in Taylor and Woodford,
Handbook of Macroeconomics.
Taylor, John B., 1993, “Discretion versus policy rules
in practice,” Carnegie-Rochester Series on Public
Policy, Vol. 39, pp. 195-214.


39TH ANNUAL

CONFERENCE ON BANK STRUCTURE AND COMPETITION
FEDERAL RESERVE BANK OF CHICAGO

May 7-9, 2003
On May 7-9, 2003, the Federal Reserve Bank of Chicago will hold its 39th annual Conference

on Bank Structure and Competition at the Fairmont Hotel in Chicago. Since its inception, the

conference has encouraged an ongoing dialogue and debate on current public policy issues
affecting the financial services industry. Each year the conference brings together several hundred
financial institution executives, regulators, and academics to examine current issues.

Corporate Governance: Implications for Financial Services Firms
The 2003 conference will address issues related to
corporate governance. In recent months, there have
been a number of highly publicized incidents in
which appropriate corporate governance was lack­
ing. Deficiencies include inadequate oversight by
boards of directors, misleading or fraudulent
accounting practices, questionable audit arrange­
ments, and various efforts to obfuscate the true
financial condition of the firm. As a result, there has
been a general rise in investor skepticism, leading to
significant uncertainty in equity and credit markets
and adverse effects on the overall economy.
These events have significantly affected the finan­
cial services sector. A number of banks and other
financial intermediaries were directly affected
because they had large credit exposures to firms
that followed questionable accounting practices and
subsequently failed. Of particular concern are the
structured finance arrangements provided to special
purpose entities associated with the failed firms.
The revelation of these problems has brought into
question the efficacy of current mechanisms used
to monitor and control firm behavior. The appro­
priate role and effectiveness of boards of directors,
shareholders, creditors (including banks), financial
regulators, self-regulation, market regulation,
accounting standards, and disclosure rules are all
being challenged. The Sarbanes-Oxley Act was a
first step in addressing these issues. Modifications
are now being recommended and additional
reforms are being evaluated. These corporate gov­
ernance concerns raise a number of important pub­
lic policy questions that will be discussed at the
2003 conference.

As in past years, much of the program will be
devoted to the conference theme, but there will also
be a number of sessions on current industry issues.
Some of the highlights of the conference include:
■ The keynote address by Federal Reserve Board
Chairman Alan Greenspan.
■ A panel discussion of corporate governance from
a variety of perspectives by industry experts.
Participants include Randall Kroszner, Member
of the President's Council of Economic Advisors;

Katherine Schipper, FASB (Financial Accounting
Standards Board) Member; Elizabeth A. Duke of
the American Bankers Association; and Ingo
Walter, Charles Simon Professor of Applied
Financial Economics at the Stern School of
Business.

■ Special luncheon presentations on the appropriate public and private sector responses to corporate governance problems by Cynthia A. Glassman, Commissioner, Securities and Exchange Commission; and Michael H. Moskow, President and Chief Executive Officer, Federal Reserve Bank of Chicago.
■ A discussion of regulatory and supervisory
reform proposals by Thomas M. Hoenig,
President and Chief Executive Officer, Federal
Reserve Bank of Kansas City; Gary H. Stern,
President and Chief Executive Officer, Federal
Reserve Bank of Minneapolis; and Fred H. Cate,
Professor of Law, Harry T. Ice Faculty Fellow, and
Director of the Information Law and Commerce
Institute, University of Indiana School of Law.

As usual, the Wednesday sessions, on May 7, will show­
case more technical research that is of primary interest
to research economists in academia and government.
The Thursday and Friday sessions are designed to
address the interests of a broader audience.

If you are not currently on our mailing list or have
changed your address and would like to receive an
invitation and registration forms for the conference
please contact:

Ms. Regina Langston
Conference on Bank Structure and Competition
Research Department
Federal Reserve Bank of Chicago
230 South LaSalle Street
Chicago, Illinois 60604-1413
Telephone: 312-322-5641
email: regina.langston@chi.frb.org

Federal Reserve Bank of Chicago

Bankruptcy law and large complex financial organizations:
A primer

Robert R. Bliss

Introduction and summary
The avoidance of financial distress has been the sub­
ject of voluminous research and protracted debate. This
article considers the economic and legal issues sur­
rounding the treatment of firms in financial distress,
with a particular focus on the challenges posed by large
complex financial organizations (LCFOs).
The successive proposals of the Basel Committee
on Banking Supervision (Basel Committee, 2001) to
revise bank capital standards, which have preoccupied
regulators’ and bankers’ attentions for several years
now, are aimed at ensuring the safety and soundness
of banks and indirectly influencing banks’ risk taking
incentives. Financial institutions have themselves been
at the forefront in the quantification and management
of risk and have developed a multitude of financial
instruments for this purpose, both for their own uses
and for the benefit of other sectors of the economy—
credit and energy derivatives1 to name two notable
recent innovations. However, while these processes
have improved, at least potentially, the management
of risk, they do not eliminate the chance of financial
distress. From time to time, even in the best of all pos­
sible economic worlds, financial firms will fail through
unforeseeable economic shocks, mismanagement, or
fraud. It is therefore somewhat surprising that this in­
evitable, though hopefully rare, eventuality has been
so little analyzed by economists. For what happens
when a firm fails determines at least in part the arrange­
ments entered into when the firm is solvent and con­
strains the actions of various interested parties when
the firm becomes distressed.
This article provides an overview of the legal treat­
ment of bankruptcy in the U.S. and elsewhere and con­
siders whether the structure and complexity of LCFOs
have evolved beyond simplistic corporate structures
and contract types historically anticipated in our in­
solvency legislation and common law traditions. An
important part of that evolution has been the develop­
ment of markets for nontraditional financial instruments
used to hedge risk. The involvement of large systemically important institutions in these markets makes it
important to consider how these contracts are treated
under insolvency and whether this affects the ability
of legal and regulatory authorities to resolve these in­
stitutions in an orderly and efficient manner.
The failure of an LCFO, of all firms, raises the
greatest concern of potential systemic consequences.
This is because financial institutions provide capital and
other financial services to all sectors of the economy
and they form the backbone of the financial markets,
markets that rely to a great extent on trust. Thus, the
failure of a financial intermediary calls into question
a multitude of business relations. In contrast, the fail­
ure of a nonfinancial corporation of comparable size
is more easily localized: Witness the recent string of
bankruptcies of technology firms that have raised no
fears of systemic risk in the usual sense of a freezing
up of financial markets, in spite of the unprecedented
size of the firms involved.
Developed financial markets are generally robust,
and the failures of small financial firms, while painful
for the creditors, rarely endanger significant numbers
of counterparties. This being widely understood, the
failure of a small financial institution raises few sys­
temic concerns.2 However, the failure of a large insti­
tution raises concerns that it will directly trigger other
failures; for example, by failing to pay its creditors,
the insolvent LCFO may cause these other firms to
become insolvent.3 Furthermore, uncertainty in the
markets as to who is directly affected by the failure and
to what extent may lead participants in the payments
system and the short-term capital markets to take
defensive measures, thus causing a general contraction
of liquidity. This in turn may lead to financial crisis
in vulnerable firms that may not even have direct
exposure to the firm whose failure triggered the crisis.

Robert R. Bliss is a senior financial economist and economic
advisor at the Federal Reserve Bank of Chicago. The author
thanks participants in the Federal Reserve Bank of Chicago's
Workshop on Resolving Large Complex Banking Organizations,
panelists at the Federal Reserve Bank of Chicago's Bank
Structure Conference, seminar participants at the Federal
Reserve Bank of Chicago, George Kaufman, Robert Steigerwald,
and most especially Christian Johnson.

Because LCFOs operate across different legal juris­
dictions, the insolvency process itself creates a coor­
dination problem across the very agents (usually courts)
charged with solving the coordination problem amongst
creditors. Furthermore, for certain types of contracts,
the ability of the courts to suspend their execution
(termed “stays”) has been effectively eliminated.
As a result, LCFOs present a number of challenges
that affect the resolution process. These are broadly
issues of coordination, relating to reconciling the ob­
jectives of different regulators, legal jurisdictions, and
creditors; opacity’, relating to the inability of traditional
accounting methods to provide sufficient information
about contingent liabilities in off-balance-sheet activ­
ities and portfolios of nontraditional financial instru­
ments; and time, relating to the difficulty of managing
an orderly resolution of firms that have large portfolios
of nontraditional financial instruments, some of which
are exempted from the “time out” imposed on other
counterparties in bankruptcy proceedings. I refer to
these exempted financial instruments as “special finan­
cial instruments."4 I explore all of the issues in detail
in the following sections. While none of these issues
are unique to LCFOs, they are apt to come together with
particular severity if an LCFO becomes distressed.

A plethora of bankruptcy procedures
Early Roman personal bankruptcy procedures pur­
portedly involved dividing up the debtor and distrib­
uting the parts to the creditors if he could not pay within
a stipulated time period.5 Placing the debtor into sla­
very was an alternative and widely practiced resolution
procedure that preserved the productive capacity of
the debtor but transferred the benefits to the creditor.6
Similar thinking underlies modem corporate bankrupt­
cy processes, and these ancient solutions find their
modem equivalents in the two major outcomes to cor­
porate bankruptcy: liquidation and reorganization.
While the evolution of legal processes to deal with
bankruptcy dates back to the beginnings of written his­
tory, the analysis of these processes in an economic
framework is comparatively recent. Jackson (1982)
argues that bankruptcy procedures function to provide
a collective debt collection mechanism designed to
maximize the returns to creditors.7 If creditors are al­
lowed individually to enforce their claims, an unco­
ordinated bankruptcy proceeding involving multiple
creditors is likely to lead to the dismemberment of an
insolvent corporation and to a loss of value. Many in­
solvent firms have greater value as going concerns than
can be extracted by liquidating their physical and fi­
nancial assets. Furthermore, creditors who are suc­
cessful in seizing assets have little or no incentive to
maximize the liquidation value of those assets once
their own claim is satisfied, because any excess sums
must invariably be turned over to the remaining cred­
itors. The result is the classic “prisoners’ dilemma.”8
Without a credible means of ensuring cooperation
among creditors, each creditor has every incentive to
try to act in their own interest and seize what assets
they can, even though they are aware that in doing
so, they diminish the value that will be recovered by
the creditors as a group.
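
The incentive structure can be made concrete with a small numerical sketch. The payoffs below are purely illustrative (they do not come from the article); the point is only that "grab" is each creditor's best response to anything the other does, even though mutual restraint pays both more.

    # Creditors' "grab race" as a prisoners' dilemma; all payoffs hypothetical.
    # "grab" = seize assets unilaterally; "wait" = cooperate in a collective proceeding.
    payoffs = {
        ("wait", "wait"): (70, 70),   # going-concern value preserved and shared
        ("grab", "wait"): (90, 30),   # grabber satisfies its own claim; value destroyed
        ("wait", "grab"): (30, 90),
        ("grab", "grab"): (40, 40),   # uncoordinated dismemberment, lowest joint value
    }

    def best_response(other_action, player):
        """Action maximizing this player's payoff, given the other's action."""
        def payoff(mine):
            pair = (mine, other_action) if player == 0 else (other_action, mine)
            return payoffs[pair][player]
        return max(["grab", "wait"], key=payoff)

    for other in ["grab", "wait"]:
        print(f"Creditor A's best response if B plays {other!r}:",
              best_response(other, player=0))
    # "grab" wins in both cases, so (grab, grab) is the equilibrium, even though
    # (wait, wait) pays both creditors more; a court-imposed stay breaks the dilemma.
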
Corporate bankruptcy processes solve this prob­
lem by coordinating the resolution of claims. A court
(or administrator), interposed between the insolvent
firm and its creditors, imposes a “time out” to prevent
the untimely and inefficient liquidation of assets. Hav­
ing taken control of the situation, the court then deter­
mines the best method of realizing the value of the firm
(orderly liquidation of assets and/or reorganization),
ascertains the value of all creditors’ claims, and then
determines how those claims will be discharged. Of
these several steps, the power of the court (or admin­
istrator) to stay the execution of creditors’ claims on
the firm’s cash flows and assets is absolutely crucial.
The prisoners’ dilemma perspective views bank­
ruptcy law as a means of protecting creditors from each
other. An alternative perspective is that the function
of bankruptcy is to provide a means of protecting the
debtor from the creditors. In the U.S., firms that file
for protection under Chapter 11 of the bankruptcy code
enjoy considerable powers to manage the renegotiation
of their creditors’ claims. The purpose of Chapter 11
is to preserve the insolvent firm as a viable economic
entity.9 Usually the managers responsible for the in­
solvency are left in place, at least initially, to super­
vise the reorganization, subject to the oversight of the
courts. This provides managers and stockholders with
considerable leverage in negotiations: witness the con­
tinuity of managers in their jobs, the frequent violation
of seniority rights in the final settlements, and the re­
duced recovery rates for creditors.10 Critical to the suc­
cess of this procedure is the ability of courts to compel
counterparties to stay claims (for payment of debts) and
to keep contracts (for instance, for services) in force.


This neat picture of the problem of insolvency and
its solutions becomes less reassuring when we con­
sider LCFOs. The first issue to come to grips with is
the philosophy underlying the treatment of creditors—
whether and how contracts and contractual provisions
will be honored by the courts in different jurisdictions.
The insolvency of an LCFO necessarily raises ques­
tions of competing jurisdictions, with potentially con­
flicting objectives. As we will see later, the treatment
of special financial instruments, and the enforceabili­
ty and effect of their termination and netting provisions,
to some extent undermines the procedural niceties
assumed in the bankruptcy procedures.
Bankruptcy laws vary across countries in their de­
tails, as one would expect, but more importantly they
vary in their underlying philosophies. This makes
reconciliation of bankruptcy codes something of a
challenge. Attempts at international harmonization of
bankruptcy laws have met with only limited success,
in part because of conflicting philosophies and legal
traditions. In 1997, the United Nations Commission
on International Trade Law adopted a Model Law on
Cross-Border Insolvencies, which sought to address
a limited range of issues peculiar to cross-border in­
solvencies without harmonizing bankruptcy codes in
their entirety. As a model law rather than a treaty, it
relies on individual countries to change their own codes
to conform to the model.11 In contrast, the recently
enacted European Insolvency Regulation has the ad­
vantage of being binding on European Union (EU)
members. EU countries must recognize each other’s
bankruptcy laws and insolvency administrators and
their agents. For cross-border insolvencies, the courts
of the country in which the company’s “centre of main
interest” is located will take the lead, and proceedings
in other jurisdictions will play a secondary and sup­
portive role.12
Pro-creditor versus pro-debtor systems
Broadly speaking, legal approaches to bankrupt­
cy resolution may be classified as either pro-creditor
or pro-debtor. Most of the countries that derive their
laws from the English common law tradition, includ­
ing the UK, most Commonwealth countries, and UK-affiliated offshore financial centers, have pro-creditor
laws, which I term “English” approaches or frameworks.
Germany, Italy, China, and Japan have similar approach­
es, though they do not share the same legal heritage.
Countries whose legal frameworks have their origins in
the Napoleonic Code are generally pro-debtor in their
approach to bankruptcy, called the “Franco-Latin”
approach. These countries include Spain, most of Latin
America, as well as much of the Middle East and
Africa. The U.S., Canada, and France have evolved
hybrid systems of laws that are broadly pro-debtor
with significant pro-creditor exceptions.
Pro-creditor bankruptcy laws recognize the right
of creditors to protect themselves against default through
ex ante contractual agreements that permit the solvent
counterparty to close out contracts and set off obliga­
tions.13 The Franco-Latin approach, on the other hand,
seeks to maximize the value of the bankrupt firm by
affirming claims due to the bankrupt firm and disavow­
ing claims made on the firm, known as “cherry picking”;
this approach often ignores ex ante contractual arrange­
ments that would favor one creditor over another.
The English (pro-creditor) and Franco-Latin (pro­
debtor) approaches have at their roots two fundamen­
tally irreconcilable concepts of fairness. The English
perspective is that it is unfair for a bankruptcy admin­
istrator to claim monies due from a solvent counter­
party under one contract, while simultaneously refusing
to make payments to the same counterparty under an­
other contract. Under English law the right to "set off"
or net multiple contracts between a solvent and an in­
solvent counterparty is a matter of common law, which
does not require prior agreement. Thus, cherry picking
is anathema to the English bankruptcy tradition. Fur­
thermore, the English tradition recognizes the right
of freely contracting parties to protect themselves
against the possibility of default by various mutually
agreed contractual arrangements, such as netting
agreements and collateral.
In contrast, the Franco-Latin approach sees ex ante
private contracting of creditor protection agreements
as creating a privileged class of claimants to the detri­
ment of the remaining creditors. Such protections per­
mit one creditor to receive greater than pro rata value
by virtue of being able to net amounts owed from the
bankrupt firm against amounts due to the bankrupt firm,
while another creditor with no offsetting position may
suffer more substantial losses. The Franco-Latin ap­
proach views set-off agreements as creating an “un­
publicized security”; this means that certain assets of
a firm may not be available to satisfy the general cred­
itors’ claims because another creditor has an undis­
closed superior claim.14 Set-off arrangements that derive
from reciprocal contracts cannot reasonably be made
known to other creditors. Therefore, the Franco-Latin
tradition views such hidden preferences as fundamen­
tally unfair. To avoid this perceived inequity, the
bankruptcy administrator in pro-debtor jurisdictions
is given powers designed to maximize the value of
assets available for pro rata distribution to all credi­
tors.15 These include the ability to separate multiple
contracts between the bankrupt firm and individual
solvent counterparties. The administrator may also re­
quire solvent counterparties to pay amounts due to the
bankrupt firm and then stand in line for pari passu
distribution of any amounts due to them as creditors.
While these two legal philosophies are fundamen­
tally irreconcilable, pro-creditor and pro-debtor laws
frequently co-exist, though perhaps not naturally. This
happens when a fundamentally pro-debtor jurisdiction,
such as the U.S., enacts laws granting pro-creditor pro­
tection to specific types of contracts. These laws are
termed “carve outs” and provide exceptions to the
general bankruptcy code. Internationally, carve-outs
have been enacted in most relevant jurisdictions for
payments systems transactions and some nontraditional financial instruments. In the U.S. and some other
jurisdictions, banks and some other types of financial
institutions are also subject to carve outs from the bank­
ruptcy code that is applicable to most firms.

U.S. bankruptcy laws
Bankruptcy law in the U.S. is unusually, perhaps
uniquely, complex. The Federal Bankruptcy Code (gen­
erally referred to as simply “the Code”) governing
most corporations allows for both liquidation and re­
organization. Cases involving firms subject to the Code
are heard in special federal bankruptcy courts. The
bankruptcy code is generally pro-debtor, with some
exceptions. There is no general right of set-offs, or
netting, of obligations. Various laws have carved out
exemptions to the Code. Depository institutions (banks),
insurance companies, government-sponsored entities
(GSEs, for example, Fannie Mae), and broker/dealers
are each governed by special laws and distinct reso­
lution procedures, and certain types of financial con­
tracts receive special treatment under the Code.
Insolvent insured depository institutions are re­
solved under the Federal Deposit Insurance Act (FDIA),
as amended by the Financial Institutions Reform, Re­
covery, and Enforcement Act (FIRREA), and subse­
quent acts.16 Closure authority for banks lies with the
appropriate regulator, depending on the bank’s charter.
Creditors cannot force a bank into bankruptcy since
banks are specifically exempted from the Code. The
appointment of the Federal Deposit Insurance Corpo­
ration (FDIC) to administer the insolvency is mandated
for federally chartered, federally insured institutions
and is usual for state chartered, federally insured in­
tuitions. The FDIC either acts as receiver to liquidate
the bank or as conservator to arrange a workout (merger,
sale, or refinancing).
Broker-dealers are also exempt from the Code and
subject to their own bankruptcy laws and procedures.
Insolvencies of insurance companies are subject to
state laws and handled by state courts.


Conflicting jurisdictions
The resolution of an LCFO will necessarily involve
multiple legal jurisdictions, which leads to two prob­
lems. The first is whether the insolvent firm should
be resolved as a single entity regardless of the location
of creditors and assets, or whether each of the several
jurisdictions in which the creditors and/or assets are
located should be treated separately. There are two basic
approaches to this fundamental question: the unitary
or single-entity approach, which treats the firm as a
whole, and the “ring-fence” or separate-entity approach,
which seeks to carve up the firm and resolve claims
in each jurisdiction separately. The second problem,
which is not unrelated to the first, is whether to con­
duct multiple proceedings in each relevant jurisdic­
tion or have one jurisdiction take the lead and other
jurisdictions defer to it. Ring fencing has the practical
advantage of placing assets at the disposal of the court
most likely to have control of them and minimizing
the dependence on cross-jurisdictional information
sharing. It also provides an admittedly crude solution
to conflicts in laws and legal objectives. In the case
of insured depository institutions, ring fencing serves
the interests of the deposit insurers by ensuring that
the insolvency of a holding company does not strip
assets out of a bank subsidiary. Potentially however,
ring fencing can make coordinated cross-border (and
cross-jurisdiction) resolutions more difficult because
it leads to differential payoffs for creditors—(domes­
tic) creditors in jurisdictions where the ratio of assets
to claims is higher will enjoy higher recoveries. Ring
fencing also leads to potentially adversarial competi­
tion among jurisdictions each seeking to maximize
the value of assets available to their own creditors—
the very problem that bankruptcy procedures are sup­
posed to solve.
British bankruptcy law takes a single-entity ap­
proach to resolving international firms, regardless of
the location of assets or the nationality of the credi­
tors. The UK court makes every effort to obtain con­
trol of all the firm’s assets, which it then divides equally
among the creditors (in a liquidation). The court makes
no distinction between domestic and foreign creditors,
even in the distribution of domestically controlled as­
sets directly under its control. Importantly, however,
UK bankruptcy law recognizes that it may be more
appropriate in some cases for another, perhaps home
country’s court to take the lead in the resolution of an
international firm. In such cases, the UK provides local
support for agents of the foreign courts, for instance
in obtaining control of assets located in the UK, so
long as the creditors are not made worse off than
they would be under a UK resolution.


The U.S. approach to these issues is complex and
fragmented. Where a branch or agency of a foreign
bank becomes insolvent, a U.S. administrator can at­
tach (seize) all of the foreign parent’s assets in the U.S.
even if they are part of a different nonbank subsid­
iary. The U.S. court or administrator would ring fence
those assets and use them to satisfy domestic claims,
paying any surplus to satisfy creditors in any foreign
proceedings. This necessarily means that domestic
creditors are given precedence over foreign ones. On
the other hand, in resolving a U.S. bank, the FDIC takes
a single-entity approach and seeks to obtain control of
offshore assets. Resolution of LCFOs is further com­
plicated because in the U.S. specialized laws and pro­
cedures apply to banks, broker-dealers, and insurance
companies. Thus, where these activities are co-located
in a single holding company, the ring fencing can ap­
ply to parts of the same domestic entity. Bank subsid­
iaries are ring fenced vis-a-vis nonbank subsidiaries
of the same holding company. The FDIC may seize
the assets of affiliated banks (subsidiaries of the same
holding company), while federal bankruptcy courts
would take control of the assets of an insolvent par­
ent bank holding company. Then, the FDIC may be
able to recover assets from the holding company and
nonbank affiliates under the “source of strength” pro­
visions of applicable law.
As I discussed in the introduction, a particular area
of concern in the resolution of LCFOs is the treatment
of special financial instruments, specifically the ability
to terminate and net contracts. In the following section,
I provide an overview of the issues involved and their
potential impact.

Termination and netting of contracts17
The distinctions between pro-creditor and pro­
debtor philosophies are particularly important in the
cases of payments systems and derivatives markets.
In most business relations, netting and set-off are not
significant issues. Generally, firms either buy from or
sell to other firms, but rarely do both simultaneously.
So, in the event of bankruptcy, few if any contracts
could be netted or set-off. However, financial mar­
kets can generate huge numbers of bi-directional trans­
actions between counterparties. Interbank payments
systems involve banks sending each other funds to
clear thousands of transactions throughout the day, and
the direction and amount of individual transfers are
unpredictable. The gross amounts of such transactions
are huge, but at the end of the day the net transfers are
relatively modest. Similarly, many large commercial
and investment banks make markets in special finan­
cial instruments and hedge their positions with each
other. Again the gross positions are huge, but the net
positions are modest.18
There are two types of netting rules: those that
apply in the course of ordinary business, known as
payments netting (also called settlement netting or
delivery netting), and those that apply in resolutions
of insolvent firms, known as close-out netting (also
called default netting, open-contract netting, or
replacement contract netting).
Close-out netting agreements consist of two related
rights: the right of a counterparty to unilaterally terminate
contracts under certain prespecified conditions, and the
right to net amounts due at termination of individual
contracts in determining the resulting obligation be­
tween (now former) counterparties. Wood (1994) points
out that payments netting is meaningless unless it is le­
gally supported by close-out netting rights in the event
of default by one of the counterparties. In the U.S. and
some other jurisdictions, the governing contracts typ­
ically contain terms stipulating the actions to be taken
in the event of default. In other jurisdictions, such as
the UK, a common law netting right exists.
Both payments and close-out netting are widely
seen as reducing systemic risk by limiting counterpar­
ty exposures to net rather than gross exposures. This
in turn makes the operation of financial markets more
efficient. Because counterparties can safely hold less
capital against individual counterparties, they can ex­
pand their gross positions while limiting their net firm-wide exposures, resulting in increased market liquidity
(and higher revenues) for a given level of economic
capital. Furthermore, they may be more willing to
transact with potentially troubled counterparties so
long as their net position remains favorable, thus
keeping credit and risk-management channels open.
Close-out netting termination rights allow for the
early resolution of claims and reduce the uncertainty
associated with the failure of a counterparty. This is
critically important in the case of special financial in­
struments, because the value of these contracts can
change rapidly and delays in settling claims may al­
ter the eventual payouts. Termination also allows the
solvent counterparty to replace contracts with the in­
solvent counterparty with new contracts with a sol­
vent counterparty, thus ensuring the continued
effectiveness of their hedging and trading strategies.
These benefits have been widely acknowledged
by regulators, trade groups, and market participants.19
The adoption of the pro-creditor approach for these
types of markets is an implicit recognition that the equi­
ty arguments of the Franco-Latin framework are incon­
sistent with the contractual and legal certainty needs
of modern financial markets. While collateral arrange­
ments and netting may have the effect of favoring
one creditor over another in the event of insolvency,
these arrangements make it possible for creditors to
better measure and manage their exposures.20 Under
pro-debtor laws, all creditors may share equally in the
losses, but no creditor could know beforehand what
their expected losses might be.
The widespread adoption of carve-outs, providing
pro-creditor protection for payments systems and de­
rivatives securities, particularly in the form of collat­
eral arrangements and netting agreements, represents
one of the great successes in international legal harmo­
nization. This process has been shepherded by the In­
ternational Swap and Derivatives Association (ISDA),
a trade group that coordinates industry documentation
practices, drafts model contracts, and lobbies for leg­
islative changes to support the enforceability of those
contracts. Central to the ISDA approach to netting is
the concept of a master agreement that governs trans­
actions between counterparties. The Master Agreement
constitutes the terms of the agreement between the
counterparties with respect to general questions unre­
lated to specific economic transactions: credit support
arrangements, netting, collateral, definition of default
and other termination events, calculation of damages
(on default), documentation, and so forth. This Master
Agreement constitutes a single legal contract of indefi­
nite term under which the counterparties conduct their
mutual business. Individual transactions are handled
by confirmations that are incorporated by reference
into the Master Agreement. This device of placing in­
dividual transactions under a single master agreement
that provides for netting of covered transactions has the
effect of finessing the problem of netting under vari­
ous bankruptcy codes. Having only a single contract
between each pair of counterparties to a Master Agree­
ment eliminates the problem of netting multiple con­
tracts.21 Netting legislation covering special financial
instruments has been adopted in most countries with
major financial markets (the UK being a notable ex­
ception, where netting has long been provided for in
the bankruptcy code), and ISDA has obtained legal
opinions supporting their Master Agreements in most
relevant jurisdictions.
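
The single-agreement device lends itself to a simple illustration. The sketch below is hypothetical (the class, names, and values are invented and are not ISDA's actual documentation terms): individual confirmations live inside one master agreement, so termination produces one net amount per counterparty pair rather than a contract-by-contract settlement.

    # Hypothetical sketch of the "single agreement" device: confirmations are
    # incorporated into one master agreement, so on default there is a single
    # contract, and hence a single net amount, between the two parties.
    from dataclasses import dataclass, field

    @dataclass
    class MasterAgreement:
        party_a: str
        party_b: str
        # (description, current value), signed from party_a's perspective
        confirmations: list = field(default_factory=list)

        def add_confirmation(self, description: str, value_to_a: float) -> None:
            self.confirmations.append((description, value_to_a))

        def close_out(self) -> float:
            """Terminate all covered transactions; return the one net amount."""
            return sum(value for _, value in self.confirmations)

    ma = MasterAgreement("DealerBank", "InsolventCo")
    ma.add_confirmation("interest rate swap", +40.0)   # invented values
    ma.add_confirmation("FX forward", -15.0)
    ma.add_confirmation("credit default swap", +5.0)
    print(f"Single net close-out amount owed to DealerBank: {ma.close_out():.1f}")  # 30.0
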

Payments netting
Payments netting is a method of reducing exposures
in the event of default. Payments netting agreements
appear in most standardized special financial instru­
ments contracts (for instance, ISDA Master Agreements),
and various forms of netting are incorporated in the
settlement procedures of payments clearing houses.
Payments netting occurs when firms, primarily
financial institutions, are exchanging payments on a
regular basis and net the amounts due against those to
be received at the same time and transfer the difference.
Payments netting reduces the so-called Herstatt Risk
that one party will make a payment and the other party
default before the offsetting payment is made.22 The
importance of payments netting and payments systems
in general has become widely understood since the
default of Herstatt Bank in 1974 focused the attention
of market participants and regulators. The benefits of
payments netting are uncontroversial, though there is
considerable debate about the optimal structure of
payments netting arrangements.
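
To fix the arithmetic, consider a minimal sketch with invented same-day transfer amounts. Only the net difference actually moves between the banks, which is also why the amount at risk in a Herstatt-style failure is the net rather than the gross figure.

    # Bilateral payments netting between hypothetical banks A and B.
    # Positive amounts are owed by A to B; negative amounts by B to A.
    transfers = [+120.0, -95.0, +60.0, -110.0, +45.0]   # invented figures

    gross = sum(abs(t) for t in transfers)   # total value exchanged without netting
    net = sum(transfers)                     # single payment under payments netting

    payer = "A pays B" if net >= 0 else "B pays A"
    print(f"Gross flows: {gross:.0f}")                  # 430
    print(f"Net settlement: {payer} {abs(net):.0f}")    # A pays B 20
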

Close-out netting
Close-out netting involves not only the treatment
of payments netting agreements for unwinding inter­
rupted bilateral payments flows, but also the treatment
of outstanding contracts between solvent and insolvent
counterparties.23 The netting of obligations in the event
of default is the subject of considerable legal debate
and differences in laws, as is the related issue of ter­
mination rights.
In general, close-out netting involves the termi­
nation of all contracts between the insolvent and a sol­
vent counterparty. Broadly speaking, there are two
relevant classes of contracts: Executory contracts are
promises to transact in the future (but where no trans­
action has yet occurred), such as a forward agreement
to purchase foreign currency; other contracts, such as
a loan, where a payment by one party has
already occurred, I refer to as “non-executory contracts,”
since no single legal description applies. These two
types of contracts are treated differently under close­
out netting in jurisdictions where such laws apply.
Where close-out netting is permitted, the general
procedure is that upon default or contractually agreed
“credit event,”24 executory contracts are marked-tomarket and any payments due from acceleration of
terminated non-executory contracts are determined.
These values are then netted and a single net payment
is made. If the solvent counterparty is a net creditor,
the solvent counterparty becomes a general creditor
for the net amount. Usually, the solvent counterparty
determines the values of the contracts being terminated
and payments owed. These computations are subject
to subsequent litigation. However, disputes over the
exact valuation do not affect the ability of the solvent
counterparty to terminate and replace the contracts
with a different counterparty.
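
The general procedure just described reduces, mechanically, to a short calculation. The contract values below are invented for illustration; signs are taken from the solvent counterparty's perspective (positive means owed to the solvent party).

    # Close-out netting from the solvent counterparty's perspective.
    executory_mtm = [+12.5, -7.0, +3.2]   # replacement values of terminated swaps/forwards
    accelerated_due = [+25.0, -10.0]      # accelerated amounts on non-executory contracts

    net_claim = sum(executory_mtm) + sum(accelerated_due)

    if net_claim > 0:
        # The solvent party stands in line as a general creditor for the net amount.
        print(f"Solvent counterparty claims {net_claim:.1f} from the estate")
    else:
        print(f"Solvent counterparty pays {-net_claim:.1f} to the estate")
    # Here the net claim is 23.7. Under cherry picking, by contrast, the estate
    # could collect the 17.0 owed to it while leaving the 40.7 owed to the solvent
    # party as an unsecured claim.
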
Non-executory contracts, such as loans, may
contain clauses that permit the creditor to accelerate
future payments—for instance, repayment of loan
principal—in the event of default or occurrence of a

53

stipulated credit event. Acceleration is not netting per
se but a precursor to netting and determines in part
the amounts due.
The handling of non-executory contracts where pay­
ments are due to the insolvent counterparty depends
on the contract terms and legal jurisdiction. The most
common treatment is to accelerate all contracts between
solvent and insolvent counterparties when determin­
ing net obligations. In countries where it is permitted,
for instance the UK, walk-away clauses permit the
solvent counterparty to simply terminate without pay­
ment any contracts where payments are due to the in­
solvent counterparty.
Whereas non-executory contracts may be accel­
erated in insolvency, executory contracts are terminated.
Termination cancels the contract with appropriate com­
pensation, usually the cost of reestablishing the con­
tract on identical terms with another counterparty.
Acceleration and termination change the amounts
immediately due to and from the solvent counterpar­
ties vis-a-vis what would have been currently due had
the credit event (default, downgrade) not occurred.
Terminations of contracts with the resulting demands
for immediate payments may precipitate financial col­
lapse of a firm and make it impossible to resolve the
firm in an orderly manner or to arrange refinancing.25
For this reason, many jurisdictions limit the rights of
counterparties to enforce the termination clauses in
their contracts. The court can impose a stay, which does
not invalidate termination clauses in contracts but rather
overrides them, perhaps temporarily, at the discretion
of the court or an administrator. Staying contracts keeps
them in force; normal payments are still due. This is
unlike cherry picking, which involves disavowing un­
favorable contracts and forcing the counterparties to
become general creditors for the firm.

U.S. legal treatment of close-out netting
Although close-out and netting are two separate
issues, they are intimately linked in the case of special
financial instruments. Close-out refers to the termination
of contracts, while netting refers to the setting off of
multiple claims between solvent and insolvent coun­
terparties. For most contracts these are separate issues.
In the U.S., stays of indefinite term are automatic
for most contracts when a corporation files for protec­
tion under the Code.26 Furthermore, netting of most
contracts is not generally recognized under the Code,
thus cherry picking is permitted. However, as noted
earlier, various carve-outs or exceptions provide spe­
cial netting and termination rights for certain financial
contracts and certain types of counterparties. In gen­
eral, for financial contracts governed by ISDA and
similar master netting agreements, cherry picking is
prevented and termination rights are recognized.
Under U.S. common law, when a bank depositor
also has (performing) loans outstanding with the bank,
the amount of uninsured deposits may be netted against
the principal outstanding on the loan in the event of
insolvency of either the bank or a bank borrower. Where
the defaulting party is a corporation or a nationally
chartered bank, federal laws apply.27 For state-chartered
banks, state law applies.28 While the common law prin­
ciple of netting of certain bank depositor obligations
is widely recognized, it is still subject to legal uncer­
tainties and is narrow in scope (may be applicable only
to “deposits” and “indebtedness”), thus creating poten­
tial problems for special financial instruments market
participants. This has led to the enactment of a num­
ber of specific laws governing certain types of finan­
cial contracts and certain types of financial institutions.
The Code permits netting of swap contracts and
prohibits stays of swap contracts.29 Furthermore, swap
contracts may be terminated for reasons of insolven­
cy, commencement of bankruptcy proceeding, or ap­
pointment of a trustee, though such terminations are
expressly prohibited for other types of financial con­
tracts, for instance, unexpired leases.30 Swaps are gen­
erally considered to include most derivatives contracts
entered into under ISDA and similar Master Agreements.
Thus, counterparties of firms whose insolvency is gov­
erned by the Code have some degree of protection of
their netting and termination rights, though the scope
of what qualifies as a “swap” is perhaps unclear. How­
ever, this provides no protection when the insolvent
counterparty is a bank, broker/dealer, GSE, or insur­
ance company, which would not be subject to resolu­
tion under the Code.
For insolvent insured depository institutions, FDIA
as amended by FIRREA provides for netting of “qual­
ified financial contracts” between insolvent insured
depository institutions and other counterparties regard­
less of type. The term “... ‘qualified financial contract’
means any securities contract, commodity contract,
forward contract, repurchase agreement, swap agree­
ment, and any similar agreement,” with the FDIC being
given the authority to make the final determination as
to which contracts qualify.31 This definition covers most
over-the-counter (OTC) special financial instruments
governed by ISDA and similar Master Agreements.
The FDIC, as administrator or conservator of a failed
insured depository institution, may transfer qualified
contracts to another financial institution, for instance
a bridge bank, subject to a requirement to notify the
parties involved by noon on the next business day.32
The FDIC may also repudiate any contract but must
pay compensatory damages, which has much the same
effect as termination initiated by a solvent counterparty.33
The FDIC has announced that it will not selectively
repudiate contracts with individual counterparties—
that is, cherry pick—but its legal obligations in this
regard are unclear. However, the FDIC may not stay
the execution of termination clauses, except where
termination is based solely on insolvency or the ap­
pointment of a conservator or receiver.34 Thus, the
takeover of a bank by the FDIC is not an enforceable
“credit event” under ISDA contracts in the U.S., so
long as there is not some other basis for terminating
an agreement, such as a failure to make a payment.
If contracts are transferred, all contracts between the
insolvent depositor institution and a given counterparty
must be transferred together, thus prohibiting cherry
picking of transferred contracts.35
The Federal Deposit Insurance Corporation Im­
provement Act of 1991 (FDICIA) permits enforcement
of close-out netting agreements in financial contracts
between financial institutions.36 Financial institutions
are broadly defined as “... broker or dealer, deposito­
ry institution, futures commission agent, or other in­
stitution as determined by the Board of Governors
of the Federal Reserve System.”37 According to the
Federal Reserve’s criteria for determining whether an
institution qualifies (laid out in Regulation EE), the firm
must be a trader or dealer, rather than an end user, and
meet a minimum size requirement.38 For such desig­
nated financial institutions, the ability to net payment
obligations under netting agreements is quite broad
and includes close-out and termination rights written
into Master Agreements. Furthermore, the law preempts
any other agencies and courts from limiting or delay­
ing application of netting agreements, effectively pre­
venting stays of such contracts.39 However, this law
only recognizes the enforceability of netting agreements
in contracts; it does not create a general right to net
obligations. Furthermore, these provisions are limited
to contracts between designated financial institutions
and, thus, provide no protection for contracts between
financial institutions and nonfinancial institutions.
Overall, therefore, the patchwork of laws govern­
ing termination and netting of special financial instru­
ments provides some protection of close-out and netting
agreements, but remains a source of legal uncertainties.
For example, it is not clear whether unenumerated
special financial instruments such as credit, equity,
energy, and weather derivatives would fall under the
rubrics of either “swap” or “qualified financial contract.”
Furthermore, the enumerated classes of covered coun­
terparties—stockbrokers, financial institutions, and
securities clearing agencies—fail to cover all important

Federal Reserve Bank of Chicago

financial market participants. The FDIC’s various rights
under FDICIA remain unclear and untested in the courts.
Attempts have repeatedly been made to clarify these
questions going back at least to 1996. Most recently,
both the House and Senate passed broadly similar bills
(H.R. 333 and S. 420) to address these issues as part
of a larger reform of the Bankruptcy Code. These
efforts are strongly supported by trade groups, the
Federal Reserve, and the Treasury. However, the re­
sulting piece of legislation failed to pass due to unre­
lated political considerations.

Other issues in resolving LCFOs
As noted earlier, bankruptcy and, in the U.S., bank
resolution procedures are predicated on the orderly
liquidation or reorganization of a troubled firm under
the supervision of a court, an administrator, or in the
case of U.S. banks, the FDIC. The first step is to stay
the exercise of most claims against the firm while the
administrator ascertains assets and liabilities, determines
the validity of claims, realizes the value of assets, and
pays off creditors in a liquidation or negotiates with
creditors to arrange a reorganization. These procedures
take considerable time, sometimes even years.40
The issues discussed above were largely related
to coordination—across competing legal and regula­
tory jurisdictions. Next, I discuss some additional is­
sues complicating the bankruptcy process for LCFOs.
These issues fall into two general categories—opaci­
ty and time.

Opacity
LCFOs tend to be informationally opaque to out­
siders because accounting methods are not designed
to provide detailed information about contingent lia­
bilities embedded in off-balance-sheet activities and
nontraditional financial instrument portfolios. More
importantly, for the purposes of failure resolutions, this
detailed information is often unavailable to insiders
as well. Rather, much of the information available to
managers, counterparties, and regulators and/or courts
is of a summary nature. LCFOs tend to manage their
activities in a decentralized manner. Firm-wide coor­
dination and risk management are usually based on
summary information of profits, losses, risk exposures,
and so forth passed up from the divisions to the head
office(s). This summary information, where it is cor­
rectly structured, should be sufficient for normal risk-management purposes. However, in the event of
financial distress, when the firm or an administrator
seeks to sell off the special financial instruments posi­
tions, more detailed information is needed. The problem
of decentralized information is sometimes exacerbat­
ed by incompatible legacy accounting systems arising
from recent mergers. Few large complex firms are in
a position to rapidly provide detailed firm-wide infor­
mation about individual positions at a level of detail
sufficient for a potential buyer to make an informed
valuation.41 The result is that buyers will only purchase
a special financial instruments book at a price well
below the true market value, since in effect they are
buying a grab bag of contracts with only a vague idea
of the contents.

Time
Banking regulation frequently seeks to avoid the
resolution process by having regulators become in­
creasingly involved in a bank’s activities as it approach­
es insolvency. In the U.S. prompt corrective action
dictates a series of increasingly stronger actions that
supervisors are required to take as a bank’s capital de­
clines below the regulatory minimum. These plans for
preventing a bank from becoming insolvent presume
that the decline in a bank’s condition will be observable
and sufficiently gradual to permit timely intervention.
Prompt corrective action cannot work when perceived
asset values change rapidly, either because their true
value has been hidden and is suddenly realized or be­
cause of fluctuations in market values. Recent notable
bank failures have been the result of fraud (First
National Bank of Keystone, 1999) or incorrect valuation
(perhaps fraudulent) of derivative assets (Superior
Federal Savings Bank, 2001).
While fraud and rapid changes in asset values can
frustrate the (ex ante) procedures that managers, coun­
terparties, and regulators have adopted to prevent or
minimize the incidence of insolvencies, the treatment
of special financial instruments during an insolvency
is apt to frustrate the (ex post) procedures for the or­
derly resolution of firms with large portfolios subject
to close-out netting. The inability of insolvency admin­
istrators to effectively prevent or stay close-out of a
significant portion of the distressed firm’s contracts
means that these contracts and their related collateral
will be terminated and liquidated. This may leave the
firm so impaired as to make reorganization impracti­
cal. Attempts to prevent such close-outs "for reasons
solely of filing for protection" are unlikely to prove
effective—contracts usually provide other termination
conditions beyond the control of courts and/or regu­
lators, for instance, “due-on-downgrade” clauses, which
are likely to be triggered at the same time.
There exists some possibility that the close-out can
be preempted by selling the book, or in the case of a
bank insolvency transferring it to a bridge bank, but
these decisions must take place with incomplete infor­
mation about the assets to be sold or transferred and
under extreme time pressure—close-out can only be
postponed with the forbearance of the solvent coun­
terparties that hold the option to exercise termination
once the firm becomes sufficiently distressed. Since
large firms have multiple counterparties, the situation
is likely to be extremely unstable. The value of special
financial instruments positions is liable to change rap­
idly due to the actions of other counterparties. Once
one counterparty exercises its close-out rights, a “rush
for the exit” will inevitably develop—counterparties
will seek to liquidate their collateral and positions be­
fore the actions of others depress prices (the “fire-sale”
effect) and their own losses increase.42 This is the same
prisoners’ dilemma that gave rise to coordinated bank­
ruptcy procedures—now recurring because removing
the stays effectively exempts special financial instru­
ments contracts from the process.

Conclusion
I have provided an overview of the bankruptcy
laws and the problems relating specifically to resolu­
tion of LCFOs within the current legal and regulatory
framework. In particular, the combination of rapidly
developing insolvency, opaque special financial instru­
ments positions, and the exemption from stays of con­
tracts has the potential to preempt the usual options
open to regulators and courts to conduct a deliberate
and well-considered (that is, leisurely) liquidation or
reorganization of an LCFO. How to ensuring appro­
priate treatment of such an institution is a subject for
future research.


NOTES
1Energy derivatives are financial contracts tied to the price of
various forms of energy and are used for hedging by energy con­
sumers and producers. Credit derivatives are financial contracts
that allow financial market participants to make loans and enter
into contracts while laying off the risk that their counterparty will
default onto other agents willing to assume that risk (for a price).

2One possible exception is when common factors lead to the fail­
ure of large numbers of small institutions generating significant
macroeconomic costs—the savings and loan crisis in the early
1980s being an example.
3Recent research suggests that this fear may be unwarranted, for
example, Furfine (2003).
4These special financial instruments include swaps, options, futures,
forward rates agreements, as well as repurchase agreements, and
various transactions cleared through clearing houses (payments
and exchange traded derivatives). Most financial contracts, how­
ever, are not exempt from insolvency stays.
5See Kennedy (1994) and Knight (1992). This process would to­
day be considered undesirable. Determining whether such
an insolvency procedure might have been helpful in reducing the
incidence of default is beyond the scope of this study.

6Homer (1977) notes that the Code of Hammurabi (Babylonia, circa
1800 BC) limited the term of personal slavery for debt to three
years—a liberal innovation at the time.
7Armour (2001) provides a thorough analysis of this and subsequent
analytic frameworks.
8One of the earliest “games” analyzed by game theory, the prisoner’s
dilemma in its classic form considers two suspects interrogated sepa­
rately. Each is offered freedom if they implicate their partner (provided
that their partner does not do likewise) and a maximum sentence
if their partner implicates them. If both implicate each other, they
both receive an intermediate sentence (reduced from the maximum
for “cooperating” with the authorities); and if both refuse to impli­
cate their partner, they receive a minimum sentence (say for a re­
lated offence). Because the prisoners cannot cooperate with each
other or bind each other to prior commitments to say nothing, the
inevitable outcome is that they implicate each other and receive
the intermediate sentence, whereas if they could credibly cooper­
ate they would both be better off (receive the minimum sentence).
9Kahl (2002) finds that “Chapter 11 may buy poorly performing
firms some additional time, but it does not seem to allow many of
them to ultimately escape the discipline of the market for corpo­
rate control.”
10See, among others, Franks and Torous (1994).

11As of October 2002, the model law had been adopted, at least
in part, in Eritrea, Japan, Mexico, South Africa, and, within
Yugoslavia, Montenegro (www.uncitral.org).

12This is rather a smaller step forward than it may appear. Conflicts
in bankruptcy laws remain and are likely to give rise to anomalies
such as French pro-debtor courts enforcing British pro-creditor
laws in subsidiary proceedings to a UK-based bankruptcy. Further­
more, the absence of mechanisms for Europe-wide registration of
creditors will make coordination of related proceedings difficult.
(See Willcox, 2002.)


13To “set off” obligations means to reduce the amount owed to a
counterparty by any amounts due from the same counterparty.
14The concept of an unpublicized security carries over to collateral
arrangements. In the U.S., the claim on the collateral must be “per­
fected” by registering it in a manner that provides other creditors
with an opportunity to learn of the claim; still, courts are likely to
disregard the agreement and retain the collateral in the estate of
the insolvent firm, thus reducing the improperly collateralized
creditor to general creditor status.
15In practice, creditors are often divided by law into classes having
different priorities. For instance, taxes and lawyers are usually paid
before suppliers. The principle of equality of distribution, as dis­
cussed in this article, should thus be thought of as applying within
a particular creditor class defined by the bankruptcy code. The
Franco-Latin concern is that collateral and netting arrangements
result in privately negotiated alteration of these priorities.
16 12 USC 1811 et seq. (1989).

17The exposition in this section borrows heavily from Johnson (2000).

18In 2002, U.S. banks had total derivatives credit exposures of
$525 billion, 96 percent of which (measured by notional value)
was concentrated in seven banks. Netting reduced banking system-wide gross exposures by 75.8 percent, a figure that had increased
from 44.3 percent in the second quarter of 1996. Still, a number
of major banks have (net) derivatives credit exposures exceeding
their risk-based capital, in the case of J. P. Morgan Chase by a fac­
tor of 589 percent. (Preceding data are from Office of the Comp­
troller of the Currency, 2002).
19See for instance, President’s Working Group (1999).

20The recovery of net in-the-money positions (that is, where a sol­
vent counterparty is owed money) is still subject to uncertainty,
but net positions are smaller than gross positions and can be man­
aged through adjusting net exposures.
21In some cases, there may be several Master Agreements covering
different classes of contracts and with different divisions of hold­
ing company. Thus, counterparty netting protection may be less
than complete. This has led to the development of Cross-Product
Master Agreements, in effect master Master Agreements. ISDA is
lobbying for legislative recognition of these innovations to reflect
industry risk management practices. Recent proposed changes to
the U.S. bankruptcy code have supported this idea.
22Bankhaus Herstatt was a medium-sized bank that was active in
foreign exchange markets. In 1974, it failed and was closed by
German authorities at the end of their business day. The dollar leg
of the bank’s dollar-deutschemark transactions had not cleared,
leaving its U.S. counterparties with losses exceeding $600 million.
The resulting direct losses and, more importantly, the uncertainty
as to whether the losses would lead other banks to fail (contagion)
seriously disrupted foreign exchange markets for weeks.
23An additional major issue is the treatment of collateral, which
I do not cover in this discussion.
24Termination events may include cross defaults (defaulting on
other contracts), mergers, changes in legal or regulatory status,
changes in financial condition, and changes in credit rating
(Johnson, 2000).


25A recent example is the acceleration of some $4 billion of Enron’s
debt following its downgrade by rating agencies. The firm could
not meet the resulting demand for immediate payment of principal
and was forced to file for bankruptcy. Until that time, Enron had
not actually failed to make a payment on any obligation, though it
was surely already insolvent.

26 11 USC 362.

27Scott v. Armstrong, 146 U.S. 499 (1892).

28For instance, the right of the depositor to offset the value of the
deposits against the depositor's indebtedness was recognized in
Heiple v. Lehman, 358 Ill. 222, 192 N.E. 858 (1934) and FDIC v.
Mademoiselle of California, 379 F.2d 660 (9th Cir. 1967). In all
cases "mutuality" of obligations must be established. For instance,
if a holding company fails, deposits made by one subsidiary usu­
ally may not be seized to pay off a loan taken out by another sub­
sidiary. Where insured deposits are involved, netting occurs prior
to the determination of insurance coverage.

29 11 USC 362(b)(17) and 11 USC 560.

30 11 USC 365(e)(1).

31 12 USC 1821(e)(8)(D)(i).

32 12 USC 1823(d)(2)(G) and 12 USC 1821(e)(10).

33 12 USC 1821(e)(1) and 12 USC 1821(e)(3).

34 12 USC 1821(e)(8)(E) and 12 USC 1821(e)(12).

35 12 USC 1821(e)(9).

36 12 USC 4401-05.

37 12 USC 4402(9).

38The size requirements are $1 billion of gross notional principal
outstanding or $100 million of gross marked-to-market value of
outstanding positions (Johnson, 2000, p. 87).

39 12 USC 4405.
40Franks and Torous (1994) report that in their sample of firms filing
for Chapter 11, a median 27 months was required to complete re­
organization.

41Following Enron’s failure, J. P. Morgan announced revised
firm-wide exposures over a period of weeks.
42This is markedly different from other assets. If a bank collateralizes
a loan with a real asset such as an apartment building and the bor­
rower defaults, the building is not going to disappear and its value
is unlikely to change significantly over the next few weeks. On the
other hand, terminated derivatives contracts cease to exist and the
value of financial assets that are held as collateral can change rapidly.

REFERENCES

Armour, John, 2001, “The law and economics of cor­
porate insolvency: A review,” University of Cambridge,
ESRC Centre for Business Research, working paper,
No. 197.

Kahl, Matthias, 2002, “Financial distress as a selec­
tion mechanism: Evidence from the United States,”
University of California, Los Angeles, Anderson
School, working paper.

Basel Committee on Banking Supervision, 2001,
“The new Capital Accord,” draft proposal.

Kennedy, Frank R., 1994, “A brief history of bank­
ruptcy,” University of Michigan Law School, unpub­
lished working paper, archived in Box 18, Frank R.
Kennedy Papers, Bentley Historical Library, Univer­
sity of Michigan.

Franks, Julian R., and Walter N. Torous, 1994,
“A comparison of financial contracting in distressed
exchanges and Chapter 11 reorganizations,” Journal
of Financial Economics, Vol. 35, pp. 349-370.

Furfine, Craig H., 2003, “Interbank exposures:
Quantifying the risk of contagion,” Journal of
Money, Credit, and Banking, forthcoming.
Homer, Sidney, 1977, A History’ ofInterest Rates, second
edition, New Brunswick, NJ: Rutgers University Press.

Jackson, Thomas H., 1982, “Bankruptcy and non­
bankruptcy entitlements and the creditors’ bargain,”
Yale Law Journal, Vol. 91, pp. 857-907.

Johnson, Christian A., 2000, Over-the-Counter De­
rivatives Documentation: A Practical Guide for Ex­
ecutives, New York: Bowne & Company.


Knight, Jack, 1992, Institutions and Social Conflict,
Cambridge, UK: Cambridge University Press.
Office of the Comptroller of the Currency (OCC),
2002, “Bank Derivatives Report, Second Quarter 2002.”

President’s Working Group on Financial Markets,
Office of the President of the United States, 1999,
“Hedge funds, leverage, and lessons of Long-Term
Capital Management,” Washington, DC, group report.
Willcox, John, 2002, “Are you ready for European
bankruptcy regulation?,” International Federation of
Insolvency Professionals World, London, report, May.

Wood, Philip R., 1994, Principals ofNetting: A
Comparative Law Study, Amsterdam: Nederlands
Instituut voor het Bank en Effectenbedrijf.


Economic perspective on the political history
of the Second Bank of the United States

Edward J. Green
Edward J. Green is a senior vice president at the Federal
Reserve Bank of Chicago.

Introduction and summary
The Second Bank of the United States (1817-36) was
chartered by the federal government for a 20-year pe­
riod and it resembled a modern central bank in its close
relationship with the U.S. Treasury and paramount po­
sition in the nation’s banking system.1 It was conceived
in response to a fiscal crisis during and following the
War of 1812. The bank’s charter had a tortuous legis­
lative history, and there was intense political and judicial
controversy throughout the bank’s existence, culminat­
ing in the “War on the Bank” by President Andrew
Jackson and the ultimate refusal of Congress to renew
its charter.2 The “Panic of 1819” was a banking crisis
and economic contraction that was blamed (rightly or
wrongly) on tight credit policy that the bank had im­
posed in order to recover its solvency after mismanage­
ment in its early days of operation. The subsequent
period, 1819-32, was characterized by prosperity and
stability on the whole, but there were some minor fi­
nancial crises that did not have apparent causes. Finally,
some contemporary observers and historians have ar­
gued that actions taken by the national bank during
the Jacksonian “war” may have partly caused the “Panic
of 1837,” another banking crisis and economic con­
traction, which occurred shortly after the Second Bank
of the United States lost its federal charter.
The consensus among historians is that the Sec­
ond Bank of the United States (which I call the U.S.
Bank for short) was politically controversial because
it involved an expansion of federal powers that many
Americans in that day resisted on general principle; and
because the monetary discipline that it was designed
to impose on state-chartered banks was costly to those
banks and thus engendered a powerful industry lobby
in opposition to it. A predominant view (emphasized
particularly by Hammond, 1957) is that, while various
classes of indebted persons often expressed hostility
to the bank and were sometimes mobilized to support
politicians who opposed it, those debtor constituencies
were not the mainspring of opposition. On the whole,
other historians do not dispute Hammond’s view. It is
generally thought that, in fact, the U.S. Bank did not
act in a predatory way toward the state banks.3 Regard­
ing the economic management of the bank, there is wide
agreement that there was disastrous mismanagement dur­
ing the first two years of operation but, after a change
of leadership, very capable management subsequently.
The thesis of this article is that conflict between
debtors and creditors regarding economic policy may
have played a large role, both politically and economi­
cally, throughout the history of the U.S. Bank. This
conclusion is only tentative. It rests on some theoretical
premises that are plausible but not yet rigorously proven;
even if they are provisionally accepted, historical research
suggested by their implications may nevertheless overturn them.
However, if correct, this explanation can account for
four aspects of the history of the U.S. Bank that other
explanations have not addressed convincingly: 1) Why
a large number of legislators changed positions, in both
directions, during the debate on the charter; 2) Why a
demonstrably incompetent president and some venal
senior managers were initially selected; 3) Why states
whose legislators had eventually supported issuance of
the U.S. Bank charter shifted to oppose the bank after
capable and honest management was installed; and
4) Why several, relatively minor, financial crises oc­
curred during the period while the bank was capably
managed and before the conflict about renewing its
charter reached its apex.
The interpretation of the U.S. Bank offered here
rests on theoretical premises about two related matters.
One is the relationship between the structure of the bank­
ing industry in an economy and the macroeconomic
performance of that economy, particularly in times of
high inflation and banking crises. The other is the na­
ture of voters’ preferences over those macroeconomic
outcomes, and the way in which political institutions
translate those preferences into legislation or regula­
tion that affects the structure of the banking industry.
I discuss these matters in turn in the following two
sections. Then I provide an overview of the history of
the U.S. Bank and discuss how the theories outlined in
this article shed some light on the bank’s performance.

Premises about banking structure and
macroeconomic performance
The analysis to be offered here is based on the im­
plications for macroeconomic performance of whether
or not banks’ criteria for making loans and for issuing
money are set centrally. I call a banking system uni­
fied if those criteria are set centrally and divided oth­
erwise. An economy has a unified banking system if
it has either a monopoly bank (or a bank capable of
maintaining a position of industry dominance) with
strong central management or a public authority that sets
and enforces industry-wide standards to which all banks
must adhere. An economy has a divided banking system
if it has many banks and they are not effectively regu­
lated or, alternatively, if it is dominated by a single,
unregulated bank, but the branches of that bank have
substantial independence from the head office. I argue
later in this article that the U.S. Bank itself was a divid­
ed banking system of the latter type, and that the U.S.
financial system as a whole was divided both for this
reason and also because of the survival of the state-char­
tered banks (a divided system of the former type).
This section provides a sketch of a theory (that is,
what economists call a reduced-form model) of banking
equilibrium. In the theory, lending and money creation
are conflated (treated as one variable) and high infla­
tion and banking crises are also conflated. Although
lending and money creation technically are related (be­
cause net money creation by a bank is the excess of the
amount of loans it makes plus the amount of notes it
issues over the amount of deposits it takes), what is rel­
evant for this sketch is that lending and money creation
are both banking activities that are profitable and so­
cially beneficial in moderation, but that can be over­
done in the sense of making imprudent, risky loans or
issuing more monetary claims than may be possible
to honor if demand for redemption is high. Overdoing
lending or money creation causes some economic loss,
often involving a banking crisis or an episode of high
inflation. These two forms of loss have the common
feature that a single bank or group of banks can cause
a loss to the banking industry and the economy as a
whole, not only to itself. (An economist would say
the offending bank imposes a negative externality on
the industry and the economy.)
I sketch arguments for the following three conclu­
sions, which I adopt as premises in my subsequent anal­
ysis of the U.S. Bank. Of course, given the heuristic
character of these arguments, one should regard them
as merely approximate ideas about the macroeconomic
implications of alternative banking-industry structures.
■ Excessive lending and money creation are avoided
in the equilibrium of a unified banking industry.
■ A divided banking industry has a static equilibrium,
in which excessive lending and/or money creation
are the norm and the industry consequently suffers
ongoing losses due to crises and/or high inflation.4
■ A divided banking industry may also have a dynam­
ic equilibrium, in which excessive lending and mon­
ey creation, and consequent losses due to crises and
high inflation, are avoided on the whole. If banks’
decisions are not directly observable by one another
and if occasionally there are economic circumstanc­
es (such as a run on an individual bank or an uptick
in inflation) that banks might impute—rightly or
wrongly—to excessive lending or money creation
by their competitors, then there may be episodic
“industry wars,” in which such excessive activity
does temporarily take place, with attendant losses
to the industry until normal conduct is restored.5

Some simple algebra is helpful to derive these results. Consider an activity that a bank can do to excess. Let x denote the amount of excess activity in which each bank engages and X denote the aggregate amount of excess activity in the banking industry. Suppose that a bank makes revenue of ρ per unit of its own excess activity and that it incurs cost of λ per unit of excess activity in the industry. That is, if a bank’s excess activity is x and the industry’s excess activity is X, then the bank’s profit is

π(x, X) = ρx − λX.

From the perspective of the bank in question, the industry level of excess activity is the sum of that due to itself and that due to all other banks. Let x* denote the level due to the other banks, so that X = x + x*. Think of a unified banking industry as an industry consisting of a single bank, so that x* = 0 in a unified industry. Now the profit of a bank can be rewritten as

π(x, X) = ρx − λ(x + x*) = (ρ − λ)x − λx*.

Make the assumption that a bank chooses its level of excess activity by maximizing its profit without regard to how its choice will influence the choices of its competitors. (Economists call this the Cournot-Nash equilibrium assumption.) On this assumption, a bank will not engage in excess activity (that is, will set x = 0) if ρ < λ, but will engage in as much excess activity as possible if ρ > λ. Call this the static equilibrium of the banking industry. For convenience, assume that there is a finite, positive maximum level x̄ of excess activity. If ρ > λ, then the static equilibrium is for every bank to set x = x̄.
In a unified industry, profit maximization by the single bank is the same thing as profit maximization by the industry. In a divided industry, however, they may diverge. To see this, consider an industry with two banks, 1 and 2. Let x_1 and x_2, respectively, denote the excess-activity levels of banks 1 and 2. Under the assumption that ρ > λ, x_1 = x_2 = x̄. Total industry profit is the sum of the profits of the two banks, which is 2π(x̄, 2x̄) = 2ρx̄ − 4λx̄. Consider, for example, ρ = 3 and λ = 2. Then ρ > λ, so x̄ is each bank’s individual profit-maximizing choice, yet the total industry profit is 2(3)x̄ − 4(2)x̄ = −2x̄ < 0. If both banks had refrained from excess activity, then total industry profit would have been 0. That is, in this example, the individual profit-maximization decisions of banks do not lead collectively to the maximum feasible level of industry profit.6
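To make the example concrete, here is a minimal numerical sketch (in Python, which is not part of the original article) of the static equilibrium just described. The names rho, lam, and x_bar stand in for ρ, λ, and the maximum activity level x̄; the value x_bar = 1.0 is an arbitrary normalization chosen for illustration.

rho, lam, x_bar = 3.0, 2.0, 1.0  # revenue and cost per unit, and the cap on excess activity

def profit(x_own, x_other):
    """Profit pi(x, X) = rho*x - lam*X, where industry activity X = x_own + x_other."""
    return rho * x_own - lam * (x_own + x_other)

# Under the Cournot-Nash assumption, each bank ignores its effect on the
# other's choice, so it simply checks the sign of (rho - lam): if positive,
# it engages in as much excess activity as possible.
best_response = x_bar if rho > lam else 0.0

print("each bank chooses x =", best_response)                                   # x_bar
print("industry profit in static equilibrium:",
      2 * profit(best_response, best_response))                                 # -2 * x_bar
print("industry profit if both refrain:", 2 * profit(0.0, 0.0))                 # 0

Running the sketch reproduces the text’s conclusion: each bank individually chooses x̄, yet total industry profit is −2x̄, worse than the zero profit both banks would earn by refraining.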
Bankers in a divided industry might try to achieve
informal coordination to mitigate the loss that they
would collectively suffer in static equilibrium. The on­
going nature of their relationship as competitors, which
is ignored in the above explanation of why each of
them would rationally decide to participate in the static
equilibrium, can provide a way out of their dilemma.7
For specificity, continue to assume that ρ = 3 and λ = 2. Also assume that the bankers make choices at each date 0, 1, 2, ... and that they discount future profits by a factor δ between 0 and 1. That is, if a banker chooses excess activity x_t at each date t and the total industry level of excess activity is X_t, then the banker’s discounted profit is Σ_{t=0}^∞ δ^t π(x_t, X_t). To reformulate the assumption that bankers neglect the effect of their own choices on their competitors’ choices in a way that takes explicit account of their repeated interaction, assume that bankers neglect the effect of their own choices on their competitors’ simultaneous choices, but that each banker recognizes that competitors can base their current choices on information or inference about the banker’s past choices.
Now consider a divided industry consisting of two banks, and think about an implicit or explicit agreement between the bankers to refrain initially from excess action (that is, to set x_0 = 0), but to switch irrevocably to the static equilibrium level (that is, to set x_t = x̄ thereafter) after observing an apparent violation of the agreement. For the moment, assume that bankers accurately observe one another’s choices.
Consider whether the bankers have an incentive to honor this agreement. If all do honor it, then each banker receives discounted profit 0. Consider a banker who decided to violate the agreement, say at date 0, by setting x_0 > 0. The banker’s profit at date 0 would be (ρ − λ)x_0 = x_0. Thereafter, in the ensuing static equilibrium, the banker’s profit each period is ρx̄ − 2λx̄ = −x̄. The banker’s discounted profit from violating the agreement is thus x_0 − Σ_{t=1}^∞ δ^t x̄ = x_0 − (δ/(1 − δ))x̄ ≤ ((1 − 2δ)/(1 − δ))x̄, since x_0 ≤ x̄. If δ > 1/2, then the discounted profit from violating the agreement is negative and, therefore, the banker has an incentive to keep the agreement. Call such an incentive-compatible agreement a dynamic equilibrium.
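The incentive calculation can be checked numerically. The following sketch (again Python, not from the original article, with ρ = 3, λ = 2, and x̄ normalized to 1) computes the discounted payoff from the most profitable possible deviation, x_0 = x̄, for several discount factors.

x_bar = 1.0  # maximum excess activity; note rho - lam = 1 under the assumed values

def deviation_payoff(x0, delta):
    """Date-0 gain (rho - lam)*x0 = x0, minus the discounted stream of
    losses of -x_bar per period in the ensuing static equilibrium."""
    return x0 - (delta / (1.0 - delta)) * x_bar

for delta in (0.4, 0.5, 0.6):
    payoff = deviation_payoff(x_bar, delta)  # deviate as much as possible
    verdict = "deviation pays" if payoff > 0 else "honoring the agreement pays"
    print(f"delta = {delta}: deviation payoff = {payoff:+.3f} ({verdict})")

For δ = 0.4 the deviation payoff is +0.333 and the agreement unravels; for δ ≥ 1/2 the payoff is zero or negative, so a banker weakly prefers to honor the agreement, matching the condition derived above.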
If δ > 1/2, then it is not really necessary to switch to the static equilibrium forever. Maintaining the static equilibrium for a sufficiently long time and then refraining again from excess activity (that is, replacing ∞ by a sufficiently large, finite upper limit of summation in the discounted sum of profits) would preserve incentive compatibility.
Now suppose that bankers do not directly observe
one another’s choices, but rather that they observe
some indirect evidence that is subject to occasional,
random, disturbances. In particular, although all bankers
are keeping their agreement, they sometimes receive
the sort of evidence (such as an uptick of inflation or
a spate of withdrawals by depositors) that would or­
dinarily result from a violation. When this occurs, then
all the bankers will revert to static equilibrium for a
finite period and subsequently return to cooperation.
If the errors are sufficiently rare, then the inequality
of discounted profits that determines incentive com­
patibility of the agreement will be almost identical to
the corresponding inequality that has just been derived
for an industry where bankers observe one another’s
choices directly, and this inequality will hold in expected-value terms. That is, in an industry where such obser­
vation errors occasionally occur, dynamic equilibrium
will exhibit a pattern of cooperation that is occasion­
ally broken but always repaired after a while. During
the breaks, however, banks will lend or create money
in excess, and banking crises or high inflation will
sometimes result.
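A small simulation can illustrate this pattern of episodic breakdowns. The sketch below (Python; the parameter values p_noise = 0.05 and the six-period punishment phase are assumptions chosen purely for illustration, not taken from the article) has every banker honor the agreement throughout, yet misleading signals occasionally trigger finite reversions to the static equilibrium.

import random

random.seed(1)
p_noise, t_war, horizon = 0.05, 6, 200  # signal-error probability, punishment length, periods

state, war_left, history = "cooperate", 0, []
for t in range(horizon):
    if state == "cooperate":
        history.append(0.0)            # x = 0: no excess activity
        if random.random() < p_noise:  # a misleading signal arrives anyway
            state, war_left = "war", t_war
    else:
        history.append(1.0)            # x = x_bar: excess activity; crises or inflation likely
        war_left -= 1
        if war_left == 0:
            state = "cooperate"

print(f"fraction of periods spent in 'industry wars': {sum(history) / horizon:.0%}")

The output shows cooperation punctuated by brief, recurring wars, the dynamic-equilibrium signature that the historical discussion below looks for in the record of the 1820s.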

Premises about voters’ policy preferences
Banking crises and high inflation affect the gen­
eral public, as well as the banking industry. In most
macroeconomic models, all persons are identically sit­
uated and there is a unanimous preference for bank­
ing stability and low inflation (or even slight deflation).
However, people in actual economies are not all
identically situated. In particular, some people tend to
be in debt most of the time (although they may need
to pay off their debts periodically to remain credit­
worthy), while some others are debt free and even hold
bonds. It is plausible that such choices are often robust
(that is, they would not be reversed by small changes
in wealth, interest rates, and so forth) and that they are
rational in light of people’s endowments, preferences,
and so on. Strictly speaking, whether to borrow or to
lend is a choice that a person makes in credit-market
equilibrium, rather than a characteristic of the person.
Nevertheless, I use the terms debtor and creditor here
to refer to people whose characteristics lead them ra­
tionally and robustly to be either debtors or creditors
throughout most of their lives.
I use the following premises about people’s—and
specifically voters’—life-cycle credit positions and con­
sequent policy preferences in analyzing the history of
the U.S. Bank.

■ There are both debtors and creditors in the economy.
■ Debtors tend to favor positive inflation and are will­
ing to tolerate some risk of a banking crisis in return
for “easy” credit, while creditors favor price stability
or deflation and are averse to risk of a banking crisis.

Wallace (1984) emphasizes the significance of these
premises (as they apply to inflation, not banking crises)
for monetary policy. He provides an economic model
that conforms to the first premise and that also conforms
approximately to the second. (Holders of money in
the initial generation of Wallace’s overlapping-gener­
ations model, rather than creditors, are the group that
is averse to inflation.) A subsequent model that resembles
Wallace’s, and that can be shown to conform exactly
to the second premise (for inflation), is the prototypical
model of a debt security in Green (1997), diagrammed
in that paper in figure 2. The key to why these models
generate disparate preferences regarding inflation is that
steady-state inflation is an outcome of steady-state mon­
ey growth that depresses the real interest rate, and that
debtors prefer a low real rate while creditors prefer a
high real rate. Dependence of the real interest rate on the
rate of steady-state money growth contrasts with typical
models in which the real interest rate is assumed to
be constant or to be determined by non-monetary factors.
I am not aware of any studies that confirm either
of the premises directly. Direct confirmation could be
made, in principle, from a large set of observations
tracking households’ credit histories throughout their
lifetimes and including characteristics that might pre­
dict disposition to be debtors or creditors. Short of
analyzing such a dataset, it is still possible to obtain
partial and indirect confirmation. Hendricks (2002) may
be seen as providing this.8 Hendricks begins by pro­
viding corroboration of two previously observed facts:
that there is tremendous wealth inequality between
households with similar lifetime incomes, and that this
inequality persists across generations. He then shows
that these facts are inconsistent with a life-cycle con­
sumption model, which represents all households as
being essentially identical (with wealthier households
being scaled-up copies of less wealthy ones), even when
modifications are made to account for intergenerational
transfers, differences in time preference, and random
opportunities for entrepreneurial investment. He con­
cludes that life-cycle models lack an important source
of wealth inequality.
Hendricks does not pinpoint the situation postu­
lated in the first premise, but the premise can fit his
needs. Notably, if there is a segment of households with
income that increases predictably over time and with
relatively age-independent consumption preferences,
while other households’ income is a constant or de­
creasing function of age, then the increasing-income
households would maximize utility subject to their lifetime-budget constraints by borrowing when young and
repaying with their higher income when old. In con­
trast, other households with the same total lifetime in­
come would save and subsequently spend their savings,
or simply consume their income if they had time-con­
stant income, and so would not go into debt. That is,
the increasing-income households would have nega­
tive wealth throughout their lives, while other house­
holds would have nonnegative wealth. Moreover, under
the plausible assumptions that whether income is in­
creasing or decreasing as a function of age is correlated
with occupation and that occupation is intergenerationally correlated, the resulting wealth inequality will also be correlated across generations. Thus Hendricks’ findings provide
support for the first premise.9
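A two-period numerical example, with hypothetical income figures of my own choosing, shows the mechanism just described. Both households below earn the same lifetime income and smooth consumption perfectly (at a zero interest rate, for simplicity), yet only the household with rising income goes into debt.

households = {
    "increasing income": (1.0, 3.0),  # earns little when young, more when old
    "flat income":       (2.0, 2.0),  # same lifetime income of 4.0
}

for name, (y_young, y_old) in households.items():
    c = (y_young + y_old) / 2          # smoothed consumption in each period
    wealth_after_youth = y_young - c   # negative means the household borrowed
    print(f"{name}: consumes {c} per period, "
          f"end-of-youth wealth = {wealth_after_youth:+.1f}")

The increasing-income household consumes 2.0 per period by borrowing 1.0 in youth and repaying in old age, so it carries negative wealth through most of its life; the flat-income household never borrows. This is the sense in which stable debtor and creditor types can coexist, as the first premise asserts.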
The preceding discussion has entirely concerned
inflation and has not mentioned banking crises, to which
the second premise refers. The notion that creditors (that
is, bankers and depositors in banks) are more averse
than debtors to banking crises is intuitive, especially
in the early nineteenth century U.S. context where (as
I discuss below) debtors were able to get political
protection from their creditors during a crisis. Never­
theless, it would be desirable to have an economic model to provide a foundation for the premise and also direct evidence in its favor. Since I discuss infla­
tion consequences of the U.S. Bank in the next section,
as well as banking-crisis consequences, the assertion
in the second premise regarding banking crises is not
absolutely required for the analysis of the U.S. Bank
to be sound.


The Second Bank of the United States
The premises discussed in the previous two sec­
tions seem to fit the Second Bank of the United States
well, and they provide a quite distinct insight from
the conventional analysis. The U.S. Bank was origi­
nally proposed to Congress in 1814. Congress granted
a charter in 1816 to operate for a period of 20 years.
The bank began to operate in 1817 and was convert­
ed into a Pennsylvania state-chartered bank in 1836,
after Congress declined to renew its federal charter.
The U.S. Bank was conceived in an environment
of financial crisis. The United States declared war on
England in 1812 and narrowly survived the war, which
ended with a negotiated peace in 1814. The U.S. gov­
ernment bore extraordinary war expenditures. At the
same time, tax revenues (principally import duties on
goods imported from England during peacetime) plunged.
The U.S. financial system was based on state-chartered
banks, which expanded their note issue and subsequent­
ly were unable to redeem their notes for specie. Be­
cause these notes were not redeemable and suffered
high inflation, and because the notes of most banks were
not accepted in trade except close to their location of
issue, it would have been fruitless for the government
to accept them in payment of taxes. Since taxpayers
could not obtain specie, they could not pay their taxes.
In large part because of credit risk due to this situation,
even short-term government debt sold at a substantial
discount (Wright, 1941, pp. 276-279).
The conventional analysis is that, as an economic
institution, the U.S. Bank was disastrously managed
in its first two years but, on the whole, very capably
managed thereafter. This abrupt change reflected a
change in leadership.10 The president of the bank dur­
ing those first two years, William Jones, was essentially
a political choice—preferred for the position by the
U.S. president and secretary of the Treasury (James
Madison and Alexander Dallas, who appointed five
of the bank’s directors and apparently lobbied actively
to influence the election of the remaining 20), but had
neither the experience nor the ability to be a capable
and judicious banker. In contrast, each of the two sub­
sequent presidents, Langdon Cheves and Nicholas
Biddle, was elected by the bank’s directors with the
expectation that he would act as a capable and judicious
banker, and each amply justified that expectation by
his performance.
The U.S. Bank operated in an economy in which
there were already over 200 state-chartered banks
(Wright, 1941, p. 258). Indeed, one of the main motives
for establishing the U.S. Bank was to impose discipline
on the state banks. Both impressionistic and quantita­
tive studies have concluded that the U.S. Bank acted
in a non-predatory way toward the state banks, although
it did constrain their profits by imposing discipline and
by competing vigorously. However, state bankers com­
plained strenuously that the conduct of the U.S. Bank
was unfair to them and contrary to the public interest.
These bankers’ complaints and their view of the role
of the U.S. Bank were taken seriously by citizens, es­
pecially in the southern and western states, who sup­
ported the sustained and aggressive campaign of Andrew
Jackson’s administration against the U.S. Bank. That
campaign, which reached its peak during Jackson’s
second term (beginning in 1833), included withdraw­
ing the federal government’s deposits, refusing to ac­
cept notes of the U.S. Bank in payment of taxes, and
an intense and ultimately successful political effort to
prevent renewal of the bank’s federal charter.
Those southern and western states were the ones
in which it was most common for banks to issue a greater
value of notes than they were able to redeem for specie.
They were also the states where, during the Panic of
1819, laws were passed that impaired banks’ ability
to take possession of collateral and sell it to discharge
loans that were in default. These two facts suggest
that in the southern and western states, debtors were
politically decisive, and that those debtors favored or
at least tolerated a policy regime that permitted bank­
ers aggressively to expand the money supply.
As a political institution, the U.S. Bank was one of
the most intense objects of controversy in U.S. history.
The original charter was a subject of extended debate
throughout a two-year period, during which seven at­
tempts were made to pass it. One of these attempts ended
in a presidential veto. The original petition to Congress
for a bank to be chartered had been submitted by the
New York business community and received strong
support from business leaders in Philadelphia, where
the bank was ultimately headquartered. New York and
Philadelphia, the two primary U.S. financial centers,
were located in the states where it is reasonable to sup­
pose that creditors were most politically dominant, as
they likely were to some extent in most of the north­
eastern states. The petition emphasized that the U.S.
Bank would provide a sound national currency, disci­
pline the state banks (which in some states would other­
wise continue to issue unsound currency), and provide
a serviceable medium for payment of taxes so that the
federal government could balance its budget and repay
its debt. That is, the petitioners from these creditor-dominated states supported a contractionary monetary
and fiscal regime that would be expected to produce
relatively high real interest rates.
However, when the charter ultimately did pass,
much of the support for it came from the southern and
western states.11 That is, support came primarily from
the debtor-dominated states that later were most criti­
cal of the U.S. Bank’s conduct.
The conventional analysis of the politics of the
original U.S. Bank charter emphasizes considerations
of party and ideology, which are only indirectly relat­
ed to the economic function of the bank. The fact that
legislators’ votes were determined as much by their
regions as by their parties casts doubt on that analysis.12
At the same time, there are three puzzles that are chal­
lenges for the explanation that I am proposing. Why did
debtor-dominated states support a bank proposed by
creditor-dominated states? Why did creditor-dominat­
ed states withdraw their support for a bank that they
had proposed? Finally, why did the debtor-dominated
states quickly become dissatisfied with the bank?
If the premises enumerated in the previous two
sections are correct, then one can resolve all three of
these puzzles by paying attention to the decentralized
corporate structure of the U.S. Bank, which made the
U.S. Bank itself and the U.S. banking system (consist­
ing of both the U.S. Bank and the state banks) a divided
banking system. As discussed earlier, a divided bank­
ing system has two equilibriums that differ in their levels
of money creation and exposure to banking panics. As
discussed in the previous section, these differences be­
tween the equilibriums can result in differences between
their distributive implications. While the original pe­
tition to Congress to charter a bank did not envision
branches (and thus did envision a unified banking
system with a dominant, centrally managed bank at its
head), most of the draft charters subsequently consid­
ered did authorize the U.S. Bank to establish branches.
By early 1817, when the bank went into operation,
16 branches had been established in addition to the
head office in Philadelphia.13 Each branch had its own
board of directors, whom the charter specified were to
be appointed by the parent board in Philadelphia. A
branch board was to elect one of its members as branch
president. Each branch had a cashier, an employee who
managed its day-to-day business, whom the charter also
specified was to be appointed by the parent board.
The initial rationale for authorizing the establish­
ment of branches was to impose discipline on state banks
operating in markets far from the head office and to
create a uniform, nationwide currency. In order to
achieve the latter goal fully, notes issued by any branch
would have to be payable in specie at any other branch.
Preferably, other branch obligations, including drafts
and inland bills of exchange, would also be payable there.
While the charter did not require the bank to operate
according to this rule, that was the expectation of the
U.S. Bank’s initial proponents. In principle, the charter
enabled the head office to limit the value of notes issued
by the branches because the paper notes themselves had
to be obtained from the cashier in Philadelphia. How­
ever, this arrangement was not self-enforcing. Rather,
it placed the burden on the cashier and, ultimately,
the directors of the head office to monitor note issuance
by branches and to constrain the decisions of branch
directors who might be politically influential. More­
over, it did not address the problem of limiting other
sorts of branch obligations, which were more difficult
to monitor than note issuance because they required
detailed knowledge of the operating procedures of each
branch. Even an experienced cashier in Philadelphia
had difficulty in this regard (Catterall, 1902, p. 395).
I have already mentioned William Jones, the first
president of the U.S. Bank. He was primarily a poli­
tician. He lacked the experience or ability to head the
nation’s largest bank and to play a role akin to that of
a central banker. As a businessman, he had gone into
bankruptcy. He had been regarded as incompetent dur­
ing a brief tenure as Treasury secretary. In fact, the bank’s
original directors shared these traits on the whole. They
appointed branch directors who, as a group, did not
exhibit high character, competence, or political inde­
pendence (Catterall, 1902, p. 32). With such leaders,
and without close and competent central oversight, a
number of branches located primarily in debtor-dom­
inated states engaged in dangerously expansive note
issuance and lending.14 That is, a policy regime went into effect that, in most respects, closely resembled the static, high-inflation equilibrium discussed earlier.
These considerations suggest that the character
of the directors and officers was crucial to determining
whether the static, high-inflation equilibrium or the
dynamic, low-inflation equilibrium would result from
the founding of the U.S. Bank with its decentralized
corporate form. Evidently the representatives of the
creditor-dominated, northeastern states initially believed
that those directors and officers would be conservative
bankers who would implement the low-inflation equi­
librium. It is plausible that, sometime between 1814
and 1816, both they and the representatives of the debt­
or-dominated, southern and western states changed
their beliefs. They came to recognize that a combina­
tion of direct government appointment of some of the
Philadelphia directors and politically influenced elec­
tion of the remaining directors would likely produce
a board with the characteristics of the actual original
board, and that the head-office board would then ap­
point branch boards that would be inclined to behave
in accordance with the high-inflation equilibrium. This
supposition provides an explanation of why many legis­
lators representing the northeastern states abandoned
their support for the U.S. Bank, as well as why many
southern- and western-state legislators ultimately did
vote to charter the bank. That is, the supposition re­
solves the first and second of my puzzles.
Let’s turn now to the third puzzle: why the southern and western states’ citizens’ views shifted toward
opposition to the U.S. Bank, particularly after the equi­
librium initially supported by that institutional frame­
work turned out to be the one that they had hoped for.
A conventional view, to which I present an alter­
native or at least a supplement, attributes the shift to
the fact that the U.S. Bank was required by its charter
to redeem its notes for specie, so inflation could not go
on indefinitely. Beginning in mid-1818, the bank was
forced to demand payment of loans rather than renewing
them, in order to obtain specie with which to make
redemptions. To the extent that loans were repaid in state
banknotes that the U.S. Bank redeemed, the balance-sheet
pressure was also partly transmitted to state banks. The
resulting contraction of credit was widely thought to
have contributed to, or at least increased the hardship
produced by, the recessionary Panic of 1819. Further­
more, when Langdon Cheves became president of the
bank at the beginning of 1819, he forbade the branches
to issue notes and instructed the head office not to pur­
chase bills of exchange issued by the branches (Catterall,
1902, p. 70). The consequences of this policy were felt
most heavily by farmers and other users of bank credit.
Thus, according to this view, debtors turned against
the bank because they blamed it for causing them un­
necessary hardship during and after the panic.
This is a very plausible view. It is consistent with
documentary evidence about when and where sentiment
turned against the U.S. Bank. It is also consistent with
the intuitive idea that people whose lives had been ruined
or severely disrupted by being held to the harsh terms
of a contract in circumstances for which it was not de­
signed (that is, whose loan defaults were due to excep­
tional macroeconomic conditions rather than to their
own indolence or improvidence) would become im­
placable enemies of the institution enforcing the con­
tract. Here are two weaknesses of the view, although
these considerations are far from being decisive refu­
tations of it. First, in a number of the debtor-dominat­
ed states, laws were passed that effectively protected
defaulting debtors from action by their creditors.15 It
is probable that, once such a law had been passed, banks
largely left defaulting debtors alone rather than taking
costly, unproductive actions against them. Thus, to the
extent that such a law had been passed promptly, there
would be relatively few debtors who were directly, per­
sonally harmed by their banks. Second, the view does not
explain why debtors should have strong animosity to
the U.S. Bank as an institution, rather than to the of­
ficers who had caused the difficulty by inept or cor­
rupt management. In particular, after President Jones
had resigned in disgrace at the beginning of 1819 and
President Cheves had subsequently forced many of
Jones’s subordinates out of office and prosecuted
several of them, why was there still animosity to the
bank after 1822, when the Panic of 1819 had waned
and Nicholas Biddle had replaced Cheves as president?
Why was animosity not directed exclusively toward
Jones and perhaps Cheves (who initially had no choice
but to continue the contractionary policies adopted to
keep the bank solvent at the end of Jones’s tenure),
rather than toward the bank and its newly elected pres­
ident?16 Of course, if one believes that public animosi­
ty is frequently misdirected at institutions and public
figures whose actual conduct has been creditable, then
one will not lose much confidence in the convention­
al explanation of the bank’s fall from popularity on
account of that having happened here. In summary, the
conventional view explains well why support for the
U.S. Bank eroded in the southern and western states.
Nevertheless, the contrast between the static and
dynamic equilibriums of a divided banking system sug­
gests an additional explanation. Cheves and Biddle may
have accomplished a shift from a high-inflation to a
low-inflation equilibrium. If so, then it is obvious why
debtors who had supported chartering the U.S. Bank
in the expectation of an expansionary outcome retract­
ed that support in 1819. It is certain that the money
stock per capita steadily decreased to a stable level
attained by the late 1820s. Catterall (1902, p. 444) cites
a congressional document that calculates the amount
of money (including state banknotes, U.S. Bank notes,
and specie) in circulation per capita as having been $11
in 1816, $7.75 in 1819, $6 in 1829, and $6.35 between
1829 and 1834. It is clear that the gradual decline, on
average, in circulation per capita during 1819-29 is at­
tributable to the Cheves-Biddle regime. Credit for the
steeper decline during 1816-19 cannot be attributed
as surely, since Jones had to curtail the bank’s opera­
tions in the second half of 1818 and then the bank’s
transition from Jones to Cheves as president occurred
in January 1819. Both the description of the U.S. Bank’s
own operations during 1817-18 and the evidence that
state banknotes continued to inflate during that period
suggest that most of the 1816-19 decline in circula­
tion per capita probably occurred in 1819.
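A back-of-the-envelope calculation using Catterall’s figures makes the contrast between the two regimes explicit. The sketch below simply converts the per capita circulation figures quoted above into average annual rates of decline.

figures = {1816: 11.00, 1819: 7.75, 1829: 6.00}  # dollars per capita (Catterall, 1902, p. 444)

def avg_annual_decline(m0, m1, years):
    """Average annual percentage decline implied by two per capita money stocks."""
    return (1.0 - (m1 / m0) ** (1.0 / years)) * 100.0

print(f"1816-19: {avg_annual_decline(figures[1816], figures[1819], 3):.1f} percent per year")
print(f"1819-29: {avg_annual_decline(figures[1819], figures[1829], 10):.1f} percent per year")

The implied contraction averages roughly 11 percent per year during 1816-19 but only about 2.5 percent per year during 1819-29; and, as argued above, even the first figure understates the abruptness of 1819 if most of that decline was concentrated in that single year.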
Changes that Cheves and Biddle made in operating
and management procedures can be viewed as attempts
to alter or mitigate the features of the U.S. Bank’s cor­
porate structure that constituted a divided banking
system. First of all, Cheves’ policy in 1819 established
the precedent that the bank’s management had the op­
tion not to permit notes of one branch to be presented
for specie payment at another branch. Moreover, he
required each branch not to pay bills of exchange is­
sued by another branch, unless the issuing branch had
made an inter-branch deposit from which the payment
could be made (Catterall, 1902, p. 76). To address the
problem of branch interrelatedness at its root, he as­
signed a notional capital to each branch and required
prompt payment of interbranch debt, so that each branch
had to stand financially on its own rather than being
a free rider on the others and the head office (Catterall,
1902, pp. 63, 76). Biddle reduced the autonomy and
privacy of the branches by having the cashier of each
branch report directly to the head office, rather than
delegating the supervision of the cashier substantially
to the branch president as before, and empowering
Philadelphia directors resident in branch cities to attend
the board meetings of those branches. He also instituted
a practice of filling branch cashier positions by pro­
moting seasoned Philadelphia employees and avoiding
moving people to cities where they had formerly lived
(Catterall, 1902, pp. 102-104).
In a decentralized economy in which a static equilib­
rium had been in effect for a period of time and, subse­
quently, a dynamic equilibrium had been in effect, one
would expect to observe two distinctions between the
earlier and later periods. First, policy would be less ex­
pansionary on average during the later period. Second,
there would be brief periods of some sort of financial
disturbance (such as high-inflation episodes in dynamic
equilibrium) in which the equilibrium had apparently
broken down and then been restored. These episodes
would occur in circumstances where it might appear as
though banks (or bank branches) could be overextending,
but without direct evidence of inappropriate decisions
or conduct. These comparisons between the two peri­
ods are predictions that follow from a supposition that
an equilibrium shift has taken place. Fulfillment of both
predictions should be taken as evidence of a shift.
To examine the U.S. economy during the existence
of the U.S. Bank in these terms, we might specify the
first period as having occurred during 1817-18 and the
second period during 1819-32. This specification rec­
ognizes that effects of the Jackson administration’s active
hostility to the bank and of the bank’s forceful strategic
reaction overshadowed the fundamental characteristics
of the bank’s equilibrium after 1832. The discussion of
money stock per capita above provides some evidence
that the first prediction from an equilibrium shift was
fulfilled. Regarding the second prediction, there were
episodes of banking disruption in 1828 and particularly
in 1832 that fit it well (Catterall, 1902, pp. 135-137).
This evidence seems favorable toward, albeit not conclu­
sive of, a shift from a static equilibrium to a dynamic
equilibrium coinciding with Jones’s resignation and
Cheves’ election as president of the U.S. Bank.

Conclusion
The Second Bank of the United States was an in­
stitution of first-rank importance, both politically and
economically, during the early nineteenth century.
This article has brought recent contributions to the theo­
ry of industrial organization and monetary economics
to bear, in order to link the political and economic
aspects of its history more closely and insightfully.
The main, albeit tentative, conclusion of the study is
that conflict between debtors and creditors regarding
the U.S. Bank and its policies may have played a
larger role in the political fortunes of the bank than
historians have generally understood.

NOTES
¹The First Bank of the United States (1791-1811) was a previous
economic and political experiment with a national bank.

²A legacy of the Second Bank of the United States is McCulloch v.
Maryland (McCulloch v. Maryland, 17 U.S. 316, 1819), a case that
became one of the pillars of U.S. constitutional law. The Supreme
Court ruled that the Constitution should be read as granting “implied
powers”—powers that are reasonable means for exercising narrower
powers explicitly enumerated in the Constitution and that are not
explicitly prohibited—to the federal government. From this general
principle and the specific premise that a national bank was a
reasonable means to exercise explicit federal powers such as
collecting taxes, borrowing money, regulating commerce, and so
forth, the court inferred that the charter of the Second Bank of the
United States was constitutional.

³An econometric study of the U.S. Bank by Highfield, O’Hara, and
Woods (1991) supports previous historians’ impressionistic
conclusions to this effect. Nevertheless, there was one very important
state (New York, where Governor Martin Van Buren was a national
leader of opposition) in which state banks were limited by charter
from offering loans at as low a rate as the U.S. Bank could offer
(Catterall, 1902, p. 166). So its avoidance of predatory conduct did
not necessarily mean that the U.S. Bank was not a genuine threat
to state banks.

⁴Aizenman (1989) derives this proposition in a model in which real
money balances are assumed to be an argument of agents’ utility
function (or, more generally, an exogenous demand function for
money is assumed). Horder (1997) derives the proposition in an
overlapping-generations model of fiat money.

⁵Zarazaga (1992, 1993) derives this proposition in a dynamic version
of Aizenman’s model.


⁶This exemplifies a more general phenomenon known to economists
as the “prisoner’s dilemma” and the “tragedy of the commons,” on
account of early examples that were studied.

⁷The following discussion presents the intuition behind a result of
Green and Porter (1984) that Zarazaga used. Abreu, Pearce, and
Stacchetti (1990) provide an improved, but more technically
demanding, result.

⁸I am grateful to Anna Paulson for pointing out the relevance of
Hendricks’ study.

⁹Bayes’ Theorem states that an observation (such as Hendricks’
findings) provides support for a hypothesis (such as the first premise)
if the hypothesis raises the conditional likelihood of the observation
(as this paragraph argues that the first premise does for Hendricks’
findings).

¹⁰The following facts, and the other facts in this section for which
explicit citations are not given, are documented by Catterall (1902).

¹¹The New England and middle states (New York, New Jersey,
Pennsylvania, and Delaware) voted 45-35 against the charter in
the House of Representatives, while the southern and western states
voted 45-26 for it. The Senate vote was 22-21, with more than
half of the votes for the charter coming from the South and West
(Hammond, 1957, p. 240).

¹²Crucial support for the charter came from defecting members of
the Federalist party (Hammond, 1957, p. 241).

¹³A total of 28 branches were eventually established, several of
which were closed while the bank still had its federal charter.

¹⁴Some of this activity, particularly at the Baltimore branch, involved
transactions that were outright inappropriate and even fraudulent.
However, the extent of this activity and its relative concentration
in the southern and western branches suggest that it was an
equilibrium phenomenon rather than solely a manifestation of
individual weakness or greed.

¹⁵Such laws were passed in Tennessee, Kentucky, Ohio, Missouri,
Illinois, and Indiana (Catterall, 1902, p. 83). Although these laws
superficially seem to be a time-inconsistent obstruction of voluntary
agreements, there is a good case that their passage was actually
efficient from an ex ante perspective. Green and Oh (1991) and
Bolton and Rosenthal (2002) have made this case.

¹⁶Wright (1953) documents that, even before Cheves became
president, some contractionary actions were taken on Biddle’s
recommendation that were necessary to correct Jones’s
mismanagement. However, Wright notes that Biddle managed to
give this advice without taking a publicly visible role.

REFERENCES

Abreu, Dilip, David Pearce, and Ennio Stacchetti, 1990, “Toward
a theory of discounted repeated games with imperfect monitoring,”
Econometrica, Vol. 58, No. 5, pp. 1041-1063.

Aizenman, Joshua, 1989, “The competitive externalities and the
optimal seignorage,” National Bureau of Economic Research,
working paper, No. 2937.

Bolton, Patrick, and Howard Rosenthal, 2002, “Political
intervention in debt contracts,” Journal of Political Economy,
Vol. 110, No. 5, pp. 1103-1134.

Catterall, Ralph C. H., 1902, The Second Bank of the United States,
Chicago: University of Chicago Press.

Green, Edward J., 1997, “Money and debt in the structure of
payments,” Bank of Japan Monetary and Economic Studies, Vol. 15,
No. 3, pp. 63-87, reprinted in Federal Reserve Bank of Minneapolis,
1999, Quarterly Review, Vol. 23, pp. 13-29.

Green, Edward J., and Soo Nam Oh, 1991, “Can a credit crunch
be efficient?,” Quarterly Review, Federal Reserve Bank of
Minneapolis, Vol. 15, No. 4, pp. 3-17.

Green, Edward J., and Robert H. Porter, 1984, “Noncooperative
collusion under imperfect price information,” Econometrica,
Vol. 52, No. 1, pp. 87-100.

Hammond, Bray, 1957, Banking and Politics in America from the
Revolution to the Civil War, Princeton, NJ: Princeton University
Press.

Hendricks, Lutz, 2002, “Accounting for patterns of wealth
inequality,” Arizona State University, working paper.

Highfield, Richard A., Maureen O’Hara, and John H. Woods,
1991, “Public ends, private means: Central banking and the profit
motive 1823-1832,” Journal of Monetary Economics, Vol. 28,
No. 2, pp. 287-322.

Horder, Jakob, 1997, “Essays on financial institutions, inflation,
and inequality,” London School of Economics, Ph.D. dissertation.

Wallace, Neil, 1984, “Some of the choices for monetary policy,”
Quarterly Review, Federal Reserve Bank of Minneapolis, Vol. 8,
No. 1, pp. 15-24.

Wright, Chester W., 1941, Economic History of the United States,
New York and London: McGraw-Hill.

Wright, David McC., 1953, “Langdon Cheves and Nicholas Biddle:
New data for a new interpretation,” Journal of Economic History,
Vol. 13, No. 4, pp. 305-319.

Zarazaga, Carlos, 1993, “Hyperinflations and moral hazard in the
appropriation of seigniorage,” Federal Reserve Bank of Philadelphia,
working paper, No. 9326.

__________, 1992, “Hyperinflations, institutions, and moral hazard
in the appropriation of seigniorage,” University of Minnesota,
Ph.D. dissertation.
