
Federal Reserve Bank
of Chicago

First Quarter 2002

Announcement
2002 Conference on Bank Structure and Competition

Economic Perspectives

The electricity system at the crossroads—Policy choices and pitfalls
The aggregate effects of advance notice requirements
When can we forecast inflation?
Origins of the use of Treasury debt in open market operations: Lessons for the present
Conference on Bank Structure and Competition announcement

Economic Perspectives

President
Michael H. Moskow
Senior Vice President and Director of Research
William C. Hunter

Research Department
Financial Studies
Douglas Evanoff, Vice President
Macroeconomic Policy
Charles Evans, Vice President
Microeconomic Policy
Daniel Sullivan, Vice President
Regional Programs
William A. Testa, Vice President

Economics Editor
David Marshall
Editor
Helen O’D. Koshy

Associate Editor
Kathryn Moran
Production
Julia Baker, Rita Molloy,
Yvonne Peeples, Nancy Wellman
Economic Perspectives is published by the Research
Department of the Federal Reserve Bank of Chicago. The
views expressed are the authors’ and do not necessarily
reflect the views of the Federal Reserve Bank of Chicago
or the Federal Reserve System.

Single-copy subscriptions are available free of charge. Please
send requests for single- and multiple-copy subscriptions,
back issues, and address changes to the Public Information
Center, Federal Reserve Bank of Chicago, P.O. Box 834,
Chicago, Illinois 60690-0834, telephone 312-322-5111
or fax 312-322-5515.
Economic Perspectives and other Bank
publications are available on the World Wide Web
at http://www.chicagofed.org.

Articles may be reprinted provided the source is credited
and the Public Information Center is sent a copy of the
published material. Citations should include the following
information: author, year, title of article, Federal Reserve
Bank of Chicago, Economic Perspectives, quarter, and
page numbers.
ISSN 0164-0682

Contents

First Quarter 2002, Volume XXVI, Issue 1

The electricity system at the crossroads—Policy choices and pitfalls
Richard Mattoon
Can electricity markets be successfully opened to competition? Events ranging from California’s
electricity crisis to the fall of electricity trading giant Enron have caused policymakers to reexamine
the benefits of restructuring the industry. This article examines policy developments in the Midwest
and highlights some lessons that might help guide future electricity policy.

The aggregate effects of advance notice requirements
Marcelo Veracierto
This article analyzes the effects of advance notice requirements on aggregate output, wages,
employment, and welfare levels. The author finds that, contrary to firing taxes, advance notice
requirements do not lead to reductions in employment. However, they can reduce welfare levels
considerably more than firing taxes.

Conference on Bank Structure and Competition announcement

When can we forecast inflation?
Jonas D. M. Fisher, Chin Te Liu, and Ruilin Zhou

This article reassesses recent work that has challenged the usefulness of inflation forecasts.
The authors find that inflation forecasts were informative in 1977-84 and 1993-2000, but less
informative in 1985-92. They also find that standard forecasting models, while generally poor at
forecasting the magnitude of inflation, are good at forecasting the direction of change of inflation.

Origins of the use of Treasury debt in open market operations:
Lessons for the present
David Marshall

The Federal Reserve currently conducts open market operations primarily in Treasury securities.
It has not always done so. In its earliest years, the Fed conducted open market operations primarily
in private securities, such as bankers' acceptances. The Fed’s choice of instruments was based
both on economic doctrine and on a desire to foster a liquid secondary market in these securities. The
move to reliance on Treasury securities resulted from changes in the financial markets and the
prevailing economic doctrine. These historical antecedents may have relevance for current
problems facing the Federal Reserve.

The electricity system at the crossroads—
Policy choices and pitfalls
Richard Mattoon

Introduction and summary
In the mid-1980s, electricity policy in the United States
began a new chapter when wholesale electricity markets
were opened to competition. While the immediate goal
was to increase the diversity of supply for electricity generation, proponents of restructuring also cited other dimensions of success arising from the restructuring of
other network industries (such as telecommunications,
airlines, and natural gas) as justification for introducing
competition to the electric utility industry. Wholesale
competition for producing electricity would improve
generation efficiency, diversify supply, promote innovation, and even lower prices. Success in opening the
wholesale market, proponents argued, would eventually
be extended to the retail market, and all consumers
would have the opportunity to choose their supplier
and pick an electricity service that best fit their individual needs.
The initial enthusiasm for restructuring was particularly noticeable in states with high electricity prices.
In theory, splitting the traditionally integrated functions of a utility—power generation, transmission,
and distribution—into separate functions would expose cross-subsidies and inefficiencies, and competition among power generators would lead to lower
prices for all classes of customers. Restructuring was
designed to introduce open market competition only
in electricity generation. Transmission and distribution services would still be subject to varying levels
of regulation. By 2000, almost half of the states were
pursuing some form of restructuring. However, several recent events have cooled the enthusiasm for
abandoning the traditional heavily regulated and integrated utility system. Foremost among these was
the California electricity crisis. The state garnered
daily headlines as a series of events, including a flawed
restructuring plan, left California facing skyrocketing
prices, potential blackouts, and bankrupt utilities.


California’s high-profile bad experience clearly demonstrated that the costs of a flawed electricity restructuring policy could be very high. In addition, states
that had demonstrated early success in restructuring,
such as Pennsylvania, Connecticut, and Massachusetts,
were beginning to find that sustaining competition
and promoting new market entrants was harder than
they had anticipated.
This apparent conflict between theory and outcome has left restructuring at a crossroads. States are
examining what elements and structures need to be
in place to realize the promise and benefits of opening electricity markets to competition. The questions
policymakers need to answer include the following:
■ Is the physical infrastructure (particularly, adequate supplies of generation and transmission) in place to support new market entrants and a competitive market?
■ Are the incentives for investing in new electricity facilities adequate? What can be done to improve these incentives if they are lacking?
■ Do new institutions need to be developed to facilitate this new structure for delivering electricity? Should these be federal, regional, state, or quasi-public institutions? What is the role for existing regulatory institutions?
■ Should restructuring expose consumers to changes in electricity prices, even when those prices can be volatile?
■ What is the relationship between meeting environmental goals and generating greater power supply? Can the two successfully coexist?

Richard Mattoon is a senior economist at the Federal Reserve Bank of Chicago. This research was conducted in conjunction with the Federal Reserve Bank of Chicago’s Midwest Infrastructure and Regulation project. The author wishes to thank William Testa, Thomas Klier, and Jack Hervey for reviewing the manuscript. Able research assistance was provided by Margrethe Krontoft.

In this article, I examine what restructuring means
in the electricity field. I discuss the legacy of the existing electricity system, which favored local electricity
provision by integrated and highly regulated monopoly
utilities, and describe the issues involved in moving
to a more market-based system. Then, I use the five
states of the Seventh Federal Reserve District as a case
study for examining how restructuring issues are being
addressed at the state level. The states of the Seventh
District provide a particularly useful example, given
that restructuring programs in Illinois and Michigan
are well underway, with consumers to be offered
retail choice in 2002. In contrast, Indiana, Iowa,
and Wisconsin have adopted a cautious approach to
restructuring, as relatively low prices for electricity
have led them to question the immediate benefits of
abandoning their existing structure for delivering electricity. Based on this analysis, I identify some lessons
that can be applied as electricity policy continues to
evolve. Evidence suggests that defining the role of
existing and new institutions in managing the transition
to market competition is one of the keys to promoting
electricity restructuring. This may include insulating
these institutions from political interference. Similarly,
we need to examine how markets are structured to provide access to competitive electricity supply sources,
as well as recognizing how the unique attributes of
electricity create challenges for trading power as a commodity. Finally, policymakers need to consider the role
of the electricity consumer in restructuring. For restructuring to succeed, consumers need to be exposed and
to respond to legitimate market-based changes in electricity prices. Price signals that reflect fundamental
changes in the cost of generation need to be passed
through to consumers. While consumers may be provided with tools to manage volatile electricity prices,
creating barriers to prevent price changes from being
reflected in utility bills will not provide incentives for
consumers to conserve electricity or for firms to invest
in expanded generation.
Understanding the legacy of the U.S.
electricity system
For much of its history, the electric utility business
has received little public attention. Electric policy assumed that utilities were natural monopoly providers
of a regulated and essential public service. Consumers
were told which company would be their electric provider and how much they would pay for the service
based on the service territory they were located in.
Decisions regarding 1) how energy was generated,
2) if new plants were necessary, and 3) how much
should be charged were largely discussed inside
utility companies and in hearing rooms at state public
utility commissions.
There were good reasons for maintaining this
structure. The electric utility business is a very capital-intensive industry. Investments in power plants, transmission, and distribution systems are expensive and
long-lived, and it would be inefficient to build overlapping systems within the same service territories.
The clear public policy response was to recognize the
monopoly status of utility companies, provide the companies with defined geographic service territories, and
then subject them to rigorous regulation so as to prevent the exercise of pricing power.1 The same rationale
was applied to other “network” industries, such as telecommunications, where the policy goal of providing
service to everyone (universal service) at a moderate
price was viewed as a primary objective. For the most
part, this led to a regulatory compact in which utilities
received monopoly status in return for a pricing structure based on tariffs that were “just and reasonable”
(for example, that reflected the utilities’ cost of production and delivery) and that provided for a fair rate
of return on invested assets.
This emphasis on local monopoly provision and
local policymaking led to a highly fragmented electricity system in the U.S. Everything from the price
charged for electricity to the fuel used for generation
varied widely from region to region. Figure 1 demonstrates the extreme variability in the “price” (as measured in average revenue per kilowatt hour [kWh]).
For example, while the price of electricity is a mere
4 cents in Idaho, where hydroelectric generation keeps
costs low, it is nearly triple that amount in nuclear-dependent New Hampshire at almost 12 cents. In both
states, the average revenue received by the state’s utilities is justified based on a review by the state public
utility commission of the cost borne by the utility to
generate and deliver energy in its service territory.
The choice of fuel is a very significant factor in
price variability. Figure 2 provides a historical perspective on the costs of coal, natural gas, and petroleum
at the national level. Coal has exhibited very steady
and slightly declining costs, while petroleum and natural gas costs have demonstrated significantly more
volatility. In particular, the rapid run-up in natural gas
costs from 2000 through the first part of 2001 posed
major challenges to natural-gas-fired generators.
Electric prices also vary by class of customer
served (see table 1). Industrial customers are often
charged lower tariff rates because they are easier to

FIGURE 1

Average revenue from electricity sales to all retail customers by state, 1998 (cents per kilowatt hour)

AL 5.6   AR 5.8   AZ 7.3   CA 9.0   CO 6.0   CT 10.3  DC 7.4
DE 6.9   FL 7.0   GA 6.4   IA 6.0   ID 4.0   IL 7.5   IN 5.3
KS 6.3   KY 4.2   LA 5.8   MA 9.6   MD 7.0   ME 9.8   MI 7.1
MN 5.7   MO 6.1   MS 6.0   MT 4.8   NC 6.5   ND 5.7   NE 5.3
NH 11.9  NJ 10.2  NM 6.8   NV 5.8   NY 10.7  OH 6.4   OK 5.4
OR 4.9   PA 7.9   RI 9.6   SC 5.5   SD 6.3   TN 5.6   TX 6.1
UT 5.2   VA 5.9   VT 9.8   WA 4.0   WI 5.4   WV 5.1   WY 4.3

[In the original, these values appear as a U.S. map shaded in four ranges: 0 to 5, 5 to 6, 6 to 8, and 8 to 12 cents per kWh.]
Source: U.S. Department of Energy, Energy Information Administration, 2000, “The restructuring of the electric power industry: A capsule of issues and events,” report, January.

serve. As bulk users of electricity, they often draw a
highly predictable and steady level of power and, as

a result, their costs of service (for example, connections to the grid) are often lower than for residential
customers. Providing residential service requires managing a more variable load and can only be accomplished through a large distribution system, supported by higher maintenance and billing costs.

FIGURE 2

U.S. electric utility average cost for fossil fuels, 1990 through May 2001

[Line chart of cost (cents per million Btu), 1990 through May 2001: gas and petroleum costs fluctuate widely, while coal costs remain low and steady.]
Note: Btu is British thermal unit.
Source: U.S. Department of Energy, Energy Information Administration, 2001, “U.S. electric utility receipts of and average cost for fossil fuels, 1990 through May 2001,” table, available on the Internet at www.eia.doe.gov/cneaf/electricity/epm/epmt26p1.html.

The system of governance of utilities is also fragmented. For the most part, large, vertically integrated, investor-owned utilities are responsible for generating, transmitting, and distributing power to customers. However, other forms of utility ownership are also popular, including municipal ownership, cooperative ownership, and even federal power utilities such as the Tennessee Valley Authority and the Bonneville Power Authority (see table 2). These differences in governance have important ramifications for regulatory outcomes. While large investor-owned utilities (IOUs) are subject to review by state public utility commissions, many public power authorities are exempt from these requirements. This fragmented structure makes electricity a policy area with many participants and little central planning or review

authority, except within the balkanized areas served and regulated by a public authority.

The start of a new era—Wholesale deregulation

In 1978, the passage of the Public Utility Regulatory Policies Act (PURPA) opened the wholesale power market to certain non-utility generating companies. PURPA was passed to help reduce U.S. dependence on foreign oil and to expand the diversity of supply for U.S. electricity generation. By 1998, non-utilities were responsible for 11 percent of the total generation in the nation and were contributing 406 billion kWh to the electric system. PURPA was followed by the passage of the Energy Policy Act of 1992 (EPACT). One aspect of EPACT was to further press wholesale deregulation by opening up transmission access to non-utilities. In return, regulated utilities were permitted to build new merchant plants outside their service territories.

Other landmarks in restructuring were regulatory Orders 888 and 889 issued by the Federal Energy Regulatory Commission (FERC). Both orders were issued in 1996 and were designed to pave the way for increased participation by non-utilities and promote wholesale competition by eliminating local utility monopoly control over transmission. The combined effect of these orders required public utilities that controlled transmission to develop open access, non-discriminatory transmission tariffs and to provide existing and potential users with equal access to transmission information. These orders also began the process of “unbundling” existing utility functions by separating transmission of electricity as a stand-alone service from generation and distribution.2 The opening of access to transmission lines was a significant step. States with high-priced electricity hoped that the development of an active and open wholesale electric market would serve as a base for moving into retail deregulation. Increased wholesale competition would provide local distribution companies with more options over how to meet their load obligation, and eventually individual consumers would be able to choose their electricity generator.

By 1999, FERC pushed the issue of opening the transmission grid one step further with the adoption of Order 2000.3 This order encouraged states to form Regional Transmission Organizations (RTOs) to improve the multi-state operations of the transmission grid. The RTO was to serve as a multi-state, independent organization to manage the operation of the transmission grid for particular regions. The order provides specific (but voluntary) guidance concerning a minimum set of eight functions that an RTO must be able to perform, but it leaves it up to the states and the utilities to develop both the geographic footprint and the governance structure of the RTO. The suggested eight minimum functions are: responsibility for tariff administration and design; congestion management; parallel path flow; ancillary services; total transmission capability and available transmission capability; market monitoring; planning and expansion; and inter-regional coordination. In 2001, FERC clarified its goals by arguing for the formation of as few as four very large RTOs to cover the entire national grid.4

TABLE 1

Electricity price by class of customer

                                        Value     Highest       Lowest
Average electricity price (cents/kWh)   6.66      NH (11.75)    ID (3.98)
Industrial                              4.43      NH (9.21)     WA (2.70)
Commercial                              7.26      NH (11.39)    ID (4.20)
Residential                             8.16      NH (13.84)    WA (5.10)

Note: Prices are based on the contiguous U.S. NH is New Hampshire; WA is Washington; and ID is Idaho.
Source: U.S. Department of Energy, Energy Information Administration, 1999, “Average revenue per kilowatt hour by sector, census division, and state (cents),” available on the Internet at www.eia.doe.gov/cneaf/electricity/esr/t12.txt.

TABLE 2

Utility retail sales statistics, 1998

                             Investor-owned      Public          Federal        Cooperative
Number of utilities                     205       1,951                7                852
Number of retail customers       91,889,360  18,002,349           33,544         14,115,259
Retail sales (MWh)            2,427,733,133 485,692,301       46,631,180        279,761,845
Percentage of retail sales             74.9        15.0              1.4                8.6

Source: U.S. Department of Energy, Energy Information Administration 1999, State Electricity Profiles, available on the Internet at www.eia.doe.gov/cneaf/electricity/st_profiles/toc.html.
What does restructuring mean?
Electricity is provided to consumers through a very
complex mechanism. This mechanism is complex from
both a technological and regulatory perspective. On
the technology side, providers must match energy
supply and highly variable demand by managing different sources of generation that operate at differing
levels of efficiency. This process includes taking into
consideration scheduled and unscheduled generation
shutdowns, changes in fuel prices, seasonal variation,
a shifting customer base, and even daily weather. On
the regulatory side, electricity policy is the shared responsibility of federal, state, and local policymakers.
Jurisdictional boundaries between these various regulators are not often clearly drawn, and policy goals can
come into conflict. Given this complexity, it is not surprising that there is no single definition for “electricity restructuring.” However, in most cases, restructuring
focuses on taking the once integrated functions of a
traditional regulated utility—generation, transmission,
and distribution—and separating or unbundling them
into stand-alone services. In the case of generation,
the goal of the unbundling is to introduce competition.
In the case of transmission, the restructuring goal is
to modernize the transmission infrastructure to support
open access to the grid and the most efficient delivery
of bulk electricity on both an intra- and inter-state basis. Efficient transmission allows the cheapest power
to be used first and reduces the overall peak power or
back-up capacity needed in the system. In the case of
distribution, it is hoped that unbundling will make it
easier to identify the true cost of distributing electricity,
thereby eliminating hidden costs and cross-subsidies
among end-users of electricity.
The starting point for the restructuring debate
focuses on creating competition for generation
through market deregulation. Vibrant supply competition is at the core of the restructuring argument. On
the positive side, choice of generation supply can allow consumers to select more customized electricity
service, while putting market pressure on generators
to innovate and produce more efficient generation alternatives. Supplying competitive choices for generation would help better manage system peak load
demands by providing more options to distribution
systems when electricity shortages occur. However,
if generation competition fails to develop, eliminating traditional regulatory safeguards can result in consumers being exposed to service provision by an
unregulated monopolist.
From a practical perspective, promoting competition in generation requires attracting new firms with
independent generation sources into the market and
encouraging the trading of electricity across the grid.
For the most part, electricity generation would eventually be carried out as an unregulated competitive
service. Generation would be supplied both by subsidiaries of traditional local utilities and new independent
power supply companies that would enter the generation business and sell power into the grid. Also, power
marketers (firms that trade electricity) could provide
local utilities with contracts and hedges, thereby offering them a wider range of options for managing
the energy demands and price risks of customers in
their service areas.
However, from a theoretical perspective the existence of new suppliers in every market may not be
necessary to promote the benefits of opening generation to competition. The threat of competition can provide incentives for existing generators to improve
efficiency and offer new products. Still, the high cost
of entering the generation business may require the
physical presence of a competitor, since existing generators know that a potential rival may have a lag time
before it is able to provide new supply into the market. Construction delays, permit requirements, and
transmission limitations may affect a competitor’s
ability to offer service.
The second unbundled function is transmission.
Transmission applies to the bulk movement of power
across high voltage power lines—linking individual
utilities to sources of power. In the past, integrated
utilities tended to favor building limited transmission
networks. These networks would often link a single
utility with one or two other outlets for importing or
exporting power. They were not designed to serve as
universal transmission grids for multi-state regions,
since most utilities built their own generation systems
with large reserves to serve their peak load requirements. Fundamental to restructuring has been the
assumption that an independent entity needs to be
established to run the transmission system. The grid
“makes the market” and without it, wholesale buyers
and sellers will not choose to trade. Without an independent transmission organization (such as an RTO),
local utilities cannot be assured of supply and they
will still want to establish transmission systems that
primarily meet their local needs. If independent generators are unable to access or have uncertain access
to the transmission grid, they cannot serve their customers and restructuring is jeopardized. Thus, open access to a technologically adequate, multi-state transmission system is essential to promoting competitive
generation sources.
The third unbundled element is the distribution
system. While the transmission system serves as the
superhighway for moving bulk electricity, the distribution system can be thought of as the off-ramps and
local roads that bring electricity into the homes and
businesses of consumers. Under restructuring, traditional utilities often create a subsidiary that is purely
in the distribution business. Since it makes little sense
to build competing distribution networks, these distribution companies are often state-regulated monopolies that not only have responsibility for the wires
that run into an individual home or business, but also
for billing and other administrative functions. Even
with distribution, it is hoped that by unbundling the
function, the true cost of operations for specific classes
of customers can be more easily identified and priced
accordingly. In doing so, distribution operators will
focus on efficiency improvements to serve varying
classifications of customers.
Restructuring at the regional level—
The Midwest
To date, electricity restructuring continues to be
an uneven process. Even states that have expressed
similar electricity policy goals, such as improving
transmission, reducing noxious air emissions, and
attracting new generation, often adopt different strategies. One of the real challenges facing electricity
policy in the Midwest is the lack of a consensus on the
benefits of opening electricity markets to competition.
At first glance, the five states in the Seventh District
are extremely heterogeneous in terms of the price of
electricity and their interest in pursuing restructuring
(see table 3). While Illinois and Michigan continue

to press forward with plans to open their electricity
markets to competition, Indiana, Iowa, and Wisconsin
are taking a decidedly cautious approach. Two areas
that the five states can agree on are the need for
improvements to the region’s transmission grid and
the need to account for changes in environmental
policy when considering alternatives for future electricity generation.
As figure 3 demonstrates, the Midwest is not alone
in this piecemeal approach. Throughout the nation,
states are choosing different strategies for pursuing
restructuring; and the recent problems in California
have slowed restructuring activity in several states.
On the road to restructuring—Illinois
and Michigan
Illinois
Illinois took its first steps on the road to electricity
restructuring in 1997 with the passage of the Electric
Service Customer Choice and Rate Relief Law. The
state is phasing in competition and customer choice.
Different classes of customers have been given the
option of choosing an electricity supplier, beginning
with certain large nonresidential customers in October
1999. As of December 31, 2000, all nonresidential
customers can pick a supplier, and a watershed will
be reached on May 1, 2002, when retail choice will
be open to residential customers.5
However, some analysts suggest that the start of
residential choice in May will be met with very little
immediate activity. In the service territory of the state’s
largest utility—Commonwealth Edison (Com Ed)—
residential rate reductions of 20 percent have been
ordered. These rate reductions were intended to provide residential customers with benefits of restructuring during the period when nonresidential customers
were permitted to choose suppliers. The problem is

TABLE 3

Seventh District energy profile

                                        Illinois   Indiana    Iowa       Michigan   Wisconsin   U.S.
Exporter or importer                    Exporter   Exporter   Importer   Importer   Importer    n.a.
Primary generating fuel                 Coal       Coal       Coal       Coal       Coal        Coal
Average electricity price (cents/kWh)   7.46       5.34       6.04       7.09       5.44        6.74
Industrial                              5.11       3.95       3.99       5.03       3.86        4.48
Commercial                              7.77       6.08       6.67       7.81       5.87        7.41
Residential                             9.85       7.01       8.38       8.67       7.17        8.26

Note: n.a. indicates not applicable.
Source: U.S. Department of Energy, Energy Information Administration 1999, State Electricity Profiles, available on the Internet at www.eia.doe.gov/cneaf/electricity/st_profiles/toc.html.


FIGURE 3

Status of restructuring of electricity markets

[U.S. map classifying each state as: already restructured, ready to restructure, investigating restructuring, or no restructuring plans.]
Source: U.S. Department of Energy, Energy Information Administration, 2000, “The restructuring of the electric power industry: A capsule of issues and events,” report, January.

that these rate reductions are likely to discourage new
suppliers from entering the market, because they will
find it difficult to undercut the price already being
offered to residential customers.6
Although Illinois has made progress in the nonresidential market, statewide performance is at best
uneven. A number of new service providers have received certification by the Illinois Commerce Commission as alternative retail electric suppliers. These
suppliers have been reasonably successful in securing
industrial and commercial customers, particularly in
the Com Ed service territory (see table 4). For example, the switching rate for the eligible industrial load
in Com Ed’s territory is 72.5 percent; however, the
next highest industrial switching rate is only 19.7 percent in Illinois Power’s territory. In five of the service
territories, no switching has occurred. The switching
pattern for eligible commercial customers is similar.7
As is often the case in the opening of a new market,
suppliers are largely pursuing the best and most profitable accounts. Whether the advantages of choice are
reaching all nonresidential customers remains to be
seen. An additional issue is whether these switching
rates can be sustained. In Pennsylvania, the state’s
largest utility, Peco Energy, reported losing 44 percent
of its industrial customers and 30 percent of its commercial customers to new suppliers when choice was first made available. One year later, Peco had reclaimed
many of these customers, leaving their net customer
losses at only 4.7 percent for industrial customers and
5 percent for commercial businesses.8
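The two switching measures reported in table 4 differ only by the share of load that is eligible for customer choice; a minimal sketch of the relation, using ComEd’s industrial figures from the table (the eligible-load fraction is inferred from those figures, not reported directly):

```python
def switching_rate_all(rate_eligible: float, eligible_fraction: float) -> float:
    """Convert a switching rate measured over eligible load into one
    measured over all load, given the fraction of load that is eligible."""
    return rate_eligible * eligible_fraction

# ComEd industrial figures from table 4: 72.5 percent of eligible load has
# switched, 39.9 percent of all load. The implied eligible fraction is
# 39.9 / 72.5, roughly 0.55.
eligible_fraction = 39.9 / 72.5
print(round(switching_rate_all(72.5, eligible_fraction), 1))  # → 39.9
```

The same conversion explains why Illinois Power’s two rates (19.7 versus 15.1) sit closer together: a larger share of its industrial load was eligible.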
Efforts to encourage competition have occurred
not just on the supply side of the equation, but also on
the demand side. Buyers have formed “collaboratives”
and secured their own discounts. The most prominent
of these represents a coalition of the City of Chicago
and 48 suburban governments. This group signed up
with Houston-based power marketer Enron to provide
their energy needs, and the group estimated that they
would save $4 million per year through the new service provider. However, the announced bankruptcy
of Enron in December 2001 led to the cancellation of
this contract.
One final positive development in Illinois has been the state’s current ability to attract new generation facilities. Illinois is a preferred location for new natural-gas-fired generation, partly due to the presence of major
gas pipelines in the state. Currently, 59 plants with a
generating capacity of 27,881 megawatts (MW) have
either been permitted, are under review, or have been
placed in service since 1999.9 While it is unlikely (and not necessarily desirable) that all of this generation will be built, Illinois clearly has not faced significant challenges
in attracting investment in new plants. These new plants

1Q/2002, Economic Perspectives

have the potential for increasing generation competition in Illinois; however, in a deregulated system, they will not be obligated to serve the Illinois market.

TABLE 4
Switching statistics for Illinois

A. Industrial customers

                       Switching rate,      Switching rate,
                       share of eligible    share of all
                       industrial load      industrial load
                       ( - - - - - - - - percent - - - - - - - - )
AmerenCIPS                    7.5                 6.4
AmerenUE                      0                   0
CILCO                         0                   0
ComEd                        72.5                39.9
Illinois Power               19.7                15.1
Interstate Power              0                   0
MidAmerican                   4.0                 3.6
Mt. Carmel                    0                   0
South Beloit                  0                   0
Total                        38.8                30.0

B. Commercial customers

                       Switching rate,      Switching rate,
                       share of eligible    share of all
                       commercial load      commercial load
                       ( - - - - - - - - percent - - - - - - - - )
AmerenCIPS                   30.7                 7.1
AmerenUE                      0                   0
CILCO                         0                   0
ComEd                        48.0                16.5
Illinois Power               11.6                 3.0
Interstate Power              0                   0
MidAmerican                  20.2                 8.6
Mt. Carmel                    0                   0
South Beloit                  0                   0
Total                        40.3                12.0

Note: Rates are effective through December 15, 2000.
Source: Electric Light & Power, 2001, “Illinois and deregulation: A fresh update places the state at the halfway point,” March 22.

Michigan

Michigan is the other state in the District that continues to press forward on restructuring. Michigan faces a slightly more urgent burden, in that the state is an electricity supply importer. Currently, retail market activity in the state appears less developed than in Illinois. Pilot programs where incumbent utilities have made roughly 10 percent of their load available for customer choice have attracted little active interest. For example, as of January 1, 2001, two large utilities—Detroit Edison and Consumers Energy—have made a total of 2,100 MW available to retailers to provide to customers. So far, only 257 MW of electricity is actually flowing from alternative service providers to large customers.10

The Michigan Public Service Commission (PSC) has expressed concern that a shortage of in-state generation capacity and an inadequate transmission system are responsible for the lack of response from alternative suppliers. The PSC reported that as of February 1, 2001, only ten alternative energy suppliers had been certified and only four of these were actively serving retail customers. The PSC reports that “the pilot programs have demonstrated the importance of transmission in making customer choice effective. Without adequate transmission, new suppliers are unable to secure and deliver power to their customers. The existing transmission system is physically not adequate to support a vibrant competitive market.”11

Due to the lack of in-state generation, the PSC reported that Detroit Edison and Consumers Energy would need to purchase about 2,900 MW (representing roughly 15 percent of estimated total demand) over the summer of 2001 to meet load and maintain a reasonable operating margin. These structural impediments are limiting options even for customers that are actively interested in receiving service from an alternative provider.12

Michigan’s transmission constraint has been of sufficient concern that the Michigan legislature mandated that utilities make provisions for 2,000 MW of incremental transmission capacity by 2002. As for the shortage of in-state generation, the PSC reports that 2,166 MW of new generation has gone on line since June 1999. Generators have also reported to the PSC their intention of adding 7,670 MW in the future, based on facilities that are either planned or under construction.

Federal Reserve Bank of Chicago

The cautious approach—Indiana, Iowa,
and Wisconsin
Indiana
With low power prices derived from coal-based
generation, Indiana has been cautious in pursuing restructuring. Policymakers have focused on maintaining
the advantage of low-cost power, and Indiana consumers do not appear to be actively interested in a choice
of provider. However, California’s recent bad experience has led Indiana policymakers to continue to study
their options. One policy area of interest is the role of merchant power plants, which appear to be interested in establishing locations in the state; policymakers are also trying to establish a more comprehensive energy policy to guide Indiana decisionmakers. In the last several years, over two dozen merchant plants have been
proposed, with the state’s Public Service Commission
approving seven of the plants.
Indiana’s interest in expanding generation may
be well founded. The state is currently an exporter of
electricity and would like to use its combination of low
price and relative energy surplus to attract economic
development. However, a recent report of the State Utility Forecasting Group (SUFG) found that the state has a declining ability to meet its electricity needs and maintain a 15 percent reserve margin. (Because electricity
cannot be stored, reserve margins of 15 percent or
greater are considered prudent, particularly when transmission limitations might limit access to generation
from more distant utilities.) In its 1999 projection estimates, the SUFG predicts a potential deficit of 2,000
MW by 2005 and of 4,000 MW by 2010 (assuming
no new generation is added in state). With neighboring
states, particularly Illinois and Ohio, actively adding
generation and restructuring to encourage competition,
Indiana is anxious to maintain its currently favorable
status. Indiana officials are currently reviewing proposals for 2,330 MW of new generation.
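The capacity-balance arithmetic behind such projections can be sketched simply; the 15 percent reserve-margin target is the one cited above, while the peak-load and installed-capacity figures below are hypothetical, chosen only to reproduce the order of magnitude of the projected 2005 deficit:

```python
def required_capacity(peak_load_mw: float, reserve_margin_pct: float) -> float:
    """Generating capacity needed to cover peak load plus a reserve margin."""
    return peak_load_mw * (100 + reserve_margin_pct) / 100

def capacity_balance(capacity_mw: float, peak_load_mw: float,
                     reserve_margin_pct: float) -> float:
    """Surplus (positive) or deficit (negative) relative to the target margin."""
    return capacity_mw - required_capacity(peak_load_mw, reserve_margin_pct)

# Hypothetical example: 20,000 MW of peak load with a 15 percent target margin
# requires 23,000 MW of capacity; 21,000 MW of installed capacity then leaves
# a 2,000 MW deficit, the magnitude the SUFG projects for 2005.
print(capacity_balance(21_000, 20_000, 15))  # → -2000.0
```

Holding capacity fixed while projected peak load grows is what turns today’s surplus into the widening deficits projected for 2005 and 2010.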
Iowa
In Iowa, the focus of electricity policy has been
on incentives for creating new generation, rather than
opening markets to competition. A bill considered during the last session of the Iowa legislature offered indirect incentives for new generation by requiring the
Iowa Utilities Board (IUB) to specify in advance the
rate-making principles it would use for establishing
the recovery and return on investment for any new plants
built. Also under the proposed legislation, Iowa utilities signing contracts for power from in-state resources
would receive irreversible contract approval within
90 days if the IUB found the contract “reasonable
and prudent.”
Evidence from the IUB’s comprehensive review
of Iowa’s electricity structure and the implications of
restructuring suggests that the state needs to focus on
generation. The IUB finds that, based on a projection
of annual load and annual load obligation, the largest
utilities in the state will move from a surplus of 495
MW in 2000 to a potential deficit of 2,208 MW by
2009. The turning point in the surplus/deficit could
come as early as 2003.13
Recently, two large utilities have expressed interest
in building new generation in the state. MidAmerican
Energy announced plans to build a gas turbine plant
east of Des Moines that could produce 540 MW of power by 2005. This would be the first new large
power plant built in Iowa in 20 years. Another major
player, Alliant Energy, is investigating building up to
1,000 MW of new generating capacity in Iowa.14
The provision of electricity in Iowa is relatively
complex, with a large number of utilities in the state.
Large investor-owned utilities accounted for 76 percent
of the megawatt-hour (MWh) sales in 1998, while
137 rural cooperative and municipal companies provided the remainder.
Wisconsin
Wisconsin’s electricity policy has focused on improving capacity and reliability. While the state’s low
prices continue to be an advantage, reserves continue
to dwindle and transmission bottlenecks have led to
concerns about reliability. State policy has not favored
sweeping restructuring. Instead, policy has emphasized providing incentives for utilities and independent companies to create new, in-state generation.
State electricity supplies are extremely tight, particularly in the eastern half of the state, according to
the Wisconsin Public Service Commission’s (PSC)
Strategic Energy Assessment for 2001. During summer
peak generation months, transmission constraints make
it difficult to relieve supply shortages in eastern
Wisconsin, even when power is available from sources
in western Wisconsin and neighboring states. This
grid congestion has left major utilities having to rely
on interruptible service contracts to prevent outages.
Customers subject to these interruptible contracts have
become increasingly dissatisfied with this arrangement,
even though they receive lower prices in return for
permitting the service reduction. By 1998, transmission
limitations had reached a point where the PSC recommended adding 3,000 MW of transmission into the
state, doubling the existing transmission capacity.15
Generation is also needed. The state has not added any baseload generating units since 1985. Wisconsin
has also faced challenges attracting new suppliers.
By the end of 1999, the state had only two merchant
plants operating; however, the PSC anticipates that
merchant plant production could reach 10 percent of
the state’s generating capacity by 2002. In all, merchant
plants could add 740 MW of new generation by 2002.16
Wisconsin policymakers have been investigating
ways in which the existing regulatory structure can be
modified to provide investment incentives while retaining oversight authority. Much like Iowa, Wisconsin is
emphasizing adding new rate-regulated generating units
and encouraging long-term contracts with in-state independent generators. Proposed policy options include
increasing regulatory certainty for recovering new investments, for example by raising the permitted return on capital investment. These are still proposals, but
like Iowa, Wisconsin is trying to chart a path that will
increase capacity and transmission quality, without
reducing direct oversight of the electricity business.
Electricity restructuring at mid-term—What
have we learned?
Electricity restructuring is a work in progress. New
markets and mechanisms will not form overnight and
the transition period has already proven to be bumpy.
California’s experience with restructuring has led many
states to reconsider whether restructuring can produce
the benefits of lower consumer prices, more efficient
generation, and product innovation often touted by
proponents. Given the balkanized nature of the U.S.
electricity system, it is understandable that establishing a national electricity policy has proven difficult.
However, early programs in the electricity system and
experience from the deregulation of other network
industries (telecommunications, trucking, airlines, and
natural gas) have produced some useful lessons for
policymakers to consider.
For the purposes of this article, I group the experiences from restructuring into three broad categories.
The first category focuses on the unique features of
electricity that directly influence its market structures.
The “uniqueness” of electricity limits its treatment as
a standard commodity and influences the set of policy
goals that can be achieved through restructuring. The
second category considers the need to invent or reinvent institutions to govern the industry as it restructures.
For existing regulatory bodies, this will mean adding
new responsibilities and shedding old authority. Also,
entirely new institutions (RTOs in particular) will need
to be created and provided with the resources, authority, and mission to manage separated functions such as
transmission. The final category addresses issues of
market structure and design. This has two components.
First, regulatory bodies are facing transition costs as
they adapt to dealing with restructuring and the introduction of markets. Second, consumers are facing
their own transition costs as they are exposed to a
less regulated electricity system.
Category 1—Understanding electricity as a
unique commodity
In some important ways, electricity is unlike other
commodities. It is a modern necessity; we rely on it
to provide light and heat, to fuel commercial and industrial production, and to run most appliances in the home.
Much like water, electricity generally carries a low
price, has a high value to the consumer, and offers no
short-run substitutes. The critical role of electricity has
led to a regulatory policy favoring the development of excess capacity to ensure reliability of service, even
when this has had the effect of inflating the price.
Second, electricity has certain physical properties
that make it different from other commodities. Primarily these have to do with difficulties in storing or
rapidly adding new capacity. Significant supplies of
electricity are almost impossible to store and electricity needs to be consumed when produced. This characteristic, combined with the inelastic demand of
customers, creates high marginal prices when there is
a shortage of electricity in a market, unless power can
be imported through the grid. Assuming that a utility
has no other choice but to meet its load obligation
through the spot wholesale market, it is fully exposed
to paying whatever price the market will bear for a
commodity that is consumed immediately.
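The interaction of inelastic demand with a hard capacity limit can be illustrated with a stylized clearing-price calculation; all numbers and the simple piecewise price rule are hypothetical, chosen only to show the shape of the effect described above:

```python
def clearing_price(demand_mw: float, capacity_mw: float,
                   scarcity_price: float = 1000.0) -> float:
    """Stylized spot price: rises gently with demand while capacity is slack,
    then jumps to whatever the market will bear once capacity binds.
    The $30 base and $1-per-100-MW slope are illustrative assumptions."""
    if demand_mw >= capacity_mw:
        # Demand cannot be reduced in the short run and power cannot be
        # stored, so the price is set by the buyer's willingness to pay.
        return scarcity_price
    return 30.0 + demand_mw / 100

# With 10,000 MW of capacity, moving demand from 9,000 MW to the capacity
# limit takes the price from a modest level to the scarcity level.
print(clearing_price(9_000, 10_000))   # → 120.0
print(clearing_price(10_000, 10_000))  # → 1000.0
```

The discontinuity at the capacity limit is the point of the sketch: a utility forced into the spot market at that moment pays the scarcity price, not the marginal cost of generation.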
In addition, it is difficult to create new electricity
capacity quickly. New power plants in most states require 18 months to 24 months to site and build, and
local opposition to construction often lengthens the
process even further. Moreover, new power plants require very high capital outlays and can be seen as
risky investments in the restructured electricity market. The decision to build a plant is predicated on the
cost of key variable operating costs (fuel in particular)
and an assessment of the price that can be charged
for electricity, which is often dependent on regulatory
decisions. New generators are often providing reserve,
back-up power, rather than meeting the daily baseload needs of a given service area. This also introduces uncertainty into the decision to add new generation.
These factors—the inability to economically store
electricity and the inability to create new generation
quickly—create the conditions for high prices in the
spot wholesale market and can provide certain sellers
of electricity with the opportunity to charge very high
prices for power. To avoid very high transition costs
associated with restructuring, state policymakers
need to ensure either that adequate reserves exist or
that restructuring will create the conditions for adequate reserves.
Another special feature of electricity is that it
needs to be delivered over an extensive physical grid.
The transmission grid is the physical market for trading
electricity, making issues of grid reliability, capacity,
and access critical in creating the conditions for competitive market operations. Assessing this infrastructure requires integrating several levels of analysis. First,
what is the current condition of transmission and generating assets? Can steps be taken to improve the operation of the existing facilities? How easy is it to access
the grid and efficiently transport electricity to where
it is needed? Recently, electric power forecasters have questioned whether the amount of new transmission capacity planned for the system is adequate. Second,
will new technologies change the need for the type
of generation and transmission that is needed? Some
analysts suggest that micro-generation, fuel cells,
and other new technologies may allow an increasing
number of consumers to generate electricity in their
homes and businesses. If this is true, building large
generation plants and making extensive upgrades to
the grid may be less important than has traditionally
been assumed.
The performance of the transmission grid (as we
have seen in the survey of the Seventh District states)
can undermine the development of healthy wholesale
markets and limit the appeal for new companies to
develop new operations in Midwest states. Blackouts
in Chicago and power problems in eastern Wisconsin
have frequently been linked to transmission bottlenecks
rather than a lack of adequate generation. The policy
problem is that many decisions regarding transmission
and the construction of new generation need to be conducted in coordination with actions in neighboring
states. Improving the intrastate grid is certainly important, but it will not fully substitute for the need to
carry out such a policy in concert with activities in
neighboring states. A well-functioning grid can reduce
the need for building excess generation capacity, if
energy can reliably be made available through transmission. For example, in Wisconsin’s forecast of energy needs, the Public Service Commission makes
differing recommendations for generation needs and
reasonable generator reserve requirements based on
assumptions about the future performance of the grid.
With a well-functioning grid, the traditional reserve
requirements of 15 percent are adequate, but without
any improvements in grid performance, recommended
safe generation margins for utilities facing poor grid
connections rise to 30 percent.17
Based on this assessment of the infrastructure for
delivering electricity and the conditions that make
electricity a “special commodity,” policymakers need
to have a clear set of restructuring goals. Artificially
reducing the retail price of electricity in order to gain
support from various constituent groups should not
be the sole motivation for restructuring. The price of
electricity should reflect market factors. If competition introduces efficiency that lowers prices while
maintaining other important policy goals, such as reliability and adequate reserves, lower prices can be a
welcome outcome.
In testimony before the Senate General Administration Committee, economist Paul Joskow suggested
that “… deregulation is not a goal in and of itself. The goal is to create well functioning competitive markets that perform better than the regulated structures
they replace.”18 The real benefits of restructuring will
become apparent when fully functional markets are
operating. This will take time. Frequently, electric restructuring is presented as a policy designed primarily to reduce the current price paid for electricity. Often this is because the states facing the highest electricity prices are the most interested in opening their electricity markets to competition. It is no accident that
in the Seventh District, Illinois and Michigan, the
high-price states, are pursuing restructuring, while
the three low-price states are hesitating. This interest
in promoting competition has been pressed by large
industrial customers that would expect to pay significantly less for power in an open wholesale market with
multiple suppliers bidding for their business. Under
fixed regulated retail tariffs, industrial customers often
argue that they are subsidizing the residential market.
However, an immediate policy objective of reducing
the price paid for electricity may impede valuable
longer-term policy goals, such as encouraging innovation and efficiency in generation and promoting
product customization. For example, many high-tech
firms are more interested in electricity that can be provided with complete redundancy (that is, 100 percent
back-up capability) and is of the highest quality, not
subject to transmission or distribution disruptions.
Clearly, the opportunity to purchase this type of specialized, premium product may be of greater interest
to certain classes of customers. Similarly, some consumers may want to promote environmental goals
and may be willing to pay a premium for electricity
from renewable sources. For example, in the Pacific
Northwest, the Bonneville Environmental Foundation and the Climate Trust of Portland are marketing
“green tags” to utility customers. These tags sell for
$20 and the proceeds are used to purchase power from
a renewable source such as wind power, thereby allowing individual customers to purchase offsets to
traditional energy sources.19
Beyond the often-repeated goal of reducing prices,
longer-term goals of establishing competitive markets
in electricity should include establishing incentives
for investments in more efficient power plants, stimulating the introduction of new technologies (such as
fuel cells), encouraging innovation in both supply and
demand-side management, and even providing incentives to promote needed investments in transmission
and environmental mitigation.
Policymakers clearly face some political challenges
as well, which may lead some states to adopt short-run policies (such as mandatory price caps and price cuts for residential customers) that would in fact prevent changes in wholesale markets from being reflected in retail markets. The tradeoffs between lower prices,
increased investment, and even related environmental goals of lower emissions will be made in a political context. Ideally, the political process will set the
policy targets and leave the methods for achieving these
targets efficiently to market mechanisms. It is also important that the institutions charged with implementing and overseeing the restructuring process be insulated
from political interference. Ultimately, this requires
an electricity policy characterized by integrated resource planning, a complete understanding of technology and efficiency tradeoffs, and the flexibility to make
mid-course corrections as changes in supply and demand conditions require.
Category 2—Inventing or reinventing
institutions and roles
We know by now that electricity restructuring is
a complicated business. Existing institutions such as
the state public utility commissions and FERC will
have new roles in guiding restructuring and they will
be asked to reduce their authority in areas where they
have traditionally held jurisdiction. Even more difficult will be creating new institutions, the proposed
RTOs, market surveillance committees, and electricity trading systems to support restructuring. Both the
traditional regulators and these new institutions need
to be vested with the resources, incentives, and authority to carry out their missions. They need the resources
to actively monitor the markets under their authority
and the data to know what is driving these markets.
Governance issues include clarifying overlapping jurisdiction and decisionmaking authority. Resource issues include staffing, staff training, and sufficient data
to monitor an industry that is undergoing profound
changes. California’s experience again presents a telling lesson. For example, Governor Gray Davis acknowledged that when it came to negotiating for power sales
to the state, the state’s negotiating team faced a circumstance similar to “a tee-ball team playing the NY
Yankees.”20 While this was an extreme situation, it
underscores the difficulties that may arise when the
state government is forced to play an unfamiliar role.
To date, the formation or reinvention of these institutions has been a difficult process; and the related
uncertainty may be discouraging potential infrastructure investments. The most obvious example in the
Midwest has been the attempt to form an RTO. Most
of the region’s major utilities had joined the Midwest
Independent Systems Operator (MISO) and assumed
that this organization would become the RTO for the
region. However, internal issues led to many of the largest utilities withdrawing from MISO and establishing the Alliance RTO. This group subsequently identified the British firm, National Grid, as its grid operator.
However, it is unclear whether National Grid would
actually own any of the transmission infrastructure or
would simply serve as the system operator. If the traditional utilities maintain their ownership of their existing transmission assets, it is unclear how investment
decisions will be made or coordinated. These developments also call into question how easy it will be
for competitive generators to access the grid on equal
terms. Adding to this confusion is FERC’s recent request that the geographic boundaries of the RTOs be
expanded in an effort to develop a national grid. FERC
has suggested that MISO, Alliance, and the Southwest
Power Pool should consider combining into a superregional RTO.21 (On December 19, 2001, FERC approved MISO as the RTO for the Midwest. The MISO
transmission area will operate in 20 states. In its action, FERC suggested that the Alliance organization
could operate as a member of MISO.)
Even if the issue of what organization is running
the RTO is resolved, the potential functions of the organization need to be considered. One white paper
produced by the Electric Power Research Institute
examining power issues on the West Coast produced
a list of recommendations for improving electricity
operations. The study suggests that RTOs should have
either primary or shared responsibility for addressing
a significant array of issues, including repairing dysfunctional wholesale markets; generating standardized
regional energy information; and implementing a
“whole system” reliability-centered maintenance capability. This whole-system approach would include
assessing the equipment health for vital components;
initiating comprehensive, region-wide transmission
risk analysis; creating a seamless real-time exchange
of information among regions; coordinating training
of grid operating personnel; developing power-flow
technology for system reservation and scheduling;
and establishing regional authority for siting and cost
sharing.22 Clearly, this is an ambitious agenda for an
organization that is still being developed.
Finally, the formation of large, multi-state RTOs
suggests that FERC, and not the state public utility
commissions, will need to play the major regulatory
role over transmission issues. Even this decision is
not clear-cut. Following the summer 2001 meeting
of the National Governors Association, the governors
agreed to work with the U.S. Department of Energy
and other federal agencies to improve transmission,
but made it clear that the states still wanted to
maintain their traditional policymaking role over transmission. Specifically, the governors issued a statement to the effect that “governors oppose preemption
of traditional state and local authority over siting of
electricity transmission networks, but governors recognize that situations exist where better cooperation
could improve competition and reliability. Governors
are willing to engage in a dialogue with the federal
government and industry to address these situations
in a manner that does not intrude upon traditional state
and local authority.”23
A final governance issue has to do with the oversight of municipal and cooperative utilities. These utilities, which are not major suppliers of electricity in
most states, are often self-governed. While larger investor-owned utilities are subject to regulation by state
utility commissions and FERC, it is unclear whether
these smaller players should be brought under the
same regulatory structure.
Category 3—Market structure and design
Transition costs for regulators
Industry observers assume that the introduction
of competition into the generation component of the
electricity system will best be accomplished through
market mechanisms. The eventual goal of promoting
unregulated (or at least minimally regulated) competitive generation is to promote cost efficiency and greater diversity of generation resources. Yet, the introduction
of markets provides another set of challenges. These
challenges can be understood across three dimensions.
The first dimension relates the uniqueness of electricity as a commodity to the implications of using markets
to deliver electricity, as discussed earlier. The other two dimensions consider two sets of transition costs of moving to a market structure. The first set of costs
relates to the actions and adaptation (and potential rigidity) of regulatory bodies and market participants
in responding to the unbundling of electricity service.
The second set of transition costs relates to the implications of market structures for end-users/consumers
of electricity.
I discussed earlier how the physical properties
of electricity create certain challenges to establishing
smoothly operating markets. The inability to store or rapidly create new capacity, as well as the physical limitations of the grid as the market trading system, are all factors that must be accounted for in successfully
restructuring electricity. Another important change
that is brought about by moving to a market structure
is the potential lack of incentive to provide large generation reserve margins. In the old regulated system,
the utility would be willing to build surplus capacity
into its generation plans. Since the regulator permitted a rate structure that allowed the utility to recover the
cost of this extra capacity (even if it went unused), there
was little risk involved in carrying a large reserve
margin. In moving to an unregulated market structure
for generation, carrying reserve capacity (particularly
when the generator cannot store the power) clearly
makes little sense from the generator’s perspective.
It is therefore not surprising that states that have been
moving toward restructuring have seen reserve margins decline, so that supply and demand more closely
match each other.
Another consequence of moving to a more market-based electric system is that the new market structure may provide opportunities and incentives for
suppliers to exercise market power. Market power
can be understood as the ability to raise market prices
through unilateral action so as to profit from the price
increase. Concern over suppliers exercising market
power is one of the potential transition costs faced
by regulators. To address these concerns in the wholesale market, FERC only allows suppliers that can
demonstrate that they do not have market power to
sell in the market and receive the market-clearing
price. Suppliers that cannot demonstrate an absence
of market power are limited to charging the cost-of-service rate set by FERC.24
Establishing whether a firm has market power is
critical in determining whether prices being charged
in newly created electricity markets are the product
of manipulation or genuinely reflect the interaction
of supply and demand. In the case of the California
market experience of recent years, it was often alleged
that certain suppliers would withhold generation or
use other timing and bidding techniques to receive
extraordinary prices when it was known that utilities
would have to make purchases in the spot market. Determining the prospective market power of an electricity supplier is a very difficult business. Attempts
by FERC to define market power have been met by
skepticism. Frank Wolak, a Stanford University economist who serves as chair of the Market Surveillance
Committee of the California Independent System
Operator, points out that market power is often incorrectly estimated based on the concentration indexes
applied to geographic markets. These geographic
boundaries fail to account for the fact that electricity
must be provided to final customers over the existing
transmission grid. Limitations in the grid can make
differences in the bidding, scheduling, and operating
protocols of the market crucial in determining whether a supplier can exercise market power. Work by
Wolak, Borenstein, and Bushnell (2000)25 measured
the extent of market power in California since 1998,

1Q/2002, Economic Perspectives

and the Market Surveillance Committee has contributed a number of reports on the subject. By the summer
of 2000, the committee found that average monthly
prices being charged for June were 182 percent above
what would have been expected if no generator was
able to exercise unilateral market power.26
Competitive generation and supply will change
the incentives and behavior of many firms in the
electric supply business. In the case of traditional integrated utilities, the spinning off of generation into
unregulated affiliated companies raises the expectations
for shareholders that these generation companies can
become profit centers for the parent company. On the
one hand, increasing the importance of profitability
as a measure of success for the generation company
should promote efficiency. On the other hand, it also
raises the incentive for generators to take advantage
of market conditions to receive the highest price for
their production. It is important to recognize that the
“public service” ideal that once guided utilities will
be de-emphasized once generation is treated as a commodity. Studies by California’s Independent System Operator provide evidence that generators did withhold supply in order to bid up market prices. During
the fall and winter of 2000–01, there were nearly four
times as many scheduled and unscheduled plant shutdowns as in the previous year. While some of these
could be attributed to breakdowns in older plants that
were forced to run at higher capacities than intended
in order to avoid blackouts and brownouts, some
shutdowns seemed to be more strategic.
This motivation is even more obvious in the case
of independent merchant generators. An example of
this occurred in the California market in January 2001.
In the latter half of the month, the California grid operator faced a series of conditions that made an energy shortage in the state highly
probable. A combination of unfavorable weather, a
lack of supply from traditional reserve supplies from
northwestern states, and a malfunction at a 1,000 MW
plant in the state meant that the grid operator had to
scramble to find power. Eventually, the operator found
a California merchant plant that was willing to offer
power, but only at the record price of $3,880 per MWh.
The grid operator ended up buying the power, and this
situation demonstrated the ability of this single supplier
to set the price in the market. In justifying the record
price charged, the merchant generator admitted that
the price was less related to the plant’s cost of generating power than to the risk premium it was charging for
selling into the financially shaky California energy
market. (As it turned out, the generator never received
the $3,880 per MWh. First, FERC investigations into

Federal Reserve Bank of Chicago

the price charged found that the generator had overcharged for the electricity and reduced the price to $273
per MWh. In fact, to date the generator has only received
payments equaling $70.22 per MWh, due to the inability of the purchasers to make good on their debts.)27
The Independent System Operator’s study also
demonstrated how independent generators could use
bidding to influence the price in the wholesale market.
It was estimated that the combination of withholding
supply and strategic bidding behavior accounted for
one-third to one-half of the increase in prices in the
California wholesale energy market. This added roughly $6 billion to the costs California consumers had to
pay. In the case of bidding behavior, the study revealed
that the rules of the hourly auctions provided an opportunity for generators to manipulate prices to their
advantage. Under the terms of the hourly auction,
generators would offer batches of energy at various
prices. The system operator would then rank the bids
according to price, and the price in the market was
set at the bid price for the last unit of electricity needed to meet the demand on the grid. At this point, all
generators in the auction would receive the price that
had been paid for the last unit. The concept behind
this bidding structure was to treat the electricity supply like any other commodity, where everyone in the
market would receive roughly the same price for providing a homogenous good. Over time, generators became very savvy about taking advantage of this structure
when it seemed likely that supply would run short.
Essentially, most bids would be offered at prices that
roughly reflected the cost of generation plus some
reasonable margin. However, the final units would be
bid at an extreme premium, sometimes at ten times
what the normal price would be. If all of the bids for
these final units of supply came in at these prices, the
operator had no choice but to accept this price in order to meet load. At this point, because of the terms
of the market, all of the supply bid that day would
receive this high price. While it has not been shown
that this bidding behavior involved collusion among
the generators, it is clear that this auction system provided an opportunity for savvy bidders to take advantage of the process, and it appears that they did.28
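The auction rules described above are easy to simulate. The sketch below uses hypothetical bids and quantities (not actual California market data) to show how a uniform-price auction works: bids are accepted from cheapest upward until demand is met, and every accepted bid is paid the price of the last unit needed, so a few extreme bids on the final units can set the price for all supply.

```python
# Illustrative uniform-price ("market-clearing") auction with hypothetical bids.
# Each bid is (price per MWh, quantity in MWh). Every accepted bid is paid the
# price of the marginal (last accepted) bid.

def clear_auction(bids, demand):
    """Return (clearing_price, accepted_bids) for a uniform-price auction."""
    accepted = []
    remaining = demand
    for price, qty in sorted(bids):          # cheapest bids first
        if remaining <= 0:
            break
        take = min(qty, remaining)
        accepted.append((price, take))
        remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total supply bid")
    clearing_price = accepted[-1][0]         # price of the last unit needed
    return clearing_price, accepted

# Most bids sit near generation cost, but the final units carry a tenfold premium.
bids = [(38, 300), (40, 300), (42, 300), (45, 200), (400, 100)]

price, accepted = clear_auction(bids, demand=1200)
print(price)                                 # the extreme bid sets the price
print(sum(q for _, q in accepted) * price)   # every MWh supplied earns it
```

With 1,200 MWh demanded against 1,200 MWh bid, the $400 bid on the last 100 MWh becomes the price paid for every megawatt-hour supplied that hour; at a demand of 1,100 MWh the same bids clear at only $45.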
Critics of California’s restructuring plan have suggested that permitting the use of long-term contracts
and other hedges would have made it far less likely that the prices bid would have been at such exaggerated levels, since spot shortages would have been
less frequent.
Finally, the role of power traders in markets
needs to be understood. These firms provide financial options to electricity markets, but are not in the business of building or owning generation facilities.
Firms such as Dynegy are well known as power traders
but, increasingly, regulated utilities carry out trading
activities through unregulated subsidiaries of their
holding company. In an ideal world, this can help
utilities and customers manage risk. For policymakers,
marketers are often a new institution to deal with.
Utility commissions often lack the staff to monitor
the behavior of traders in the electricity market for
signs of collusion or unfair practices. Policymakers
need to understand that the purpose of these firms is
to make money through trading, not to serve as a
public utility. In this regard, oversight of these firms
may best be accomplished through the same mechanisms that govern other commodity trading operations.
However, most states appear to lack such a structure
or a real understanding of how to deal with electricity traders. In the case of California, for example, market problems identified by the state’s Market Surveillance
Committee were largely ignored.
Transition costs for consumers
What are the implications of electricity restructuring for consumers? First, consumers need to understand that opening markets will expose them to
both the advantages and disadvantages of market
pricing. In the past two years, this increasingly has
meant dealing with a commodity with high price volatility. Electricity consumers are not used to dealing
with market risk, since the regulated electricity system had firm tariff-based prices. Exposing consumers to market-based (or even real-time) prices is a
natural consequence of moving to a market system.
However, in many cases, states are protecting their
residential and small business customers from price
volatility by freezing electricity prices for a transition
period. This presents several problems. Frozen retail
rates mean that consumers are not exposed to the underlying dynamics that are being reflected in the deregulated wholesale market. California’s experience
has shown the problems that can result from this approach. Because California’s consumers were insulated from market risk and volatile prices, they never
received the appropriate price signal that would have
caused them to immediately reduce consumption
when electricity prices spiked. This eventually led to
the financial insolvency of the state’s two largest investor-owned utilities. Eventually, Pacific Gas and
Electric and Southern California Edison ran up $9
billion in debt, purchasing power in the spot market.29
Only much later did the California Public Utility
Commission allow these two utilities to charge higher
prices for electricity and, by then, Pacific Gas and
Electric Company had filed for bankruptcy protection
and Southern California Edison had gone to the state
legislature asking for assistance.
Efforts to set prices for certain classes of customers
during a transition period have other shortcomings as
well. If the price is set artificially low, new entrants
to this competitive market will not appear, since the
margin will not be sufficient for them to capture customers. This will undermine the development of a
competitive market. On the other hand, if the price is
set too high, consumers may be paying too much in
return for stable electricity prices. Instead, it would
make more sense to allow prices to reflect market
fundamentals. The downside is the resulting price
volatility faced by end-users. In order to protect risk-averse consumers from being fully exposed to price
swings while these markets develop, risk management
tools (hedges and long-term contracts) can be used.
Consumers who have grown accustomed to a firm
price for their electricity bills can still be provided
with this option in a deregulated market. The distributing utility can offer a customized product that protects the consumer against volatility by offering a
firm price that has an “insurance” premium built into
the rate. Even now, many utilities offer customers
fixed monthly payments that protect them from high
electricity bills caused by seasonal conditions.
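One way to think about such a product is that the utility charges the expected wholesale cost plus a loading that compensates it for absorbing the price risk. A stylized sketch (the monthly prices and the 10 percent loading are invented for illustration):

```python
# Stylized flat-rate tariff: expected wholesale cost plus an "insurance" loading.
# The monthly wholesale prices ($/MWh) below are hypothetical, with a summer spike.

wholesale_prices = [35, 32, 30, 38, 55, 90, 140, 120, 60, 40, 33, 36]  # one year

expected_cost = sum(wholesale_prices) / len(wholesale_prices)  # $/MWh on average
risk_loading = 0.10                      # premium for absorbing the price swings
flat_rate = expected_cost * (1 + risk_loading)

print(round(flat_rate, 2))               # the fixed $/MWh the customer pays
```

The customer pays slightly more than the expected cost every month, and in exchange never sees the $140 spike; the loading is the "insurance premium built into the rate."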
Volatile prices can be an essential element in encouraging more efficient demand-side management.
Pilot programs in real-time pricing demonstrate that
consumers will respond to price spikes by reducing
consumption.30 In testimony before a Senate panel,
Joskow went as far as to suggest that the default service option for larger commercial and industrial consumers should be purchasing electricity at real-time
prices. He argued that the use of real-time pricing for
these more sophisticated customers would introduce
demand elasticity into the wholesale market and this,
in turn, would dampen price volatility and help mitigate supplier market power.31 Providing more opportunities to manage peak load needs can produce a more
efficient electricity system. Allowing price signals to
be felt can be an important motivator in improving
demand-side management programs.
Another transitional cost to consumers is reduced electricity reliability. In the bundled service, rate-regulated,
historical model for providing power to the consumer,
the blended tariff rate ensured that investments would
occur in all aspects of electricity provision—including customer service and reliability. Once the service is
separated into three components, the low-profit regulated portions of the business (distribution in particular) may not attract needed investment, which may
impair reliability and even service quality. This has been a frequent complaint in telecommunications restructuring, where once-regulated local phone companies have been allowed into open markets. Once in
these markets, the company pursues the most profitable segments of the business, often to the detriment
of investing in basic service.
Conclusion
Electricity restructuring is at a crossroads. Experience to date has brought into focus the difficulties involved in restructuring the industry efficiently through
a combination of regulated and deregulated structures.
The recent experience of California and the number
of issues complicating this transition would be of less

concern if it weren’t for the fundamental role that
electricity plays in supporting modern society. There
are still good reasons to believe that electricity restructuring can fulfill its early promise. However, as
California has demonstrated, electricity restructuring
must be fully thought through and carefully crafted.
Policy missteps can lead to unintended costs that will
be borne well into the future. At a minimum, electricity restructuring requires a clear set of policy targets
that establish goals for system efficiency, investment,
and prices. Once these goals are established, institutions must be designed and equipped to meet them
and, importantly, must be protected from political interference while they pursue these objectives.

NOTES
1 This policy preference favoring geographically defined, integrated utilities was established in the Public Utility Holding Company Act of 1935.

2 U.S. Department of Energy, Energy Information Administration (2000), p. 16.

3 U.S. Department of Energy, Federal Energy Regulatory Commission (1999).

4 U.S. Department of Energy, Federal Energy Regulatory Commission (2001).

5 Illinois Commerce Commission (1999), p. 2.

6 Cvengros (2001), updated August 1.

7 Davis (2001), Illinois page.

8 O’Grady (2001).

9 Cvengros, op. cit.

10 Ibid.

11 Michigan Public Service Commission (2001), p. 5.

12 Ibid.

13 Iowa Utilities Board (2000).

14 DeWitte (2001).

15 Wisconsin Public Service Commission (1998), p. 2.

16 Wisconsin Public Service Commission (2000), p. 2.

17 Ibid., p. 4.

18 Joskow (2001), p. 8.

19 Knight-Ridder Tribune Business News (2001).

20 Smith and Emshwiller (2001), p. 5.

21 U.S. Department of Energy, Federal Energy Regulatory Commission (2001).

22 Energy Policy Research Institute (2001a), pp. 5–6.

23 National Governors Association (2001), p. 5.

24 Wolak (2001), p. 4.

25 Wolak, Borenstein, and Bushnell (2000).

26 California Independent System Operator (2000).

27 www.washingtonpost.com, August 18, 2001.

28 Pearlstein (2001), p. A9.

29 Ibid.

30 Energy Policy Research Institute (2001b).

31 Joskow (2001), p. 14.


REFERENCES

California Independent System Operator, 2000, “An analysis of the June 2000 price spikes in the California ISO’s energy and ancillary services market,” report, available on the Internet at www.caiso.com/docs/09003a6080/07/dc/09003a608007dc78.pdf, September 6; other reports on market power and market behavior are also available at the website.

Cvengros, Laura, 2001, “Restructuring activities by state,” Indiana Utility Regulatory Commission, report, available on the Internet at www.ai.org/iurc/energy/restruct/restruct_index.html, updated August 1.

Davis, Kathleen, 2001, “Know your power: EL&P’s state of deregulation features delve into individual battles with electric restructuring,” Electric Light & Power, available on the Internet at http://elp.pennnet.com/, Illinois page.

DeWitte, Dave, 2001, “MidAmerican Energy plans two plants, rate freeze in Iowa,” Knight Ridder Tribune Business News: The Gazette, Cedar Rapids, IA, July 11.

Energy Policy Research Institute, 2001a, “Western States power crisis,” June 25, pp. 5–6.

__________, 2001b, “Real-time pricing could signal consumer conservation,” EPRI Journal online, Palo Alto, CA, available on the Internet at www.epri.com.

Illinois Commerce Commission, 1999, “A consumer’s guide to electric service restructuring,” available on the Internet at www.icc.state.il.us/pluginillinois, September, p. 2.

Iowa Utilities Board, 2000, “Facts concerning the consumption and production of electric power in Iowa,” Des Moines, IA, August.

Joskow, Paul L., 2001, “Statement of Professor Paul L. Joskow before the Senate Committee on Governmental Affairs,” U.S. Senate, June 13, p. 14.

Knight-Ridder Tribune Business News, 2001, “Green tags finance renewables to cut carbon emissions,” September 12.

Michigan Public Service Commission, 2001, “Status of electric competition,” Lansing, MI, February.

National Governors Association, 2001, “Comprehensive national energy policy, point 18.5,” Improving Energy Transmission, Washington, DC, policy position, No. NR-18, p. 5.

O’Grady, Eileen, 2001, “On second thought,” Wall Street Journal, available to Internet subscribers at www.wsj.com, September 17.

Pearlstein, Steven, 2001, “The $3880 megawatt-hour; How supply, demand, and maybe ‘market power’ inflated a $273 commodity,” Washington Post, August 21, p. A9.

Smith, Rebecca, and John R. Emshwiller, 2001, “Hurt by deregulation of utilities, California gives itself lead role,” Wall Street Journal, available to Internet subscribers at www.wsj.com, July 17, p. 5.

U.S. Department of Energy, Energy Information Administration, 2000, “The restructuring of the electric power industry: A capsule of issues and events,” paper, No. DOE/EIA-X037, January, p. 16.

U.S. Department of Energy, Federal Energy Regulatory Commission, 2001, “Order initiating mediation,” docket, No. RT01-100-000, July 12.

__________, 1999, “FERC order 2000, docket No. RM99-2-000,” order, No. 89 FERC 61,285, December 20.

Wisconsin Public Service Commission, 2000, “Strategic energy assessment,” docket, No. 05-ES-100, December 18, p. 2.

__________, 1998, “Report to the Wisconsin legislature on the regional electric transmission system,” Madison, WI, September 1, p. 2.

Wolak, Frank A., 2001, “Statement of Frank A. Wolak before the Senate Committee on Governmental Affairs,” U.S. Senate, Washington, DC, June 13.

Wolak, Frank A., Severin Borenstein, and James Bushnell, 2000, “Measuring market power in the California electricity market,” mimeo, August.


The aggregate effects of advance notice requirements
Marcelo Veracierto

Introduction and summary
It is well known that the performance of labor markets,
measured in terms of unemployment rates or employment to population ratios, is much stronger in the U.S.
than in many European countries. In order to improve
the performance of European labor markets, then, it is
important to determine the cause of these differences.
While the degree of unionization, the unemployment
insurance system, or minimum wage legislation can
have significant effects, most of the literature has focused on firing restrictions as the main candidate. Firing
restrictions stand out because they are relatively minor
in the U.S., compared with many countries with poor
labor market performance.
Firing restrictions take several forms in these countries. The most common forms are severance payments,
advance notice requirements, and procedural constraints.
Severance payments are mandated payments that the
employer must give to the worker at the time of employment termination. They vary as a function of the
years of service and the perceived fairness of the dismissal. Advance notice requirements impose a pre-notification period that delays the time of employment
termination. In turn, the procedural constraints require
employers to seek authorization from an outside party prior to performing a dismissal (the outside party
being a union, a work council, the government, or the
courts). Usually, the authorization procedure is long
and costly, and the employer is forced to provide full
pay to the worker while the procedure is underway.1
The theoretical literature has typically modeled
these forms of firing restrictions in a very simple way:
as firing costs that involve either a fixed loss of resources
or a fixed payment to the government per unit reduction in employment (firing taxes). While this may be
a good first approximation, not many attempts have
subsequently been made to model more explicitly the
different forms of firing restrictions. The purpose of this article is to analyze the effects on aggregate output,
wages, employment, and welfare levels of one particular form of firing restriction, namely, advance notice
requirements. I also provide a comparison with the
effects of firing taxes to assess the differences between
both types of policies.
The empirical literature provides good reasons to
analyze advance notice requirements separately from
other forms of firing restrictions: It suggests that they
may have different effects. In a very influential paper,
Lazear (1990) constructed two measures of job protection for a set of 22 countries: the amount of severance payments that employers are required by law to
pay to blue-collar workers with ten years of experience
at the time of termination; and the period of advance
notice that employers are required to give to this same
class of workers. Lazear then compared these measures
with measures of labor market performance, such as
employment to population ratios and unemployment
rates. Table 1 reproduces the average employment–population ratios, severance payments, and
advance notice requirements between 1956 and 1984
for the 22 countries. Figure 1 plots the average employment–population ratios and severance payments, showing a negative relation between the two variables.
However, this analysis does not take into account the
large variations in labor market institutions over time
within each of these countries: Generally, job security
provisions were introduced in the 1960s, reinforced
in the 1970s, and somewhat loosened in the 1980s.
To account for this time variation, Lazear performed
a panel data analysis, using yearly observations for
each country to regress severance payments against
employment–population ratios. His results indicate
Marcelo Veracierto is a senior economist at the Federal
Reserve Bank of Chicago. The author thanks seminar
participants at the Chicago Fed for their comments.


that introducing severance payments of three months of wages is typically accompanied by a decrease in the employment–population ratio of about 1 percent.

TABLE 1

Data for sample countries, 1956–84

                  Employment/    Severance pay        Advance notice
                  population     (months of wages)    (months)

Austria              0.43             0.93                3.00
Australia            0.41             0.00                0.00
Belgium              0.38             1.24                1.00
Canada               0.38             n.a.                n.a.
Denmark              0.46             0.48                6.00
Finland              0.47             n.a.                n.a.
France               0.40             5.24                1.86
Germany              0.43             1.00                1.86
Greece               0.37             1.00               10.00
Ireland              0.35             0.00                0.00
Israel               0.33             8.41                n.a.
Italy                0.37            15.86                n.a.
Japan                0.48             0.00                n.a.
Netherlands          0.35             n.a.                2.00
Norway               0.42            12.00                3.00
New Zealand          0.38             0.00                n.a.
Portugal             0.37             3.36                2.59
Spain                0.35            13.56                n.a.
Switzerland          0.49             0.00                1.00
Sweden               0.48             0.00                0.76
United Kingdom       0.44             n.a.                0.90
United States        0.39             0.00                0.00

Note: n.a. indicates not available.
Source: Lazear (1990).

FIGURE 1

Severance payments
[Scatter plot of severance pay (months of wages, vertical axis) against the employment/population ratio (horizontal axis, 0.30 to 0.50), showing a negative relation across countries.]
Source: Lazear (1990).

Figure 2 plots average employment–population ratios and average advance notice requirements. The plot shows a negative relation, but one that is much weaker than that in the previous figure. However, when the time variation within countries is taken into account, Lazear found that advance notice requirements reduce employment even more than severance payments. He considered this result to be surprising: “At worst, the employer could treat notice requirements as severance pay, simply by telling the worker not to report during the notice period and paying him anyway” (Lazear, 1990, p. 712).

FIGURE 2

Advance notice
[Scatter plot of advance notice requirements (months, vertical axis) against the employment/population ratio (horizontal axis, 0.30 to 0.50), showing a weaker negative relation than figure 1.]
Source: Lazear (1990).

In a later paper, Addison and Grosso (1995) provided revised estimates for the effects of severance payments and advance notice requirements. Including some additional countries and correcting some data errors from Lazear’s study, Addison and Grosso found similar effects for severance payments but opposite results for advance notice requirements. Indeed, they found that longer notice intervals are associated with statistically significant increases in employment and labor force participation rates.

In the theoretical literature, an early study of the effects of firing costs was provided by Bentolila and Bertola (1990). Taking factor prices as being exogenous to their analysis (that is, using a partial equilibrium setting), Bentolila and Bertola studied the consequences of imposing firing costs on a monopolist facing a shifting demand for its product. In that context, firing costs potentially have two opposing effects. On one hand, firing costs induce the monopolist to avoid large contractions in employment after reductions in demand in the hope that demand will increase in the near future. On the other hand, they make the monopolist less willing to hire workers after increases in demand because of the prospective firing costs that will have to be paid when demand shifts down in the future. Under a parameterization that reproduces observations from European countries, Bentolila and Bertola found that the first effect is the most important: Firing costs actually increase the average employment level of the monopolist.

Hopenhayn and Rogerson (1993) performed a richer analysis, which allowed factor prices to clear markets instead of treating them as exogenous (that is, they performed a general equilibrium analysis). In their model economy, production is carried out by a large number of establishments that are subject to changes in their individual productivity levels, which induce them to expand and contract employment over time. Households supply labor, own the establishments, and have access to perfect insurance markets. In that framework, Hopenhayn and Rogerson introduced firing taxes that were rebated to households as lump sum transfers. In this model, firing taxes give rise to an important misallocation of resources. The reason is that establishments that switch to a low individual productivity level do not contract their employment as much as they should in order to avoid current firing taxes. On the other hand, establishments that experience high individual productivity levels do not expand their employment enough, because they try to avoid paying firing taxes in the future. This misallocation of resources across establishments reduces labor productivity quite substantially. The decrease in labor productivity induces a large substitution from market activities toward leisure and leads to a reduction in total employment. This effect can be quite significant: Firing taxes equal to one year of wages reduce employment by 2.5 percent. Hopenhayn and Rogerson (1993) also calculated the welfare costs associated with the firing taxes. They found that a permanent increase in consumption of 2.8 percent is needed to leave agents in the equilibrium with firing taxes indifferent to moving to the equilibrium without firing taxes.

The model I use in this article is similar to that analyzed by Hopenhayn and Rogerson (1993). The main difference is that I use it to evaluate not only the effects of firing taxes in general, but also the particular effects of advance notice requirements. The model introduces advance notice requirements in a very parsimonious way. If an establishment decides not to give advance notice to any of its workers, the following period it can expand its employment but not reduce it. On the other hand, if an establishment gives advance notice to some of its workers, it cannot rehire them during the following period.2 Clearly, advance notice requirements have a firing penalty component, since employers must pay wages during the notice period, despite not needing the workers. But advance notice requirements have an additional effect: They hold workers to their jobs during the period of notice. This is an important effect. I find that, contrary to firing taxes, advance notice requirements do not have a negative effect on aggregate employment. However, the welfare costs of advance notice requirements can be substantially larger.

To gain a better understanding of these results, I start with a partial equilibrium version of the model, in which prices are fixed. Comparing the effects of advance notice requirements in this setting with those corresponding to the general equilibrium framework allows me to isolate the importance of equilibrium price changes to the results. I also analyze the effects of advance notice requirements assuming that once a worker is given advance notice, his productivity on the job decreases quite substantially. This version of the model not only adds some realism, but also shows that shirking behavior is an important variable to consider when analyzing advance notice requirements.

The article is organized as follows. In the next section, I analyze the effects of advance notice in a partial equilibrium framework. Then, I study the effects in a general equilibrium model. Next, I incorporate the assumption that advance notice requirements generate shirking behavior by workers. Finally, I compare advance notice requirements with firing taxes.


Partial equilibrium
In this section, I analyze the effects of advance
notice requirements assuming that prices are fixed. The
purpose is to isolate the partial equilibrium effects of
advance notice requirements from those that arise from
general equilibrium effects (that is, from equilibrium
price changes).
Consider the problem of a single producer facing
a constant wage rate w and interest rate i. For simplicity, I assume that output depends only on labor input.
In particular, the production function is given by:
y_t = s_t n_t^γ,

where y_t is output, n_t is the labor input, γ is a parameter governing the productivity of labor, with 0 < γ < 1, and s_t is a productivity shock (that is, a higher value of s_t means that more output can be produced with the same labor input). The shock s_t follows a Markov process with transition matrix Q. That is, Q(s, s′) is the probability that s_{t+1} = s′, conditional on s_t = s.
When no government regulations are introduced,
the establishment chooses its labor input to equate the
marginal productivity of labor to the wage rate, that is,3
1)

w  st H ntH 1 .
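Solving this first-order condition for employment gives the frictionless hiring rule n_t = (γs_t/w)^(1/(1−γ)). A quick numerical check (the parameter values below are illustrative, not the article’s calibration):

```python
# Frictionless labor demand from equating the marginal product of labor to the
# wage: w = gamma * s * n**(gamma - 1)  =>  n = (gamma * s / w)**(1 / (1 - gamma)).
# The parameter values are illustrative only.

gamma = 0.64   # returns-to-labor parameter, 0 < gamma < 1
w = 1.0        # wage rate
s = 2.0        # current productivity shock

n = (gamma * s / w) ** (1.0 / (1.0 - gamma))

# Verify that the first-order condition holds at the chosen n.
marginal_product = gamma * s * n ** (gamma - 1.0)
print(abs(marginal_product - w) < 1e-9)
```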

However, I am interested in studying how the behavior of the establishment is affected by the introduction of advance notice requirements. In principle, the
establishment could give firing notice to all its workers
every period and rehire them at will in the following
period, according to the value that the productivity shock
takes. If this were possible, advance notice requirements
would clearly have no effects. But, given that this alternative is not available to employers in the real world,
I consider a notice requirement policy that precludes
this possibility altogether. More precisely, my policy
specifies that: a) if the establishment gives notice to any
number of workers, it cannot rehire in the following
period; and b) if the establishment does not give advance notice to any worker, it cannot fire the following period, but it can hire at will.
Under such a policy, the problem of the establishment is much more complicated because it becomes
dynamic. At any given time, the establishment has to
decide to how many (if any) of its workers it will give
advance notice. To do this, the establishment must
forecast the value that the productivity shock will
take in the following period. The relevant state variables for making its employment decision are its current labor force n and its current productivity shock
s. If the establishment gives advance notice to some
of its workers, which forces its next period employment

22

to a value n′ below its current period employment, the establishment will not be able to rehire any workers in the following period under any of the realizations of the productivity shock s′. It is forced to employ n′ workers across all realizations of the productivity shock. The other alternative is not to give advance notice to any of its workers. In this case the establishment cannot contract employment below n in the following period, but it is free to hire. Thus, next period employment can be made contingent on the realization of the productivity shock s′, as long as it is larger
than the current employment level n. A profit-maximizing establishment will choose the best of both
alternatives. Box 1 describes the establishment’s
problem more formally.
I am interested in describing the aggregate behavior of a large number of establishments similar to the
one described so far. For this purpose, I assume that
there are a large number of establishments facing the
same random process for the individual productivity
shock, but that the realizations of the shock are independent across establishments.
I need to incorporate entry and exit of establishments, at least exogenously, because a substantial probability of exit will affect how establishments change
their employment levels in response to their productivity shocks. For this purpose, I assume that the transition matrix Q for the individual productivity shocks
is such that: 1) starting from any initial value, st reaches
zero in finite time with probability one; and 2) once
st reaches zero, there is zero probability that st will
receive a positive value in the future. Given these
assumptions, a zero value for the productivity shock
can be identified with the death of an establishment.
While establishments exit, v new establishments are
exogenously created every period. The distribution of new establishments across productivity shocks is given by ψ. The new establishments can hire workers
freely during their first period of activity since they
are created with zero previous period employment.
The state of the economy is described by a distribution x_t, defined over idiosyncratic productivity shocks s and employment levels n. Equation 4 in box 2 describes the law of motion for x_t in formal terms. Intuitively, next period's number of establishments with productivity shock s′ and employment level n′ is given by the sum of two terms. The first term gives the number of establishments that transit from their current shock s to the shock s′ and choose an employment level n′. The second term includes all new establishments born with a shock s′, in case the unrestricted employment level that corresponds to the shock s′ is given by n′. In this article, I concentrate on steady state equilibria, where the distribution x_t is invariant over time.

1Q/2002, Economic Perspectives

BOX 1

Establishment’s problem
Assume that the maximum present value of profits
that can be attained starting from the state (n,s) is
given by V(n,s). If the establishment gives advance
notice to some of its workers, the best it can do is
given by the following problem:
2)

$F(n, s) = \max_{n' \le n} \Big\{ s n^{\gamma} - wn + \frac{1}{1+i} \sum_{s'} V(n', s')\, Q(s, s') \Big\}$.

If the establishment does not give advance notice to any of its workers, the best it can do is given by:

3)

$H(n, s) = \max_{n'(s') \ge n} \Big\{ s n^{\gamma} - wn + \frac{1}{1+i} \sum_{s'} V(n'(s'), s')\, Q(s, s') \Big\}$.

If V(n,s) indeed describes the present value of
profits under the optimal employment plan, it must
be equal to the maximum of the two alternatives
given by equations 2 and 3. Thus, the value function V satisfies the following functional equation:

$V(n, s) = \max\{F(n, s), H(n, s)\}$.
In computations, I restrict s to take a finite
number of possible values. The value function V
is obtained by iterating on this functional equation
starting from some initial guess.
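The iteration described in box 1 can be sketched in a few lines. The shock grid and transition matrix below are illustrative toy numbers, not the article's nine-point calibration; only the structure of the two options (a fixed n′ ≤ n under notice versus a state-contingent n′(s′) ≥ n without notice) follows the text:

```python
import numpy as np

# Toy parameters (w, gamma, i as calibrated; the shock grid and Q are
# illustrative, NOT the article's calibration).
w, gamma, i = 0.3297, 0.64, 0.01
s_grid = np.array([0.0, 1.0, 2.0, 4.0])     # s = 0 is the "death" shock
Q = np.array([[1.00, 0.00, 0.00, 0.00],     # s = 0 is absorbing
              [0.10, 0.80, 0.10, 0.00],
              [0.05, 0.10, 0.75, 0.10],
              [0.05, 0.00, 0.15, 0.80]])
n_grid = np.arange(41)                      # employment levels 0, 1, ..., 40
beta = 1.0 / (1.0 + i)                      # discounting at the interest rate

# Current-period profit s*n^gamma - w*n (workers under notice still produce).
profit = s_grid[:, None] * n_grid[None, :] ** gamma - w * n_grid[None, :]

V = np.zeros((len(s_grid), len(n_grid)))    # V[s, n]
for _ in range(5000):
    EV = Q @ V                              # EV[s, n'] = sum_s' Q(s,s') V(s',n')
    F = np.empty_like(V)
    H = np.empty_like(V)
    for j in range(len(n_grid)):
        # Give notice: commit to one n' <= n, the same across all s' (eq. 2).
        F[:, j] = profit[:, j] + beta * EV[:, :j + 1].max(axis=1)
        # No notice: pick n'(s') >= n after s' is realized (eq. 3).
        H[:, j] = profit[:, j] + beta * (Q @ V[:, j:].max(axis=1))
    V_new = np.maximum(F, H)                # the functional equation for V
    if np.abs(V_new - V).max() < 1e-10:
        V = V_new
        break
    V = V_new
```

In this toy example, an establishment hit by the absorbing zero shock gives notice to everyone, so its value is just one last period's wage bill (V[0, n] converges to −w·n).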

Given the invariant distribution x across establishment types, aggregate production c and aggregate employment h are given by summing production and
employment across all establishments described by
the distribution x. Formally, aggregate production and
employment are given by equations 5 and 6 in box 2,
respectively.
My purpose here is to obtain quantitative estimates
of the effects of advance notice requirements. For these
estimates to be meaningful, the parameters of the model
must reproduce important empirical observations (a
procedure known as calibration). Although the article
is concerned with European labor market institutions,
I choose to replicate observations for the U.S. economy because this is a common benchmark in applied studies.
Since there are neither advance notice requirements nor
firing taxes in the U.S. economy, I use a laissez-faire
version of the model to reproduce U.S. observations.
These observations are from the National Income and
Product Accounts and establishments’ dynamics data

Federal Reserve Bank of Chicago

reported by Davis and Haltiwanger (1990). The calibration procedure, which is similar to the one provided in Veracierto (2001), is described in box 3.4
Next, I describe the effects of introducing advance
notice requirements of three months duration (the length
of the model period) to the partial equilibrium model
calibrated above. Since I have chosen parameters to
reproduce U.S. observations, the experiments provide
estimates of the effects of introducing advance notice
requirements in the U.S. economy. Table 2 (on page 25)
reports the results. The first column reports statistics
for the economy without interventions (which have
been normalized at 100), and the second column reports
statistics for the economy with advance notice requirements. The variables are aggregate production c, wages
w, and aggregate employment h.
There are two opposite effects of advance notice
requirements on employment. On one hand, when establishments receive bad shocks, they cannot instantly
contract their employment levels because they must
give advance notice first. This tends to increase employment. On the other hand, when establishments receive positive shocks, they are less willing to hire
workers, because if the shock is reversed, during the
advance notice period they will be stuck with workers
they don’t need. This tends to lower employment.
Table 2 shows (last row, second column) that this last
BOX 2

Aggregation
The law of motion for the distribution xt is described by:

4)

$x_{t+1}(n', s') = \sum_{\{(n,s):\, g(n,s,s') = n'\}} Q(s, s')\, x_t(n, s) + v\, \psi(s')\, \chi(n', s')$,

where g(n, s, s′) is the next period employment level chosen by an establishment with current employment n and shock s when the realized next period productivity is s′; and where χ(n′, s′) is an indicator function that is equal to one if g(0, s′, s′) = n′ (and zero otherwise).
Given the invariant distribution x that satisfies equation 4 for all time periods t, steady state
aggregate production c is given by:

5)

$c = \sum_{(n,s)} s n^{\gamma}\, x(n, s)$,

and steady state aggregate employment h in turn
is given by:
6)

$h = \sum_{(n,s)} n\, x(n, s)$.
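Abstracting from the employment dimension, the entry-and-exit logic of equation 4 can be illustrated by iterating the implied law of motion for the marginal distribution of establishments over shocks. The matrix Q, entry mass v, and entrant distribution ψ below are made-up toy values, not the calibration:

```python
import numpy as np

# Ignoring the employment dimension, eq. 4 implies that the marginal
# distribution of establishments over shocks, m_t(s), evolves as
#   m_{t+1}(s') = sum_s Q(s, s') m_t(s) + v * psi(s'),
# with establishments that reach s = 0 treated as dead and dropped.
Q = np.array([[1.0, 0.0, 0.0],      # s = 0: death (absorbing)
              [0.1, 0.8, 0.1],
              [0.1, 0.2, 0.7]])
v = 0.05                             # mass of entrants per period (toy value)
psi = np.array([0.0, 0.9, 0.1])      # entrants' shock distribution (toy value)

m = np.array([0.0, 1.0, 0.0])        # start with mass 1 at the middle shock
for _ in range(10000):
    m_new = Q.T @ m + v * psi
    m_new[0] = 0.0                   # exiting (s = 0) establishments leave
    if np.abs(m_new - m).max() < 1e-12:
        m = m_new
        break
    m = m_new
# In the invariant m, entry (v) exactly balances the steady flow into s = 0.
```

The final assertion of the comment is easy to verify here: each live establishment dies with probability 0.1 per period, so the invariant mass of live establishments settles at v / 0.1 = 0.5.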


BOX 3

Calibration
I choose the interest rate to reproduce an annual rate
of 4 percent, which is a compromise between the return on equity and the return on short-term debt (see
Mehra and Prescott, 1985). This is also the value
commonly used in the real business cycle literature.
Since the model period is one quarter, i is selected
to be 0.01.
When no government regulations are imposed, equation 1 shows that the curvature parameter γ in the production function determines the share of output that is paid to labor. As a consequence, it is selected to be 0.64, which is the share of labor in the
national income accounts. I choose the wage rate w,
in turn, to reproduce an average establishment size
equal to 60 workers, which is consistent with Census
of Manufacturers data. Finally, I select the number of establishments created every period, v, to generate a total employment level equal to 80
percent of the population, roughly the fraction of
the working age population that is employed in the
U.S. economy.
I restrict the stochastic process for the productivity shocks to be a finite approximation to the following process. Realizations of the shock take values in the set

$S = \{0\} \cup [1, \infty)$,

and the transition function Q is assumed to be of the following form:

$Q(0, \{0\}) = 1$
$Q(s, [1, \tilde{s}]) = (1 - \lambda)\, \Pr\{ a + \rho \ln s + \varepsilon' \in [1, \tilde{s}] \}$, for $s, \tilde{s} \ge 1$,

where λ, a, and ρ are constants and ε′ is an i.i.d. (independently and identically distributed) normally distributed shock with mean zero and standard deviation σ.
With this functional form for the transition function, there are four parameters to be determined: λ, a, ρ, and σ. In addition, I must choose the distribution ψ across idiosyncratic shocks. Since all these parameters are important determinants of establishment dynamics in the model, they are selected to reproduce observations about establishment dynamics. The observations used to calibrate these parameters are the employment size distribution reported by the Census of Manufacturers, the job creation and destruction rates reported by Davis and Haltiwanger (1990), and the five-year exit rate of manufacturing establishments reported by Dunne, Roberts, and Samuelson (1989).1 The size distribution and the job creation and destruction statistics for the U.S. economy are displayed in table B1. The parameter values used to match these observations are reported in the appendix.

1 Since the computations require a finite number of shocks and only nine employment ranges are reported in Census of Manufacturers data, nine values for the idiosyncratic shocks are used in the article.

Table B1
Statistics for U.S. and model economy

A. U.S. economy
Average size = 60 workers
Exit rate = 36.2%
Job creation due to births = 0.62%
Job destruction due to deaths = 0.83%
Job creation due to continuing establishments = 4.77%
Job destruction due to continuing establishments = 4.89%

Employment     Shares (%)     Employment      Shares (%)
5–9            23.15          250–499         3.86
10–19          22.82          500–999         1.68
20–49          24.83          1,000–2,499     0.73
50–99          12.59          >2,500          0.28
100–249        10.05

B. Model economy
Average size = 59.6 workers
Exit rate = 38.5%
Job creation due to births = 0.72%
Job destruction due to deaths = 0.72%
Job creation due to continuing establishments = 4.80%
Job destruction due to continuing establishments = 4.80%

Employment     Shares (%)     Employment      Shares (%)
5–9            26.19          250–499         2.25
10–19          31.67          500–999         2.13
20–49          20.21          1,000–2,499     0.59
50–99          13.01          >2,500          0.02
100–249        3.92

Source: Lazear (1990).



TABLE 2

Partial equilibrium analysis

              Laissez    Advance    Advance notice    Firing
              faire      notice     (shirk)           taxes
Production    100.00     97.71      92.59             92.62
Wages         100.00     100.00     100.00            100.00
Employment    100.00     97.75      92.97             89.41

effect is the strongest: Introducing advance notice requirements reduces employment by 2.25 percent. We
also see that even though employment decreases, this
is not accompanied by an increase in labor productivity.
In fact, we see that output decreases by roughly the
same factor as employment. The reason is that with
the introduction of the advance notice requirements,
establishments that receive bad shocks do not contract
employment (during the first period of the shock), and
establishments that receive positive shocks do not expand employment enough. Thus, labor is allocated less
efficiently across establishments.
General equilibrium
In the partial equilibrium analysis of the previous
section, advance notice requirements reduce aggregate
employment quite significantly (the effects have the
same sign as in Lazear, 1990, and the opposite sign to
the results in Addison and Grosso, 1995). However,
there is an important reason to suspect that those partial equilibrium results are not reliable: The effects are
so large that prices should have been significantly affected. Therefore, instead of assuming a fixed wage
rate and interest rate, invariable to policy changes, in
this section I investigate the effects of advance notice
requirements allowing prices to adjust to clear all markets. That is, I provide a general equilibrium analysis
of advance notice requirements. My results here show
that equilibrium price changes are crucial for understanding the effects of this type of policy.
To formulate a general equilibrium analysis, I introduce a few modifications to the environment. The
same continuum of establishments analyzed in the previous section is still responsible for production of the
consumption good, but now I explicitly introduce a
household sector. In particular, the economy is now
populated by a continuum of ex ante identical agents,
of size normalized to one. The preferences of the representative agent are given by:
$E \sum_{t=0}^{\infty} \beta^{t} \left[ \ln c_t - \alpha h_t \right]$,

where c_t is consumption, h_t is the fraction of the population that works, α is a positive parameter governing the marginal utility of leisure, and β is a discount parameter with 0 < β < 1.
I restrict the analysis to a steady state equilibrium,
where the wage rate w and the interest rate i are constant over time. There are two important decisions that
a household has to make—how much to consume today relative to tomorrow and how much time to spend
working. Consider the consumption decision first. If
the household sacrifices one unit of consumption at
date t in order to buy a bond, it loses the marginal utility of consumption at date t. In return, the household
obtains 1 + i units of the consumption good at date
t + 1, each of which is valued according to the marginal utility of consumption at date t + 1 and discounted
according to β (in terms of utility at date t). If the household makes an optimal choice, the marginal loss of this decision at date t must be equal to the marginal gain at date t + 1. Since consumption (and therefore, the marginal utility of consumption) is constant in a steady state equilibrium, it follows that the steady state gross interest rate 1 + i must be equal to the inverse of the discount factor β. Observe that this interest rate is not affected by the introduction of advance notice requirements. As long as the economy is in a steady state, with constant consumption, the gross interest rate must be given by 1/β.
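The argument in this paragraph is the standard steady-state Euler equation. With period utility $\ln c_t - \alpha h_t$, equating the marginal loss and gain from buying a one-period bond gives:

```latex
\frac{1}{c_t} \;=\; \beta\,(1+i)\,\frac{1}{c_{t+1}}
\qquad\Longrightarrow\qquad
1 + i \;=\; \frac{1}{\beta}
\quad\text{when } c_t = c_{t+1} = c .
```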
Consider now the decision of how much time to
spend working versus how much to consume. If the
household spends one additional unit of time working,
it loses the marginal utility of leisure. In return it obtains wage payments that allow it to buy w units of the
consumption good, each of which is valued according
to the marginal utility of consumption. If the household
maximizes utility, the marginal loss from this intratemporal decision must be equal to the marginal gain.
Thus, the wage rate w must be equal to the marginal
rate of substitution between consumption and leisure:
7)

$\alpha c = w$.
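Equation 7 follows directly from the stated preferences: the marginal disutility of an extra unit of work is α, while that unit of work buys w units of consumption, each valued at the marginal utility 1/c, so optimality requires:

```latex
\alpha \;=\; \frac{w}{c}
\qquad\Longleftrightarrow\qquad
\alpha c \;=\; w .
```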

Observe that in equilibrium, consumption c is given
by the aggregate production of establishments (equation 5 in box 2, page 23). Also, the fraction of the population that works, h, must be equal to the demand
for labor by establishments (equation 6 in box 2).
In order to perform the policy experiment, I choose parameter values identical to those in the partial equilibrium section, except for α and β, which are new. These two parameters are selected to generate the same wage rate w and interest rate i as in the partial equilibrium section. The required values are α = 0.80 and β = 0.99.
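As a quick consistency check (my own arithmetic, not from the article), β = 0.99 is exactly what the steady-state condition 1 + i = 1/β requires given the quarterly interest rate used in the partial equilibrium section:

```python
# The steady-state Euler equation gives 1 + i = 1/beta, so beta = 0.99
# implies a quarterly interest rate of about 1 percent, matching i = 0.01.
beta = 0.99
i = 1.0 / beta - 1.0   # approximately 0.0101
```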


TABLE 3

General equilibrium analysis

              Laissez    Advance    Advance notice    Firing
              faire      notice     (shirk)           taxes
Production    100.00     99.18      97.24             97.25
Wages         100.00     99.18      97.24             97.25
Employment    100.00     100.05     100.37            96.51
Welfare (%)   0.00       0.86       3.08              0.56

When I introduce advance notice requirements, the
wage rate must change in order to restore the equality
with the marginal rate of substitution of consumption
for leisure. Recall that when the advance notice requirements are introduced, table 2 shows that production
drops quite substantially at the initial wage rate. Since
the amount of production undertaken by establishments
increases monotonically with decreases in the wage
rate (because they increase their demand for labor),
for the equality in equation 7 to be restored, the wage
rate must decrease. As a consequence, both consumption and employment fall by a smaller amount than
in the partial equilibrium analysis.
The first two columns of table 3 show that the
general equilibrium results in fact lead to a much
smaller drop in aggregate consumption—only 0.82
percent compared with the 2.29 percent drop in the
partial equilibrium framework. Given the linear relation in equation 7, we know that the wage rate must
also decrease in the same proportion. What is interesting to observe in table 3 is that the fall in the wage
rate is enough to leave the employment level roughly
unchanged (it increases only by 0.05 percent) instead
of generating the substantial decrease (of 2.25 percent)
obtained in the partial equilibrium framework. Thus,
the general equilibrium results lead to employment
effects that are more consistent with Addison and
Grosso (1995) than with Lazear (1990).
Since this is a neoclassical economy, the equilibrium without interventions is Pareto optimal,5 and
introducing advance notice requirements can only reduce welfare levels. In fact, advance notice requirements produce significant deadweight losses. Table 3
shows that agents in the steady state with advance
notice require a 0.86 percent permanent increase in
consumption in order to be indifferent with being at
the laissez faire equilibrium.
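The welfare numbers in table 3 can be checked with a back-of-the-envelope consumption-equivalent calculation. This is my own reconstruction, assuming the stated preferences ln c − αh with α = 0.80 and laissez-faire employment equal to 80 percent of the population; under those assumptions it comes very close to the reported 0.86, 3.08, and 0.56 percent figures:

```python
import math

# The consumption-equivalent cost lam solves
#   ln((1 + lam) * c1) - alpha * h1 = ln(c0) - alpha * h0,
# so lam = exp(ln(c0 / c1) + alpha * (h1 - h0)) - 1,
# where (c0, h0) is laissez faire and (c1, h1) the policy steady state.

def welfare_cost(c_index, h_index, alpha=0.80, h0=0.80):
    """Consumption-equivalent cost of moving from laissez faire (index 100)
    to a steady state with the given consumption/employment indexes."""
    c_ratio = c_index / 100.0          # c1 / c0
    h1 = h0 * h_index / 100.0          # assumes h0 = 0.80 in laissez faire
    return math.exp(-math.log(c_ratio) + alpha * (h1 - h0)) - 1.0

cost_notice = welfare_cost(99.18, 100.05)   # about 0.0086, i.e., 0.86 percent
cost_shirk  = welfare_cost(97.24, 100.37)   # about 0.0308
cost_taxes  = welfare_cost(97.25, 96.51)    # about 0.0056
```

Note how the sign of the employment term works: firing taxes lower employment, and the extra leisure partially offsets the consumption loss, which is exactly the comparison drawn later in the text.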
Advance notice and shirking behavior
Although I do not model it explicitly here, it is
reasonable to expect that once workers are notified
that they will be fired in the following period, their


performance on the job will decrease considerably.
To capture this effect, I assume that the productivity
of workers that are given advance notice is reduced
to a fraction φ of that of workers that are not given
advance notice. However, workers that are given advance notice are paid the same wage rate as those that
are not given advance notice. (Box 4 explains the modified establishments’ problem and feasibility condition in detail).
Given that there are no data available for the shirking parameter φ, I go to the extreme and assume that it
is equal to zero. In other words, I assume that workers’
productivity drops to zero when they are given advance
notice. The third column of table 2 reports the results
for the partial equilibrium framework. We see that the
effects of advance notice requirements are much larger
when shirking behavior is present than when it is not.
The reason is clear. Since establishments that contract
employment must pay wages to workers without obtaining any production from them, the advance notice
requirements impose much larger penalties. As a consequence, they have a much larger effect on the demand
for labor, which drops by 7.03 percent instead of 2.25
percent. The drop in consumption is also much larger, 7.41 percent instead of 2.29 percent. This is due
not only to the larger drop in the labor input, but also
to the fact that production is severely affected when
workers are given advance notice.
When we incorporate the general equilibrium effects, we see (in the third column of table 3) that the
wage rate drops by such an amount that employment
actually increases by 0.37 percent when the advance
notice requirements are introduced. Given this increase
in employment, the drop in consumption is reduced
to 2.76 percent (compared with 7.41 percent in the
partial equilibrium framework). It is worth mentioning that shirking behavior produces the same sign as
the empirical relation between advance notice requirements and employment levels reported by Addison and
Grosso (1995); however, the magnitude of the employment response is much smaller. Also, note that the
welfare costs of notice requirements are much larger
when shirking behavior is allowed for—3.08 percent
instead of 0.86 percent, representing an extremely
large welfare cost.6
Advance notice requirements versus
firing taxes
Hopenhayn and Rogerson (1993) and Veracierto
(2001) analyzed the effects of firing taxes in a framework similar to this and found large negative effects
on employment, consumption, and welfare. The parameterization in this article is similar to one of the cases


BOX 4

Shirking behavior
In order to allow for shirking behavior, I modify the value of giving advance notice in equation 2 as follows:

$F(n, s) = \max_{n' \le n} \Big\{ s\,[n' + \phi(n - n')]^{\gamma} - wn + \frac{1}{1+i} \sum_{s'} V(n', s')\, Q(s, s') \Big\}$.

The only other condition that must be modified in the general equilibrium analysis of the previous section is the one for aggregate consumption, which is now given by:

$c = \sum_{(n,s):\,\text{notice not given}} s n^{\gamma}\, x(n, s) \;+\; \sum_{(n,s):\,\text{notice given}} s\,[n' + \phi(n - n')]^{\gamma}\, x(n, s)$.

analyzed in Veracierto (2001).7 But, for that case,
Veracierto (2001) only reported the effects of firing
taxes equal to one year of wages. To facilitate comparisons with the advance notice requirements analyzed
in this article, I report the effects of firing taxes equal
to one quarter of wages (same length as the advance
notice requirements).
The firing tax I consider is a tax on employment
reduction, which is rebated to households as lump-sum
transfers. Box 5 describes the establishment’s problem
in detail. Essentially, the establishment has to pay a
tax in the next period equal to one period of wages per
unit reduction in employment, whenever next period
employment n¢(s¢) is lower than the current period
employment n.
The partial equilibrium effects of firing taxes are
reported in the last column of table 2. We see that the
effects on consumption are as large as under advance
notice requirements when shirking behavior is allowed
for (7.38 percent versus 7.41 percent), but the effects
on employment are considerably larger (10.59 percent
versus 7.03 percent). The consumption results are not
surprising. If shirking behavior leads to zero productivity, under advance notice requirements firms end up
facing similar firing restrictions as under firing taxes.
In both cases, workers who are fired make no contribution to production, while the establishment must pay
their wages anyway. Certainly there is a difference between the policies: Under advance notice requirements
the firing decisions must be taken in advance, while
under firing taxes they can be made after the shocks
are realized. However, with the high persistence of


the productivity shocks, this difference is unimportant
and, thus, the drop in output is almost the same in
both scenarios.
What is important is the difference in terms of employment outcomes. Under firing taxes, when establishments receive a bad productivity shock, workers are fired
right away. Under advance notice, these same workers
must be employed an additional period before they can
be fired. Consequently, employment is larger under
advance notice requirements (when shirking behavior is allowed) than when firing taxes are introduced.
When the general equilibrium effects are considered, the drop in wages is virtually the same under firing taxes and advance notice requirements (with shirking
behavior). But this decrease in wages is not large enough
for firing taxes to increase employment. We see that
employment falls by 3.49 percent. On the contrary,
employment increases by 0.37 percent under the advance notice requirements (with shirking behavior).
Observe that the welfare costs of firing taxes are
much smaller than those of advance notice requirements (with shirking behavior): 0.56 percent instead
of 3.08 percent. The reason is that consumption drops
by the same amount in both cases, but employment
decreases more under firing taxes, allowing for a
larger amount of leisure.
We see, in table 3, that the welfare costs of firing
taxes are even smaller than those of advance notice
requirements when shirking behavior is not allowed
for: 0.56 percent versus 0.86 percent. The reason is
that when establishments receive a zero productivity
shock, workers remain employed an additional period to comply with the advance notice requirement. This leads
to a higher employment level and a lower amount of
leisure. It is interesting to note that if the advance notice requirements were waived from establishments
BOX 5

Firing taxes
Under firing taxes the Bellman equation of establishments becomes:

V (n, s )  max{sn H  wn  1 1 i 4{V (na ( s a ), sa ) 
na ( sa )

sa

w max[0, n  na ( sa )]}Q( s, s a )},
where V(n,s) is the present value, excluding current firing taxes, of an establishment with current
employment n and current shock s.
This equation, together with equations 4, 5, 6, and 7, defines an equilibrium with firing taxes rebated as lump-sum transfers.
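The firing-tax Bellman equation in box 5 can also be sketched with value function iteration. The shock grid and transition matrix below are illustrative toy numbers, not the article's calibration; the tax term follows the equation above, with n′(s′) chosen after s′ is realized:

```python
import numpy as np

# Toy parameters (w, gamma, i as calibrated; shock grid and Q illustrative).
w, gamma, i = 0.3297, 0.64, 0.01
s_grid = np.array([0.0, 1.0, 2.0, 4.0])
Q = np.array([[1.00, 0.00, 0.00, 0.00],
              [0.10, 0.80, 0.10, 0.00],
              [0.05, 0.10, 0.75, 0.10],
              [0.05, 0.00, 0.15, 0.80]])
n_grid = np.arange(41)
beta = 1.0 / (1.0 + i)
profit = s_grid[:, None] * n_grid[None, :] ** gamma - w * n_grid[None, :]

V = np.zeros((len(s_grid), len(n_grid)))
for _ in range(5000):
    V_new = np.empty_like(V)
    for j, n in enumerate(n_grid):
        # Firing tax owed next period: w per unit of employment reduction.
        tax = w * np.maximum(0.0, n - n_grid)        # tax[n'] = w*max(0, n - n')
        best = (V - tax[None, :]).max(axis=1)        # optimal n'(s') for each s'
        V_new[:, j] = profit[:, j] + beta * (Q @ best)
    if np.abs(V_new - V).max() < 1e-10:
        V = V_new
        break
    V = V_new
```

Unlike the advance notice problem, adjustment here is fully contingent on s′: a bad shock triggers immediate (taxed) firing rather than a one-period wait, which is exactly the employment difference between the two policies discussed in the text.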


that exit the market, the amount of employment h in
the “advance notice” column would be 99.40 instead
of 100.05 and the welfare cost of the policy would be
0.44 percent, a lower cost than that of firing taxes.

When advance notice requirements generate
shirking behavior, their effects can be considerably
larger. However, when the general equilibrium effects
are taken into account, advance notice requirements
actually have a positive effect on employment. This
effect is of the same sign as in Addison and Grosso
(1995), but the magnitude is much smaller.
In terms of welfare effects, I find that advance
notice requirements are quite costly—in fact even costlier than firing taxes. While firing taxes equal to three
months of wages reduce welfare by 0.56 percent, advance notice requirements lead to welfare costs that
range between 0.86 percent and 3.08 percent, depending on the amount of shirking behavior generated.
However, the large welfare cost of advance notice
requirements allowing for shirking behavior was calculated under the assumption that workers who shirk
do not obtain leisure. This is probably an unrealistic
assumption and, as such, we should interpret these results with caution. While the results in this article suggest that advance notice requirements can be extremely
costly, in order to provide a more definite answer they
should be analyzed in a model that explicitly considers the shirking decisions. A model based on efficiency wages may provide a suitable framework of analysis.

Conclusion
This article analyzes the effects of advance notice requirements in a general equilibrium model of
establishment-level dynamics of the type introduced
by Hopenhayn and Rogerson (1993). I find that when
advance notice requirements do not lead to shirking
behavior, the effects of advance notice requirements
are relatively small. Establishments do not tend to alter their employment levels considerably for the following reasons: a) next period’s productivity is likely
to be similar to current productivity (given the high
persistence of the shocks); b) employment can be
freely increased if a good shock occurs next period;
c) employment can be decreased after one period if a
bad shock occurs; and d) during the period of notice
the workers remain productive. In a partial equilibrium
framework, I find that advance notice requirements
reduce employment, but when I consider general
equilibrium effects, employment is not much affected.
The reason is that the advance notice requirements
lead to a substantial reduction in equilibrium wages,
which sustains the employment level.
APPENDIX: PARAMETERS

Prices and technology
i = 0.01     w = 0.3297     γ = 0.64

Productivity shocks
s0 = 0.00   s1 = 1.00   s2 = 1.32   s3 = 1.79   s4 = 2.35
s5 = 3.19   s6 = 4.19   s7 = 5.38   s8 = 7.30   s9 = 10.65

Distribution over initial productivity shocks
ψ1 = 9.995e–1   ψ2 = 2.3e–4   ψ3 = 1.6e–4   ψ4 = 6.8e–5
ψ0 = ψ5 = ψ6 = ψ7 = ψ8 = ψ9 = 0.0

Transition matrix Q (the entry in row s, column s′ is the probability of moving from shock s to shock s′; rows and columns are ordered s0–s9):

1.000  0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.000
0.087  0.848  0.065  0.000  0.000  0.000  0.000  0.000  0.000  0.000
0.005  0.084  0.879  0.032  0.000  0.000  0.000  0.000  0.000  0.000
0.005  0.000  0.086  0.847  0.062  0.000  0.000  0.000  0.000  0.000
0.005  0.000  0.000  0.088  0.877  0.031  0.000  0.000  0.000  0.000
0.005  0.000  0.000  0.000  0.090  0.846  0.059  0.000  0.000  0.000
0.005  0.000  0.000  0.000  0.000  0.092  0.808  0.095  0.000  0.000
0.005  0.000  0.000  0.000  0.000  0.000  0.094  0.873  0.028  0.000
0.005  0.000  0.000  0.000  0.000  0.000  0.000  0.096  0.896  0.004
0.005  0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.099  0.896
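A few mechanical checks on the calibrated transition matrix (my own, using the printed three-decimal entries): each row should sum to one up to rounding, the zero shock should be absorbing, and death should be reachable from every live shock, consistent with the exit assumptions in the text:

```python
import numpy as np

# The calibrated transition matrix from the appendix, rows = current shock.
Q = np.array([
    [1.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.087, 0.848, 0.065, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.005, 0.084, 0.879, 0.032, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.005, 0.000, 0.086, 0.847, 0.062, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.005, 0.000, 0.000, 0.088, 0.877, 0.031, 0.000, 0.000, 0.000, 0.000],
    [0.005, 0.000, 0.000, 0.000, 0.090, 0.846, 0.059, 0.000, 0.000, 0.000],
    [0.005, 0.000, 0.000, 0.000, 0.000, 0.092, 0.808, 0.095, 0.000, 0.000],
    [0.005, 0.000, 0.000, 0.000, 0.000, 0.000, 0.094, 0.873, 0.028, 0.000],
    [0.005, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.096, 0.896, 0.004],
    [0.005, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.099, 0.896],
])
row_sums = Q.sum(axis=1)   # ~1 for every row, up to rounding in the table
# Probability of being dead (shock s0) after 80 quarters, from each live shock:
prob_dead_20yr = np.linalg.matrix_power(Q, 80)[1:, 0]
```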


NOTES

1 For an extensive discussion of dismissal regulations, see Emerson (1988) and Piore (1986).

2 If the establishment could rehire workers that were given advance notice, then it would give advance notice to all of its workers and rehire them at will the following period, depending on the value of the establishment's individual productivity. Clearly, if this were allowed, advance notice requirements would have no effect.

3 Observe that variations in the productivity of the establishment determine its employment expansion and contraction over time.

4 A main difference with Veracierto (2001) is that that paper had a flexible form of capital as an alternative factor of production, while in this article labor is the only factor. Another difference is that in the former paper entry of establishments was endogenous, while here it is exogenous.

5 A Pareto optimal allocation maximizes the utility level of the representative agent within the set of feasible allocations.

6 A good part of the welfare cost of advance notice requirements when shirking behavior is allowed for is due to the assumption that workers that shirk do not enjoy leisure.

7 In particular, it corresponds to the economy without capital referred to in that paper as the "H-R economy."

REFERENCES

Addison, J., and J. Grosso, 1995, "Job security provisions and employment: Revised estimates," Centre for Labour Market and Social Research, working paper, No. 95-15.

Bentolila, S., and G. Bertola, 1990, "Firing costs and labor demand: How bad is eurosclerosis?," Review of Economic Studies, Vol. 57, No. 3, pp. 381–402.

Davis, S., and J. Haltiwanger, 1990, "Gross job creation and destruction: Microeconomic evidence and macroeconomic implications," in NBER Macroeconomics Annual, Vol. 5, O. Blanchard and S. Fischer (eds.), pp. 123–168.

Dunne, T., M. Roberts, and L. Samuelson, 1989, "The growth and failure of U.S. manufacturing plants," Quarterly Journal of Economics, Vol. 104, No. 4, pp. 671–698.

Emerson, M., 1988, "Regulation or deregulation of the labour market," European Economic Review, Vol. 32, No. 4, pp. 775–817.

Hopenhayn, H., and R. Rogerson, 1993, "Job turnover and policy evaluation: A general equilibrium analysis," Journal of Political Economy, Vol. 101, No. 5, pp. 915–938.

Lazear, E., 1990, "Job security provisions and employment," Quarterly Journal of Economics, Vol. 105, No. 3, pp. 699–726.

Mehra, R., and E. Prescott, 1985, "The equity premium: A puzzle," Journal of Monetary Economics, Vol. 15, No. 2, pp. 145–161.

Piore, M., 1986, "Perspectives on labor market flexibility," Industrial Relations, Vol. 25, No. 2, pp. 146–166.

Veracierto, M., 2001, "Employment flows, capital mobility, and policy analysis," International Economic Review, Vol. 42, No. 3, pp. 571–595.



When can we forecast inflation?
Jonas D. M. Fisher, Chin Te Liu, and Ruilin Zhou

Introduction and summary
The practice of forecasting inflation has generally been
considered an important input in monetary policymaking. Recently, this view has come under attack. In an
article that appeared in the Federal Reserve Bank of
Minneapolis’s Quarterly Review, Atkeson and Ohanian
(2001, hereafter A&O) argue that the likelihood of accurately predicting a change in inflation using modern
inflation forecasting models is no better than a coin
flip. They conclude that these forecasting models cannot be considered a useful guide for monetary policy.
In this article, we reexamine the findings that underlie
this conclusion. We show that it may be possible to forecast inflation over some horizons and in some periods.
A&O study the properties of standard Phillips-curve-based inflation forecasting models. These models
relate changes in inflation to past values of the unemployment gap (the difference between unemployment
and a measure of unemployment believed to be associated with non-accelerating inflation, the so-called
NAIRU [non-accelerating inflation rate of unemployment]), past changes in inflation, and perhaps other
variables believed to be useful indicators of inflation.1
Recently, Stock and Watson (1999, hereafter S&W)
proposed a generalized version of the Phillips curve
and argued that their generalization is superior to these
standard models as a forecasting tool. Focusing on the
one-year-ahead forecast horizon, A&O argue that unemployment-based Phillips curve models and S&W
generalized Phillips curve models can do no better than
a “naive model,” which says that inflation over the
coming year is expected to be the same as inflation
over the past year. This analysis focuses on the ability
to forecast the magnitude of inflation in the Consumer
Price Index (total CPI), the CPI less food and energy
components (core CPI), and the personal consumption
expenditures deflator (total PCE) over the sample period 1985 to 2000.


To gain some insight into these findings, figure 1,
panel A displays 12-month changes in 12-month core
CPI from 1967 to 2000. The vertical lines in this figure
(in 1977, 1985, and 1993) divide the sample period
into four periods. It is immediately clear that in the two
later periods, that is, the sample period considered by
A&O, the volatility of changes in inflation was much
lower than in the two earlier periods. This change in
the behavior of inflation seems to be coincident with
the change in monetary policy regime that is generally
thought to have taken effect in the mid-1980s.2 The
lower volatility and the possibility of a changed monetary policy regime in the later two sample periods may
favor the naive model studied by A&O. Figure 1, panel
B shows that PCE less food and energy components
(core PCE) behaves in a similar fashion.
These changes in the behavior of inflation raise
the question of whether A&O’s findings are due to
special features of the data in the sample period they
chose to focus on. To address this possibility, we extend the A&O analysis by studying three distinct sample periods, 1977–84, 1985–92, and 1993–2000. In
addition, we add core PCE inflation to the list of inflation measures and we consider a broader class of
Stock–Watson-type models. A&O focus on the one-year forecast horizon. Given the lags inherent in the
effects of monetary policy actions, it is reasonable to
consider whether their results extend to longer horizons. Consequently, we analyze both the one-year and
two-year forecast horizons.
Our findings confirm the A&O results for the
1985–2000 period, but not for 1977–84. The Phillips
curve models perform poorly in both the 1985–92
and 1993–2000 periods when forecasting core CPI.
Jonas D. M. Fisher is a senior economist, Chin Te Liu
is an associate economist, and Ruilin Zhou is a senior
economist at the Federal Reserve Bank of Chicago.

1Q/2002, Economic Perspectives

FIGURE 1

12-month changes in 12-month core inflation
A. Core CPI
B. Core PCE
[Line charts of 12-month changes in 12-month core CPI and core PCE inflation, 1967–2001, in percentage points; vertical lines mark 1977, 1985, and 1993.]
Source: Haver Analytics, Inc., 2001, U.S. economic statistics database, July.

However, when forecasting core PCE, these models
improve significantly relative to the naive model in
the 1993–2000 period. While the Phillips curve models
do poorly for the one-year-ahead forecast horizon,
we do find evidence in favor of the Phillips curve
models for the two-year-ahead forecast horizon, at
least with respect to core inflation. Taken together,
these findings are consistent with our suspicion that
periods of low inflation volatility and periods after
regime shifts favor the naive model.


The relatively poor performance of the Phillips
curve models reflects their inability to forecast the
magnitude of inflation accurately. Ultimately, the way
we assess our forecasting models should reflect the
usefulness of the forecasts in policymaking. In our
view, policymakers understand that precise forecasts
of inflation are fraught with error. As a result, they
pay considerable attention to the direction of change
of future inflation. For this reason, we do not view
measures of forecast performance used by A&O and


many others that emphasize magnitude as the only
criterion for evaluating forecasting models.
Consequently, we consider a complementary
approach to evaluating forecasting models that emphasizes the forecasted direction of change of future
inflation. Under the assumption that forecast errors are
symmetrically distributed about the forecast, the naive
model provides no information about future inflation;
it is no better than a coin flip at predicting the future
direction of inflation. Under the same symmetry assumption, the Phillips curve models predict that inflation will change in the direction indicated by comparing
the point forecast with the current level of inflation.
We analyze the ability of our Phillips curve models to
forecast the direction of inflation and find that they
do quite well. Over the entire 1977–2000 period, the
Phillips curve models are able to forecast the correct
direction of inflation one year ahead between 60 percent and 70 percent of the time. For the same period,
the models forecast the correct direction two years
ahead more than 70 percent of the time.
These results suggest that the Phillips curve models
forecast the direction of inflation changes relatively
well across measures of inflation and across time. But
when it comes to forecasting the magnitude of inflation changes, there may be times, such as after a change
in monetary policy regime, when the naive model may
do better than the Phillips curve models. The last question we address is whether it is possible to improve
on the forecasts of the naive model in difficult times
by using the directional information contained in the
Phillips curve models. We show that it is possible to
improve on the naive model, although the improvement is modest.
One interpretation of our findings is that it is possible to forecast inflation accurately during some periods,
but not others. We argue that the periods in which it is
difficult to forecast inflation are associated with changes
in monetary policy regime, broadly interpreted. This
implies that if we are in a stable monetary regime and
expect the regime to persist, then it may make sense
for policymakers to pay attention to inflation forecasts.
In the next section, we outline the different forecasting models that we consider in our analysis. Next,
we discuss the standard methodology we implement
to evaluate the ability of these models to forecast the
magnitude of future inflation. We then discuss our results for forecasting magnitude and present our analysis of forecasting directional changes in inflation. We
describe our procedure for combining the naive model
with our directional forecasts and how well this procedure performs over our sample period. Finally, we discuss some possible policy implications of our findings.


Statistical models of inflation
The standard approach to forecasting inflation is
rooted in ideas associated with the Phillips curve, the
statistical relationship between changes in inflation and
measures of overall economic activity. The generalized version of the Phillips curve proposed by S&W
involves variables that summarize the information in
a large number of inflation indicators. S&W argue that
their generalization is superior to conventional Phillips
curves as a forecasting tool. A&O argue that neither
the conventional nor the generalized Phillips curve
framework can do better than a simple forecast (their
naive model) that says inflation over the coming year
is expected to be the same as inflation over the past
year. We reexamine this claim using a broader class
of S&W-type models than considered by A&O. Now
we describe in detail the models we study.
The naive model
The benchmark for evaluating our models is the
naive model described by A&O. The starting point
for the naive model is the martingale hypothesis, which
states that the expected value of inflation over the next
12 months is equal to inflation over the previous 12
months. Specifically,
1)  $E_t \pi^{12}_{t+12} = \pi^{12}_t$,

where the 12-month inflation rate, $\pi^{12}_t$, is defined as
the 12-month change in the natural logarithm of the
price index $p_t$,

$\pi^{12}_t = \ln p_t - \ln p_{t-12}$,
and $E_t$ denotes the expectation conditional on date t
information. The naive model equates the forecast of
inflation over the next 12 months, $\hat{\pi}^{12}_{t+12}$, with its conditional expectation. That is,

2)  $\hat{\pi}^{12}_{t+12} = \pi^{12}_t$.

Notice that if the martingale hypothesis holds, then
the expected value of 12-month inflation in the second
year following date t must also equal inflation over
the 12 months prior to date t, that is,

$E_t \pi^{12}_{t+24} = \pi^{12}_t$.

Similar to the 12-month forecast, the naive model
equates the forecast of inflation over the next 24 months,
$\hat{\pi}^{12}_{t+24}$, with its conditional expectation:


3)  $\hat{\pi}^{12}_{t+24} = \pi^{12}_t$.
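Concretely, the naive forecasts require nothing more than the trailing 12-month change in the log price index. A minimal sketch in Python, using a hypothetical index series rather than the data used in the article:

```python
import numpy as np

# Hypothetical monthly log price index: steady 0.2 percent monthly inflation.
log_p = np.cumsum(np.full(48, 0.002))

def twelve_month_inflation(log_p, t):
    """pi12_t = ln p_t - ln p_{t-12}, the 12-month log inflation rate."""
    return log_p[t] - log_p[t - 12]

t = 36
pi12_t = twelve_month_inflation(log_p, t)

# Equations 2 and 3: both the 12-month-ahead and the 24-month-ahead naive
# forecasts simply equal inflation over the 12 months up to date t.
forecast_12_ahead = pi12_t
forecast_24_ahead = pi12_t
```

The forecast is a constant function of current conditions, which is why, under symmetry, it carries no directional information.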

Generalized Phillips curve models
The simplest alternative to the naive model postulates that changes in 12-month inflation only depend on recent changes in one-month inflation. That
is, for J = 12, 24,
4)  $\pi^{12}_{t+J} - \pi^{12}_t = \alpha + \beta(L)(\pi_t - \pi_{t-1}) + \varepsilon_{t+J}$,

where the one-month inflation rate, $\pi_t$, is defined by

$\pi_t = \ln p_t - \ln p_{t-1}$.

In addition, $\varepsilon_t$ is an error term, and $\beta(L)$ specifies the
number of lags in the equation.3 Below, we refer to
this as model 1.
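A sketch of how a regression of this form could be estimated by OLS, using simulated data and an arbitrary fixed lag length of three (the article instead selects the lag length by BIC; the data here are illustrative only):

```python
import numpy as np

# Hypothetical one-month inflation series standing in for the actual data.
np.random.seed(1)
T = 200
pi = 0.002 + 0.001 * np.random.randn(T)

# 12-month inflation: pi12_t = sum of the last 12 one-month rates.
pi12 = np.full(T, np.nan)
for t in range(11, T):
    pi12[t] = pi[t - 11:t + 1].sum()

# Equation 4 with J = 12 and three lags in b(L), for illustration.
J, n_lags = 12, 3
rows_y, rows_x = [], []
for t in range(n_lags + 11, T - J):
    rows_y.append(pi12[t + J] - pi12[t])                       # LHS
    rows_x.append([1.0] + [pi[t - j] - pi[t - j - 1]           # constant +
                           for j in range(n_lags)])            # lagged changes
y, X = np.array(rows_y), np.array(rows_x)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficient estimates
fitted = X @ beta
```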
The next model we consider is based on the
Chicago Fed National Activity Index (CFNAI). This
index is a weighted average of 85 monthly indicators
of real economic activity. The CFNAI provides a single, summary measure of a common factor in these
national economic data. As such, historical movements
in the CFNAI closely track periods of economic expansion and contraction. The index is closely related
to the “Activity Index” studied in S&W.4 Our model
based on this index postulates that changes in 12-month
inflation, in addition to recent changes in inflation,
also depend on current and past values of the CFNAI.
That is, for J = 12, 24,
5)  $\pi^{12}_{t+J} - \pi^{12}_t = \alpha + \beta(L)(\pi_t - \pi_{t-1}) + \gamma(L)a_t + \varepsilon_{t+J}$,

where $a_t$ denotes the value of the CFNAI at date t, and
$\beta(L)$ and $\gamma(L)$ specify the number of lags in inflation
and the index, respectively, included in the equation.
We refer to this as model 2.
The remaining models we consider are based on
the diffusion index methodology described in S&W.
This methodology uses a small number of unobserved
indexes that explain the movements in a large number
of macroeconomic time series. Our implementation of
the S&W methodology uses 154 data series, including
data measuring production, labor market status, the
strength of the household sector, inventories, sales,
orders, financial market, money supply, and price
data. The procedure that obtains the indexes processes
the information in the 154 series so that each index is
a weighted average of the series and each index is
statistically independent of the others. We consider six
indexes, $d_{1t}, d_{2t}, \ldots, d_{6t}$, which are ranked in descending
order in terms of the amount of information embedded
in them.
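Diffusion indexes of this kind are commonly computed as principal components of the standardized data panel. A sketch under that assumption, with a small simulated panel standing in for the article's 154 series:

```python
import numpy as np

np.random.seed(2)
T, N = 120, 10                         # months x indicators (article: 154 series)
common = np.random.randn(T, 2)         # two latent common factors
loadings = np.random.randn(2, N)
panel = common @ loadings + 0.5 * np.random.randn(T, N)

# Standardize each series, then take principal components via the SVD.
Z = (panel - panel.mean(0)) / panel.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
d = U[:, :6] * S[:6]                   # first six indexes

# The indexes are mutually orthogonal and ordered by variance explained,
# so d[:, 0] carries the most information, d[:, 5] the least.
cross = d.T @ d                        # diagonal up to rounding error
```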


Our diffusion index models postulate that changes
in 12-month inflation depend on recent changes in inflation, and current and past values of a number of
diffusion indexes. That is, for J = 12, 24,
6)  $\pi^{12}_{t+J} - \pi^{12}_t = \alpha + \beta(L)(\pi_t - \pi_{t-1}) + \sum_{i=1}^{K} \theta_i(L)\, d_{it} + \varepsilon_{t+J}$,

where K = 1, 2, ..., 6, and $\beta(L)$ and $\theta_i(L)$ specify the
number of lags in inflation and diffusion index i, respectively, included in the equation. As more indexes
are included in the equation, more information about
the 154 series is incorporated in the forecast. We refer
to these as models 3, 4, ..., 8.
For all these models, we equate the forecasts of
inflation with the conditional expectation implied by
the model. That is, for J = 12, 24,

$\hat{\pi}^{12}_{t+J} = E_t \pi^{12}_{t+J}$.
We estimate all these models by ordinary least
squares (OLS). In each case, we use the Bayes Information Criterion (BIC) to select the number of lags
of inflation, the CFNAI, and the diffusion indexes. Intuitively, BIC selects the number of lags to improve the
fit of the model without increasing by too much the
sampling error in the lag coefficients. We allowed for
the possibility that lags could vary from 0 to 11.
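The lag-selection step can be sketched as follows. The article does not spell out its exact BIC formula, so the variant BIC = ln(SSR/T) + k·ln(T)/T is assumed here, applied to simulated autoregressive data:

```python
import numpy as np

np.random.seed(3)
T = 300
x = np.zeros(T)
for t in range(2, T):                  # simulate AR(2) data
    x[t] = 0.5 * x[t-1] - 0.3 * x[t-2] + np.random.randn()

def bic_for_lags(x, p, max_lag=11):
    """Fit an AR(p) by OLS on a common sample and return its BIC value."""
    T_eff = len(x) - max_lag           # same sample for every p, so BICs compare
    y = x[max_lag:]
    X = np.column_stack([np.ones(T_eff)] +
                        [x[max_lag - j:len(x) - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ssr = ((y - X @ beta) ** 2).sum()
    k = p + 1                          # coefficients penalized, incl. constant
    return np.log(ssr / T_eff) + k * np.log(T_eff) / T_eff

# Evaluate lags 0 through 11, as in the article, and keep the minimizer.
bics = {p: bic_for_lags(x, p) for p in range(12)}
best_p = min(bics, key=bics.get)
```

The penalty term grows with the number of lags, so extra lags are kept only when they reduce the sum of squared residuals enough to pay for the added sampling error.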
In real time, it is difficult to choose the appropriate
model to use to form a forecast. To address this issue,
we consider a forecasting model in which the forecast
of inflation at any given date is the median of the forecasts of models 1 through 8 at that date.5 This procedure
has the advantage that it can be applied in real time. We
call this the median model. Stock and Watson (2001)
use a similar model. For convenience, we refer to the
collection of models comprising models 1 through 8
plus the median model as Phillips curve models.
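The median model itself is just the cross-sectional median of the individual model forecasts at each date, for example (hypothetical forecast values):

```python
import numpy as np

# Hypothetical forecasts from models 1-8 at three dates (rows = dates).
forecasts = np.array([
    [2.1, 2.4, 1.9, 2.0, 2.6, 2.2, 2.3, 2.5],
    [1.8, 1.7, 2.0, 1.9, 1.6, 2.1, 1.8, 1.9],
    [3.0, 2.8, 2.9, 3.2, 3.1, 2.7, 3.0, 2.9],
])

# The median forecast at each date, usable in real time because it never
# requires knowing which individual model will perform best.
median_forecast = np.median(forecasts, axis=1)
```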
Model evaluation methodology
We evaluate the accuracy of the generalized
Phillips curve models by comparing them with the
naive model. We do this through various simulated
out-of-sample forecasting exercises. These exercises
involve constructing inflation forecasts that a model
would have produced had it been used historically to
generate forecasts of inflation. Two drawbacks of our
approach, which also affect A&O and S&W, are that
we do not use real-time data in our forecasts and we
assume all the data are available up to the forecasting
date. On a given date, particular data series may not


yet be published. Also, many data series are revised
after the initial release date. In our forecasting exercises,
we calculate the CFNAI and diffusion indexes assuming all the series underlying the indexes are available
up to the forecast date.6 In practice, this is never the
case and we must fill in missing data with estimates.
Since we do not use real-time data to construct the
CFNAI and diffusion indexes, we also abstract from
problems associated with data revisions. We suspect
that these drawbacks lead us to overstate the effectiveness of our CFNAI and diffusion index models. Data
revision is also a problem for the lagged inflation and
naive PCE models, since this price index is subject to
revisions. It does not affect the CPI versions of these
models, since the CPI is never revised.
To assess the accuracy of our various models, we
first construct a measure of the average magnitude of
the forecasting error. The measure we use is root mean
squared error (RMSE). The RMSE for any forecast is
the square root of the arithmetic average of the squared
differences between the actual inflation rate and the
predicted inflation rate over the period for which simulated forecasts are constructed. For J = 12, 24,

7)  $\mathrm{RMSE} = \left[\frac{1}{T}\sum_{t=1}^{T}\left(\pi^{12}_{t+J} - \hat{\pi}^{12}_{t+J}\right)^2\right]^{1/2}$,

where T denotes the number of forecasts made over
the period under consideration. We compare the forecast of a given Phillips curve model with that of the
naive model by forming the ratio of the RMSE for
the Phillips curve model to the RMSE for the naive
model. We call this ratio the relative RMSE.
A ratio less than 1 thus indicates that the Phillips
curve model is more accurate than the naive model.
Subtracting 1 from the ratio and multiplying the result
by 100 gives the percentage difference in RMSE between the two models. The RMSE might be strongly
affected by one or two large outliers. We reworked our
analysis using a measure of forecasting error that places
equal weight on all forecasting errors and found that
our findings are robust.7 The RMSE statistics are subject to sampling variability and, consequently, are measured with error. In principle, we could use Monte Carlo
methods to assess the magnitude of this error. However, this would require specifying an underlying data-generating process for all the variables in our analysis
(more than 150 of them). One should keep this sampling error in mind when interpreting the results below.
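For concreteness, equation 7 and the relative RMSE can be computed as follows; the inflation numbers here are hypothetical, not values from table 1:

```python
import numpy as np

# Hypothetical actual inflation and two sets of out-of-sample forecasts.
actual   = np.array([2.0, 2.5, 3.0, 2.8, 2.2])
phillips = np.array([2.2, 2.4, 2.7, 2.9, 2.1])
naive    = np.array([1.8, 2.0, 2.5, 3.0, 2.6])

def rmse(actual, forecast):
    """Root mean squared error, as in equation 7."""
    return np.sqrt(np.mean((actual - forecast) ** 2))

# Ratio below 1 means the Phillips curve model beats the naive model;
# 100 * (relative_rmse - 1) is the percentage difference in RMSE.
relative_rmse = rmse(actual, phillips) / rmse(actual, naive)
```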
The sample period of our analysis begins in 1967.
We chose this date because it is the beginning date for
the data used to construct the CFNAI and the diffusion
indexes.8 We estimate the forecasting equations using


rolling regressions, a method that keeps the number
of observations in the regression constant across forecasts. Since it excludes observations from the distant
past, this approach can in principle accommodate the
possibility of structural change in the data-generating
process. We choose this sample length for the rolling
regression procedure to be 15 years.9
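A rolling-regression forecasting loop might be organized as follows. The data are simulated and a single contemporaneous regressor is used for brevity; in the article the regressors are lagged inflation changes and activity indexes:

```python
import numpy as np

np.random.seed(4)
T = 420                                  # 35 years of monthly observations
window = 180                             # 15-year rolling estimation window
y = np.random.randn(T)                   # stand-in for changes in inflation
x = 0.6 * y + 0.4 * np.random.randn(T)   # stand-in predictor

forecasts = []
for end in range(window, T):
    # Estimate by OLS on the most recent `window` observations only; as the
    # window rolls forward, observations from the distant past drop out,
    # which accommodates possible structural change.
    ys, xs = y[end - window:end], x[end - window:end]
    X = np.column_stack([np.ones(window), xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    forecasts.append(beta[0] + beta[1] * x[end])
forecasts = np.array(forecasts)
```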
Finally, we consider three distinct periods over
which to evaluate the forecasts of the models: 1977–
84, 1985–92, and 1993–2000. To compare our results
with those in A&O, we also evaluate the overall performance of the models over the 1985–2000 period.
To complete the analysis, we study the performance
of the models over the entire 1977–2000 period as well.
The 1977–84 period is one of high inflation volatility
and general economic turbulence. The 1985–92 period is generally associated with a new monetary policy regime. This period also includes a mild recession.
The 1993–2000 period witnessed uninterrupted economic expansion, stable monetary policy, and declining inflation.
Atkeson and Ohanian revisited
We estimated the Phillips curve models for the five
sample periods and computed the associated RMSEs
and the relative RMSEs. For models 1–8, we do not
report all the results, just the results for the best models.
We do this to demonstrate the potential forecasting
capacity of these models. A&O report the performance
of the best and worst models they look at across different lag lengths. S&W use BIC to select lag length
and report the performance of all their models. All
of these approaches suffer from the deficiency that
in real time one may not know which is the best performing model. Our median model overcomes this
deficiency. Table 1 displays the RMSE statistics of
the best and median 12-month-ahead and 24-month-ahead forecasts for the five sample periods and four
measures of inflation. The table also identifies the
best performing models. The numbers in bold in the
table indicate cases in which the naive model outperforms the Phillips curve models. Finally, for each
case we report the RMSE for the naive model.
Regarding the 12-month-ahead forecasts in table 1,
our findings are as follows. First, over the 1985–2000
period, essentially all the relative RMSEs are at least
as large as 1. That is, the naive model outperforms all
the Phillips curve models. This finding confirms the
result reported in A&O that Phillips curve models have
not performed well over the last 15 years. Second,
while inflation forecasting appears to have been quite
difficult over the last 15 years, for core PCE it has
become a little easier in the most recent forecasting


TABLE 1

Forecasting the magnitude of inflation: Phillips curve models vs. naive model

                   --------- 12 months ahead ----------   --------- 24 months ahead ----------
                   Naive    Best                Median    Naive    Best                Median
                   model    performing  Rel.    rel.      model    performing  Rel.    rel.
Sample period      RMSE     model       RMSE    RMSE      RMSE     model       RMSE    RMSE

Core CPI
1977:01–1984:12    2.360       2        0.768   0.885     3.802       4        0.930   0.868
1985:01–1992:12    0.667       1        0.985   1.290     0.780       1        0.894   1.615
1993:01–2000:12    0.341       2        1.110   1.181     0.705       2        0.765   0.768

1985:01–2000:12    0.530       1        1.016   1.268     0.744       1        0.903   1.304
1977:01–2000:12    1.430       2        0.891   0.927     2.278       5        1.000   0.906

Core PCE
1977:01–1984:12    1.238       2        0.954   1.033     2.100       5        0.887   0.765
1985:01–1992:12    0.481       1        1.409   1.412     0.617       1        1.221   1.197
1993:01–2000:12    0.514       4        0.750   0.749     0.802       4        0.532   0.542

1985:01–2000:12    0.498       1        1.188   1.109     0.716       6        0.933   0.847
1977:01–2000:12    0.822       2        1.048   1.052     1.346       5        0.902   0.781

Total CPI
1977:01–1984:12    2.674       2        0.687   0.765     4.525       4        0.744   0.696
1985:01–1992:12    1.489       1        0.982   0.982     1.695       1        0.981   1.245
1993:01–2000:12    0.716       1        1.085   1.193     1.032       1        1.035   1.002

1985:01–2000:12    1.168       1        1.002   1.025     1.403       1        0.996   1.184
1977:01–2000:12    1.815       2        0.865   0.845     2.853       6        0.954   0.795

Total PCE
1977:01–1984:12    1.705       2        0.841   0.953     2.977       6        0.751   0.686
1985:01–1992:12    1.025       1        0.978   1.012     1.102       1        1.029   1.279
1993:01–2000:12    0.633       4        0.953   0.960     0.924       6        0.773   0.772

1985:01–2000:12    0.852       1        1.003   0.998     1.017       1        1.020   1.098
1977:01–2000:12    1.205       2        0.974   0.968     1.909       6        0.909   0.781

Notes: Fifteen-year rolling regression. RMSE is root mean squared error. Numbers in bold indicate cases in which the naive model
outperforms the Phillips curve models.

period. In particular, the forecast by the best model
and the median forecast have RMSEs 25 percent lower than the naive model over the 1993–2000 period.
Note, however, that this pattern is not true for core
CPI. Third, the Phillips curve models are generally
better than the naive model in the 1977–84 period.
This result is uniform across inflation measures, except for the median forecast for core PCE. In some
cases the improvement is quite dramatic. For example, the best CFNAI model is more than 30 percent
better than the naive model when forecasting total
CPI. Even the median forecast is about 24 percent
better than the naive model.
The results for the 24-month-ahead forecasts in
table 1 suggest that, over longer horizons than 12
months, the Phillips curve models may more consistently outperform the naive model. In particular, the
best models at forecasting core inflation outperform
the naive model in the 1985–2000 period. However,
the gains are not dramatic. For core CPI, the gain is
roughly 10 percent, and for core PCE it is about 7


percent. The median forecasts for core CPI over this
period fare worse, but they are better for core PCE
with a gain of 15 percent over the naive model. In
the recent 1993–2000 period, the gains over the naive
model are more substantial. The best models improve
relative to the naive model by 24 percent for core CPI
and 47 percent for core PCE. We see similar gains
for the median forecasts. Finally, there are across the
board gains using Phillips curve models to forecast
24 months ahead for the 1977–84 period.
We can summarize these findings as follows. First,
the naive model does poorly in the 1977–84 period
and relatively well in the 1985–92 period, forecasting
12 months ahead. Second, the naive model does not
do well forecasting PCE inflation in the recent 1993–
2000 period. Finally, the naive model does better forecasting 12 months ahead than 24 months ahead.
We can attribute the first finding to an apparent
structural change in the early 1980s and the consequent
decline in inflation volatility in the post-1984 period
compared with the previous period. This decline in


volatility is evident in the pattern of RMSEs for the
naive model in table 1 (also see figure 1).10 Given that
the naive model predicts no change in inflation, it
should do better in a period of low inflation volatility
than in a period of high volatility. It is unclear how
the performance of the Phillips curve models is affected by inflation volatility. Nevertheless, we suspect
that changes in inflation volatility are a contributing
factor to the poor performance of the naive model in
the 1977–84 period and its significant improvement
in the most recent 15 years. Another factor that probably plays an important part in explaining our first
finding is that forecasting models do relatively well
in a stable environment. If the structure of the economy
changes, then regression equations tend to forecast with
more error. We suspect the change in structure in the
early 1980s has a lot to do with a change in monetary
policy regime around that time. We think changes in volatility and structural stability may explain the second
finding as well. In particular, it appears that there was
a further decline in core CPI volatility in the 1993–
2000 period, which is not matched by core PCE.
We think one possible explanation of the improved performance of the Phillips curve models at
the 24-month forecast horizon has to do with the
sluggish response of the economy to monetary policy
actions. It is generally understood that economic activity and inflation respond with a considerable lag to
changes in monetary policy, and that inflation is more
sluggish in its response than economic activity. If
this is true then there may be less information about
future inflation in the 12-month-ahead forecasts than
in the 24-month-ahead forecasts. Note that as the forecast horizon is increased, forecasting performance in
terms of RMSE generically worsens. We can see this
by comparing the RMSEs of 12-month-ahead and
24-month-ahead forecasts of the naive model in table 1.
Evidently, the forecast errors for the Phillips curve
models deteriorate at a slower rate than the forecast
errors for the naive model.
Forecasting direction
In the previous section we used the RMSE criterion to evaluate the models. This measure emphasizes the ability of a forecasting model to predict the
magnitude of inflation. In this section, we consider a
complementary approach to evaluating forecasting
models, which emphasizes the forecasted direction of
change of future inflation.
What do the models we have described have to say
about direction of change of inflation? First, consider
the naive model. Strictly speaking, according to equations 2 and 3, this model always predicts no change


in inflation. In principle the martingale hypothesis,
equation 1, on which the naive model is based, could
be used to make forecasts about direction. Given the
conditional distribution of inflation 12 months and
24 months ahead, we could assess the probability of
an increase or decrease in inflation over these horizons
and use this to make predictions about the direction
of change. If this distribution is symmetric around the
conditional mean, then the martingale hypothesis would
suggest that the likelihood of an increase in inflation
is always 50 percent. If the distribution is skewed, the
odds of inflation changing in a particular direction
would be better than a coin flip. The martingale hypothesis does not provide any information about the
nature of the conditional distribution.
Deriving predictions about the direction of inflation changes from a Phillips curve model is more
straightforward. The main difference from the naive
model is that the conditional expectation of inflation
12 months and 24 months ahead is not constrained to
equal current inflation. Consequently, we can infer the
direction of change by making minimal assumptions
about the distribution of the error terms in equations
4–6. Specifically, if these distributions are symmetric,
then the direction of change is given by the sign of the
difference between the conditional forecast and the
current value of inflation.
Now we analyze the ability of our models to
forecast direction. We assume the forecast errors are
symmetrically distributed. Therefore, the naive model
predicts inflation increases with probability 50 percent.
We evaluate our Phillips curve models by assessing
how well they can forecast direction relative to this
baseline. Specifically, for a given Phillips curve model,
let $\hat{D}^{12}_{t+J}$ be the predicted direction of change in inflation J periods ahead. We define $\hat{D}^{12}_{t+J}$ as follows
for J = 12, 24:

8)  $\hat{D}^{12}_{t+J} = \begin{cases} \phantom{-}1 & \text{if } E_t \pi^{12}_{t+J} > \pi^{12}_t \\ -1 & \text{otherwise,} \end{cases}$

where $\hat{D}^{12}_{t+J} = 1$ indicates a forecasted increase in
inflation and $\hat{D}^{12}_{t+J} = -1$ indicates a decrease. Actual
changes in inflation are defined analogously. Let
$D^{12}_{t+J}$ be the actual direction of change in inflation
J periods ahead, for J = 12, 24,

$D^{12}_{t+J} = \begin{cases} \phantom{-}1 & \text{if } \pi^{12}_{t+J} > \pi^{12}_t \\ -1 & \text{otherwise.} \end{cases}$

We measure the directional change performance of
a model by measuring the percentage of the directional


TABLE 2

Forecasting the direction of inflation changes

                   ----- 12 months ahead -----   ----- 24 months ahead -----
                   Best                Median    Best                Median
                   performing          PDPC      performing          PDPC
Sample period      model       PDPC              model       PDPC

Core CPI
1977:01–1984:12       3        75.0    71.9         3        91.7    82.3
1985:01–1992:12       7        62.5    59.4         6        66.7    63.5
1993:01–2000:12       4        78.1    80.2         1        85.4    78.1

1985:01–2000:12       7        70.3    69.8         1        74.5    70.8
1977:01–2000:12       7        69.1    70.5         3        75.0    74.7

Core PCE
1977:01–1984:12       2        79.2    69.8         2        90.6    87.5
1985:01–1992:12       1        61.5    42.7         6        61.5    52.1
1993:01–2000:12       8        69.8    69.8         1        90.6    82.3

1985:01–2000:12       1        64.1    56.3         6        70.8    67.2
1977:01–2000:12       2        67.0    60.8         2        72.2    74.0

Total CPI
1977:01–1984:12       2        86.5    71.9         4        92.7    89.6
1985:01–1992:12       8        60.4    58.3         6        76.0    72.9
1993:01–2000:12       4        60.4    57.3         3        77.1    74.0

1985:01–2000:12       5        59.4    57.8         5        73.4    73.4
1977:01–2000:12       2        62.8    62.5         4        78.5    78.8

Total PCE
1977:01–1984:12       3        89.6    77.1         4        94.8    93.8
1985:01–1992:12       1        56.3    52.1         4        68.8    62.5
1993:01–2000:12       5        71.9    67.7         5        80.2    76.0

1985:01–2000:12       7        62.0    59.9         5        74.0    69.3
1977:01–2000:12       7        65.6    65.6         5        80.2    77.4

Notes: Fifteen-year rolling regression. PDPC indicates percentage of directional predictions that are correct.
Numbers in bold indicate failure with respect to the naive model.

predictions that are correct (PDPC) in a particular
sample period. This percentage is defined as (for
J = 12, 24),

$\mathrm{PDPC} = \frac{1}{T}\sum_{t=1}^{T} I\{\hat{D}^{12}_{t+J} = D^{12}_{t+J}\}$,

where I takes the value 1 when its argument is true
(that is, $\hat{D}^{12}_{t+J} = D^{12}_{t+J}$), and 0 otherwise.
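Equation 8 and the PDPC statistic can be computed directly, for example (hypothetical inflation values):

```python
import numpy as np

# Hypothetical current inflation, model forecasts, and realized outcomes.
pi12_now   = np.array([2.0, 2.0, 3.0, 2.5, 1.5])
pi12_hat   = np.array([2.4, 1.7, 3.2, 2.3, 1.9])   # model forecast of pi12_{t+J}
pi12_later = np.array([2.6, 2.1, 3.5, 2.2, 1.8])   # realized pi12_{t+J}

# Equation 8: predicted and actual directions of change (+1 up, -1 down).
d_hat    = np.where(pi12_hat > pi12_now, 1, -1)
d_actual = np.where(pi12_later > pi12_now, 1, -1)

# PDPC: percentage of directional predictions that are correct.
pdpc = 100.0 * np.mean(d_hat == d_actual)
```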
We used our estimates of the Phillips curve models computed for the RMSE comparisons in the previous section to make predictions about the direction
of change of inflation according to equation 8. We
report the findings for the best Phillips curve models
and for the median model. Table 2 displays the 12- and
24-month-ahead directional predictions for the five
sample periods and four measures of inflation. The
table also identifies the best performing models.
Numbers in bold indicate failure with respect to the
naive model.


Our findings can be summarized as follows. It is
immediately clear from table 2 that the Phillips curve
models correctly predict the direction of change more than 50 percent of the
time for both 12-month and 24-month horizons in all
but one case. Similar to their performance in terms of
RMSE, these models are typically best at predicting
directional change during the 1977–84 period and
worst in the 1985–92 period. Interestingly, the best
models at predicting directional changes are not the
same as the best models in terms of RMSE. For example, model 4 (the model that includes $d_{1t}$ and $d_{2t}$)
provides the best 12-month-ahead forecasts of directional changes of core CPI over the 1993–2000 period.
In terms of RMSE, model 2 provides the best forecasts
over this sample period. Moreover, it is possible for a
model to do well on directional changes while underperforming the naive model in terms of magnitude.
In the example just given, the best directional change
model is correct more than 78 percent of the time, but
the best RMSE models in the corresponding period


FIGURE 2

Median model directional predictions of 12-month changes in 12-month core inflation
A. Core CPI—70.5% correct
B. Core PCE—60.8% correct
[Bar charts of actual 12-month changes in 12-month core inflation, 1977–2001, with bars marked according to whether the median model's directional prediction was correct or incorrect.]

are worse than the naive model. Finally, the 24-month-ahead directional change forecasts perform better
than the 12-month-ahead forecasts.
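As a concrete illustration of the tally behind table 2, the share of correct directional calls can be computed as below (a minimal sketch with made-up numbers, not the authors' code; the directional signal of equation 8 is simply the sign of the predicted change, and 50 percent is the coin-flip bar to beat):

```python
def directional_hit_rate(predicted_changes, actual_changes):
    """Share of periods in which the predicted sign of the change in
    12-month inflation matches the realized sign."""
    hits = sum(1 for p, a in zip(predicted_changes, actual_changes)
               if (p > 0) == (a > 0))
    return hits / len(actual_changes)

# Hypothetical predicted and realized changes in 12-month inflation
predicted = [0.4, -0.2, 0.1, -0.5, 0.3]
realized  = [0.6, -0.1, -0.2, -0.4, 0.2]
print(directional_hit_rate(predicted, realized))  # 0.8: four of five signs agree
```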
Figures 2 and 3 provide information on when our directional change forecasts are correct. The bars indicate actual changes in core CPI and PCE inflation and the green bars indicate the correct directional predictions of the median model over the 1977–2000 period. The main lesson from these figures is that much of the success of the directional forecasts derives from periods in which there is a consistent trend in one
direction or the other—the longer periods of consecutive increasing or decreasing inflation are associated
with better directional forecasting. The relatively poor
performance in the 1985–92 period may be partially
due to the absence of a trend. As with our interpretation of the RMSE findings, we believe the change
in monetary policy regime may also play a role. Interestingly, in the recent 1993–2000 period, despite
the general downward trend in core CPI inflation, the
one-year directional forecasts are able to correctly
anticipate the brief episodes of increasing inflation.

1Q/2002, Economic Perspectives

FIGURE 3

Median model directional predictions of 24-month changes in 12-month core inflation

A. Core CPI—74.7% correct
B. Core PCE—74.0% correct

[Bar charts, 1977–2001: bars show actual changes in core inflation, with correct and incorrect directional predictions distinguished by shading.]

Can we improve on the naive model
in difficult times?
Confirming the A&O findings, we show that the
naive model has done quite well over the last 15
years in forecasting the magnitude of inflation. Over
the same period, the Phillips curve models seem to
provide information on the direction of changes in
inflation. A natural question is whether we can combine these models to get a better forecast for magnitude. Intuitively, we should be able to do this by
shaving the naive model forecasts up or down
according to the directional predictions. In this section, we explore this idea and show that, indeed, it is
possible to do somewhat better than the naive model.
We modify the naive model by adjusting its forecast in the direction predicted by a given Phillips curve
or median model. That is, for J = 12, 24,

$\hat{\pi}^{12}_{t+J} = \pi^{12}_{t} + \hat{D}^{12}_{t+J} \cdot v_{t},$

where $\hat{D}^{12}_{t+J}$ is defined in equation 8 and $v_{t}$ is the adjustment factor. The intuition is that, for small enough
$v_t$, we should be able to improve on the naive model. In addition, we believe the magnitude of $v_t$ should be related to recent changes in inflation. Consequently, we adjust the naive model by a percentage of the average inflation change in the recent past. That is, we assume $v_t$ evolves as follows:
$v_{t} = \lambda \cdot \frac{1}{N} \sum_{j=t-N}^{t} \left| \pi^{12}_{j} - \pi^{12}_{j-12} \right|.$

There is nothing in this approach that pins down $v_t$, and one may define $v_t$ in other ways, provided that it is not too large. This formulation assumes symmetry in magnitude of increases and decreases in inflation.
Choices of $\lambda$ and $N$ reflect the forecaster's belief in the relevance of past volatility of inflation for future volatility. For fixed $N$, intuition suggests that, for small enough $\lambda$, there will be at least a slight improvement over the naive model. We choose $\lambda = 0.1$ and $N$ to correspond to the beginning of the regression sample. We call this the combination model.
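A minimal sketch of this combination forecast follows. It is illustrative only: the variable names, the inclusive averaging window, and the synthetic inflation series are my choices, not the authors':

```python
def adjustment(pi12, t, lam=0.1, N=36):
    """v_t: lam times the average absolute 12-month change in
    12-month inflation over the window j = t-N, ..., t."""
    window = range(t - N, t + 1)
    return lam * sum(abs(pi12[j] - pi12[j - 12]) for j in window) / len(window)

def combination_forecast(pi12, t, direction):
    """Naive (random walk) forecast pi12[t], shaved up or down
    according to the directional prediction (direction = +1 or -1)."""
    return pi12[t] + direction * adjustment(pi12, t)

# Synthetic 12-month inflation series rising 0.05 point per month,
# so every 12-month change equals 0.6:
pi12 = [0.05 * i for i in range(60)]
print(round(adjustment(pi12, 59), 4))                # 0.06 = 0.1 * 0.6
print(round(combination_forecast(pi12, 59, +1), 4))  # 3.01 = 2.95 + 0.06
```

Because the adjustment is small relative to the level of inflation, the combination forecast stays close to the naive one, nudged in the direction the Phillips curve model predicts.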
Table 3, constructed in the same way as table 1,
shows how well the combination model performs
relative to the naive model. These results confirm
our belief that we can improve on the naive model
almost uniformly. For example, over the 1985–2000
period, the improvement of the best performing
combination model for core CPI is about 7 percent
and that for core PCE is about 3 percent for the 12-month horizon. For the 24-month horizon, the gains
are 9 percent and 6 percent, respectively. Admittedly,
these are not large improvements. The results for the
median-based combination forecasts are less encouraging. The bad performance in the 1985–92 period
seems to be driven by the relatively poor performance of the directional forecasts for both one-year
and two-year horizons. The performance of the combination model may improve slightly by increasing $\lambda$ by a small amount, but not by much.

TABLE 3

Forecasting the magnitude of inflation: Combination models vs. naive model

                               12 months ahead                          24 months ahead
                     Naive    Best        Rel.     Median      Naive    Best        Rel.     Median
Sample period        model    performing  RMSE     rel.        model    performing  RMSE     rel.
                     RMSE     model                RMSE        RMSE     model                RMSE

Core CPI
1977:01–1984:12      2.360    2           0.959    0.968       3.802    3           0.950    0.963
1985:01–1992:12      0.667    7           0.955    0.981       0.780    6           0.965    0.994
1993:01–2000:12      0.341    7           0.855    0.859       0.705    1           0.833    0.854
1985:01–2000:12      0.530    7           0.935    0.957       0.744    1           0.910    0.934
1977:01–2000:12      1.430    2           0.965    0.967       2.278    3           0.950    0.961

Core PCE
1977:01–1984:12      1.238    5           0.954    0.962       2.100    3           0.945    0.950
1985:01–1992:12      0.481    1           0.992    1.069*      0.617    6           0.970    1.016*
1993:01–2000:12      0.514    1           0.944    0.942       0.802    1           0.910    0.925
1985:01–2000:12      0.498    1           0.967    1.003*      0.716    6           0.943    0.960
1977:01–2000:12      0.822    7           0.966    0.972       1.346    3           0.948    0.952

Total CPI
1977:01–1984:12      2.674    3           0.951    0.959       4.525    4           0.949    0.956
1985:01–1992:12      1.489    8           0.983    0.990       1.695    6           0.952    0.969
1993:01–2000:12      0.716    5           0.990    0.997       1.032    5           0.915    0.938
1985:01–2000:12      1.168    6           0.987    0.992       1.403    6           0.950    0.961
1977:01–2000:12      1.815    2           0.965    0.968       2.853    6           0.952    0.957

Total PCE
1977:01–1984:12      1.705    3           0.939    0.958       2.977    5           0.941    0.946
1985:01–1992:12      1.025    1           0.988    1.003*      1.102    5           0.934    0.961
1993:01–2000:12      0.633    5           0.944    0.962       0.924    5           0.911    0.925
1985:01–2000:12      0.852    7           0.982    0.992       1.017    5           0.924    0.946
1977:01–2000:12      1.205    2           0.961    0.970       1.909    5           0.938    0.946

Notes: Fifteen-year rolling regression. RMSE is root mean squared error. An asterisk indicates cases in which the naive model outperforms the combination model.



Conclusion
We can summarize our main results as follows.
First, we show that the A&O findings hold for a broader
class of models than they studied, as well as for a longer
forecasting horizon. However, they do not hold for the
1977–84 period. We extend their analysis to core PCE
and show that the naive model does better over the
sample period considered by A&O at the one-year
horizon, but not at the two-year horizon. In the 1993–
2000 period, the Phillips curve models perform well
at forecasting core PCE for both horizons. Second,
we show that Phillips curve models have predictive
power for the direction of change in inflation. This is
particularly true in the 1977–84 and 1993–2000 periods.
However, in the 1985–92 period, the gain over the
naive model is quite modest. Third, in most cases it
is possible to combine the information in the directional forecasts with the naive model to improve on the
latter model’s forecasts.
A common thread in our results is the relatively
poor performance of the Phillips curve models in the
middle period, in terms of both magnitude and direction. We believe this is due to a reduction in inflation
volatility and the change in monetary policy operating
characteristics that took effect during this period.

Our findings suggest the following policy recommendation. If we expect the current monetary “regime”
to persist, then we can have some degree of confidence
in the Phillips curve models going forward. On the other
hand, if we suspect that a regime shift has recently
occurred, then we should be skeptical of the Phillips
curve forecasts. In any case, there may be some directional information in these forecasts, and we can
use this to improve on naive forecasts.
Our findings suggest that more empirical and
theoretical work is necessary to come to a complete
answer to the question raised in the title to this article.
An equivalent way of posing this question is to ask:
Why does inflation behave like a martingale over some
periods while at other times it does not? We have suggested some possible explanations. Empirically, we
need to assess the robustness of our results to cross-country analysis. For example, here we have only one
regime change and, hence, only one observation for
the regime-switch hypothesis. Ultimately, assessing
the plausibility of various possible explanations will
require developing a fully specified theoretical model.
Such work may shed light on the connection between
monetary policy and aggregate outcomes, as well as
the nature of the price-setting mechanism.

NOTES

1. For a recent discussion of the intellectual history of the Phillips curve and NAIRU, see Gordon (1997).

2. See Bernanke and Mihov (1998), Bordo and Schwartz (1997), and Strongin (1995) for discussions of monetary regimes. These papers argue that during the Volcker chairmanship of the Board of Governors from 1979–87 monetary policy shifted, in terms of operating procedures and the Fed's increased willingness to combat inflation. Furthermore, Bernanke and Mihov (1998) estimate monetary policy rules and can reject the hypothesis of parameter stability for dates in the 1980s.

3. Suppose there are K lags in the equation; then $b(L)x_t = b_0 x_t + b_1 x_{t-1} + \cdots + b_K x_{t-K}$, where the b parameters are scalars.

4. For more details on the CFNAI, see www.chicagofed.org/economicresearchanddata/national/index.cfm.

5. With eight models, the median is the average of the fourth and fifth ranked forecasts.

6. One implication of this procedure is that the historical path of the indexes may change between forecast dates.

7. The alternative measure we used was mean absolute error. For J = 12, 24, this is expressed as $(1/T)\sum_{i=1}^{T}\left|\hat{\pi}^{12}_{i+J} - \pi^{12}_{i+J}\right|$.

8. A&O use several sample periods for their analysis. When they consider unemployment-rate-based Phillips curves, their sample begins in 1959. When they consider CFNAI-based Phillips curves, their sample begins in 1967. S&W begin their analysis in 1959.

9. We also considered estimating the forecasting equations using all the data from 1967 up to the forecast date. The results obtained indicated either very similar forecast performance or, in a few cases, a slight deterioration relative to the rolling regression procedure.

10. S&W report evidence supporting this hypothesis. They find that unemployment-rate-based Phillips curve forecasting models exhibit parameter instability during the 1980s.

REFERENCES

Atkeson, Andrew, and Lee E. Ohanian, 2001, “Are Phillips curves useful for forecasting inflation?,” Federal Reserve Bank of Minneapolis, Quarterly Review, Vol. 25, No. 1, Winter, pp. 2–11.

Bernanke, Ben, and Ilian Mihov, 1998, “Measuring monetary policy,” Quarterly Journal of Economics, Vol. 113, No. 3, August, pp. 869–902.

Bordo, Michael, and Anna Schwartz, 1997, “Monetary policy regimes and economic performance: The historical record,” National Bureau of Economic Research, working paper, No. 6201.

Gordon, Robert, 1997, “The time-varying NAIRU and its implications for economic policy,” Journal of Economic Perspectives, Vol. 11, No. 1, Winter, pp. 11–32.

Stock, James H., and Mark W. Watson, 2001, “Forecasting output and inflation: The role of asset prices,” Princeton University, manuscript, February.

__________, 1999, “Forecasting inflation,” Journal of Monetary Economics, Vol. 44, pp. 293–335.

Strongin, Steven, 1995, “The identification of monetary policy disturbances: Explaining the liquidity puzzle,” Journal of Monetary Economics, Vol. 35, pp. 463–497.



May 8-10, 2002
38th ANNUAL CONFERENCE ON BANK STRUCTURE AND COMPETITION
FEDERAL RESERVE BANK OF CHICAGO


On May 8-10, 2002, the Federal Reserve Bank of Chicago will hold its 38th annual
Conference on Bank Structure and Competition at the Fairmont Hotel in Chicago.
Since its inception, the conference has encouraged an ongoing dialogue and debate
on current public policy issues affecting the financial services industry. Each year the
conference brings together several hundred financial institution executives, academics,
and domestic and international regulatory authorities to examine topical policy issues.
The 2002 conference will evaluate the changing condition and behavior of financial institutions over the business cycle and the appropriate regulatory response. While we have seen unprecedented economic expansion over the past decade, recent events indicate that the business cycle with its associated booms and recessions has not been eliminated. As we move through the business cycle, there is a need to understand the forces driving financial firm behavior as well as how firms respond to changing credit conditions and market demands. It is also important to ponder the alternative policy options available to regulators across the cycle and the impact of those options on the industry.

The policy debate concerning firm and supervisory/regulatory behavior over the business cycle has intensified recently with proposed modifications to the Basel Capital Accord. In the early 1990s, financial institutions responded to an economic slowdown and changing capital requirements by adjusting their credit standards and, as a result, credit availability. A perceived “credit crunch” ensued, and recovery from that recession was unusually slow. Deposit flows were also disrupted, and as the economy recovered some institutions found it more difficult than others to raise core deposit funds using traditional means. As we proceed through the current slowdown, there is reason to believe that lending and deposit-taking institutions may react differently than in the past. Today, these institutions are more geographically diversified, offer a wider range of financial products, and are better able to hedge risk—all of which may make them more resilient against adverse economic shocks. However, it remains unclear as to whether the new financial system is actually less susceptible to cycles. Given this uncertainty, what is the appropriate regulatory and supervisory policy over the business cycle? Some argue that policy should be countercyclical: more stringent during upturns and more accommodating during downturns to avoid exacerbating an already bad situation. Others argue that sound regulatory and supervisory policy regarding credit quality and bank condition must be based on established standards that are independent from the business cycle.

As in past years, much of the program will be devoted to the primary conference theme, but there will also be a number of sessions on current industry issues. Some of the highlights of the conference include:

■ The keynote address by Federal Reserve Board Chairman Alan Greenspan.

■ A discussion of the theme from a variety of perspectives by a panel of industry experts. Participants on this panel include Charles Goodhart, the Norman Sosnow Professor of Banking and Finance at the London School of Economics; Sean Ryan, Fulcrum Global Partners; Karen Shaw Petrou, Managing Partner, Federal Financial Analytics; and Richard Spillenkothen, Board of Governors of the Federal Reserve System.

■ Luncheon presentations by John Hawke, Jr., Comptroller of the Currency, and Donald E. Powell, Chairman, Federal Deposit Insurance Corporation (FDIC).

■ A discussion of the events of September 11 and the implications for the financial services sector by Roger Ferguson, Vice Chairman, Board of Governors of the Federal Reserve System; Michael Chertoff, Assistant Attorney General, U.S. Department of Justice; and Eileen Wallace, Managing Director, Morgan Stanley Dean Witter, Inc.

■ A panel discussion of issues related to the current bank capital reform proposal by Nicholas Le Pan, Canadian Superintendent of Financial Institutions and Head, Basel Capital Accord Implementation Group; Richard J. Herring, Jacob Safra Professor of International Banking, The Wharton School; Michael Ong, Executive Vice President and Chief Risk Officer, Credit Agricole Indosuez; and Eric Rosengren, Senior Vice President, Federal Reserve Bank of Boston.

■ A panel discussion of the complexities involved with failure resolution of large, complex banking organizations. Participants include John Bovenzi, Chief Operating Officer, FDIC; Mark Harding, Partner, Clifford Chance LLP; Oliver Page, Financial Services Authority; Robert Pickel, Chief Executive Officer, International Swap Dealers Association; and Ken Scott, Stanford Law School.

As usual, the Wednesday sessions (May 8th) will showcase more technical research that is of primary interest to research economists in academia and government. The Thursday and Friday sessions are designed to address the interests of a broader audience.

If you are not currently on our mailing list or have changed your address and would like to receive an invitation to the conference, please contact:

Ms. Portia Jackson
Conference on Bank Structure and Competition
Research Department
Federal Reserve Bank of Chicago
230 South LaSalle Street
Chicago, Illinois 60604-1413
Telephone: 312-322-5775
email: portia.jackson@chi.frb.org

Origins of the use of Treasury debt in open market operations:
Lessons for the present
David Marshall

Introduction and summary
From late 1997 through the third quarter of 2001, continuing fiscal surpluses by the federal government
caused the outstanding stock of Treasury debt to decrease substantially. While the onset of the current recession, along with the recent tax cuts, has slowed or
even reversed this trend, many analysts believe that
the declines in Treasury debt will resume over the next
decade once the economy starts to strengthen. This
could present an operational problem for the Federal
Reserve. The Fed currently injects liquidity into the
economy by expanding bank reserves via open market operations. That is, the Federal Reserve expands
liquidity by purchasing securities on the open market
and withdraws liquidity through open market sales of
securities. Currently, all permanent transactions by the
Federal Reserve open market desk use Treasury securities, and Treasury securities remain the primary medium for temporary transactions. As demand for currency
and dollar-denominated bank reserves grows in the
years to come, the Federal Reserve will have to acquire
ever-increasing amounts of Treasuries via open market
purchases. But if the total stock of such securities shrinks
over the next decade or two, the Fed may find it increasingly difficult to conduct the needed transactions.
The Federal Reserve would then have to consider
changing its longstanding procedures for open market
operations. In particular, the Fed may have to consider purchasing securities issued by non-governmental
obligors.1 Is there a precedent for Federal Reserve
trading in privately issued assets? How does the Federal Reserve choose the medium to use for open market operations? Has the Fed consistently chosen the
safest or most liquid class of securities, or has it sought
to influence the development of financial markets in
its choice of open market instruments?
In this article, I review the early history of open
market operations, with an eye toward addressing

these questions. The historical record shows that prior
to the U.S.’s entry into World War I, the Federal Reserve’s preferred media for open market operations
were private bills of exchange, trade acceptances, and
bankers’ acceptances,2 rather than public debt. The
Federal Reserve’s choice was influenced by the prevailing theory of monetary policy, known as the real
bills doctrine, which held that the central bank should
only provide liquidity in exchange for securities that
directly finance commerce.
In addition, the Federal Reserve’s use of private
acceptances in open market operations was in part an
effort to encourage the development of an active secondary market in private paper. At the same time, the
Federal Reserve was rather reluctant to hold large quantities of Treasury securities. Purchases of government
debt by the central bank were seen as tantamount to
“lending to the crown,” which was regarded as a dangerous path for central bank policy. Furthermore, there
were problems of coordination with the Treasury that
took several years to resolve.
The Federal Reserve eventually moved away from
private paper toward Treasury securities for several
reasons. The supply of Treasuries expanded rapidly
during World War I due to the financing needs of the
war. Concomitantly, the secondary market in Treasuries
grew rapidly. The supply of private paper contracted
during the recessions of 1920–21 and (more importantly) 1929–33. Finally, events during the 1920s caused
monetary theorists to become disenchanted with the
real bills doctrine.

David Marshall is a senior economist and economic
advisor at the Federal Reserve Bank of Chicago.
The author thanks Jim Clouse, Anne Marie Gonczy,
Ed Green, Thomas Simpson, and François Velde for
helpful comments.


So, what do we learn from this review of history?
First, there were extended periods when the Federal
Reserve conducted open market operations primarily
in private securities. Furthermore, the Federal Reserve
used its choice of open market instruments to influence the growth of financial markets in ways it deemed
useful for the public interest. Finally, a shift to a new
set of open market instruments may have unforeseen
side effects. It takes time to understand the full implications of a major change in operating procedures, so
a gradual transition may be the best way to proceed.
In the next section, I discuss the issues confronting Federal Reserve open market operations as the
stock of Treasury debt shrinks. I then describe how
open market operations evolved from the earliest days
of the Federal Reserve through the Great Depression.
Finally, I discuss how this historical record might have
relevance to the issues of the present day.
The problems currently facing
open market operations
An important source of liquidity in the U.S. economy is the monetary base, M0, which consists of currency in circulation plus bank reserves. M0 comprises
about 97 percent of Federal Reserve liabilities. These
liabilities are balanced primarily by securities purchased
on the open market (approximately 96 percent of
Federal Reserve assets). The other main way that the
Federal Reserve expands liquidity is by lending to commercial banks at the discount window. However, discount window loans represent a very small fraction
(currently 0.015 percent) of Federal Reserve asset
holdings. The vast majority of Federal Reserve security holdings—currently 95 percent—consist of U.S.
Treasury securities. The Federal Reserve has conducted
open market operations primarily in Treasury securities since the mid-1930s, and Treasuries are the only
medium it has used for its outright transactions since
1981.3 Thus, to a close approximation, every dollar’s
worth of M0 in circulation is matched on the Fed’s
balance sheet by one dollar’s worth of U.S. Treasury
securities acquired through open market purchases.
This fundamental balancing relationship presents
a problem: Demand for M0 is growing rapidly, while
the stock of Treasury debt that the Federal Reserve uses
to balance M0 has been shrinking. The black line in
figure 1 plots M0 from 1986 through the present (indicated by the vertical line). Over this period, the monetary base grew at a geometric rate of around 6.8 percent
per year. This is mostly due to a growing demand for
currency. In 1975, currency accounted for about 77
percent of M0. Since February 2000, however, this
fraction has exceeded 90 percent. (The only exception
occurred during the two weeks following the September 11 attacks, when the fraction of M0 represented by
currency dropped to 86 percent. This was due to the
Federal Reserve’s temporary expansion of bank reserves in response to the attacks.)
The growth in M0 is due in part to the growth in
domestic economic activity. In addition, much of the
increased demand for currency is due to increased demand for dollars abroad. Consider two examples:
Ecuador formally replaced the sucre with the dollar
as its official currency in 2000; and, while the peso continues to be the official currency in Argentina, around
60 percent of transactions in Argentina are actually
conducted with dollars. (See Velde and Veracierto,
2000.) These trends are likely to continue inducing
growth in demand for the U.S. monetary base. The
black dashed line in figure 1 plots a projected path
for M0 through 2011.4 The projection is a mechanical
extrapolation of past trends and is not intended as a
detailed forecast. Nevertheless, it is a plausible first
guess at how the monetary base might evolve over
time. Figure 1 shows the monetary base approximately doubling in the next ten years.
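The arithmetic behind the doubling is simple compounding. The following is a back-of-the-envelope check of that extrapolation, not the article's actual projection method:

```python
growth_rate = 0.068                 # approximate geometric growth of M0 per year
ten_year_factor = (1 + growth_rate) ** 10
print(round(ten_year_factor, 2))    # 1.93, i.e. the base roughly doubles
```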
To accommodate this growing demand for M0, the
stock of assets owned by the Federal Reserve must
grow. If the Federal Reserve continues its current policy
of maintaining virtually all its asset holdings in the
form of Treasury securities, its ownership of Treasury
debt will have to expand rapidly. However, the total
quantity of Treasury securities may well fall during
the coming years. The green line in figure 1 plots the
stock of outstanding Treasury debt from 1986 to the
present. Note that the level of Treasury debt had fallen
from $3.8 trillion in November 1997 to $3.3 trillion
as of September 2001, a decrease of over 10 percent
in less than four years. In spite of the recession that
started in March 2001 and the 2001 tax cuts, the contraction in Treasury debt continued at least through the
third quarter of the year.
The green dashed line in figure 1 plots the path
of Treasury debt implied by the Congressional Budget
Office’s (CBO) most recently published forecasts of
federal surpluses through 2012, released in January
2002.5 These forecasts take the effects of the current
recession into consideration. The CBO predicts small
deficits through early 2004, followed by surpluses.
The problem facing the Federal Reserve can be
seen by comparing the two forecasts in figure 1. Taking
these forecasts at face value, the stock of base money
demanded by the economy would equal the stock
of Treasury debt in July 2012. This means that the


FIGURE 1

Monetary base versus Treasury debt

[Line chart, billions of dollars (0 to 4,500), 1986–2013, plotting Treasury debt and the monetary base.]

Note: Dashed lines indicate projected paths.
Source: U.S. Treasury, Federal Reserve Board, and author’s calculations.

Federal Reserve could not accommodate the growing
demand for M0 beyond that date without purchasing
securities other than Treasuries. In fact, the problem
will arrive much sooner. The Federal Reserve recognizes that Treasury securities serve a unique role in
financial markets. Because they are free of default risk
and are highly liquid relative to other assets,6 Treasuries
are a preferred savings instrument for foreign investors and are extensively used for hedging and as benchmarks for pricing other fixed-income securities. If the
Federal Reserve held a large fraction of outstanding
Treasury securities, it would impair the liquidity of
Treasury markets, which could adversely affect other
markets and even affect the pace of economic activity. As a result, the Federal Reserve limits its fraction
of ownership of any individual Treasury issue. The
current ownership caps range from 35 percent for
Treasury securities with less than a year to maturity
to 15 percent for issues ten years and longer. If the
Federal Reserve continues to abide by these caps, it
will exhaust its capacity to acquire additional Treasury
securities long before the outstanding stock of Treasury
debt disappears.
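To see why the caps bind well before the stock of Treasuries disappears, consider a stylized calculation. The bucket sizes and the 25 percent intermediate cap below are hypothetical; only the 35 percent and 15 percent endpoints come from the text:

```python
# Hypothetical maturity distribution of outstanding Treasury debt ($ billions)
outstanding = {"under_1yr": 1000, "1_to_10yr": 1500, "10yr_plus": 800}
# Fed's self-imposed ownership cap per maturity bucket (middle cap assumed)
caps = {"under_1yr": 0.35, "1_to_10yr": 0.25, "10yr_plus": 0.15}

max_fed_holdings = round(sum(outstanding[b] * caps[b] for b in outstanding), 2)
print(max_fed_holdings)  # 845.0: about a quarter of the 3,300 outstanding
```

Even under these generous assumptions, the caps limit the Fed to a fraction of the total stock, so its purchasing capacity is exhausted long before the debt itself runs out.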
Possible responses to
these issues
Clearly, the Federal Reserve may well have to
modify its current procedures for conducting open market operations. The Fed could relax its self-imposed
caps on Treasury holdings, but this could
impair liquidity in the Treasury market
and, in any event, would only represent a
temporary stopgap. A longer-run solution
would be for the Federal Reserve to start
including in its portfolio assets other than
Treasuries. Under current law, the Federal
Reserve can purchase a range of assets, including direct obligations of federal agencies or debt fully guaranteed by federal
agencies, debt of foreign governments,
certain state and local obligations, and selected other instruments. There has been
speculation in the press that the Federal
Reserve might seek legislation to expand
its authority to hold private assets.7
The extension of the Federal Reserve’s
portfolio to non-Treasury securities raises
a number of questions. Which assets should
the Federal Reserve hold? Should it let
private market participants align on a new
substitute for Treasury securities and then
simply adopt this asset class? Alternatively, should the Federal Reserve actively
seek to influence the evolution of fixed-income markets as they adjust to an era of diminishing supply of
Treasury securities? In particular, should the Federal
Reserve attempt to steer the market toward the type
of Treasury substitute that it prefers?
The sorts of choices the Federal Reserve now faces are not unprecedented. In the following sections,
I review the early history of open market operations.
This account shows that, in the early days of the Fed,
Treasury securities were not the preferred medium
for open market operations. Only gradually did Treasuries displace other assets. Furthermore, the Federal
Reserve’s original intentions for open market operations included a desire to affect the evolution of financial markets. In particular, it sought to encourage an
active secondary market in acceptances. Thus, there
are antecedents both for the Federal Reserve holding
privately issued securities and for the Federal Reserve
using its open market procedures to influence the development of financial markets. Having said this, financial markets have changed enormously since the
early years of the Federal Reserve System, so we
should use caution in drawing lessons from these
precedents for current problems.
The early years of open market operations
The current practice of conducting Federal Reserve System open market operations almost exclusively with Treasury securities was not anticipated in
the earliest years of the Fed. Table 1, taken from
Meulendyke (1998), shows that bankers’ acceptances
were the primary asset class for the Federal Reserve
portfolio until World War I, and acceptances had a
roughly equal presence with Treasury securities
through the 1920s. Treasury securities did not become
predominant until the Great Depression. These patterns
reflect changes both in the thinking of Federal Reserve
officials and in the economic environment in which
the Fed operated.
At the inception of the Federal Reserve in 1913,
it was presumed that Federal Reserve assets would
primarily consist of short-term privately issued paper,

such as bankers’ acceptances, trade acceptances, and
bills of exchange.8 A key reason for this focus was the
real bills doctrine, which was the most influential
theory of central banking at the beginning of the
twentieth century. The real bills doctrine maintains
that “a banking system that confines its lending to discounting short-term self-liquidating commercial bills
of exchange arising from real transactions in goods
and services—the productive use as opposed to the
speculative use of credit—cannot over-issue.”9 That is,
the banking system would not create excessive (and
therefore inflationary) amounts of credit. A particularly important exponent of this view was Paul Warburg,

TABLE 1

Federal Reserve holdings, 1915–50

                     Treasury securities              Bankers’ acceptances
Year-end       (dollars in     (percent of      (dollars in      (percent of
                 millions)        total)           millions)         total)
1915                16.0           19.8              64.8            80.2
1916                55.0           31.2             121.2            68.8
1917               122.0           31.4             266.9            68.6
1918               238.0           45.5             285.3            54.5
1919               300.5           80.8              71.6            19.2
1920               287.4           60.6             187.2            39.4
1921               234.1           61.8             145.0            38.2
1922               433.4           61.5             271.0            38.5
1923               133.6           27.5             352.0            72.5
1924               540.2           58.3             386.9            41.7
1925               374.6           50.2             372.2            49.8
1926               314.8           45.2             381.0            54.8
1927               560.0           64.4             308.9            35.6
1928               197.2           31.1             437.5            68.9
1929               487.3           67.4             235.3            32.6
1930               686.1           70.4             288.8            29.6
1931               774.6           78.3             215.3            21.7
1932             1,851.1           99.8               3.6             0.2
1933             2,435.3           95.7             108.1             4.3
1934             2,430.3          100.0               0.1             0.0
1935             2,430.3          100.0               0.0             0.0
1936             2,430.2          100.0               0.0             0.0
1937             2,564.0          100.0               0.5             0.0
1938             2,564.0          100.0               0.5             0.0
1939             2,484.2          100.0               0.0             0.0
1940             2,184.1          100.0               0.0             0.0
1941             2,254.5          100.0               0.0             0.0
1942             6,188.7          100.0               0.0             0.0
1943            11,543.0          100.0               0.0             0.0
1944            18,846.1          100.0               0.0             0.0
1945            24,262.3          100.0               0.0             0.0
1946            23,349.7          100.0               0.0             0.0
1947            22,559.4          100.0               0.0             0.0
1948            23,332.8          100.0               0.0             0.0
1949            18,884.6          100.0               0.0             0.0
1950            20,724.5          100.0               0.0             0.0

Source: Meulendyke (1998, table 1, p. 22).
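As a quick arithmetic check (not part of the original article), each "percent of total" column in table 1 is simply that asset's share of the two-asset total, Treasury securities plus bankers' acceptances. A minimal sketch using three year-end rows from the table:

```python
# Illustrative check of table 1's "percent of total" columns:
# each share is 100 * asset / (Treasury + acceptances).
# Dollar figures (in millions) are taken from the table itself.
holdings = {
    1915: (16.0, 64.8),      # (Treasury securities, bankers' acceptances)
    1922: (433.4, 271.0),
    1932: (1851.1, 3.6),
}
for year, (treasury, acceptances) in holdings.items():
    share = 100.0 * treasury / (treasury + acceptances)
    print(year, round(share, 1))   # prints 19.8, 61.5, 99.8 as in the table
```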


1Q/2002, Economic Perspectives

a banker with Kuhn, Loeb & Co., whose pamphlet, “A plan for a modified central bank” (published during the Financial Panic of 1907), strongly influenced the drafting of the Federal Reserve Act of 1913.

While the real bills doctrine does not distinguish between bills acquired through rediscounting and bills acquired through open market purchases, there was a perception that open market transactions by the Federal Reserve in the commercial bills market would be beneficial to the economy. In particular, Warburg and others believed that active trading of real bills by the Fed could help foster the development of a secondary market in these securities. Unlike most Western European countries, the U.S. did not have an active market in acceptances prior to the passage of the Federal Reserve Act. Warburg and others saw the development of a liquid acceptance market as essential for a modern banking system to emerge in the U.S. In Warburg’s words, “We should aim to transform our commercial paper from a non-liquid asset into the quickest [that is, most liquid] asset of our banks.”10 Prior to the establishment of the Federal Reserve, the most liquid short-term assets traded were call loans used to finance stock purchases on the New York Stock Exchange. This form of investment was seen as speculative, rather than productive. It was hoped that the development of a liquid market in real bills would facilitate the optimal allocation of credit across industries and geographical regions11 and divert credit toward productive investment and away from more speculative uses.12 This development would be fostered by a central bank that actively purchased these bills in the open market.13 Pursuant to this goal, the System’s purchases of commercial bills (primarily trade and bankers’ acceptances) in the open market vastly exceeded its acquisition of these securities via rediscounting. We can see this in table 2, which is taken from Agger (1922, table 2, p. 216).14

TABLE 2

Federal Reserve Bank discounts and purchases of trade and bankers’ acceptances
(in thousands of dollars)

                             Discounts                Purchases in open market
Year                    Bankers’      Trade           Bankers’        Trade
1915                                   1,959             64,814           31
1916                                   5,212            369,762       16,333
1917                                  37,771          1,046,765       30,948
1918                     19,940      187,373          1,748,503       61,036
1919                     71,643      138,420          2,788,619       36,558
1920                    187,162      192,157          3,143,737       74,627
1921 (9 months)          49,810      101,129            996,851        6,687

Source: Agger (1922, table 2, p. 216).

In contrast to the focus on real bills (particularly bankers’ acceptances) as appropriate assets for a central bank, there was some concern about central bank purchases of government securities. Direct loans to the government were seen as dangerous, tying the supply of credit to the spending whims of the government. Open market purchases of government securities were seen as equivalent to this sort of direct lending. Furthermore, monetizing government debt was seen as inflationary. Hawtrey (1933, p. 131) summarized these ideas as follows:

“[T]he acquisition of Government securities by the central bank is regarded as opening the door to inflation. It is usual for the power of the central bank to lend to the Government to be carefully circumscribed, and the dividing line between lending direct and buying Government securities in the market may be rather a fine one.”

Similarly, in his pamphlet, “Principles that must underlie monetary reform in the United States” (published in November 1910), Warburg warns against the inflationary danger of issuing notes backed by government bonds (Warburg, 1930b, p. 176).

In addition, it was thought that a well-run central bank should be free from political influence.15 Extensive holdings of government debt might compromise central bank independence.16 Finally, the real bills doctrine emphasized that central bank assets should be short term. At the time the Federal Reserve was established, there were no short-term Treasury bills, and there was no active market in short-term Treasury securities.17

Thus, Federal Reserve assets prior to World War I were primarily short-term commercial bills. Table 3, taken from Reynolds (1922b, p. 77), shows that until mid-1917, commercial bills purchased in the open market accounted for a larger fraction of Federal Reserve assets than government securities. From the beginning of 1916 through the start of World War I, bills purchased in the open market exceeded those acquired through rediscounting.18

World War I and its aftermath

It was America’s entry into World War I in April 1917 that spurred the big shift away from this focus on commercial bills. The war was largely funded by government debt. The Federal Reserve was reluctant to buy government debt directly from the Treasury,19

Federal Reserve Bank of Chicago


but, under government pressure, it took steps to accommodate the increased supply of Treasury securities. It
accepted Treasury securities from member banks for
rediscounting; it accepted bills backed by Treasuries
from member banks for rediscounting; and it offered
a lower rate on loans collateralized with Treasury securities than with other forms of collateral.20
As a result of these steps, the Federal Reserve’s
portfolio became heavily based (directly or indirectly) on Treasury debt. By May 1919, 95.2 percent of
Fed purchases of commercial bills (total of rediscounts
plus open market purchases) were backed by government securities. Open market purchases of government
securities also increased dramatically. According to
West (1977), such purchases amounted to only $4.37
million in April 1917. By March 1918, purchases of
government securities amounted to $1,099 million
(55.1 percent of total investments).21
Once the war ended, purchases of government
securities suffered a decline relative to acceptances.
However, the war did result in permanent changes
in Federal Reserve operations. First, the war years
established a precedent for extensive Federal Reserve
holdings of government debt. However, there were
those who advocated withdrawing from the Treasury
market following the end of the war. For example,
Welton and Crennan (1922) attributed the inflation
during World War I to the backing of currency with
government securities, and argued that this practice

should stop. Second, the volume of Treasury securities
issued to finance the war created an active market in
government debt.22
A third development was a growing disenchantment with the real bills doctrine. In particular, the recession of 1920–21 was caused in part by excessive
inventory building. The inflation following the end
of the war motivated firms to hold speculative inventories, hoping to sell at higher prices. These inventories were financed, in part, by commercial bills,
which were suitable assets for rediscounting under
the real bills doctrine. The central bank credit thus
created further fueled the inflation. As a result, Federal Reserve officials (notably Benjamin Strong,
president of the Federal Reserve Bank of New York)
argued that the real bills doctrine was neither necessary nor sufficient to avoid inflationary credit expansion or to ensure that credit would be used for
productive, rather than speculative activity.23, 24
These developments weakened the Federal Reserve’s original focus on real bills. Thereafter, purchases
of real bills (mostly bankers’ acceptances) and government securities coexisted. The recession of 1920–21
severely contracted the supply of bankers’ acceptances and other real bills.25 The Fed responded by replenishing its earning assets with government debt.26
However, it appears that real bills were used for secular growth of the Federal Reserve’s open market
portfolio, while government securities were used to

TABLE 3

Earning assets of the Federal Reserve System, 1914–17
(in thousands of dollars)

Date             Bills discounted   Bills bought      U.S. government   Municipal
                 for members        in open market    securities        warrants

1914
December 31            9,909                                  205            734

1915
January 29            13,955                                2,015         11,165
April 30              22,774            13,812              6,813         18,656
July 30               29,102            11,625              7,923         16,107
October 29            30,448            13,619             10,505         25,014

1916
January 28            26,901            26,314             21,372         20,602
April 28              21,448            47,585             45,841         36,933
July 28               27,594            83,454             48,656         27,220
October 27            21,131            86,085             40,469         29,890

1917
January 26            15,711            97,697             55,769         12,249
April 27              35,043            71,400            117,818         14,999
July 27              138,459           195,097             76,953          1,469
October 26           397,094           177,590            110,042            233

Source: Reynolds (1922b, p. 77).


manage aggregate credit provision in the short term. In particular, figure 2 (taken from chart 9.3 in West, 1977, p. 191) shows that the variability of government securities purchases from 1921–23 was much greater than the variability of open market purchases of real bills.27 This pattern suggests that Treasury securities served a role analogous to present-day temporary transactions.

FIGURE 2

Open market paper and government securities purchased, 1921–23

[Line chart, in thousands of dollars, plotting monthly purchases of government securities and open market paper from January 1921 through September 1923.]

Source: West (1977, chart 9.3, p. 191).

In the early 1920s, an argument against extensive open market operations in government securities came from the Treasury itself. The Reserve Banks’ initial open market activity in government debt was uncoordinated, causing random fluctuation in the pricing of these securities. This presented a problem for the Treasury, making it more difficult to forecast auction prices.28 In part to address the Treasury’s concerns, in May 1922 the Conference of Presidents of the Federal Reserve Banks established a committee on centralized execution of purchases and sales of government securities to coordinate all Federal Reserve purchases of Treasury securities.29 In 1923, this committee was reconstituted under the supervision of the Board of Governors as the Open Market Investment Committee, the precursor of the current Federal Open Market Committee. The Treasury’s concerns appeared to restrain the growth in the use of government debt for open market operations.

The Depression, 1929–33

The event that ultimately caused a permanent shift away from bankers’ acceptances to government securities was the Great Depression. According to Anderson (1965), there was a consensus that the Federal Reserve’s response to the market crash in fall 1929 should include aggressive open market purchases. Anderson wrote that, “Acceptances and, if necessary, [italics added] government securities should be purchased to avoid any increase and possibly to bring some reduction in member-bank indebtedness to the Reserve Banks.”30 Interestingly, this quote suggests that acceptances, rather than government securities, were seen as the primary vehicle for increasing bank liquidity.

After 1929, however, it is difficult to find any mention of acceptances in discussions of open market operations.31 It appears that the aggregate supply of acceptances fell with the decline in economic activity, rendering extensive open market operations in acceptances simply infeasible. Evidence on this point is provided by Groseclose (1965, p. 132): “[D]uring the boom just preceding the stock market crash of 1929 the volume of bankers’ acceptances rose to around $1.5 billion, but thereafter declined to less than $150 million at the end of 1941. ...” As a result, “[a]fter 1937 the Federal Reserve practically ceased to buy or rediscount such paper” until after World War II. At the same time, the government started issuing short-term debt on a regular basis. The first Treasury bill issue was in December 1929,32 providing the Fed with an alternative to bankers’ acceptances as a short-term instrument for open market operations.

A final impetus for extensive Federal Reserve holdings of government debt was provided by the Roosevelt administration’s national recovery actions in 1933. As with the costs of World War I, the government financed these actions with debt. The Treasury needed to ensure that debt issues were successful, and the Federal Reserve responded to the Treasury’s concerns. According to Anderson (1965, p. 72), “There was a consensus that with excess reserves still substantial, it was not desirable to buy government securities to increase bank reserves. ... There was apprehension [however] that if the Treasury could not do its financing successfully in the market, it would be forced to seek accommodation directly from the Reserve Banks.” As a result, in spring 1933 the Board of Governors


authorized the purchase of up to $1 billion of government securities, if necessary, to ensure successful
financing by the Treasury.
What can we learn from the historical record?
We must be cautious in drawing lessons from this
historical account of open market operations. For one
thing, the events described in the preceding sections
all occurred under the gold standard, a very different
monetary environment from the present. In addition,
current financial markets are far more highly developed than in the early years of the Fed. Nonetheless,
there are a number of parallels between the System’s
experiences in its early years of existence and the policy
choices that the System may face over the next few
years. During the first three decades of the last century, the Fed went through the process of changing the
class of securities used in open market operations. The
problem encountered by the Federal Reserve during
the recession of 1920–21 and the Great Depression
resembles that currently facing the System: a dwindling supply of the assets traditionally used for open
market operations. In the 1930s, the problem was a reduced supply of acceptances induced by the economic contraction, while currently it is the possibility of
a reduced supply of Treasuries. The adjustment in the
1930s to a Treasuries-only policy was not immediate.
It took many years for the System to reconcile the advantages of using Treasury securities with their associated problems, most notably the problem of central
bank independence highlighted by Bagehot (1873)
some 70 years before. Ultimately, these issues were
not to be fully resolved until the Treasury–Federal
Reserve Accord of 1951, in which the Treasury agreed
that the Fed should be permitted to pursue an independent monetary policy.
In its early years, the Fed used open market operations to affect the development of private markets.
Specifically, the System deliberately used the purchase of private bills in the open market to foster the


development of a liquid secondary market in acceptances. This action stands in contrast to the Federal
Reserve’s current policy of minimizing market distortions, wherever possible, in its open market activities. In the early days of the System, however, concerns
about creating distortions in financial markets were
outweighed by other public policy considerations. At
that time, financial markets and the banking system
were not well developed. The Federal Reserve’s activities might therefore be seen as serving a public
policy purpose by addressing a market incompletion.
Today’s markets are so much more highly developed that it is difficult to make a case for this sort of
active interventionist policy. Nonetheless, the Federal
Reserve still faces a basic issue that was recognized
in its early years: Its role in financial markets may
have an influence on market outcomes. If the Fed
moves toward accepting privately issued securities in
its open market account, this policy shift may affect
the evolution of markets. For example, as the supply
of Treasury securities contracts, private markets will
align on some alternative benchmark security to replace the ever-scarcer Treasuries. The System’s choice
of private assets to use in its open market operations
may influence the class of securities that emerges as
the new benchmark.
In addition, if the Fed purchases private securities,
it might be seen as selectively approving those obligors
whose paper it purchases. When the Fed discontinued
all purchases of acceptances in 1984 (it discontinued
outright purchases of acceptances in 1977), this concern was a major factor. In the words of President
Solomon of the Federal Reserve Bank of New York,
“There are some people ... who misinterpret the Federal Reserve eligibility as a good housekeeping seal.”33
While there are antecedents for open market operations in private securities, there clearly are fundamental problems that must be addressed should the Federal
Reserve consider using private securities in this way
in the future.


NOTES
1 Broaddus and Goodfriend (2001) propose that the Treasury continue issuing bonds sufficient to meet the Federal Reserve’s needs, purchasing private assets with the proceeds if necessary. This would transfer the responsibility of holding private assets from the Federal Reserve to the Treasury.

2 A bill of exchange is a negotiable security issued by one party (the “drawer”) and accepted by the other party (the “drawee”), instructing the drawee to pay a fixed sum of money, usually as part of a commercial transaction. It differs from a promissory note only in that it is initiated as an instruction from the creditor, rather than as a promise from the debtor. A trade acceptance is essentially a bill of exchange issued in the course of an export/import transaction. It is an obligation of the buyer in the transaction. A bankers’ acceptance is a trade acceptance that has been guaranteed by the buyer’s bank, at which point it becomes an obligation of the bank, rather than of the buyer.

3 Outright transactions are purchases or sales of securities that are intended to be permanent. The Federal Reserve generally conducts outright transactions only a few times each year. In contrast, temporary transactions are purchases or sales that are expected to be reversed in the near term. Temporary transactions are conducted more frequently. For a discussion of the difference between outright (or permanent) and temporary open market transactions, see Meulendyke (1998).

4 This forecast uses a statistical model that fits a twelfth-order autoregression in the change in the log of M0. I use weekly data from January 1962 through November 2001.

5 U.S. Congress, CBO (2002). Also see the CBO website, www.cbo.gov.

6 See the discussion in Reinhart and Sack (2000).

7 Temple-Raston and Weisman (2001).

8 See Reynolds (1922b), pp. 74–75.

9 Bordo and Schwartz (2000).

10 From “A plan for a modified central bank,” quoted in Warburg (1930a), p. 23.

11 See Warburg (1930a), p. 17, and Agger (1922), p. 209.

12 See West (1977), p. 185.

13 According to Reynolds (1922b), one of the key goals of the Federal Reserve System in its first two years was “to endeavor to regulate the interest rates and equalize the demand for money by the purchase of bills and acceptances in the open market” (Reynolds, 1922b, pp. 74–75, italics added). Note that in this quote the term “bills” clearly refers to bills of exchange, as Treasury bills were not introduced until 1929.

14 West (1977), pp. 185–186, also notes Benjamin Strong’s efforts at the Federal Reserve Bank of New York to create an open market in commercial paper.

15 See, for example, Bagehot (1873), chapter 4.

16 Warburg (1930b), p. 172.

17 Warburg (1930b), p. 169.

18 In comparing tables 2 and 3, note that table 2 gives cumulative purchases over the year, while table 3 gives point-in-time asset stocks.

19 The Reserve Banks did agree to take a $50 million issue of 90-day certificates of indebtedness. (See West, 1977, p. 187.)

20 See Reynolds (1922a), p. 191, and West (1977), pp. 187–188.

21 See West (1977), p. 188.

22 See West (1977), p. 192.

23 See the discussion in West (1977), pp. 195–201.

24 While the System moved away from the real bills doctrine during the early 1920s, the ideal of a self-regulating monetary policy has received renewed attention in recent years. Most notably, Sargent and Wallace (1982) formalize the notion of an “elastic currency” (in the terminology of the Federal Reserve Act of 1913). They show how a theoretical version of the real bills doctrine can allow both the quantity of money and the price level to respond optimally to fluctuations in real economic activity.

25 “From an estimated maximum of around $1 billion in acceptances outstanding at the height of their use [prior to the recession], the volume dropped to around $400 million in 1923. Much of this drop, of course, was due to the business recession” (Groseclose, 1965).

26 See Anderson (1965).

27 See the discussion in West (1977), p. 191.

28 See Anderson (1965), p. 144.

29 See Anderson (1965), p. 51.

30 Anderson (1965), p. 61.

31 For example, Anderson’s (1965) extensive discussion of the debates over Federal Reserve open market policy in the 1930s focuses exclusively on government securities.

32 Bannon (1953).

33 Transcript of the FOMC meeting of March 26–27, 1984, available on the Board of Governors of the Federal Reserve System website at www.federalreserve.gov/fomc/transcripts/transcripts_1984.htm.
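The forecasting model described in note 4 can be sketched in code. This is an illustrative reconstruction, not the author's program: it fits a twelfth-order autoregression by ordinary least squares to the change in log M0 and iterates it forward. The M0 series below is synthetic, standing in for the weekly 1962–2001 data, which are not reproduced here.

```python
import numpy as np

# Sketch of note 4's model: an AR(12) in the first difference of log M0.
# Synthetic weekly log-growth series (AR(1) with drift) replaces the
# actual M0 data used in the article.
rng = np.random.default_rng(0)
n = 2080                      # roughly 40 years of weekly observations
eps = rng.normal(0.0, 0.002, n)
dlog = np.empty(n)
dlog[0] = 0.0004
for t in range(1, n):
    dlog[t] = 0.0004 + 0.5 * dlog[t - 1] + eps[t]
log_m0 = 5.0 + np.cumsum(dlog)

def fit_ar(y, p):
    """OLS estimates (intercept, phi_1..phi_p) of an AR(p) in y."""
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k:len(y) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta

def forecast(y, beta, p, h):
    """Iterate the fitted AR(p) h steps past the end of y."""
    hist = list(y[-p:])
    out = []
    for _ in range(h):
        nxt = beta[0] + sum(beta[k] * hist[-k] for k in range(1, p + 1))
        hist.append(nxt)
        out.append(nxt)
    return np.array(out)

p = 12
dy = np.diff(log_m0)                     # change in log M0
beta = fit_ar(dy, p)                     # intercept plus 12 lag coefficients
growth_path = forecast(dy, beta, p, 52)  # one year of weekly growth forecasts
print(len(growth_path))                  # prints 52
```

With real data one would typically use a packaged estimator instead of hand-rolled OLS, but the hand-rolled version makes the lag structure of the autoregression explicit.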


REFERENCES

Agger, E. E., 1922, “The development of an open market for commercial paper,” in The Federal Reserve System—Its Purpose and Work, A. D. Welton and C. H. Crennan (eds.), Annals of the American Academy of Political and Social Science, Vol. 99, January.

Anderson, Clay, 1965, A Half-Century of Federal Reserve Policymaking, Philadelphia: Federal Reserve Bank of Philadelphia.

Bagehot, Walter, 1873, Lombard Street, London: John Murray.

Bannon, Richard J., 1953, “History of the weekly combined statement of the twelve Federal Reserve Banks,” Catholic University of America, doctoral dissertation, May.

Bordo, Michael D., and Anna J. Schwartz, 2000, “The performance and stability of banking systems under ‘self-regulation’: Theory and evidence,” The Cato Journal, Vol. 14, No. 3.

Broaddus, J. Alfred, and Marvin Goodfriend, 2001, “What assets should the Federal Reserve buy?,” Federal Reserve Bank of Richmond, Economic Quarterly, Vol. 87, No. 1, Winter, pp. 7–22.

Groseclose, Elgin, 1965, Fifty Years of Managed Money, London: Macmillan and Company, Ltd.

Hawtrey, R. G., 1933, The Art of Central Banking, London: Longmans, Green and Co.

Meulendyke, Ann-Marie, 1998, U.S. Monetary Policy and Financial Markets, New York: Federal Reserve Bank of New York.

Reinhart, Vincent, and Brian Sack, 2000, “The economic consequences of disappearing government debt,” Brookings Papers on Economic Activity, Washington, DC, No. 2, pp. 163–209.

Reynolds, George M., 1922a, “Rediscount rates, bank rates and business activity,” in The Federal Reserve System—Its Purpose and Work, A. D. Welton and C. H. Crennan (eds.), Annals of the American Academy of Political and Social Science, Vol. 99, January.

__________, 1922b, “Early functioning of the Federal Reserve System,” in The Federal Reserve System—Its Purpose and Work, A. D. Welton and C. H. Crennan (eds.), Annals of the American Academy of Political and Social Science, Vol. 99, January.

Sargent, Thomas J., and Neil Wallace, 1982, “The real bills doctrine versus the quantity theory: A reconsideration,” Journal of Political Economy, Vol. 90, pp. 1212–1236.

Temple-Raston, Dina, and Jonathan Weisman, 2001, “Fed may purchase private bonds,” USA Today, September 6, p. 1A.

U.S. Congress, Congressional Budget Office, 2002, The Budget and Economic Outlook, Fiscal Years 2003–12, Washington, DC: U.S. Government Printing Office, forthcoming, January 31.

Velde, François, and Marcelo Veracierto, 2000, “Dollarization in Argentina,” Federal Reserve Bank of Chicago, Economic Perspectives, First Quarter, pp. 24–36.

Warburg, Paul M., 1930a, The Federal Reserve System: Its Origin and Growth, Vol. I, New York: The Macmillan Company.

__________, 1930b, The Federal Reserve System: Its Origin and Growth, Vol. II: Addresses and Essays, 1907–1924, New York: The Macmillan Company.

Welton, A. D., and C. H. Crennan, 1922, “The integrity of the Federal Reserve System,” in The Federal Reserve System—Its Purpose and Work, A. D. Welton and C. H. Crennan (eds.), Annals of the American Academy of Political and Social Science, Vol. 99, January.

West, Robert Craig, 1977, Banking Reform and the Federal Reserve, 1863–1923, Ithaca, NY: Cornell University Press.
