
Inflation Targeting
William Poole
This article was originally presented as a speech to Junior Achievement of Arkansas, Inc.,
Little Rock, Arkansas, February 16, 2006.
Federal Reserve Bank of St. Louis Review, May/June 2006, 88(3), pp. 155-63.

I am delighted to speak with Junior
Achievement of Arkansas. I cannot report
a story about how Junior Achievement (JA)
got me off to a good start, but I do have a
personal story—from my oldest son, Will. When
I accepted this speaking invitation, I asked Will
to reflect on his JA experience, and here is the
paragraph he sent me.
I was involved in Junior Achievement when I
was in 8th grade. Most entrepreneurial-minded
kids I knew gained their business experiences
on paper routes, painting houses or the like.
But I was drawn to JA’s concept of teaching the
basics of business and figuring out how to
mass-produce something. Little did I know
where it would lead me. In my JA group, we
assembled wooden coat pegs on boards and
painted them up nicely. I quickly learned that
building a single coat-rack widget is not so
hard, but leading a handful of people to make
50, with quality, is much harder. And that getting all of them sold for a profit is even harder
yet. I can’t say exactly which of the skills I
learned at JA helped me end up running the
Windows business at Microsoft. I was a big
dreamer back then, but even I would not have
dreamt that I would someday be leading a team
of 3,000 professionals that create software that
is used in 169 countries around the world and
powers 200,000,000 new PCs sold every year.
JA, thanks for the jump-start!

Will is a senior vice president for Microsoft’s
Windows client business. Needless to say, I am
immensely proud of him. I don’t know the list,
but I will bet that numerous other JA alumni are
in very responsible positions today.
I find computers a bit mysterious, and I know
that many think that monetary policy is even
more mysterious. Federal Reserve officials used
to delight in adding to the mystery, but today
advances in macroeconomic theory have made
clear the importance of central bank transparency
to an effective monetary policy.
Since coming to the St. Louis Fed in 1998, I
have spoken often on the subject of the predictability of Federal Reserve policy, emphasizing
that predictability enhances the effectiveness of
policy.1 Predictability has many dimensions, but
one is certainly that the market cannot predict
what the Fed is going to do without a deep understanding of what the Fed is trying to do.
The Fed has stated for many years that a key
monetary policy objective is low and stable
inflation. I believe that adding formality to that
objective can clarify what the Fed does and why.
That is my topic today.
Before proceeding, I want to emphasize that
the views I express here are mine and do not
1. See Poole (1999) for the first of a series of speeches on this topic.

William Poole is the president of the Federal Reserve Bank of St. Louis. The author appreciates comments provided by colleagues at the
Federal Reserve Bank of St. Louis. Daniel L. Thornton, vice president in the Research Division, provided special assistance. The views
expressed do not necessarily reflect official positions of the Federal Reserve System.

© 2006, The Federal Reserve Bank of St. Louis. Articles may be reprinted, reproduced, published, distributed, displayed, and transmitted in
their entirety if copyright notice, author name(s), and full citation are included. Abstracts, synopses, and other derivative works may be made
only with prior written permission of the Federal Reserve Bank of St. Louis.

necessarily reflect official positions of the Federal
Reserve System. I thank my colleagues at the
Federal Reserve Bank of St. Louis for their comments; Daniel L. Thornton, vice president in the
Research Division, provided special assistance.
I take full responsibility for errors.

THE FRAMEWORK
The Federal Open Market Committee (FOMC)
has the responsibility to determine monetary
policy. The Committee implements policy by
setting a target for the federal funds rate. Policy
predictability does not mean that the public or
the markets can successfully forecast the target
federal funds rate next week, next month, or next
year. The target rate is based on policymakers’
current information and best estimate of future
economic events; the key observation is that
incoming information may depart from the best
estimate and indicate that the target funds rate
needs to be changed to achieve policy objectives.
What we must mean by perfectly predictable is
that the public and the markets are not surprised
by the Fed’s response to the latest economic
information, understanding that the information
itself is not predictable.
Although new information creates a steady
stream of mostly minor surprises, the FOMC ought
to be clear about what it is trying to accomplish.
At present, most members of the Committee
would probably be pretty close together on how
to state the inflation goal. A benefit of greater
formality in defining the inflation goal is that
individual FOMC members would have a clearer
idea as to what the inflation objective is.
To illustrate this point, I have often said that
my preferred target rate of inflation is “zero,
properly measured.” That is, allowing as best we
can for measurement bias, which might be in the
neighborhood of half a percent per year for broad
measures of consumer prices, I favor literally zero
inflation. Given measurement bias in price
indices, I might state my goal as inflation between
0.5 and 1.5 percent as measured by the price index
for personal consumption expenditures (the PCE
price index). Others prefer a somewhat higher
rate of inflation, perhaps in the range of 1 to 2
percent as measured by the PCE price index. Still
others might favor a different target range, with a
different midpoint and/or a wider or narrower
range. If the FOMC decides to discuss inflation
targeting, all dimensions of specifying a target
will be considered carefully.
Why does precision on a target range matter?
Consider a situation in which the actual rate of
inflation is 1.5 percent. Those favoring a target
range of 1 to 2 percent would say that the policy
stance is just right; inflation is in the exact center
of the target range. I, given my preferred target
range, would argue for a somewhat more restrictive stance, to move the inflation rate down toward
the center of my preferred range. The difference
between these two target ranges is small, and yet
that difference might be enough to call for a somewhat different policy stance.
Obviously, the Fed cannot simultaneously
pursue two different inflation goals, and therefore
there is every reason for the Committee to agree
on a common objective. An agreed-upon common
objective is much more important than the small
difference between my own preferred objective
and the range of objectives I believe are favored
by others.
If the FOMC were to decide on a common
objective, then the Committee could communicate
it to the general public. Discussion of the formal,
numeric objective and what it means would help
markets to better understand monetary policy and
would make policy more predictable. However,
many details matter and an inflation target will
not be a source of increased clarity unless the
details are specified appropriately. So, let’s talk
about those important details. To simplify the
language, I’ll refer to a publicly announced, specific numerical target range for inflation as a
“formal” inflation target or objective.

WHAT IS INFLATION?
If the FOMC is going to adopt a formal inflation
objective, we need to agree on what “inflation” is.
However inflation is measured, it is important to
distinguish between “high frequency” inflation,
which central banks have little control over, and
“low frequency” inflation, which central banks
can control. High-frequency inflation is the rate
of change in the price level over relatively short
time periods—months, quarters, or perhaps even
a year. Low-frequency inflation is an economywide, systemic process that is affected by past,
present, and expected future economic events.
Central banks accept responsibility for low-frequency inflation because such inflation
depends critically on past and, especially,
expected future monetary policy. When I advocate
that the Fed establish a formal inflation objective,
I am speaking of the low-frequency inflation rate.
As a practical matter, low-frequency inflation can
be thought of as the average inflation rate over a
period of a few years.
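The article treats low-frequency inflation as, roughly, a multi-year average (Figures 1A and 1B later in the article plot 3-year moving averages). A minimal sketch of that calculation, using made-up monthly data rather than any series from the article:

```python
# Minimal sketch (made-up data): approximate "low-frequency" inflation as a trailing
# 3-year (36-month) moving average of year-over-year inflation rates, in percent.
import numpy as np

def low_frequency_inflation(yoy_inflation, window_months=36):
    """Trailing moving average over full windows of monthly year-over-year readings."""
    x = np.asarray(yoy_inflation, dtype=float)
    if x.size < window_months:
        raise ValueError("need at least one full window of observations")
    kernel = np.ones(window_months) / window_months
    return np.convolve(x, kernel, mode="valid")  # averages only where a full window exists

# Example: 10 years of noisy monthly readings around a 2 percent trend
rng = np.random.default_rng(0)
yoy = 2.0 + rng.normal(0.0, 0.8, size=120)
print(low_frequency_inflation(yoy)[-1])  # latest smoothed, low-frequency reading
```

Short-lived swings largely wash out of such an average, which is the sense in which a central bank can be held responsible for the low-frequency component of inflation rather than for month-to-month movements.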

SETTING THE TARGET RATE OF INFLATION
The Employment Act of 1946 sets objectives
for monetary policy—indeed, objectives for all
economic policy.2 The Act declares that it is the
“responsibility of the Federal Government...to
promote maximum employment, production, and
purchasing power.” These objectives are reflected
in the FOMC’s twin objectives of “price stability”
and “maximum sustainable economic growth.”
Although useful, these phrases are somewhat
vague. For example, in the late 1970s and early
1980s, the Fed pursued the goal of price stability
by reducing inflation from double-digit rates; from
the mid-1980s into the early 1990s, the goal was
to bring inflation down from the 4 percent neighborhood. Over the past decade or so, the goal has
come to mean keeping the inflation rate low.
But what inflation rate constitutes price stability? Rather than a numerical definition, former
Chairman Greenspan preferred a conceptual
definition, suggesting that “price stability is best
thought of as an environment in which inflation
is so low and stable over time that it does not
materially enter into the decisions of households
and firms.”3 But does Greenspan’s definition
require zero inflation?
2. See Santoni (1986) for a discussion of the creation of the Act.

Because measuring the price level is a daunting
task, zero true inflation and zero measured inflation may differ. Prices of individual goods and
services change over time, but if some prices are
falling and others are rising, then the average of
all prices, or the price level, can remain constant.
Nevertheless, defining a price index when prices
are changing at different rates involves measurement issues that are complicated at both conceptual and practical levels. For a variety of technical
reasons that I won’t discuss, the best we can do
is to approximate the theoretical construct of the
price level. Experts believe that price indices, such
as the consumer price index (CPI) and the PCE
price index, have an upward bias. That is, if the
price level were truly unchanged, the price index
would show a low rate of inflation.
When asked during the July 1996 FOMC meeting what level of inflation does not cause distortions to economic decisionmaking, Chairman
Greenspan responded, “zero, if inflation is properly measured.”4 Greenspan’s view that the theoretically correct definition of price stability is zero
inflation stems from his belief that economic
growth is maximized when the price level is
unchanged on average over time.5 While I believe
that there is a virtual consensus that the economy
functions best when the theoretically correct measure of inflation is “low,” not everyone agrees with
Greenspan that true price stability—a zero rate
of inflation properly measured—is the best target
for the Fed. For a variety of reasons, some economists believe that the economy functions best
when inflation correctly measured is “low” but
not zero.
While the goal of price stability is specified in
both the Federal Reserve Act and the Employment
Act of 1946, some suggest that the FOMC lacks
the authority to establish a numerical inflation
objective. They claim that only Congress has this
authority. That Congress has the power to establish

3. Greenspan (2002, p. 6).

4. Transcript of the FOMC meeting held on July 2-3, 1996, p. 51.

5. For completeness, I note that Friedman (1969) argued that the optimal rate of inflation was negative. Specifically, he suggested that economic welfare was maximized when the nominal interest rate was zero. This requires that the inflation rate equal the negative of the real interest rate.
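One way to make the logic of footnote 5 explicit, using the Fisher relation (an illustrative sketch; here $i$ is the nominal interest rate, $r$ the real interest rate, and $\pi$ the expected inflation rate):

$$ i = r + \pi \qquad\Longrightarrow\qquad i = 0 \;\Longleftrightarrow\; \pi = -r, $$

which is why a zero nominal interest rate, Friedman's welfare-maximizing benchmark, implies mild deflation whenever the real interest rate is positive.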

the goals of economic policy is indisputable;
however, it does not follow that the FOMC does
not have the authority to adopt a formal inflation
objective as part of implementing its broad congressional mandate. It is common practice for
Congress to establish objectives and guidelines
and leave it up to the agency responsible for
meeting those objectives to fill in the details.
The real question is this: Should the FOMC
announce what its inflation objective is? Answering this question is simple in principle. If announcing a specific, numerical inflation objective
enhances the efficacy of monetary policy, then the
answer is yes. If doing so reduces the efficacy of
monetary policy, the answer is no. I believe the
answer is yes for a variety of reasons.

THE CASE FOR AN INFLATION TARGET
I have already pointed out that a formal
inflation goal should improve the coherence of
internal Fed deliberations by focusing attention
on how to achieve an agreed goal rather than on
the goal itself. Adopting and achieving a formal
inflation objective should reduce risks for individuals and businesses when making long-term
decisions.
Because the benefits of price stability are
indirect and diffuse, they are difficult to quantify.
One area where the benefits of price stability are
most apparent is the long-term bond market. It is
not surprising that the 10-year Treasury bond
yield has generally drifted down with actual and
expected inflation since the late 1970s. The reduction in long-term bond yields reflects market
participants’ expectations of lower inflation and
their increased confidence about the long-term
inflation rate. Moreover, the volatility of the market’s expected rate of inflation, measured by the
spread between nominal and inflation-indexed
10-year Treasury bond yields, has trended down
since the late 1990s, suggesting an increased
confidence in the Fed’s resolve to keep inflation
low. I anticipate that the adoption of a formal
inflation objective would result in some, probably
modest, further reduction in the level and variability of nominal long-term bond yields.
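The yield spread referred to here is often called the breakeven inflation rate. As a rough approximation (abstracting from risk and liquidity premia),

$$ \pi^{e}_{10\text{-yr}} \;\approx\; y^{\,\text{nominal}}_{10\text{-yr}} \;-\; y^{\,\text{indexed}}_{10\text{-yr}}, $$

so a narrower and more stable spread is read as firmer, better-anchored inflation expectations.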
Adopting a formal inflation objective, and
success in achieving that objective, will also
enhance policymakers’ ability to pursue other
policy objectives, such as conducting countercyclical monetary policy. I suspect that some of
those who oppose a specific inflation objective are
concerned that doing so will cause policymakers
to become what Mervyn King, Governor of the
Bank of England, has colorfully termed “inflation
nutters.” King (1997) is referring to policymakers
who aim to stabilize inflation, whatever the costs.
I believe that just the opposite has happened.
The debate is fundamentally about the relationship between the low-inflation objective and
the high-employment objective. Even before
British economist A.W. Phillips published
research in 1958 that gave rise to what quickly
came to be called the Phillips curve, many economists believed that there was a negative relationship between inflation and unemployment—i.e.,
lower inflation resulted in higher unemployment.
Some preferred to think of causation as going the
other way around—that higher unemployment
resulted in lower inflation.
The inflation-unemployment trade-off was
thought to be permanent. Society could have a
permanently lower average unemployment rate
by accepting a higher average rate of inflation. In
the late 1960s, Milton Friedman (1968) and
Edmund Phelps (1967) challenged the idea of a
permanent trade-off by making the theoretical
argument that the Phillips curve must be vertical
in the long run in a world where economic agents
are rational. Subsequent evidence confirmed the
Friedman-Phelps view, and few economists today
believe that there is any long-run trade-off.
A vertical long-run Phillips curve does not
imply that one long-run inflation rate is as good as
any other. Rather, the dynamics of the Friedman-Phelps theory imply that inflation would accelerate continuously were policymakers to pursue
a policy of keeping the unemployment rate permanently below its natural, or equilibrium, rate.
This equilibrium rate came to be called the
NAIRU—the nonaccelerating inflation rate of
unemployment.
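One textbook way to write the Friedman-Phelps argument (a stylized sketch in notation not used in the article): with an expectations-augmented Phillips curve and adaptive expectations,

$$ \pi_t = \pi_t^{e} - \beta\,(u_t - u^{N}), \qquad \pi_t^{e} = \pi_{t-1}, \qquad \beta > 0, $$

so holding unemployment at some $\bar{u} < u^{N}$ gives $\pi_t = \pi_{t-1} + \beta\,(u^{N} - \bar{u})$: inflation ratchets up period after period rather than settling at a permanently higher level.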

The Friedman-Phelps theory demonstrates
why a policy of keeping the unemployment rate
permanently below its natural rate is futile. It does
not tell us the inflation rate that maximizes social
welfare, which I will call the optimal inflation
rate. Economic theory demonstrates why inflation
is costly, and worldwide experience demonstrates
that “high” inflation and “slow” economic growth
appear to be inexorably linked. Everyone acknowledges that, beyond some rate, inflation reduces
economic growth. The goals of price stability and
maximum sustainable economic growth are not
substitutes, as implied by the original Phillips
curve, but complements. Monetary policymakers
can make their greatest contribution to achieving
maximum sustainable economic growth by achieving and maintaining low and stable inflation.
That inflation and economic growth are
complements does not imply that policymakers
should not engage in countercyclical monetary
policy when circumstances warrant. For example,
with inflation well contained at the end of the
long 1990s expansion, the FOMC began reducing
its target for the federal funds rate in January 2001,
somewhat in advance of the onset of the 2001
recession. The funds rate target was reduced still
further in 2002 and 2003 as incoming data revealed
that the economy was responding somewhat more
slowly than expected and that actual and expected
inflation remained well contained. The funds rate
target was eventually reduced to 1 percent and
remained there for slightly more than a year.
Those who suggest that adopting a formal
inflation objective will cause policymakers to
become inflation nutters and, somehow, limit the
Fed’s ability to pursue other policy objectives
should examine actual experience. Not only did
the Fed’s commitment to price stability not prevent
it from engaging in countercyclical monetary
policy—it facilitated it.6 Such an aggressive countercyclical monetary policy as pursued starting
in early 2001 would have been unthinkable were
it not for the credibility established
over the years since Paul Volcker dramatically
altered the course of monetary policy in October
1979.7
6. For evidence on how inflation interfered with countercyclical policy in the past, see Poole (2002).


I believe that having a formal inflation objective will further enhance the Fed’s credibility and,
consequently, its ability to engage in countercyclical monetary policy. The reason is simple.
The more open and precise the Fed is about its
long-run inflation objective, the more confident
the public will be that the Fed will meet that
objective. The objective, and the accompanying
obligation to explain situations in which the
objective is not achieved, should increase the
Fed’s credibility.
Because it will be much easier for the public
to determine whether the FOMC is pursuing its
inflation objective if that objective is known with
precision, adopting a formal objective for inflation also will enhance the Fed’s accountability.
Having a formal objective makes the Congress’s
and the public’s job easier, thereby enhancing
accountability. If the FOMC misses its inflation
objective, it will have to say why the objective
was missed. By the same token, the FOMC will
have to explain why it failed to respond to a particular event when inflation appeared to be well contained within the objective. In essence, having
a specific inflation objective will help the public
better understand what I have elsewhere called
“the Fed’s monetary policy rule.”8

SPECIFYING THE TARGET
That there are differences of opinion about
the optimal inflation rate is not a reason for having
a fuzzy objective. If there are important differences
of opinion within the FOMC on the appropriate
target, which I doubt, the Committee ought to
resolve those differences and not permit them to
be a source of uncertainty.
Because the target should apply to low-frequency inflation, the target needs to be stated
in terms of either a range or a point target with
an understood range of fluctuation around the
point target. The choice is more a matter of the
most effective way of communicating the target
and what it means than a matter of substance.
7. For those interested in understanding the issues that led up to and succeeded this event, see Federal Reserve Bank of St. Louis (2005).

8. Poole (2006).


Figure 1A
CPI and Core CPI 3-Year Moving Averages
[Chart, 1960-2005, in percent. Series: CPI for All Urban Consumers: All Items; CPI for All Urban Consumers: All Items Less Food & Energy; Difference.]

A specific target range, such as 1 to 2 percent
annual change in a particular price index, has the
advantage of focusing attention on low-frequency
inflation. Even here, there could be special circumstances, which the Fed should explain should
they occur, that would justify departure from the
target. The way the range is expressed interacts
with the period over which inflation is averaged.
A narrower range would be appropriate for a
target expressed as a three-year average than for
a year-over-year target.
To understand what such a target means,
suppose states were to abolish sales taxes and
raise income taxes to offset the revenue loss. The
effect of this change in tax structure would be to
reduce measured prices. Such a tax change would
be a one-time effect—the price level would change
when the new tax law took effect but there would
not be continuing pressure over time tending to
lower prices. Suppose the one-shot price level
change took measured inflation outside the target
range. With a formal inflation target, the FOMC
would have the responsibility of explaining why

a monetary policy response to this target miss
would be unnecessary and perhaps harmful.
A formal inflation target needs to refer to a
particular price index. That there is no price index
that adequately reflects the economy’s true rate
of inflation is yet another reason given for not
adopting a specific inflation objective. My own
judgment is that the PCE price index measures
consumer prices reasonably well and has some
advantages, which can be explained, over the CPI.
Moreover, the FOMC could reasonably maintain
a rate of increase in this index in a range of, say,
1 to 2 percent, on a two-year moving average basis
under most circumstances.
Over time, refinements in the price index or
introduction of better indices may lead to substitution of another index for the PCE index or justify
a change in the target range. The FOMC would
then have to explain why it was adjusting the
objective or index used to evaluate the objective.
The formal target provides a valuable vehicle for
explaining an important issue in the conduct of
monetary policy. Experience with inflation targeting


Figure 1B
PCE and Core PCE 3-Year Moving Averages
[Chart, 1960-2005, in percent. Series: PCE: Chain-type Price Index; PCE Less Food and Energy: Chain-type Price Index; Difference.]

in industrial economies suggests that issues
of this sort have not been important. The markets
are already well informed about such issues—
and are becoming increasingly so. Conducting
this conversation with the markets will improve
the clarity of monetary policy and therefore its
effectiveness.
Over the past decade or so the Fed has gravitated to the position of placing primary emphasis
on the core rate of inflation, as measured by the
PCE price index excluding food and energy. The
reason is not that food and energy are unimportant—these are obviously two very important
categories of goods. Rather, experience indicates
that food and energy prices are subject to large
short-run disturbances that are beyond the ability
of monetary policy to control without policy
responses having adverse consequences for general
economic stability. If we examine total and core
price inflation over three years, say, most experience is that the averages are quite close. That is,
food and energy prices display substantial short-run variability that yields large changes in the

short-run rate of inflation in overall price indices
without affecting longer-run inflation. (See the
charts in Figure 1, which track the CPI and PCE
indices from 1960 through 2005.)

HOW MUCH DIFFERENCE WOULD A FORMAL INFLATION TARGET MAKE?
There is a large and growing literature comparing the performance of inflation-targeting
countries with their non-inflation-targeting
counterparts, especially the United States. This
literature finds few statistically significant differences between countries that have established
inflation targets and those that have not. This
finding has led some analysts to argue, “if it isn’t
broke, don’t fix it.” There are a number of reasons
why such findings are not too surprising: The
benefits from price stability are diffuse and difficult to measure; the industrialized economies
are highly interconnected, so that some of the

benefits to countries that have inflation targets
spill over to those that do not; the growth rate
effect is small, so it will take a long time before
one can distinguish a statistically significant
growth-rate effect. Finally, many of the countries
that adopted an inflation target had a history of
inflation. Adopting a target was a manifestation
of a societal commitment to bring down and keep
down the rate of inflation.
Given that the United States pursued a successful anti-inflation policy after 1979 without a
formal target, and established a high degree of
monetary credibility, there is no reason to expect
to observe measurable effects from adopting a
target now. Nevertheless, I cannot help reflecting
on other cases in which low inflation prevailed
but did not last. Consider U.S. policy errors of
the type that occurred in the mid-to-late 1920s
and in Japan in the late 1980s. In both of these
instances, policymakers failed to respond to deflation. I believe that a formal inflation target would
have focused attention on the policy mistake
leading to deflation and would have increased
public pressure on the central banks to respond
more forcefully.
Similarly, the Fed failed to tighten policy
appropriately in the late 1960s as inflation began
its ascent. In the early 1960s, as today, the Fed
enjoyed a high degree of market confidence and
inflation expectations were low. At that time,
only a small minority of economists thought that
monetary policy was “broken” in any important
way, and thus the case for “fixing it” was minimal.
Would a formal inflation target in 1960 have been
an ironclad guarantee that the Great Inflation
would never have happened? Surely not. Would it
have helped? I believe that the answer is surely yes.

CONCLUDING REMARKS
Inflation targeting is an approach to monetary
policy adopted by many countries, in most cases
in the context of a societal effort to address undesirably high inflation. The United States, fortunately, is not dealing with an inflation problem
at this time. The case for adopting an inflation
target is that it should help to avoid inflation in
the future and should increase the effectiveness
of monetary policy in a low-inflation environment.

The increase in policy effectiveness should
arise from two consequences of a formal system
of inflation targeting. The first consequence is that
the market will likely hold inflation expectations
more firmly. The second, and probably more
important, consequence is that the inflation-targeting framework provides a vehicle, or structure, within which the FOMC can better explain
its monetary policy actions and the policy risks
it must face. Inflation targeting should increase
accountability not so much by keeping score of
target hits and misses but rather by encouraging
a much deeper understanding of how monetary
policy decisions are made. That understanding
depends on continuing FOMC communications
with the markets and the public and FOMC willingness to listen as well as talk.

REFERENCES
Federal Reserve Bank of St. Louis. “Reflections on
Monetary Policy 25 Years After October 1979:
Proceedings of a Special Conference.” Federal
Reserve Bank of St. Louis Review, March/April
2005, 87(2, Part 2).
Friedman, Milton. “The Role of Monetary Policy.”
American Economic Review, March 1968, 58(1),
pp. 1-17.
Friedman, Milton. “The Optimum Quantity of
Money,” in The Optimum Quantity of Money and
Other Essays. Chicago: Aldine Publishing, 1969,
pp. 1-50.
Greenspan, Alan. Chairman’s Remarks. Federal
Reserve Bank of St. Louis Review, July/August
2002, 84(4), pp. 5-6.
King, Mervyn. “Changes in U.K. Monetary Policy:
Rules and Discretion in Practice.” Journal of
Monetary Economics, June 1997, 39(1), pp. 81-97.
Phelps, Edmund S. “Phillips Curves, Expectations of
Inflation and Optimal Employment over Time.”
Economica, August 1967, 34(3), pp. 245-81.
Phillips, A.W. “The Relation between Unemployment
and the Rate of Change of Money Wage Rates in
the United Kingdom, 1861-1957.” Economica,
November 1958, 25(100), pp. 283-99.
Poole, William. “Synching, Not Sinking, the Markets.”
Speech prepared for the meeting of the Philadelphia
Council for Business Economics, Federal Reserve
Bank of Philadelphia, Philadelphia, August 6, 1999;
www.stlouisfed.org/news/speeches/1999/08_06_99.html.
Poole, William. “Inflation, Recession and Fed Policy.”
Speech prepared for the Midwest Economic
Education Conference, St. Louis, April 11, 2002;
www.stlouisfed.org/news/speeches/2002/04_11_02.html.
Poole, William. “The Fed’s Monetary Policy Rule.”
Federal Reserve Bank of St. Louis Review, January/
February 2006, 88(1), pp. 1-11; originally presented
as a speech at the Cato Institute, Washington, DC,
October 14, 2005.
Santoni, G.J. “The Employment Act of 1946: Some
History Notes.” Federal Reserve Bank of St. Louis
Review, November 1986, 68(9), pp. 5-16.


The Geography, Economics, and Politics of
Lottery Adoption
Cletus C. Coughlin, Thomas A. Garrett, and Rubén Hernández-Murillo
Since New Hampshire introduced the first modern state-sponsored lottery in 1964, 41 other states
plus the District of Columbia have adopted lotteries. Lottery ticket sales in the United States topped
$48 billion in 2004, with state governments reaping nearly $14 billion in net lottery revenue. In
this paper the authors attempt to answer the question of why some states have adopted lotteries
and others have not. First, they establish a framework for analyzing the determination of public
policies that highlights the roles of individual voters, interest groups, and politicians within a
state as well as the influence of policies in neighboring states. The authors then introduce some
general explanations for the adoption of a new tax that stress the role of economic development,
fiscal health, election cycles, political parties, and geography. Next, because the lottery adoption
decision is more than simply a tax decision, a number of factors specific to this decision are
identified. State income, lottery adoption by neighboring states, the timing of elections, and the
role of organized interest groups, especially the opposition of certain religious organizations, are
significant factors explaining lottery adoption.
Federal Reserve Bank of St. Louis Review, May/June 2006, 88(3), pp. 165-80.

Lotteries have had a turbulent history in
the United States.1 In early America,
lotteries were used by all 13 colonies to
finance improvements in infrastructure,
such as bridges and roads. Both during and after
the Revolutionary War, lotteries were used to provide support for the military (e.g., the Continental
Army), public projects, and the financing of private universities, such as Harvard. These early
lotteries were closer to a raffle than to the modern
concept of a lottery. Private lotteries began operating in the mid-1800s, with many of these lotteries
operating through the mail system. As a result
of corruption and a growing public distrust of
lotteries, the federal government prohibited all
interstate lottery commerce in the early 1890s.
1. See Clotfelter and Cook (1989 and 1990) for an extensive history of state lotteries.

As a result of this federal prohibition and growing public distrust, the majority of states enacted
explicit constitutional prohibitions against lotteries of any form. By 1894 no state allowed the
operation of a lottery.2
Lotteries remained illegal in the United States
for almost 70 years. In the early 1960s, however,
New Hampshire had a lottery referendum that
allowed the citizens of New Hampshire to vote
for or against a state-sponsored lottery. Not only
was New Hampshire the first state to propose the
legalization of lottery gambling after 70 years of
nationwide prohibition, it was the first modern
attempt at state-run gambling. The voters of New
Hampshire decided in favor of a lottery, with 76
percent of public votes in favor of adoption. In
1964, New Hampshire became the first state to
2. See Blanch (1949).

Cletus C. Coughlin is deputy director of research, Thomas A. Garrett is a research officer, and Rubén Hernández-Murillo is a senior economist
at the Federal Reserve Bank of St. Louis. Lesli Ott provided research assistance.

© 2006, The Federal Reserve Bank of St. Louis. Articles may be reprinted, reproduced, published, distributed, displayed, and transmitted in
their entirety if copyright notice, author name(s), and full citation are included. Abstracts, synopses, and other derivative works may be made
only with prior written permission of the Federal Reserve Bank of St. Louis.


offer a lottery. Since that time, 41 additional
states and the District of Columbia have adopted
state-sponsored lotteries.3 North Carolina adopted
a lottery in the summer of 2005, leaving only
Alabama, Alaska, Arkansas, Hawaii, Mississippi,
Nevada, Utah, and Wyoming without a lottery.
Coinciding with the more frequent use of lotteries has been a rise in both lottery purchases and
the importance of lottery revenue as a percentage
of state revenue. Lottery ticket sales in 2004
topped $48 billion, or about 0.5 percent of total
national income.4 Of this $48 billion in sales,
states received nearly $14 billion in net lottery
revenue (i.e., revenue available to state governments after the deduction of prizes, commissions,
and administration costs).5 In terms of national
per capita spending, lottery sales amounted to
roughly $166 per person in 2004. Net lottery
revenue as a share of total state government revenue rose from 0.35 percent in 1980 to 1.22 percent in 2002.
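A rough consistency check on these figures (illustrative arithmetic only; the roughly 290 million U.S. population figure for 2004 is not from the article):

$$ \frac{\$48\ \text{billion}}{290\ \text{million persons}} \approx \$166\ \text{per person}, \qquad \frac{\$14\ \text{billion}}{\$48\ \text{billion}} \approx 0.29, $$

the latter consistent with footnote 5's statement that states retain about 30 percent of sales after prizes, commissions, and administrative costs.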
The spread of state lotteries coincides with
changing attitudes toward legalized gambling,
growing state and local government expenditures,
and growing public opposition to both new taxes
and increased rates for existing taxes (Fisher,
1996). Arguably, lotteries are a more politically
attractive means of generating additional revenue
than increasing rates on existing tax bases.
Although this premise may explain the initial
interest in modern lotteries, it fails to adequately
explain the uneven rate of lottery adoption over
3. Many states have entered into multistate lottery games, such as PowerBall. Multistate games pool ticket revenue from participating states to offer much larger jackpots (at more remote odds of winning) than single-state lottery games. Since 1964, states’ participation in multistate games has increased. One likely reason for this increased participation is to maintain players’ excitement about lottery games in the face of increased competition from casino gaming. See Hansen (2004) for a description of the various types of lottery games, including multistate lottery games.

4. Several states, such as Delaware and West Virginia, operate video lottery terminals at pari-mutuel racetracks. These venues are similar to casinos and generate hundreds of millions of dollars annually. Clearly, this form of lottery differs from the traditional scratch-off or numbers game lottery. Sales figures presented here include both traditional lottery sales and video lottery sales.

5. On average, states allocate 50 percent of sales to lottery prizes and 20 percent to administrative costs and retailer commissions. The remaining 30 percent is retained by the state. Hansen (2004) notes, however, that the shares for prizes and administrative costs vary by state.


the past 40 years (see Table 1). For example,
between 1964 and 1975, 14 states adopted lotteries. No states adopted lotteries in the late 1970s,
18 states adopted lotteries in the 1980s, and 6
states adopted a lottery in the 1990s.
In this paper, we focus on the question of why
most states have adopted lotteries and why some
states have yet to adopt a lottery. The growth in
government and relaxed moral views of gambling
may be a partial answer to this question, but these
reasons are too broad, as they ignore the political
and economic realities of public policy formulation. We review the literature on lottery adoption
and, more importantly, public policy adoption in
general to understand which factors drive policy
formation. As the title of our paper suggests, lottery
adoption is the result of geographic, economic,
and political factors.

THE POLITICAL ECONOMY OF PUBLIC POLICIES
Public policy decisions result from the interaction of various factors. Before examining the
factors that play a role in lottery adoption, we
provide a framework for analyzing public policy
decisions in general. Figure 1 highlights the primary actors and the legislative decision process
in the democratic determination of a public policy.6 Similar to the demand and supply for a good,
there also exists a demand side and a supply side
for legislation, in this case the adoption of a state
lottery. On the demand side, one starts with the
opinions that individuals possess concerning the
adoption of a lottery (see box A in Figure 1). The
opinion of an individual is likely to be related to
numerous considerations, such as income, education, age, potential impact of the legislation, and
moral values.
A common feature of any political decision
in the United States is that interest groups are
involved. Because of the intensity of opinions on
an issue and the importance (economic and otherwise) of the issue, interest groups form and attempt
6. Our framework is based on ideas presented by Rodrik (1995) in the context of the political economy of trade policy.


Table 1
State Lottery Adoption

State                   Start date            Method of approval
Arizona                 July 1, 1981          Initiative
California              October 3, 1985       Initiative
Colorado                January 24, 1983      Initiative
Connecticut             February 15, 1972     Legislation
Delaware                October 31, 1975      Legislation
District of Columbia    August 22, 1982       Initiative
Florida                 January 12, 1988      Referendum
Georgia                 June 29, 1993         Referendum
Idaho                   July 19, 1989         Referendum
Illinois                July 30, 1974         Legislation
Indiana                 October 13, 1989      Referendum
Iowa                    August 22, 1985       Legislation
Kansas                  November 12, 1987     Referendum
Kentucky                April 4, 1989         Referendum
Louisiana               September 6, 1991     Referendum
Maine                   June 27, 1974         Referendum
Maryland                May 15, 1973          Referendum
Massachusetts           March 22, 1972        Legislation
Michigan                November 13, 1972     Referendum
Minnesota               April 17, 1990        Referendum
Missouri                January 20, 1986      Referendum
Montana                 June 27, 1987         Referendum
Nebraska                September 11, 1993    Referendum
New Hampshire           March 12, 1964        Legislation
New Jersey              December 16, 1970     Referendum
New Mexico              April 27, 1996        Legislation
New York                June 1, 1967          Referendum
North Carolina          March 30, 2006        Legislation
North Dakota            March 25, 2004        Referendum
Ohio                    August 13, 1974       Legislation
Oklahoma                October 12, 2005      Referendum
Oregon                  April 25, 1985        Initiative
Pennsylvania            March 7, 1972         Legislation
Rhode Island            May 18, 1974          Referendum
South Carolina          January 7, 2002       Referendum
South Dakota            September 30, 1987    Referendum
Tennessee               January 20, 2004      Referendum
Texas                   May 29, 1992          Referendum
Vermont                 February 14, 1978     Referendum
Virginia                September 20, 1988    Referendum
Washington              November 15, 1982     Legislation
West Virginia           January 9, 1986       Referendum
Wisconsin               September 18, 1988    Referendum

SOURCE: Hansen (2004); North Carolina and Oklahoma information from news reports.


Figure 1
The Determination of a Public Policy: Lottery Adoption
[Diagram. "Demand side": Individual Opinions (A) combined with Interest Groups (B). "Supply side": Policymaker Opinions (C) combined with the Institutional Structure of Government (D). Together with the Policies of Other States (E), these determine the Public Policy: Lottery Adoption/Rejection (F).]

to influence the political decision (see box B in
Figure 1). Through lobbying and contributions,
interest groups attempt to affect the positions of
representatives voting on the legislation.7 In addition, they attempt to increase popular support for
their position as well. Thus, individual opinions
and interest groups determine the demand side
of the market for a public policy.8
On the supply side, one starts with the opinions of policymakers (see box C in Figure 1). These
7. McCormick and Tollison (1981) model an interest group economy using supply and demand analysis. Becker (1983) presents a theory of public policy formation that results from competition among special interest groups.

8. In Figure 1, we separate the demand side from the supply side for clarity in presentation. Because interest groups attempt to influence policymakers directly, we could have drawn a line from box B to box C. For illustrative purposes, one can think of such influence as playing itself out through the interaction of demand and supply.


policymakers include the legislators and those
in the executive branch who can affect the legislation. Because the majority of these policymakers
are elected representatives who, in many cases,
wish to be re-elected, it is reasonable to anticipate
that their positions will reflect to some degree the
preferences of those who have elected them. As
mentioned previously, interest groups also attempt
to influence the positions of policymakers and are
frequently involved in the drafting of legislation.9
The other consideration on the supply side
is the institutional structure of government (see
box D in Figure 1). Legislation is not simply proposed and then voted on, but rather it must work
its way through a legislative process. As a piece of
legislation is subjected to the scrutiny of legislative

9. See Bonner (2005).


committees, the legislation may well be modified. The institutional structure of government
might also affect the support for a piece of legislation by means of the trading of support that
occurs between legislators. The extent to which
a specific party is in control also might affect
political support for the adoption of a lottery.
Finally, the decisions made by policymakers
and voters in a specific state may affect the decisions of other states (see box E in Figure 1).10
Geography as well as economics may come into
play because the economic effects stemming from
a specific state’s lottery may be more pronounced
for nearby states. As a result, citizens and decisionmakers in nearby states may feel they must take
a similar action (i.e., in this case, also adopt a
lottery) as a form of self-defense. The prior adoption by another state may also provide information
on the consequences of such legislation, which
may influence the positions of individuals and
policymakers.
Ultimately, the various factors mentioned
above interact to produce a decision. In the present
case, the public policy decision is whether to adopt
or reject a state lottery (see box F in Figure 1).11
As shown in Table 1, the precise method of
approval of lotteries varies across states. Many
states use a statewide referendum as part of the
adoption process.12 A referendum is a popular
vote on an issue already approved by a legislative
body, with the final decision made by the electorate rather than by their representatives. Instead
of a referendum, some states have adopted lotteries
through the initiative process. The initiative
process enables a specified number of voters to
propose a law by petition. In the case of California,
Proposition 37 was submitted to California’s
voters, who approved the law. Finally, lottery
adoption in several states, most recently North
Carolina, simply required approval by each state’s
10. For example, Hernández-Murillo (2003) provides evidence of tax competition across states and Garrett, Wagner, and Wheelock (2005) present empirical evidence of cross-state dependence in banking deregulation.

11. Hersch and McDougall (1988) and Garrett (1999) explore legislative voting behavior on the issue of state lottery adoption.

12. Hansen (2004) notes that many states required a referendum or an initiative to remove a state constitutional ban on lotteries.


legislature and governor without a direct citizen
vote.13

THE ADOPTION OF NEW TAXES
Without question, the issues surrounding
the adoption of a tax are similar to those for the
adoption of a lottery; however, the adoption of a
lottery entails more than (and is quite different
from) simply authorizing a new tax. Lottery adoption involves legalizing a previously illegal activity, from which a state will generate revenue.
Furthermore, consumer participation in the lottery
is strictly voluntary; it is possible, then, that those
who are opposed to the lottery might not oppose
the legislation because they can decline to play
the lottery, whereas it is more complicated and
potentially illegal to decline to pay a tax.14 Also,
one’s position on lottery adoption as a means to
raise revenues might be overwhelmed by other
considerations. Therefore, our discussion of lottery adoption must go beyond an explanation of
adopting a new tax: In fact, our discussion integrates all these considerations and others after
examining some literature on the adoption of new
taxes.
Hansen (1983) develops a theory of taxation
highlighting the role of political incentives, many
of which apply to the lottery adoption process.
According to Hansen, politicians make decisions
with an eye toward retaining their positions.
Economic considerations come into play in the
adoption of new taxes by affecting the political
incentives. For example, the existence of an
economic crisis may reduce the political risks
of approving new taxes, whereas new taxes are
unlikely to gain approval if a state has a budgetary
surplus on the horizon. In the absence of crises,
separating taxpayers from their incomes/wealth
is an unwise electoral strategy. Even with a crisis,
however, Hansen stresses the importance of political

13. The initiative is similar to a referendum; however, policymakers have a more limited role in the approval of lottery adoption if the initiative process is used, compared with either a referendum or standard legislative process.

14. Arguably, paying sales and income taxes is voluntary if one chooses not to purchase consumer goods or work.


parties’ control of government for implementing tax policy. A unified government is crucial for
providing the political opportunity for adopting
a new tax.
Capitalizing on political opportunity is an
issue Berry and Berry (1992) take up. They identify
the following categories of explanations for the
adoption of new taxes, or what is often termed
as a tax innovation: (i) economic development, (ii)
state fiscal health, (iii) election cycles, (iv) political
party control, and (v) regional diffusion. The
economic development explanation suggests that
a state’s level of economic development affects
the likelihood of adopting a tax. More-developed
states are likely to have a combination of tax capacity and demand for public services that lead to tax
adoption.15 The fiscal health explanation suggests
that the existence of a fiscal crisis, such as a large
budget deficit, increases the probability of approving a new tax. The crisis reduces the political
risks for politicians of a tax innovation. Similar
reasoning is used in the election cycle explanation.
Tax increases are unpopular; therefore, elected
officials do not innovate in election years. The
party control explanation has two propositions.
First, if the party in control is a liberal party, the
adoption of a new tax is more likely. Second, a
state in which the same party controls the governorship and the legislative bodies (i.e., a unified
government) is more likely to adopt a tax than a
state with a divided government. The fifth explanation, the regional diffusion explanation, suggests
that states emulate the tax policies adopted by
others. Political scientists have stressed that prior
adoptions provide information and make the tax
increase easier to sell to constituents. In addition,
in the context of lotteries, adoption by a state puts
competitive pressures on nearby states because
some of its lottery tax revenues are due to attracting players from nearby states.
15. As described in Filer, Moak, and Uze (1988), the Advisory Commission on Intergovernmental Relations uses a “representative tax system” to calculate tax capacity. Tax capacity is the revenue that each state would raise if it applied a uniform set of rates to a common set of tax bases. The uniform set of rates is the average rates across states for each of 26 taxes. Using the same rates for every state causes potential tax revenue for states to vary only because of differences in underlying tax bases. A state’s “tax capacity index” is its per capita tax capacity divided by the average for all states.
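A minimal sketch of the representative tax system calculation described in footnote 15, using two hypothetical taxes and made-up per capita bases and rates (the actual calculation uses 26 taxes and measured average rates, and the published index may weight states differently, e.g., by population; this sketch uses a simple unweighted average):

```python
# Hypothetical illustration of a "representative tax system" tax capacity index:
# apply common (average) rates to each state's own per capita tax bases, then
# divide each state's per capita capacity by the all-state average.
states = {
    # state: per capita bases for two illustrative taxes (sales base, income base)
    "A": (12_000.0, 30_000.0),
    "B": (9_000.0, 24_000.0),
    "C": (15_000.0, 40_000.0),
}
average_rates = (0.05, 0.03)  # the same rates are applied to every state

def per_capita_capacity(bases, rates):
    """Revenue per person a state could raise at the common rates."""
    return sum(base * rate for base, rate in zip(bases, rates))

capacities = {s: per_capita_capacity(b, average_rates) for s, b in states.items()}
avg_capacity = sum(capacities.values()) / len(capacities)  # simple unweighted average

tax_capacity_index = {s: round(c / avg_capacity, 3) for s, c in capacities.items()}
print(tax_capacity_index)  # 1.0 means exactly average capacity
```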


A sixth category for explaining the adoption
of a lottery not discussed by Berry and Berry
(1992) considers the alternatives and constraints
facing policymakers and the factors that apply
specifically to the approval of a lottery. We use
the term “situational-specific determinants” to
describe this category. When faced with pressures
to increase revenues, policymakers examine a
range of possibilities that include increasing the
rates of existing taxes, expanding what is taxable,
as well as adopting and implementing new taxes.16
The ability of policymakers to increase revenues
may be limited by prior political decisions and by
a state’s economic circumstances that are beyond
economic development and fiscal health. For
example, a state that has no sales tax is likely to
face different political constraints than a state with
a sales tax. Increasing a sales tax rate from 0 to 1
percent, which requires adopting and implementing a sales tax, is likely much different from
increasing a sales tax rate from 4 to 5 percent.

FRAMEWORKS FOR ANALYZING THE LOTTERY ADOPTION DECISION
Various conceptual frameworks have been
used to model state lottery adoption, but they
all rely on rational behavior by legislators. What
differentiates the frameworks is the objective
function of the legislator. The most frequently used
framework is the legislator-support maximization
approach (see Filer, Moak, and Uze, 1988). The
position on lottery adoption that a given legislator
takes reflects an attempt to maximize re-election
prospects. Legislators recognize that increased
state spending can increase their political support
by increasing the well-being of their constituents.
Note that it is through political support that the
demand side of the determination of a public
policy is incorporated. Spending cannot be raised,
however, without some loss of support due to the
increased tax burden placed on their constituents.
This trade-off guides how the legislator votes on
specific issues. For a legislator to vote in favor of
16. Of course, cutting spending is also an option to deal with an imbalance between spending and revenues.


a specific proposal, the increase in support associated with the spending must exceed the decrease
in support associated with the taxes.
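Stated compactly (an illustrative formalization of the condition just described, in notation not used in the article): if $\Delta S_{G}$ denotes the political support gained from the spending a proposal finances and $\Delta S_{T}$ the support lost because of the taxes that pay for it, a support-maximizing legislator votes in favor only when

$$ \Delta S_{G} - \Delta S_{T} > 0. $$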
A second framework, Martin and Yandle’s
(1990) duopoly transfer mechanism approach,
views the state government as a rent-seeker and
a redistributive agent. Lotteries compete with both
legal and illegal gambling operations. A state-run
lottery provides a mechanism that allows the state
to generate some revenues that they miss by not
being able to tax illegal operations. A closely
related issue is why lotteries are organized as a
state enterprise as opposed to allowing private
firms to freely enter and provide lottery services
in a competitive environment. One answer is that
the revenues for the state from a state-run lottery
are likely to exceed the revenues that the state
would generate from allowing private firms to
enter the lottery market and then taxing the profits
of these firms. In Martin and Yandle’s (1990)
approach, the state achieves equilibrium with the
illegal operators. Moreover, lotteries provide a
way for higher-income voters to redistribute the
tax burden associated with state spending from
themselves to lower-income groups.17
A third framework, in Erekson et al. (1999),
assumes the legislator maximizes utility subject
to a constraint. The legislator reflects the median
voter because of the decisive role of this voter in
producing a majority. The legislator receives utility
from improving the state’s fiscal well-being, but
the legislator is constrained, similar to the constraint in the legislator-support maximization
approach, by his re-election desires that hinge
on the satisfaction of his constituents.
The empirical implementation of these frameworks has proceeded in two ways. Filer, Moak,
and Uze (1988), Martin and Yandle (1990), and
Davis, Filer, and Moak (1992) address the question
of whether a state has a lottery as of a specific year.
Filer, Moak, and Uze (1988) and Davis, Filer, and
Moak (1992) estimate binary choice probit models,
while Martin and Yandle (1990) use ordinary
least squares. These studies identify and then
examine statistically a number of variables that
are related to whether a state has a lottery as of a
17. Note that, despite the focus on state government, there is still a demand role played by the state’s citizens.


specific year. The other approach, using hazard
or duration models, provides evidence on which
variables increase or decrease the likelihood that
a state adopts a lottery. Using this approach, the
variable to be explained is termed the hazard rate,
which is the probability that a state without a
lottery will in fact adopt a lottery during a specific
time period, generally a calendar year.18
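A minimal sketch of an empirical annual hazard rate, with made-up adoption years rather than the data used in these studies (the cited hazard and duration models go further by relating this probability to covariates):

```python
# Empirical annual hazard: among states still without a lottery at the start of a
# year (the "risk set"), the share that adopt during that year.
from collections import Counter

adoption_year = {"A": 1972, "B": 1974, "C": 1985, "D": None, "E": 1988, "F": None}

def empirical_hazard(adoption_year, start=1964, end=1995):
    adopted = Counter(y for y in adoption_year.values() if y is not None)
    at_risk = len(adoption_year)            # every state starts in the risk set
    hazard = {}
    for year in range(start, end + 1):
        if at_risk == 0:
            break
        hazard[year] = adopted.get(year, 0) / at_risk
        at_risk -= adopted.get(year, 0)     # adopters leave the risk set
    return hazard

rates = empirical_hazard(adoption_year)
print(rates[1985])  # one adoption out of four states still at risk -> 0.25
```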

WHY SOME STATES HAVE LOTTERIES AND OTHERS DO NOT
Using the tax innovation explanations put
forth by Berry and Berry (1992) that were discussed
earlier, this section sheds light on the question of
why some states have lotteries and others do not.

Economic Development
With respect to the level of economic development, higher levels of per capita state income are
associated with (i) an increased probability that
a state has a lottery as of a specific date and (ii) a
shorter time until a state adopts a lottery.19 A
common argument rationalizing this result is
based on the finding that lotteries tend to be a
regressive form of taxation. In other words, evidence suggests that those with low incomes bear
a relatively higher lottery tax burden than those
with high incomes.20 In their report to the National
Gambling Impact Study Commission, Clotfelter
et al. (1999) provide evidence that low-income
groups spend a larger share of their incomes on the
lottery and that they also spend more in absolute terms.21 For example, those with an annual household income of less than $10,000 spent $597 on lotteries on a per capita basis and those with a household income of between $10,000 and $24,999 spent $569. This spending was substantially more than spending by those with a household income of between $25,000 and $49,999 ($382), between $50,000 and $99,999 ($225), and over $100,000 ($196).

18. Berry and Berry (1990), Alm, McKee, and Skidmore (1993), Caudill et al. (1995), Mixon et al. (1997), Erekson et al. (1999), and Glickman and Painter (2004) estimate hazard functions in their lottery adoption studies.

19. See Davis et al. (1992), Martin and Yandle (1990), Berry and Berry (1990), Caudill et al. (1995), Mixon et al. (1997), Erekson et al. (1999), and Glickman and Painter (2004).

20. There is some evidence that high jackpot lottery games, such as PowerBall, may be less regressive than lower jackpot games. See Oster (2004).

21. This latter finding suggests that lottery tickets are inferior goods (i.e., the income elasticity of demand for lottery tickets is negative). Although most studies have found lotteries to be regressive, most have not found lotteries to be inferior goods. See Clotfelter and Cook (1989 and 1990) and Fink, Marco, and Rork (2004) for a survey of the literature.
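A rough back-of-the-envelope calculation with these figures illustrates why such spending patterns imply a regressive lottery tax: spending falls as a share of income as income rises. The representative income assigned to each bracket below is an assumption made only for this illustration and is not taken from Clotfelter et al. (1999).

```python
# Back-of-the-envelope regressivity check using the per capita spending figures
# reported by Clotfelter et al. (1999). The representative income assigned to
# each bracket is a rough assumption made only for this illustration.
spending = {
    "under $10,000":     (10_000, 597),
    "$10,000-$24,999":   (17_500, 569),
    "$25,000-$49,999":   (37_500, 382),
    "$50,000-$99,999":   (75_000, 225),
    "over $100,000":     (100_000, 196),
}

for bracket, (income, spent) in spending.items():
    share = 100 * spent / income
    print(f"{bracket:>17}: {share:4.1f} percent of income spent on the lottery")
# the share falls steadily as income rises, the pattern behind the finding that
# the implicit lottery tax is regressive
```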
In light of the regressive nature of lotteries, it
has been argued that a legislator with a low-income constituency is more likely to oppose
raising funds by means of a lottery than a legislator
with a high-income constituency.22 Martin and
Yandle (1990) stress this redistributive feature by
arguing that lotteries are a mechanism for higher-income voters to redistribute tax burdens in their
favor. Thus, states with higher per capita incomes
are more likely to have lotteries.23 In addition to
the redistribution argument, Berry and Berry
(1990) and others stress that higher per capita
income is associated with more revenue potential
from a lottery. However, if higher-income households in fact spend less on lotteries than lower-income households, then the revenue potential
from a lottery may decline as per capita income
rises.
The connection between revenue potential
and lottery adoption has been explored by Filer,
Moak, and Uze (1988). They argue that states with
a larger urban population are more likely to have
lotteries than more rural states because of relatively
lower administrative costs.24 More densely populated states will tend to have more potential purchasers per lottery outlet and generate a relatively

greater value of revenue per dollar of administrative costs. Filer, Moak, and Uze find that the percentage of a state’s population that is urban is
related positively to lottery adoption, while Alm,
McKee, and Skidmore (1993), Caudill et al. (1995),
and Glickman and Painter (2004) find that state
population density is related positively to lottery
adoption. However, Caudill et al. (1995) and
Mixon et al. (1997) fail to find a statistically significant relationship between a measure of predicted
lottery profits and lottery adoption. The empirical
evidence examining the connection between total
population and lottery adoption is also mixed.
Alm, McKee, and Skidmore (1993) find a positive
and statistically significant relationship between
state population and lottery adoption, but Filer,
Moak, and Uze (1988) and Glickman and Painter
(2004) do not find a statistically significant
relationship.
Another measure of economic development
that has been examined for its statistical relationship to lottery adoption is per-pupil state education spending. Lottery adoption often occurs as
part of a promise to earmark lottery proceeds to
finance spending on education. Thus, states with
lagging education spending are more likely to
support such earmarking. Second, the current
lack of education spending portends a bleak economic future that motivates a state to take action
to alter the future. Erekson et al. (1999) find a
statistically significant, negative relationship
between per-pupil education spending and the
decision to adopt a lottery.25

22. This argument assumes that legislators with low-income constituencies take such a position for paternalistic reasons (rather than from a desire to represent the views of their constituents). Legislators with high-income constituencies, on the other hand, take a position in line with the views of their constituents. A related argument is that low-income groups with poor economic prospects may place a relatively higher discount on lottery losses than high-income groups. Lotteries offer a small prospect for a large gain for those who play. Representatives of low-income constituents could vote to restrict lotteries to inhibit these constituents from gambling away their minimal resources.

23. Filer, Moak, and Uze (1988) use a measure of the percentage of poor within a state and find this measure to be a negative and statistically significant determinant of lottery adoption.

24. DeBoer (1985) examines the economies of scale in state lottery production.

25. Although net lottery revenue is earmarked for public education in many states, there is little evidence that the earmarking of lottery revenue has increased education expenditures. The reason for this is that state legislators divert funds away from education and simply replace these diverted funds with net lottery revenues, thus leaving total education expenditures unchanged. See Spindler (1995) and Garrett (2001).

Fiscal Health

The fiscal health explanation suggests that the existence of a fiscal crisis, such as a large budget deficit, increases the probability of approving a new tax or, similarly, a state lottery. Numerous variables have been used to measure a state's fiscal health. A measure used by two early studies was
the Advisory Commission on Intergovernmental
Relations’ tax effort. This measure is a ratio of a
state's tax collections relative to its capacity to raise tax revenue. Larger values of this measure are
thought to indicate that a state was making an
increased effort to generate tax revenues. As this
measure becomes larger, the odds increase that a
state will seek additional revenue sources. Both
Filer, Moak, and Uze (1988) and Davis, Filer, and
Moak (1992) find that the higher a state’s tax effort,
the larger the probability the state has a lottery.
Rather than use tax effort, some studies have
simply used per capita state tax revenues. Martin
and Yandle (1990) argue that higher per capita
state taxes are an indicator of tax pressures, and,
thus, higher levels provide an increased incentive to find a way to relieve the pressure. Martin
and Yandle (1990) argue that one way to simultaneously relieve the pressure and increase tax
revenues is to shift the relative tax burden from
higher- to lower-income taxpayers by means of a
lottery. They found that higher per capita state
taxes were associated with lottery adoption. On
the other hand, Caudill et al. (1995) and Mixon
et al. (1997) do not find a statistically significant
relationship between per capita state taxes and
the probability that a state adopts a lottery in a
given time period.
Another commonly used measure of fiscal
health is per capita state debt. Higher levels of
debt raise doubts about a state's fiscal
health. As put forth in Martin and Yandle (1990),
higher debt will result in a larger demand for shifting taxes to lower-income groups; such pressures
make lotteries more likely. Martin and Yandle
(1990) find a positive and statistically significant
relationship between per capita state debt and
lottery adoption. However, numerous other studies,
such as Caudill et al. (1995), Mixon et al. (1997),
and Glickman and Painter (2004), do not find a
statistically significant relationship between per
capita state debt and the probability that a state
will adopt a lottery within a given period. On the
other hand, when Alm, McKee, and Skidmore
(1993) separate overall debt into short-term and
long-term debt, they find a statistically significant,
positive relationship between short-term state debt
and lottery adoption, but no such statistically
significant relationship between long-term state
debt and lottery adoption.
Rather than looking at overall debt, Berry and
Berry (1990) and Erekson et al. (1999) examine the
difference between state revenue and spending
relative to spending as a measure of fiscal health.
This measure is a proxy for the budget deficits
faced by states. The more negative this measure,
the worse a state’s fiscal health. It is reasonable
to expect that the worse a state’s fiscal health, the
less the risk faced by a public official who supports
a tax increase. The evidence using this measure
is mixed. Berry and Berry (1990) find a negative
but not statistically significant relationship,
whereas Erekson et al. (1999) find a negative,
statistically significant relationship.
It is likely that economic growth in a state is
related to the state’s fiscal health. States experiencing a recession may find it especially difficult to
increase revenues using conventional forms of
taxation, which may lead them to adopt a lottery.
Alm, McKee, and Skidmore (1993) find that the
larger the percentage change in real state personal
income, the less likely is a state to adopt a lottery
in a given period. Somewhat surprisingly, however, Alm, McKee, and Skidmore (1993) do not
find either the percentage change in state tax
revenues or the percentage change in state and
local tax revenues to be statistically significant
determinants of lottery adoption. One would
expect these measures to be more closely tied to
fiscal crises and thus to support the fiscal crisis
argument.
Erekson et al. (1999) use somewhat different
measures to capture the percentage change in the
tax base in that they use the percentage change
in per capita earnings in selected industries. They
find some support for the fiscal health argument
when they examine growth of earnings in the
service sector.
Fiscal pressures might be lessened by intergovernmental transfers. Alm, McKee, and
Skidmore (1993) examine both the percentage
change in intergovernmental transfers to state
government only and to state and local government jointly. Such transfers are not statistically
significant determinants of lottery adoption.
Caudill et al. (1995) and Mixon et al. (1997) generate a similar finding using levels rather than
percentage changes in intergovernmental transfers.
In summary, the fiscal health argument
receives, at best, limited empirical support. One
additional finding might provide some insights
as to why the fiscal health argument applies only
weakly to lottery adoption. Fink, Marco, and
Rork (2004) found that overall state tax revenues
declined with increased lottery sales. This net
decline results from a decrease in sales and excise
tax revenue, which is offset partially by an
increase in income tax revenue. These changes
in revenue for specific taxes are related to changes
in both economic behavior and tax laws. For
example, consumers are likely to substitute, to
some degree, the purchase of lottery tickets for
the purchase of goods subject to a sales tax. This
substitution would increase lottery tax revenue
and reduce sales tax revenue. In another study,
Fink, Marco, and Rork (2003) find that lottery
sales did not have a statistically significant effect
on per capita state tax revenues. The implication
of these studies is that the adoption of lotteries
does not appear to provide even a partial solution
to a state’s fiscal problems.

Election Cycles and the Political Decision Process
Berry and Berry (1990) examine whether the
timing of elections might affect the adoption of
tax increases. They argue that lotteries will tend
to be adopted in election years relative to other
years because the lottery relative to other types
of tax increases is generally more popular, a fact
that elected officials are aware of and likely
attempt to use to their advantage.26 Their relative
popularity makes lotteries best suited for consideration and adoption in election years. Other taxes,
because of their unpopularity, are more likely to
be adopted in the year immediately following an
election because this provides the maximum time
prior to the next election for the electorate to forget about an unpopular tax increase. In addition,
for those years that are neither an election year
nor the year immediately following an election
year, one should expect the probability of lottery adoption to fall somewhere in between—that is, to be less than it is in an election year but more than it is in the year following an election. Berry and Berry (1990) find empirical support for the preceding reasoning; however, Glickman and Painter (2004) do not.

26. We are not suggesting that lotteries are uncontroversial, only less controversial as a revenue-increasing option.
The political decision process provides some
limited information as to whether a lottery will
be adopted in a given period. Alm, McKee, and
Skidmore (1993) argue that political pressures
for lottery adoption are likely to differ between
states that use a referendum or an initiative compared with states that use a standard legislative
process. They find that states using either a referendum or an initiative are more likely to adopt
a lottery in a given year.

Party Control
Single-party control of a state’s governorship
and both houses of the legislature should make it
easier for proposed legislation to be passed and
signed into law. An empirical issue is whether
such control increases the probability of lottery
adoption in a given period. Berry and Berry (1990)
hypothesize that such control would be associated
with lottery adoption. They found, however, that
such control decreased the probability of lottery
adoption. One possibility is that a unified government will find it easier to increase existing taxes
to achieve substantial revenue increases. Thus, a
lottery is not needed to raise revenues. However,
in terms of political control, divided governments
might find it easier to reach agreement on a lottery, which is a relatively less controversial funding mechanism.
The connection between party control and
lottery adoption has been explored in a number
of other studies, but the results do not provide
clear insights into the connection between party
control and lottery adoption. For example, Alm,
McKee, and Skidmore (1993) explored the role
of party control by using separate dummies for
Democratic control and Republican control versus
shared control. Democratic control was a negative
and statistically significant determinant of lottery
adoption, whereas Republican control was not
statistically significant. Glickman and Painter
(2004) find a statistically significant, negative association between the percentage of a state's lower house that is Democratic and lottery adoption. Finally, Mixon et al. (1997) find no relationship between the percentage of a state's legislative bodies made up of the majority political party and lottery adoption.

[Figure 2: Lottery Diffusion by Decade. A map of the United States shading each state by the decade in which its lottery began (1960s, 1970s, 1980s, 1990s, or 2000s) or marking it as a non-lottery state. SOURCE: Data from Hansen (2004); North Carolina and Oklahoma information from news reports.]

Regional Diffusion
The spread of lotteries shows a geographic pattern (see Figure 2). Beginning in New Hampshire,
lotteries spread to other states in New England,
the Mid-Atlantic, and Great Lakes; they then
spread throughout states in the Midwest and on
the Pacific Coast and began appearing in states in
the Plains and Rocky Mountains; most recently,
they have spread to the South. In addition to
Alaska and Hawaii, the only non-adoptee states
are located in the South and the Rocky Mountains.
Hansen (2004) notes that the initial reluctance
to adopt lotteries stemmed from concerns over
the ability of lotteries to raise revenues both efficiently and without corruption. In addition, and
more importantly from a geographic perspective,
once lotteries were adopted, cross-border ticket
sales were substantial. For example, prior to North
Carolina’s passage of a lottery, it was estimated
that state residents were spending $100 million
per year on Virginia’s lottery.27 Legislators and
residents concluded that if residents were going
to play the lottery, they would prefer that the
spending and resulting tax revenues be kept
within their state.28
To account for the possibility that tax competition between state governments might explain
lottery adoption, the statistical connection between
lottery adoption and a number of geographically
based measures has been examined. The results
suggest that lottery adoption by a state is related
positively to the existence of a lottery in a neighboring state. The only exceptions can be found
in Filer, Moak, and Uze (1988) and Glickman and
Painter (2004). In both of these studies a dummy
variable is used to identify whether an adjacent
state had a lottery. On the other hand, this measure
is also used by Alm, McKee, and Skidmore (1993),
who found a statistically significant, positive
relationship. Other studies, such as Davis, Filer,
and Moak (1992), Erekson et al. (1999), Caudill
et al. (1995), and Mixon et al. (1997), find a statistically significant relationship using the percentage of a state’s border contiguous with states that
have lotteries. Caudill et al. (1995) also find that
the overall percentage of states already having
adopted a lottery tended to increase the probability
that a state would adopt a lottery in a given time
period. Finally, Berry and Berry (1990) find that
a given state was more likely to adopt, the larger
the number of adjacent states that had previously
adopted.

Situational-Specific Determinants
In addition to the sets of determinants associated with the adoption of taxes, there are many
more constraints and considerations that might
influence the adoption of a lottery. For example, Caudill et al. (1995) and Mixon et al. (1997) examine the impact of existing legalized gambling on lottery adoption. They argue that the larger a state's per capita tax revenue from legalized gambling, the less the need for an alternative revenue source and the more likely the organized opposition to a lottery because of the competitive threat that a lottery poses. Both studies find support for this argument.29

27. See Dube (2005).

28. The importance of cross-border ticket sales, however, does not cease when neighboring states both have lotteries. Tosun and Skidmore (2004) find that the introduction of competing games had an adverse effect on lottery revenues in West Virginia, a state that relies heavily on sales to players in nearby states.

Tax exporting enables the citizens of a state
to shift their tax burdens to those outside the
state. Generally, taxpayers of a state would prefer
to have taxpayers of other states provide the funding for their public services. In terms of lottery
adoption, researchers have suggested that the
shifting of lottery taxes is easier the larger a state’s
tourist industry. Tourists will take advantage of
the opportunity to play the lottery and thus will
provide lottery revenues. Based on the results
of Filer, Moak, and Uze (1988), Davis, Filer, and
Moak (1992), Caudill et al. (1995), and Mixon et al.
(1997), this argument does not receive empirical
support as indices of tax exporting that are based
on tourism are not statistically significant determinants of lottery adoption.
The decision by a state to adopt a lottery is
likely to be related to its prior fiscal decisions.
States desiring to raise revenues have various
ways to do so; however, some states have fewer
alternatives than others. Filer, Moak, and Uze
(1988) and Davis, Filer, and Moak (1992) explore
the possibility that states without a sales tax are
more likely to adopt a lottery than states with a
sales tax. Neither study, however, finds a statistically significant relationship. Glickman and
Painter (2004) examine the influence of state tax and expenditure limits on lottery adoption and find that limits on assessment increases are related to
lottery adoption. Finally, Martin and Yandle (1990)
examine the impact of state balanced budget
requirements and do not find a statistically significant relationship between balanced budget
requirements and lottery adoption.
29. In contrast, Davis, Filer, and Moak (1992) find a statistically significant, positive relationship between per capita tax revenue from gambling and lottery adoption. They stress that this finding reflects a preference for gambling.

As suggested earlier, the preferences of the
electorate are likely to influence political decisions in a democracy. Caudill et al. (1995) and
Mixon et al. (1997) use whether a state already
has legalized gambling as an indicator of a state’s
preferences toward lotteries. Despite the noteworthy exception of Nevada, both studies find that
if a state has legalized gambling it is more likely
to adopt a lottery. In addition, the preferences of
two groups, the elderly and religious groups, have
been examined.
Alm, McKee, and Skidmore (1993) argue that
the elderly tend to oppose most tax increases, but
that they might not oppose a form of tax increase
that can be viewed as much more voluntary than
other forms of tax increases. Using the percentage
of a state’s population that is 65 and older, however, neither Alm, McKee, and Skidmore nor
Glickman and Painter (2004) find a statistically
significant relationship.
When one thinks of those in opposition to
lottery adoption, one generally starts with religious
groups. The lottery battle in Tennessee illustrates
this fact. Bobbitt (2003) noted that the largest and
most influential anti-lottery group, the Gambling
Free Tennessee Alliance, consisted primarily of
church groups, such as the Tennessee Baptist
Convention, Tennessee Catholic Public Policy
Group, and the United Methodist Church. Generally, the most strident opposition by many church
groups is based on the belief that gambling is
immoral. In addition, other issues such as the
prospects of deceptive advertising, the regressivity
of the lottery tax, and the prospects for gambling-related problems have provided the basis for opposition. With respect to gambling-related problems,
the Gambling Free Tennessee Alliance stressed
the increased incidence of compulsive gambling
and the associated social problems of increased
crime, suicide, drug use, and job loss.
Various proxies have been used to measure the
preferences of religious groups. Frequently used
proxies indicating opposition to gambling—used
by Filer, Moak, and Uze (1988), Berry and Berry
(1990), Martin and Yandle (1990), Caudill et al.
(1995), and Mixon et al. (1997)—are either the
percentage of a state’s population that is Southern
Baptists or, more broadly, that are fundamentalist
Christians.30 Excluding Filer, Moak, and Uze
(1988), these measures are negative and statistically significant determinants of lottery adoption.
Two other proxies have also been used.
Erekson et al. (1999) use the increase in the percentage of state population that is Protestant. They
find a statistically significant, negative relationship with lottery adoption. The final proxy that
has been used is the percentage of Catholics in a
state. Despite the fact that a Catholic group was
part of the Gambling Free Tennessee Alliance,
Catholics are often viewed as having a greater
preference and tolerance for gambling because
of their use of bingo and other games for fundraising purposes. In addition, a larger percentage of
Catholics in a state may indicate a smaller percentage of those with religious affiliations who would
oppose lotteries. Alm, McKee, and Skidmore
(1993) and Glickman and Painter (2004) use such
a measure; the former finds a statistically significant, positive relationship with lottery adoption,
whereas the latter does not find a statistically
significant relationship.
Mixon et al. (1997) argue that long-stable
societies are more likely to have the special
interest groups that provide such stability. These
special interest groups—professional associations,
labor unions, trade associations, and other coalitions that attempt to shift the distribution of
income in their favor—are more likely to exist in
older states, as measured by the years since
statehood. Therefore, in light of the redistribution
associated with lotteries, the older the state, the
more likely a lottery. Mixon et al. find support
for this argument.

SUMMARY AND CONCLUSIONS
Geographic, economic, and political factors
have all played roles in the spread of lotteries as
well as the decision of some states not to adopt
lotteries. On the basis of previous literature, we
suggest that explanations for lottery adoption can be organized into six categories: (i) economic development, (ii) fiscal health, (iii) election cycles, (iv) political party control, (v) regional diffusion, and (vi) situational-specific determinants.

30. Fundamentalist Christians encompass those Protestant denominations who believe in the inerrancy of the Bible, the virgin birth of Jesus Christ, the doctrine of substitutionary atonement, the bodily resurrection and return of Christ, and the divinity of Christ. For more details, see "What Is a Fundamentalist Christian?" by Dale A. Robbins at www.victorious.org/chur21.htm.

In terms of economic development, one consistent finding is that higher levels of per capita
state income are positively associated with lottery
adoption. This finding supports the view that,
because those with low incomes bear a higher
lottery tax burden than those with high incomes,
lotteries are a mechanism for those with high
incomes to shift some of their tax burden to those
with low incomes. Another explanation is that
higher state per capita income is associated with
more potential revenue from a lottery.
Relatively more urban states have been found
to be more likely to have adopted lotteries. A
similar comment pertains to more densely populated states. In the former case, the revenue potential argument hinges on the possibility of relatively
lower administrative costs in states with relatively
larger urban populations. In the latter case, the
argument hinges on the possibility of relatively
greater values of revenue per dollar of administrative cost due to more potential purchasers of
lottery tickets per sales outlet.
Although fiscal crises are hypothesized to
increase the probability that a state will approve
a new tax, a review of existing studies raises
doubts about the importance of this explanation
for lottery adoption. The various proxies that
have been used to capture a state’s fiscal health
fail to have a consistent relationship with lottery
adoption. One explanation for the absence of a
statistically significant relationship between a
state’s fiscal health and lottery adoption is that
lottery revenues are unlikely to provide sufficient
revenues, especially in the near term, to alleviate
a fiscal crisis.
With regard to political factors that might
influence lottery adoption, some empirical support exists for what is termed an election cycle
explanation. The political decision to raise taxes
is always resisted by at least some individuals
and groups; however, the lottery relative to other
forms of tax increases is popular. From a politician’s point of view, especially one who desires
to be re-elected, the consideration of tax increases
in an election year is very risky. As a result, lotteries, which can be viewed as a voluntary tax
payment relative to other taxes, are best suited
for consideration and adoption in election years.
Moreover, even if raising state tax revenues is
not an issue, the popularity of lottery adoption
might make an election year an especially good
time to adopt a lottery.
The nature of the decision process does play
a role in whether a lottery is ultimately adopted.
Relative to a legislative process that relies solely
on approval by a state's legislature prior to signing by the governor, states that use either an initiative or a referendum as part of the approval process
are more likely to have lotteries. The importance
of the decision process might be the key to understanding the empirical findings with respect to
party control.
Findings with respect to party control and
lottery adoption fail to provide clear insights concerning the impact of single-party control of a
state’s governorship and both houses of the legislature. Such control should make it easier for the
controlling party to pass and sign into law its
desired legislation. Nonetheless, a consistent
empirical relationship between party control and
lottery adoption is not identified in the existing
studies. Party control might have a lessened effect
in states using either an initiative or a referendum
because of the direct voicing of the electorate’s
preferences.
Our review of existing studies highlights the
importance of geography in the spread of lotteries
across the United States. Regardless of the measure
used to account for lotteries in neighboring states,
it is clear that lottery adoption by a state is related
positively to the existence of lotteries in a neighboring state or states. This finding is a clear indicator of the impact of tax competition between
states. State legislators and their constituents
have concluded that if residents are going to play
the lottery, they would prefer that the spending
and tax revenues be kept within their state.
It is also reasonable to think that the electorate
in a state that already has legalized gambling
would be inclined to support lottery adoption.
Despite the absence of a lottery in Nevada, empirical support for this line of thinking exists. States
with relatively more police and police expenditures are also more likely to have adopted lotteries.
Only limited support exists for the argument that
prior fiscal decisions that limit a state’s tax revenue
potential provide a motivation for lottery adoption.
In terms of the preferences of specific groups,
both the elderly and religious groups have been
examined. For the former group, there is no statistically significant relationship between the
percentage of a state’s population that is elderly
and lottery adoption. On the other hand, it is clear
that the larger the relative size of a religious group
that opposes lottery adoption the less likely that
a state will adopt a lottery. Religious opposition
to lotteries is undoubtedly a key reason that lottery
adoption by states in the South tended to lag lottery
adoption by states in the rest of the country.
One final result suggests that long-stable
societies, which are more likely to have entrenched
special interest groups, are more likely to have
lotteries. Because interest groups attempt to shift
the distribution of after-tax income in their favor,
lotteries are more likely because of their redistributive effects. This final result is simply one
of the many illustrations in this review of lottery
adoption showing the connection between economic motivations and political results.

REFERENCES

Alm, James; McKee, Michael and Skidmore, Mark. "Fiscal Pressure, Tax Competition, and the Introduction of Lotteries." National Tax Journal, December 1993, 46(4), pp. 463-76.

Becker, Gary S. "A Theory of Competition among Pressure Groups for Political Influence." Quarterly Journal of Economics, August 1983, 98(3), pp. 371-400.

Berry, Frances Stokes and Berry, William D. "State Lottery Adoptions as Policy Innovations: An Event History Analysis." American Political Science Review, June 1990, 84(2), pp. 395-415.

Berry, Frances Stokes and Berry, William D. "Tax Innovation in the States: Capitalizing on Political Opportunity." American Journal of Political Science, August 1992, 36(3), pp. 715-42.

Blanch, Ernest. "Lotteries and Pools." American Statistician, 1949, 3(1), pp. 18-21.

Bobbitt, Randy. "The Tennessee Lottery Battle: Education Funding vs. Moral Values in the Volunteer State." Public Relations Quarterly, Winter 2003, 48(4), pp. 39-42.

Bonner, Lynn. "Lobbyists Often the Ghost Writers of State Laws." The News & Observer, October 26, 2005; www.newsobserver.com/656/v-print/story/351031.html.

Caudill, Stephen B.; Ford, Jon M.; Mixon, Franklin G. and Peng, Ter Chao. "A Discrete-Time Hazard Model of Lottery Adoption." Applied Economics, June 1995, 27(6), pp. 555-61.

Clotfelter, Charles T. and Cook, Philip J. Selling Hope: State Lotteries in America. Cambridge, MA: Harvard University Press, 1989.

Clotfelter, Charles T. and Cook, Philip J. "On the Economics of State Lotteries." Journal of Economic Perspectives, 1990, 4(4), pp. 105-19.

Clotfelter, Charles T.; Cook, Philip J.; Edell, Julie A. and Moore, Marian. "State Lotteries at the Turn of the Century: Report to the National Gambling Impact Study Commission." Report, Duke University, 1999.

Davis, J. Ronnie; Filer, John E. and Moak, Donald L. "The Lottery as an Alternative Source of State Revenue." Atlantic Economic Journal, June 1992, 20(2), pp. 1-10.

DeBoer, Larry. "Administrative Costs of State Lotteries." National Tax Journal, December 1985, 38(4), pp. 479-87.

Dube, Jonathan. "N.C. Watches as States Add Lotteries," 2005; www.jondube.com/resume/charlotte/lottery.html.

Erekson, O. Homer; Platt, Glenn; Whistler, Christopher and Ziegert, Andrea L. "Factors Influencing the Adoption of State Lotteries." Applied Economics, July 1999, 31(7), pp. 875-84.

Filer, John E.; Moak, Donald L. and Uze, Barry. "Why Some States Adopt Lotteries and Others Don't." Public Finance Quarterly, July 1988, 16(3), pp. 259-83.

Fink, Stephen C.; Marco, Alan C. and Rork, Jonathan C. "The Impact of Lotteries on State Tax Revenues." Proceedings from the Ninety-fifth Annual Conference on Taxation, November 14-16, 2002, Orlando, FL; minutes of the annual meeting of the National Tax Association, November 14, 2002. Volume 27. Washington, DC: National Tax Association, 2003, pp. 1169-72.

Fink, Stephen C.; Marco, Alan C. and Rork, Jonathan C. "Lotto Nothing? The Budgetary Impact of State Lotteries." Applied Economics, December 2004, 36(21), pp. 2357-67.

Fisher, Ronald. State and Local Public Finance. Chicago: Irwin, 1996.

Garrett, Thomas A. "A Test of Shirking under Legislative and Citizen Vote: The Case of State Lottery Adoption." Journal of Law and Economics, April 1999, 42(1, Part 1), pp. 189-208.

Garrett, Thomas A. "Earmarked Lottery Revenues for Education: A New Test of Fungibility." Journal of Education Finance, Winter 2001, 26(3), pp. 219-38.

Garrett, Thomas A.; Wagner, Gary A. and Wheelock, David C. "A Spatial Analysis of State Banking Regulation." Papers in Regional Science, November 2005, 84(4), pp. 575-95.

Glickman, Mark M. and Painter, Gary D. "Do Tax and Expenditure Limits Lead to State Lotteries? Evidence from the United States: 1970-1992." Public Finance Review, January 2004, 32(1), pp. 36-64.

Hansen, Alicia. "Lotteries and State Fiscal Policy." Background Paper No. 46, Tax Foundation, October 2004.

Hansen, Susan B. The Politics of Taxation: Revenue without Representation. Westport, CT: Praeger, 1983.

Hernández-Murillo, Rubén. "Strategic Interaction in Tax Policies Among States." Federal Reserve Bank of St. Louis Review, May/June 2003, 85(3), pp. 47-56.

Hersch, Philip J. and McDougall, Gerald S. "Voting for 'Sin' in Kansas." Public Choice, May 1988, 57(2), pp. 127-39.

Martin, Robert and Yandle, Bruce. "State Lotteries as Duopoly Transfer Mechanisms." Public Choice, March 1990, 64(3), pp. 253-64.

McCormick, Robert and Tollison, Robert. Politicians, Legislation, and the Economy: An Inquiry into the Interest Group Theory of Government. Boston: Martinus Nijhoff, 1981.

Mixon, Franklin G. Jr.; Caudill, Steven B.; Ford, Jon M. and Peng, Ter Chao. "The Rise (or Fall) of Lottery Adoption within the Logic of Collective Action: Some Empirical Evidence." Journal of Economics and Finance, Spring 1997, 21(1), pp. 43-49.

Oster, Emily. "Are All Lotteries Regressive? Evidence from PowerBall." National Tax Journal, June 2004, 57(2, Part I), pp. 179-87.

Robbins, Dale A. "What Is a Fundamentalist Christian?" 1995; www.victorious.org/chur21.htm.

Rodrik, Dani. "Political Economy of Trade Policy," in Gene M. Grossman and Kenneth Rogoff, eds., Handbook of International Economics. Volume III. Princeton, NJ: Elsevier, 1995, pp. 1457-94.

Spindler, Charles. "The Lottery and Education: Robbing Peter to Pay Paul." Public Budgeting and Finance, Fall 1995, 15(3), pp. 54-62.

Tosun, Mehmet Serkan and Skidmore, Mark. "Interstate Competition and State Lottery Revenues." National Tax Journal, June 2004, 57(2, Part 1), pp. 163-78.

The 1990s Acceleration in Labor Productivity:
Causes and Measurement
Richard G. Anderson and Kevin L. Kliesen
The acceleration of labor productivity growth that began during the mid-1990s is the defining
economic event of the past decade. A consensus has arisen among economists that the acceleration was caused by technological innovations that decreased the quality-adjusted prices of semiconductors and related information and communications technology (ICT) products, including
digital computers. In sharp contrast to the previous 20 years, services-producing sectors—heavy
users of ICT products—led the productivity increase, besting even a robust manufacturing sector.
In this article, the authors survey the performance of the services-producing and goods-producing
sectors and examine revisions to aggregate labor productivity data of the type commonly discussed
by policymakers. The revisions, at times, were large enough to reverse preliminary conclusions
regarding productivity growth slowdowns and accelerations. The unanticipated acceleration in
the services sector and the large size of revisions to aggregate data combine to shed light on why
economists were slow to recognize the productivity acceleration.
Federal Reserve Bank of St. Louis Review, May/June 2006, 88(3), pp. 181-202.

Over the past decade, economists
have reached a consensus that (i)
the trend rate of growth of labor
productivity in the U.S. economy
increased in the mid-1990s and (ii) the underlying cause of that increase was technological
innovations in semiconductor manufacturing
that increased the rate of decrease of semiconductor prices.1 This productivity acceleration
is remarkable because, unlike most of its predecessors, it continued with only a minor slowdown during the most-recent recession. In this
article, we briefly survey research on the genesis
of the productivity rebound. We also examine
the “recognition problem” that faced economists
and policymakers during the 1990s when preliminary data, both economywide and at the industry

level, showed little pickup in productivity growth. Using more than a decade of vintage "real-time" data, we find that revisions to labor productivity data have been large, in some cases so large as to fully reverse initial preliminary conclusions regarding productivity growth slowdowns and accelerations.

1. The first chapter of Jorgenson, Ho, and Stiroh (2005) surveys the decrease in semiconductor prices.

The 1990s acceleration of labor productivity
has three important characteristics. First, it was
unforeseen. An example of economists’ typical
projections during the mid-1990s is the 1996
Economic Report of the President, prepared during
1995, in which the Council of Economic Advisers
foresaw no revolutionary change. The Council
foresaw labor productivity growth in the private
nonfarm business sector at an average annual rate
of 1.2 percent from the third quarter of 1995 to
the end of 2002. This estimate largely extrapolated
recent experience: productivity from 1973 to 1995
had grown at an average annual rate of 1.4 percent.

Richard G. Anderson is an economist and vice president and Kevin L. Kliesen is an economist at the Federal Reserve Bank of St. Louis.
The authors thank Aeimit Lakdawala, Tom Pollmann, Giang Ho, and Marcela Williams for research assistance.

© 2006, The Federal Reserve Bank of St. Louis. Articles may be reprinted, reproduced, published, distributed, displayed, and transmitted in
their entirety if copyright notice, author name(s), and full citation are included. Abstracts, synopses, and other derivative works may be made
only with prior written permission of the Federal Reserve Bank of St. Louis.

[Figure 1: Nonfarm Labor Productivity. Percent change from the peak value, 2001:Q1 to 2005:Q4, plotted against quarters after the cyclical peak for the current estimate and the average business cycle. NOTE: The NBER's business cycle dating committee on November 26, 2001, selected the first quarter of 2001 as the cyclical peak. The business-cycle average is calculated as the mean of the nine NBER post-World War II business cycles, excluding the 1980 and 2001 recessions.]

Incoming data during 1995 and 1996 were not
signaling an increase in productivity growth.
Gordon (2002, p. 245) notes that economists in
1997 were still seeking to identify the causes of
the post-1973 slowdown in productivity growth:
“Those of us who participated in panels on productivity issues at the January 1998 meetings of
the American Economic Association recall no
such recognition [of a productivity growth rate
increase]. Rather, there was singular focus on
explaining the long, dismal period of slow productivity growth dating from 1972.”2 Today, with
revised data, we know that the productivity acceleration started before 1995.
Labor productivity growth showed its resilience by slowing only modestly during the mild
2001 recession. Forecasters adopted new views of the trend. By 2001, the Council of Economic Advisers had increased its projection of the annual growth of structural labor productivity to 2.3 percent per year. Other forecasters, including many in the Blue Chip Economic Indicators and the Federal Reserve Bank of Philadelphia's Survey of Professional Forecasters, were even more optimistic.3 Yet, since the March 2001 National Bureau of Economic Research (NBER) business cycle peak, labor productivity has been stronger than both these upward-revised forecasts and its average following past cyclical peaks; the latter point is illustrated in Figure 1. Although no one can be certain of future gains in productivity, it now seems clear that the combination of lower prices for information and communications technology (ICT) equipment plus new related business practices has boosted the economy's trend rate of productivity growth. We note that similar increases in labor productivity growth have occurred in other eras and other countries, usually associated with technological innovations.4

2. As the discussion in Edge, Laubach, and Williams (2004) indicates, Princeton University professor Alan Blinder in 1997 estimated that future, near-term trend labor productivity growth was effectively the same as its average since 1974: 1.1 percent. Further, in 1999, Northwestern University professor Robert Gordon estimated a trend rate of growth of 1.85 percent; he then subsequently revised this up to 2.25 percent in 2000 and then to 2.5 percent in 2003.

3. See the September 10, 2000, issue of the Blue Chip Economic Indicators or the first quarter 2001 Survey of Professional Forecasters.

Second, the underlying cause was an increase
in the rate of decrease of semiconductor prices and,
in turn, of ICT capital equipment. In response to
falling ICT prices, producers in both services-producing and goods-producing sectors shifted
increasing amounts of capital investment toward
ICT products, reducing in some cases purchases
of more traditional capital equipment. Subsequently, many business analysts have noted that,
following a gestation lag, the lower cost of ICT
equipment has induced firms to “make everything
digital” and reorganize their business practices;
Friedman (2005) and Cohen and Young (2005)
provide detailed case studies.
And third, the post-1995 productivity acceleration is largely a services-producing sector
story.5 After 1995, productivity growth in services
increased sharply while productivity growth in
manufacturing continued at approximately its
then-extant pace. Ironically, the post-1973 slowdown in aggregate productivity growth also was
a services-producing sector story—but one in
which productivity in services-producing sectors
collapsed.6 Post-1973 pessimists cited Baumol’s
(1967) analysis that services sectors had little potential to increase labor productivity and expounded views that the expanding share of services in gross domestic product (GDP) foreshadowed an eternal era of slow labor productivity growth for the U.S. economy.7 As early as the 1973 productivity slowdown, however, the services-producing sector was a major user of information technology, poised to benefit from improvements in semiconductor manufacturing. Hence, the significant technological advances in the early 1990s were especially important for services-producing sectors (Triplett and Bosworth, 2004; Jorgenson, Ho, and Stiroh, 2005). The mechanism was straightforward: Sharp decreases in the prices of semiconductors and related ICT capital goods induced services-sector firms to significantly increase their use of ICT capital, in turn increasing productivity growth and, with it, productivity growth for the entire economy. Both then and now, three-quarters of private-sector real GDP arises from services-producing sectors.

4. Basu et al. (2004) compare and contrast the differing U.S. and U.K. experiences after 1995.

5. In this article, we follow the Bureau of Economic Analysis (BEA) data reporting schema. Before June 2004, the BEA followed the SIC (Standard Industrial Classification) schema. Services-producing industries included transportation and public utilities; wholesale trade; retail trade; finance, insurance, and real estate (FIRE), including depository and nondepository institutions; and services (business services, including computer and data processing services). Private goods-producing industries included agriculture, forestry, and fishing; mining; construction; and manufacturing. In June 2004, the BEA revised its schema to follow the 1997 North American Industry Classification System (NAICS). The composition of services-producing industries changed slightly to include utilities; wholesale trade; retail trade; transportation and warehousing; finance, insurance, real estate, rental and leasing (FIRE); professional and business services, including computer systems design and related services; educational services, health care, and social assistance; arts, entertainment, recreation, accommodation, and food services; and other services, except government. Compared with the SIC, the NAICS more consistently classifies high-technology establishments into the correct industry and provides increased detail on the services sector (Yuskavage and Pho, 2004). We do not, in this analysis, examine interactions between these redefinitions and revisions to published data.

6. See Kozicki (1997).

Poor data quality often has been cited as the
barrier to identifying the causes of the post-1973
slowdown in services-sector productivity growth;
see Griliches (1992 and 1994) or Sherwood (1994).
Measurement issues for services-producing sectors
have a long history, largely focused on correct
measures of “output,” including the price deflators
necessary for obtaining real output from nominal
magnitudes. As early as 1983, members of the
Federal Open Market Committee (FOMC) questioned the quality of data on output and productivity in services-producing sectors; such discussions
became longer and more frequent after Chairman
Greenspan’s lengthy soliloquy at the December 22,
1992, meeting.8 In 1996, Chairman Greenspan
noted it was implausible that services-sector labor
productivity had not increased during the past 20 years and requested the Board staff to conduct a study of the quality of data for services-producing industries. The resulting study—Corrado and Slifman (1999)—confirmed the problematic quality of services-sector data but concluded that "the output, price, and unit-costs statistics for the nonfinancial corporate sector are internally consistent and economically plausible" (p. 332).9 Yet, even in these data, measured productivity growth in manufacturing was approximately double that in nonmanufacturing: For 1989 to 1997, the increases in output per hour were 2.9 and 1.4 percent for manufacturing and nonmanufacturing, respectively (Corrado and Slifman, 1999, p. 329, Table 1).

7. Baumol (1967) argued that some services—including municipal government, education, performing arts, restaurants, and leisure time activities—had a "technological structure" that made long-term increases in the real cost of such services unavoidable because it was unlikely that productivity gains would be large enough to offset increasing wages. Baumol did not suggest, however, that all services-producing sectors were condemned to little or no productivity growth even though some later authors attributed that position to him.

8. Anderson and Kliesen (2005) review the history of productivity discussions in the FOMC transcripts from 1982 to 1999.

The situation has improved significantly in
recent years. During the past decade, data measurement programs at both the BEA and the Bureau
of Labor Statistics (BLS) have produced well-measured data for the services sectors, culminating
in the BEA’s December 2005 publication of the
first NAICS industry-level data fully consistent
across their input-output matrices, their annual
industry accounts, and the nationwide GDP
national income accounts system. Somewhat
earlier, resolution of the vexing services-sector
productivity problem occurred in 2000 when
the BEA incorporated into the annual industry
accounts their October 1999 revisions to the
national income and product accounts (NIPA).10
Previously published data had shown some rebound in measured productivity growth for services sectors, but services continued to lag well behind manufacturing. The revised sector and industry data demonstrated that, far from being the laggard, labor productivity growth in services-producing sectors had exceeded productivity growth in manufacturing during the 1990s.

Two extensive recent analyses are Triplett and Bosworth (2004) and Jorgenson, Ho, and Stiroh (2005). Unfortunately, the studies' datasets and analytics differ, making direct comparisons of their numerical productivity growth rates difficult.11 For brevity, we cite results from only one of the studies. Triplett and Bosworth (2004) find that labor productivity in services-producing industries increased at an annual average rate of 2.6 percent between 1995 and 2001 (including the 2001 economic slowdown), slightly faster than manufacturing's 2.3 percent pace. Services-producing sectors accounted for 73 percent of 1995-2001 labor productivity growth and 76 percent of multifactor productivity growth (defined below). Increased use of ICT capital was the primary cause behind the productivity acceleration: When weighted by its large share of the economy, increased ICT use in services accounts for 80 percent of the total contribution of ICT to increased economywide labor productivity growth between 1995 and 2001. Their conclusion? On page 2, they write, "As with labor productivity growth and multifactor productivity growth, the IT [information technology] revolution in the United States is a services industry story." It is important to note, as they do, that not all services-sector industries had productivity increases; most did, but some services industries continue to have negative measured productivity growth.

9. Corrado and Slifman (1999) argued that most data problems were in the nonfinancial noncorporate sector, half of which was composed of difficult-to-measure services-sector firms. They concluded that mismeasurement so contaminated these figures that data for the nonfarm business sector should not be used for analysis.

10. During the 1990s, the BEA greatly expanded and improved its industry database, partly in response to controversy regarding productivity growth. The BEA added gross output (shipments) by industry in 1996 (Yuskavage, 1996). Gross output is more desirable for productivity studies than gross product originating (value added), a point highlighted by Evsey Domar's much-earlier quip that few people find it interesting to study productivity in shoe manufacturing when leather is omitted. Interested readers can judge the impact of the October 1999 revisions by comparing studies before and after their publication. Such a comparison is not included here because, in our opinion, methodological changes for the annual industry accounts have been so large as to render comparisons of vintage data of questionable value. Typical of the pre-revision analyses is Triplett and Bosworth (2001), a paper originally presented at the January 2000 American Economic Association meetings. Ironically and with more than a touch of understatement, they note that "The nonfarm multifactor productivity numbers are due for revision in the near future, to incorporate the revisions to GDP that were released in October, 1999. This will undoubtedly raise the nongoods estimate but not the manufacturing productivity estimate…" Shortly thereafter, they declared "Baumol's disease" to be cured; see Triplett and Bosworth (2003 and 2006); the latter paper was originally prepared for an April 2002 conference at Texas A&M University. Interested readers might also compare Gordon (2000 and 2003). One of the earliest studies using the revised data is Stiroh (2002), which first appeared in January 2001 as Federal Reserve Bank of New York Staff Report 115 (the published article contains later, revised data) and showed productivity accelerations in broad service sectors, including wholesale and retail trade and finance, insurance, and real estate.

11. Triplett and Bosworth use output and employment data from the BEA's annual industry accounts and capital from the BLS's capital flow accounts. Their labor input measure is persons employed, not hours worked, and is not quality-adjusted. Although these shortcomings perhaps bias upward their estimated level of labor productivity, it seems unlikely that it distorts labor productivity growth significantly over shorter periods (i.e., 5 years or so). Jorgenson, Ho, and Stiroh measure output broadly to include the services of household durable goods and housing. They also use constant-quality index numbers for labor and capital input.

THE ROLE OF SEMICONDUCTOR
PRICES
Understanding the sources of the labor productivity acceleration makes it easier to appreciate
the difficulties in measuring it. Economists define
labor productivity as the ratio of the economy’s
real output, Y, to total hours of labor input, H, Y/H.
Let us assume that total output is produced by
means of an aggregate production function, Y =
A × F(H,K), where K measures the flow of productive services from the economy’s capital stock
and A measures increases in output not due to
increases in labor (H) or capital (K), that is, multifactor productivity (MFP).12 In this framework,
there are two sources of increases in labor productivity: capital deepening and increases in MFP.
Capital deepening is defined as increases in the
amount of capital equipment available per hour
worked, K/H. Increases in MFP often are referred
to as improvements “in the ways of doing things,”
that is, changes in firms’ business management
practices. The growth rate of A may be written
as, and often is measured as, a residual by means
of the equation
gA = gY − [(1 − ν)gK + νgH],
where gA, gY, gK , and gH , respectively, are the
growth rates of MFP, output, capital services, and
labor services and ν is the share of labor in total
output. When increases in output, Y, are fully
accounted for by increases in H or K by means of
the function F (H, K ), then by definition there is
no change in A, that is, no increase in MFP (but
12. For simplicity, we are omitting intermediate inputs, making total
output equal to value added (real GDP originating). For a richer
model that contains intermediate inputs, see Jorgenson, Ho, and
Stiroh (2005). The term A also often is referred to as the Solow
residual.


labor productivity may continue to increase
through capital deepening).
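To make this growth-accounting arithmetic concrete, the following sketch computes the MFP residual and splits labor productivity growth into capital deepening and MFP. It is a minimal illustration in Python; the growth rates and labor share are assumed, illustrative values, not estimates from the studies discussed in this article.

def mfp_growth(g_Y, g_K, g_H, labor_share):
    # Solow residual: g_A = g_Y - [(1 - v)*g_K + v*g_H], all in percentage points.
    return g_Y - ((1.0 - labor_share) * g_K + labor_share * g_H)

def decompose_labor_productivity(g_Y, g_K, g_H, labor_share):
    # Labor productivity growth (g_Y - g_H) equals capital deepening plus MFP growth.
    g_lp = g_Y - g_H
    capital_deepening = (1.0 - labor_share) * (g_K - g_H)
    return g_lp, capital_deepening, mfp_growth(g_Y, g_K, g_H, labor_share)

# Illustrative numbers: output +3.5, capital services +4.5, hours +1.0, labor share 0.7.
g_lp, deepening, g_A = decompose_labor_productivity(3.5, 4.5, 1.0, 0.7)
print(g_lp, deepening, g_A)   # 2.5 = 1.05 (capital deepening) + 1.45 (MFP)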
Note that mismeasurement of H or K will cause
mismeasurement of MFP. Specifically, if H and K
contain unmeasured increases in quality, then
measured H and K will tend to understate the flow
of labor and capital services and tend to overstate
growth of MFP.13 Productivity statistics published
by the BLS use, for H, a constant-quality index of
labor input that adjusts for such changes by using
hourly wage rates to combine the working hours
of workers with different characteristics.14 For the
brief time periods considered here, labor quality
adjustments likely matter little: For 1995-2000,
Jorgenson, Ho, and Stiroh (2005) find that the
difference between aggregate economywide hours
at work and their constant-quality quantity index
of labor volume is small. The constant-quality
index grew only 0.34 percent faster than total
hours worked during the period, largely due to
the more rapid growth of higher-paid workers.
Measuring capital input, K, has similar issues.
The productive capital stock of the United States
is a heterogeneous collection of producers’ durable
equipment and structures, each with different
specific characteristics. Although macroeconomists tend to measure the “capital stock” by summing the constant-dollar purchase prices of all
such capital assets (after a depreciation allowance),
a superior practice for productivity analysis is to
measure K as a constant-quality quantity index of
the flow of capital services.15 Jorgenson, Ho, and
13. This relationship has been well-known at least since Solow (1960).
Note that Solow (1957) uses a capital stock measure that is not
adjusted for quality change and, as a result, captures almost all
productivity improvements in A as MFP. Solow (2001) lauds the
introduction of constant-quality labor and capital services index
numbers. Many macroeconomic studies, however, continue to use
capital stock measures unadjusted for quality; see, for example,
Jones (2002).

14. Jorgenson, Gollop, and Fraumeni (1987) is the classic study. For a recent discussion that also provides newly constructed measures for ICT-related sectors, see Jorgenson, Ho, and Stiroh (2005,
Chap. 6).

15. Hulten (1992) has a clear exposition of the trade-off between
measuring quality change in capital goods and measuring multifactor productivity. Pakko (2002a) explores whether applying reasonable quality adjustments to non-ICT capital investment during
the 1990s would change the then-published profile of investment
spending. He concludes it would not.


Stiroh (2005) emphasize that incorporating quality
measures for capital services is essential to understanding the 1990s productivity acceleration.
Many analysts have noted the short service lives,
rapid depreciation rates, and high marginal products of ICT equipment. Further, the technological
innovations that accelerated the fall in semiconductor prices have also allowed the creation of
entirely new types of ICT capital goods (as well
as innovative consumer goods).16 During the
1990s, for example, businesses shifted their capital
investment spending patterns toward relatively
shorter-lived ICT capital; information processing
equipment and software comprised 25.1 percent
of private fixed investment in 2002 versus 11.4
percent in 1977. Both Triplett and Bosworth (2004)
and Jorgenson, Ho, and Stiroh (2005) use quality-adjusted capital stock data from the BLS constructed using methods pioneered by Jorgenson.
To be specific, a constant-quality quantity index
for investment in asset j can be written as
Ij = Nominal Investmentj / PI,j ,

where PI,j is a constant-quality price index that
reflects changes in the productive characteristics
and perceived “quality” of the capital asset (time
subscripts are omitted). If PI,j is correct, then Ij
measures the quantity of new nominal investment
in constant-quality “efficiency units” relative to
the base year of the price index (Hulten, 1990).
Capital stocks are constructed by means of these
methods, for example, in Jorgenson, Ho, and
Stiroh (2005, Chap. 5).
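A minimal sketch of this calculation, with hypothetical series: nominal investment is deflated by a constant-quality price index to obtain investment in efficiency units, which can then be cumulated into a constant-quality capital measure by the perpetual-inventory method (a common, though not the only, way such stocks are built).

def efficiency_unit_investment(nominal_investment, constant_quality_price):
    # I_j = nominal investment_j / P_I,j, measured in base-year efficiency units.
    return [n / p for n, p in zip(nominal_investment, constant_quality_price)]

def perpetual_inventory(real_investment, depreciation_rate, initial_stock=0.0):
    # K_t = (1 - delta) * K_{t-1} + I_t
    stocks, k = [], initial_stock
    for inv in real_investment:
        k = (1.0 - depreciation_rate) * k + inv
        stocks.append(k)
    return stocks

nominal = [100.0, 110.0, 120.0, 130.0]    # hypothetical nominal ICT investment
p_quality = [1.00, 0.80, 0.60, 0.45]      # hypothetical constant-quality price index, falling fast
real = efficiency_unit_investment(nominal, p_quality)
print(perpetual_inventory(real, depreciation_rate=0.25))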
Solow (2001) emphasizes that constant-quality
price and quantity indices are subtle concepts,
as is the separation of capital deepening from MFP.
Fundamentally, all productivity increases are
due to increases in knowledge: In some cases, an
economist might measure these as increases in
quality-adjusted capital and capital deepening;
in other cases the economist might identify the
16. Innovations in communications equipment include cell phones, high-density multiplexers for fiber optic cable, and voice-over-Internet-protocol (VOIP) telephone equipment. Doms (2005) provides a survey.


gains as MFP after quality adjustments have been
made to labor and capital inputs. Regardless,
increases in the knowledge of how to produce
goods and services is the fundamental cause of
productivity growth. Quality adjustments often
are subjective and uncertain. Judgment errors in
the PI,j necessarily affect measures of both capital
deepening and MFP. Overestimates of increased
quality potentially can inflate constant-quality
quantity indices to the extent that MFP vanishes.
Solow offers an example to illustrate the issue.
Consider a competitive two-sector economy in
which one sector produces capital goods from
labor (only) and the other produces consumer
goods from labor and capital. Let us assume a
technological innovation occurs that reduces the
quantity of labor required to produce one unit of
the capital good but does not change its physical
characteristics. In this case, both the observed
market price and the constant-quality price index
fall (no quality adjustment is made to the observed
price of the capital good). As profit-maximizing
producers of consumer goods increase the quantity
of now less-expensive capital per hour of labor,
both the physical capital stock and the constant-quality quantity index (as well as the capital-labor ratio) will increase. Alternatively, starting
from the same two-sector economy as before, let
us assume a technological innovation occurs that
increases the productivity of each unit of capital
in the production of consumer goods but does not
change the amount of labor required to produce
each unit of the capital good. In this case, the
observed price of the capital good is unchanged
but the constant-quality price index for capital
goods falls. As profit-maximizing producers of
consumer goods replace older capital with newer
capital, the constant-quality quantity index for
capital will increase even if the physical capital
stock does not, and the ratio of constant-quality
capital units to labor volume will rise. Under mild
assumptions, the long-run equilibrium economic
effect of the two alternative technological innovations is exactly the same, although the adjustment may entail a long lag when the technological
improvement is largely or entirely embedded in

the new capital good (Pakko, 2002b and 2005).
The widely used putty-clay model of Greenwood,
Hercowitz, and Krusell (1997) is similar to
Solow’s example. In their model, a single output
good can be “hardened” into consumption goods,
investment in structures, or investment in equipment. The production function for output includes
labor and both types of capital goods, allowing
rich feedback effects among investment-specific
technological progress, production of new capital
goods, and capital deepening.17
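Solow's two-sector example can be put in rough numbers. The sketch below is only an illustration under the example's assumption that capital goods are built from labor alone; it compares an innovation that halves the labor needed to build a unit of capital with one that doubles each unit's productive efficiency. Measured in constant-quality efficiency units per dollar spent, the two cases look identical, even though observed prices and physical quantities differ.

WAGE = 1.0
BUDGET = 100.0   # nominal spending on capital goods, held fixed across cases

def capital_measures(labor_per_unit, efficiency_per_unit):
    observed_price = labor_per_unit * WAGE                     # competitive price = unit labor cost
    constant_quality_price = observed_price / efficiency_per_unit
    physical_units = BUDGET / observed_price
    efficiency_units = physical_units * efficiency_per_unit
    return observed_price, constant_quality_price, physical_units, efficiency_units

for name, (labor_req, eff) in {
        "baseline":            (2.0, 1.0),
        "cheaper to build":    (1.0, 1.0),   # innovation halves the labor requirement
        "twice as productive": (2.0, 2.0),   # innovation doubles efficiency per unit
}.items():
    print(name, capital_measures(labor_req, eff))
# Both innovations halve the constant-quality price and double efficiency units per dollar,
# but only the first changes the observed price and the physical count of machines.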
Solow’s example is helpful in understanding
how investment in ICT equipment affects productivity. To be specific, it is useful to distinguish
between increases in productivity at firms that
make high-technology products and at firms that
solely use ICT.18 For the former, technological
progress in semiconductor manufacturing allows
more computing power to be produced from the
same inputs of capital and labor because such
firms are large users of information technology
equipment in development and production. For
the latter, decreases in the cost of information
technology induce capital deepening—that is,
they induce the firm to provide additional capital
equipment for each worker. Examples include
initiating/expanding e-commerce on the Internet;
improving the timeliness of linkages between
point-of-sale cash registers and inventory management systems; and improving network links among
geographically separated sites. Studies suggest
that such changes in business practice may take
considerable time to implement; hence, the
response of productivity to changes in investment
17. Greenwood, Hercowitz, and Krusell (1997) compare their constant-quality capital stock measures for 1954-90, built from data as published circa early 1994, to the capital stock measures calculated
by the BEA (which at that time lacked quality adjustment) but not
to the constant-quality quantity indices of the BLS. See Dean and
Harper (2001) and Jorgenson, Landefeld, and Nordhaus (2006) for
comparisons of the BEA and BLS capital measures.

18. Readers are cautioned that Solow's example, while illustrative, is
only an example; recall he assumes that capital goods are made
from labor only. In the real world, semiconductor manufacturing
is a large user of its own products in the form of computer-assisted
design and manufacturing. In the model of Greenwood, Hercowitz,
and Krusell (1997), the economy's output can be hardened into
capital which, in following periods, is an input to the production
of more output, including future capital goods.


in ICT equipment varies among firms and industries. Such variation may delay recognition that forces have emerged that, eventually, will increase productivity economywide.

EMPIRICAL ESTIMATES OF
ACCELERATING AGGREGATE
LABOR PRODUCTIVITY
The above framework has been used by a
number of authors to measure the effect of investment in ICT on productivity growth. Among the
more important aggregate (not industry-level)
studies are Oliner and Sichel (2002) and Jorgenson,
Ho, and Stiroh (2002, 2003, and 2004). Although
the studies’ details differ, at the aggregate level
of the national economy the authors attribute
approximately three-fifths of the acceleration in
labor productivity during the second half of the
1990s to capital deepening and two-fifths to
increases in MFP. In turn, the authors find that
approximately four-fifths of the capital deepening is due to investment in ICT equipment, with
increased spending on traditional business equipment accounting for the other one-fifth. Both
studies emphasize that purchases of ICT equipment were boosted by rapid decreases in the prices
of such equipment, due in large part to rapidly
falling prices of component semiconductors, and
perhaps displaced to some extent purchases of
traditional equipment.
As an example of the interaction between
measurement and economic modeling, consider
the Oliner and Sichel (2002) model. In this model,
the rate of increase in MFP is measured by the
inverse of the rate of decrease of semiconductor
prices, creating a direct link between observed
decreases in semiconductor prices and unobserved
increases in productivity growth. The intuition
is that, because semiconductor prices are falling
rapidly relative to the aggregate price level, MFP
at semiconductor manufacturers must be increasing; if not, the firms would exit the industry. The
effect of this measurement technique is that the
sharp decline in semiconductor prices in 1997,
shown in Figure 2, appears immediately as an

Figure 2
Contributions to Labor Productivity Growth and Relative Changes in Semiconductor Prices
[Chart, 1983-2004: contribution from capital deepening and contribution from MFP (percent, left axis) and percent change in relative semiconductor prices (percent, right axis).]
SOURCE: Productivity data, Dan Sichel (via e-mail); semiconductor prices, BLS.

increase in labor productivity growth. More
recent estimates provided to the authors by Dan
Sichel, shown in Figure 2, suggest that the direct
contribution from the semiconductor industry
was responsible for 0.08 percentage points of the
0.37 percent growth of MFP from 1974 to 1990 and
0.13 percentage points of the 0.58 percent growth
from 1991 to 1995; after 1995, the proportions
change.19 He estimates that the direct contribution
from the semiconductor industry from 1996 to
2003 was responsible for 0.40 percentage points
of the economy’s total 1.34 percent annual growth
of MFP.
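The price-based logic can be sketched with a back-of-the-envelope calculation in the spirit of, but much cruder than, the Oliner-Sichel framework: a sector whose price falls relative to the aggregate price level is credited with correspondingly faster MFP growth, and its direct contribution to economywide MFP is that growth weighted by the sector's share of output. The share and price changes below are assumptions for illustration only.

def implied_sector_mfp(sector_price_change, aggregate_price_change):
    # Faster relative price declines imply faster sector MFP growth (percent).
    return -(sector_price_change - aggregate_price_change)

def direct_contribution(sector_mfp, output_share):
    # Sector contribution to aggregate MFP growth, in percentage points.
    return output_share * sector_mfp

semi = implied_sector_mfp(sector_price_change=-45.0, aggregate_price_change=2.0)
print(semi, direct_contribution(semi, output_share=0.005))   # 47.0 percent and about 0.24 pp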
Complementary analyses are presented by
Jorgenson, Ho, and Stiroh (2002 and 2004). (The
latter paper’s results differ from the former’s
because of data revisions.) In their 2004 analysis,
labor productivity (adjusted for shifts in labor
quality) increased during the 1995-2003 period at
a rate 1.6 percentage points greater than during
the 1973-95 period; they attribute a little less than
three-fifths of this increase to capital deepening.
19. Unpublished estimates received from Dan Sichel via e-mail correspondence on June 28, 2004.

If the acceleration of productivity was driven by an increase in the rate of decrease of semiconductor (and computer) prices, just how fast did
prices fall? As shown in Figure 2, semiconductor
prices decreased throughout the 1990s with the
rate of decrease accelerating during the latter half
of the decade. Caution must be used in interpreting
these figures, however, because rapid technological
change has introduced thorny quality-adjustment
problems. The caution expressed by Gullickson
and Harper (2002) is typical:
These findings rest on estimated trends for high
tech inputs and outputs that incorporate adjustments to account for changes in their quality.
Many of the high tech input and output growth
rates are well up in the double-digit percentage
range. These extraordinary trends, in turn, rest
on the use of quality adjusted price indexes in
deflation. These indicate that prices for high
tech goods of constant quality have fallen very
rapidly. These price trend estimates have withstood much scrutiny, but we must emphasize
their importance for our conclusions. While
it is likely that real output trends have been
underestimated in many or all of the service
sector industries with negative MFP trends, it
is also possible that the growth trends for high
tech inputs have been overestimated. Underestimating service sector output trends would
bias the aggregate productivity trend downward. Overestimating high tech input and
output trends would bias the aggregate productivity trend upward...We can express a concern
that the “measurement playing field” may not
be level. We have very intricate means of making quality adjustments to high tech goods, but
we have few means to make quality adjustments to service outputs.

In other cases, the survey sample for some
products, such as semiconductors, has changed.20
Holdway (2001, p. 15) cautions:
It would be disingenuous to imply that the PPI
has been able to properly value and account for
technological change in its cmpu [CPU] price
measurements. The standard PPI methodologies
for valuing quality change [are] rather limited
when faced with quality improvements that are
accompanied by reduced input costs due to
shifts in the production function.

Holdway also notes that the apparent acceleration of semiconductor price decreases during
early 1997, as shown in Figure 2, most likely is a
result of the introduction of secondary-source
pricing data.21 Interested readers also should see
Grimm (1998) and Landefeld and Grimm (2000).
Since 2000, the relative price of quality-adjusted
semiconductors (and related products) has
decreased at a slower rate than during the latter
part of the 1990s; see Figure 2. Even though the
relative prices of semiconductors fell by approximately 38 percent in 2004, this was less than their
average decline of approximately 65 percent from
1998 to 2000.
20. For semiconductor prices, for example, the BLS has a series in the
producer price index, the BEA has a series used in the national
income accounts, and the Federal Reserve Board has a price
measure used in its industrial production index. See, for example,
Hulten (2001). The semiconductor price series plotted in Figure 2
is the PPI measure relative to the GDP price index.

21. Secondary source prices are price figures collected from catalogs
and industry publications, rather than from the manufacturer’s
price list. Holdway doesn’t speculate on whether secondary-source
price data, if available, might change the pre-1997 trend, but the
absence of such data introduces a risk into any study that attributes
the productivity acceleration to more rapid price decreases: Would
the studies reach the same conclusion if the rate of price decrease
from 1993 to 1997 had been the same as that beginning in 1997?
Or did the decision to solicit secondary-source price data reflect
observations of increased pricing pressure?


LABOR PRODUCTIVITY:
MEASUREMENT, VOLATILITY,
AND REVISIONS
Published labor productivity growth rates
have two characteristics that complicate recognizing changes in trend growth: volatility and revisions. Volatility is illustrated in Figure 3, which
shows compound annual growth rates calculated
from the most recently published data for 1-, 4-,
and 40-quarter intervals. The high volatility is
obvious. Beyond volatility, the figure also illustrates that “trend” labor productivity growth since
World War II appears to have gone through three
phases: more rapid growth from 1948 to 1973;
slower growth from 1973 to 1994; and more rapid
growth beginning circa 1995. Measured labor
productivity growth in the nonfarm business
sector, for example, averaged 3 percent per annum
during 1949 to 1972 but less than half this pace
during 1973 to 1994, despite strong productivity
growth in manufacturing.
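The series in Figure 3 are compound annual rates computed over windows of 1, 4, and 40 quarters. A minimal sketch of that calculation, applied to a hypothetical quarterly productivity index rather than the published BLS series:

def compound_annual_rate(index, n_quarters):
    # 100 * [(X_t / X_{t-n})**(4/n) - 1] for each t with n quarters of history.
    return [100.0 * ((index[t] / index[t - n_quarters]) ** (4.0 / n_quarters) - 1.0)
            for t in range(n_quarters, len(index))]

index = [100.0 * 1.006 ** t for t in range(60)]   # hypothetical index, ~0.6 percent per quarter
for n in (1, 4, 40):
    print(n, round(compound_annual_rate(index, n)[-1], 2))   # about 2.4 percent at every window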
Since 1995, the pace of productivity growth
in the total nonfarm business sector has been about
equal to its rate during the earlier high-growth
period of 1949 to 1972; for the larger total private
business sector, growth over the past 10 years still
remains modestly below its earlier pace. The lower
two sections of Table 1 decompose productivity
growth into growth of its numerator (output) and
of its denominator (hours). The increase in productivity growth from 1973-94 (column 2) to 1995-2005 (column 3) reflects both more rapid growth
of the numerator (output) and slower growth of
the denominator (hours). For broad sectors, the
table shows that the post-1973 productivity growth
slowdown (compare columns 1 and 2) largely was
due to slowdowns in the services and nondurable
manufacturing sectors—durable manufacturing’s
labor productivity growth increased modestly
throughout the slowdown period. During the most
recent decade, durable manufacturing’s productivity growth has jumped to an average annual
pace of approximately 5.75 percent, double its
1949-72 pace.
Published measurements of the economy’s
output and labor input are frequently revised. Not
only do data revisions complicate the task facing

Figure 3
Labor Productivity Growth, Nonfarm Business Sector
[Three panels of compound annual growth rates (percent, quarterly data, 1947-2004): 10-year growth rate, 1-year growth rate, and 1-quarter growth rate.]


Table 1
Decomposition of Average Labor Productivity Growth for the Business Sector
Growth for periods indicated

                                     1949-72   1973-94   1995-2005   1949-2005
Output per hour
  Business                            3.23      1.58       2.76        2.49
  Nonfarm business                    2.77      1.48       2.69        2.25
  Manufacturing                       2.58      2.59       4.44        2.94
    Durable                           2.64      3.02       5.86        3.40
    Nondurable                        2.83      1.90       2.85        2.47
  Nonfinancial corporate business     2.61      1.40       3.34        2.21
Output
  Business                            4.10      3.18       3.61        3.65
  Nonfarm business                    4.22      3.17       3.64        3.70
  Manufacturing                       3.74      2.51       2.38        3.00
    Durable                           4.21      2.87       4.19        3.68
    Nondurable                        3.48      1.90       0.16        2.22
  Nonfinancial corporate business     5.51      3.23       4.27        4.17
Hours
  Business                            0.84      1.57       0.83        1.12
  Nonfarm business                    1.41      1.66       0.92        1.41
  Manufacturing                       1.14     –0.08      –1.97        0.05
    Durable                           1.53     –0.15      –1.58        0.27
    Nondurable                        0.63      0.00      –2.61       –0.25
  Nonfinancial corporate business     2.86      1.81       0.90        1.92

NOTE: Compounded annual growth rates using quarterly data: 1949:Q1 to 1972:Q4; 1972:Q4 to 1994:Q4; 1994:Q4 to 2005:Q4. Data for nonfinancial corporations begin in 1958:Q1 and end in 2005:Q3. Data for total manufacturing and for durable and nondurable manufacturing are on an SIC basis prior to 1987.
SOURCE: BLS.

policymakers—changing perceived strength or
weakness of economic conditions that inform their
judgments—but they are often significant enough
to dramatically alter economic history.22 As an
example, each year the BEA revises the national
income and product accounts and the BLS revises
employment and aggregate hours worked in the
establishment survey. Selected revisions, and their
effects, are shown in Table 2. For 1998 and 1999,
for example, measured output growth in the subsequent, revised data is sharply higher than in the earlier, preliminary data.23 Beginning with 2001, however, the pattern changed: Measured output growth in the revised data has tended to be lower than in previous, preliminary figures. The NIPA revisions published during mid-2005, for example, trimmed measured real GDP growth over the previous three years by 0.3 percentage points per year, to approximately 3.25 percent.

22. See Himmelberg et al. (2004), Kozicki (2004), Orphanides and van Norden (2005), or Runkle (1998).

23. The 1999 revisions, it should be noted, were boosted by the reclassification of software purchased by businesses as fixed investment, rather than as an intermediate expense; see Gullickson and Harper (2002).


Table 2
Major Statistical Revisions Since 1996 and Real-Time Estimates of Their Effects

January 1996: Comprehensive revision of the NIPA.
  Major aspects: Switch to chain-weighted price indices from fixed-weighted price indices in the NIPA. Government investment defined differently. New methodology for calculating depreciation of fixed capital.
  Estimated magnitude: Revised estimates show real GDP grew at a 3.2 percent annual rate from 1959 to 1984, 0.2 percentage points faster than the old estimate. Real GDP growth from 1987 to 1994 was lowered 0.1 percentage point.*

July 1998: Annual revision of the NIPA.
  Major aspects: Updated source data. Methodology changes to expenditures and prices for autos and trucks; improved estimates for several categories of consumer expenditures for services; new method of calculating change in business inventories; some purchases of software by businesses classified as expenses (removed from business fixed investment).
  Estimated magnitude: From 1994:Q4 to 1998:Q1 the growth of real GDP was revised 0.3 percentage points higher to 3.4 percent; growth of real fixed investment revised 0.6 percentage points higher to 12.7 percent; growth of the GDP price index reduced 0.3 percentage points to 1.8 percent.

February 1999: Consumer price index (CPI).
  Major aspects: Switch to geometric means estimation to eliminate lower-level bias; affected 61 percent of consumer expenditures.
  Estimated magnitude: According to the BLS, this switch will reduce the annual rate of increase of the CPI by 0.2 percentage points per year. According to the CEA, methodological changes to the CPI from 1994 to 1999 reduced the annual rate of increase of the CPI by 0.6 percentage points in 1999 compared with the 1994 estimate.†

October 1999: Comprehensive revision of the NIPA.
  Major aspects: Introduction of CPI geometric weights; classification of software as a fixed investment; incorporated data from the latest 5-year economic census and 1992 benchmark input-output accounts.
  Estimated magnitude: From 1987 to 1998, these revisions boosted the annual rate of growth of real GDP by an average of 0.4 percentage points per year.‡

July 2001: Annual revision of the NIPA.
  Major aspects: Updated source data (for example, Census Bureau annual surveys); new price index for communications equipment from the Federal Reserve Board; monthly data used to calculate GDP converted from SIC to NAICS.
  Estimated magnitude: Growth of real GDP during the revision period (1998:Q1 to 2001:Q1) reduced from 4.1 percent to 3.8 percent (compared with pre-revision estimates).

July 2002: Annual revision to the NIPA.
  Major aspects: Updated source data (for example, Census Bureau annual surveys); new methodology for estimating quarterly wages and salaries; new price index within PCE services.
  Estimated magnitude: Growth of real GDP during the revision period (1999:Q1 to 2002:Q1) reduced from 2.8 percent to 2.4 percent (compared with pre-revision estimates).

July 2004: Annual revision to the NIPA.
  Major aspects: Updated source data; only minor changes in methodology for treatment of health care plans for retired military and measurement of motor vehicle inventories.
  Estimated magnitude: Growth of real GDP during the revision period (2000:Q4 to 2004:Q1) was unchanged at 2.5 percent; growth of real fixed investment in equipment and software revised 0.6 percentage points lower.

July 2005: Annual revision to the NIPA.
  Major aspects: Updated source data; incorporation of Census' quarterly services survey for investment in computer software and for consumer spending for services; improved method of calculating implicit services provided by commercial banks. BEA claims these changes will reduce the volatility of the price index for PCE.
  Estimated magnitude: Growth of real GDP from 2001:Q4 to 2005:Q1 reduced from 3.5 percent to 3.2 percent. Over the same period, growth of the GDP price index and the core PCE price index were revised 0.2 percentage points higher to 2.2 and 1.7 percent, respectively.

NOTE: Discussion and estimates of annual revisions to the NIPA were taken from archived reports at the BEA web site: www.bea.gov.
SOURCE: *1996 Economic Report of the President, p. 48. †2000 Economic Report of the President, p. 61. ‡Ibid., p. 81.

Revisions to national income data change
measured productivity, often significantly.
Changes since 1994 are summarized in Table 3.24
Consistent with revisions to output, in both 1998
and 1999 the BLS revised upward measured nonfarm labor productivity, and in 2001 and 2002 it
revised downward measured productivity. The
2001 revision, for example, reduced the measured
three-year growth rate of labor productivity by
more than three-quarters of a percentage point.
Overall, revisions to productivity growth primarily
are due to revisions to measured output and not
to revisions in measured employment or aggregate
24. These revisions incorporate both the annual three-year revisions
to the NIPA as well as the periodic comprehensive revisions, which
occur about every five years. See the footnote to Table 3.


hours worked. Since 1994, for example, the mean
absolute revision to the growth rate of output,
0.30 percentage points, is more than double that
of hours worked, 0.14 percentage points, and
approximately equal to that of productivity growth, 0.28 percentage points.25
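The two summary statistics used here are straightforward: for each year, the revision is the revised growth rate minus the initially published one (the convention of Table 3's "Difference" column), and the table reports the mean and the mean absolute value of those revisions. A minimal sketch, using the first few output-per-hour entries of Table 3:

def revision_stats(initial, revised):
    diffs = [r - i for i, r in zip(initial, revised)]
    mean_revision = sum(diffs) / len(diffs)
    mean_absolute_revision = sum(abs(d) for d in diffs) / len(diffs)
    return mean_revision, mean_absolute_revision

initial = [2.55, 1.72, 0.83, 0.75, 1.55]    # initially published output per hour, 1994-98 (Table 3)
revised = [2.36, 1.68, 0.57, 0.88, 2.06]    # most recently published values, 1994-98 (Table 3)
print(revision_stats(initial, revised))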
A longer-horizon picture of historical revisions
to measured labor productivity growth is shown
in Figure 4. For each year, 1959 to 2004, the figure
has one vertical line that summarizes all the values
of that year’s labor growth as published in various
issues of the Economic Report of the President.
25. The BLS's annual benchmark revisions to establishment data
have become smaller over time. From 1984 to 2004, the absolute
percentage change in nonfarm payrolls averaged 0.2 percent, a
third as much as the 1964-83 period. See Haltom, Mitchell, and
Tallman (2005).


Table 3
Effect of Annual NIPA Revisions on Measured Growth of Labor Productivity, Output, and Hours in the Nonfarm Business Sector (percent change at a compound rate)

                          Output per hour                Output                        Hours
NIPA revision period      Initial  Revised  Difference   Initial  Revised  Difference  Initial  Revised  Difference
1994                       2.55     2.36     –0.19        3.99     4.12      0.13       1.40     1.70      0.30
1995                       1.72     1.68     –0.04        4.70     4.85      0.15       2.87     3.09      0.22
1996                       0.83     0.57     –0.26        2.92     2.84     –0.08       2.03     2.28      0.25
1997                       0.75     0.88      0.13        3.21     3.33      0.12       2.40     2.44      0.04
1998                       1.55     2.06      0.51        4.15     4.76      0.61       2.60     2.64      0.04
1999                       2.31     2.60      0.29        4.44     4.74      0.30       2.38     2.42      0.04
2000                       3.30     3.30      0.00        5.41     5.51      0.10       2.06     2.13      0.07
2001                       3.05     2.28     –0.77        4.28     3.60     –0.68       1.16     1.27      0.11
2002                       3.08     2.71     –0.37        2.00     1.44     –0.56      –1.08    –1.27     –0.19
2003                       2.87     3.60      0.73        1.56     1.50     –0.06      –1.27    –2.02     –0.75
2004                       4.69     4.45     –0.24        4.33     4.23     –0.10      –0.38    –0.23      0.15
2005                       3.97     3.68     –0.29        4.76     4.59      0.17       0.76     0.87      0.11
Mean revision                                –0.04                           0.01                          0.03
Mean absolute revision                        0.32                           0.26                          0.19

NOTE: Pre- and post-benchmark figures as published in the BLS Productivity and Cost Report. The NIPA revision period is the nine quarters up to and including the first quarter of the year indicated. The year indicated is the year of publication of the NIPA revision, usually July or August. The 1999 NIPA revision, more extensive than most, incorporated the October 28, 1999, introduction of computer software into business fixed investment. This resulted in revisions back to 1959. Nevertheless, for consistency, the revisions shown here are for the nine quarters ending in the first quarter of the year indicated. (The 1999 revisions to "hours" appeared in the August 5, 1999, Productivity and Cost Report.)

The lower and upper ends of each line correspond
to the lowest and highest published growth rates,
respectively, for that year, while the “dot” indicates the most recent estimate. For many years,
the minimum-to-maximum range equals or
exceeds 2 percentage points. Ranges for years after
1995 are smaller, perhaps due to better measurement techniques, or perhaps because there are
fewer observations.
Further insight can be gained from “case
studies” of periods during which breaks in trend
productivity growth occurred. Here, we consider
1973 and 1995-96.
• For 1973, the first-published estimate of
labor productivity growth was approximately 3 percent; see Figure 5. This value

fell sharply in subsequent revisions. During
the late 1980s, however, the published value
began to increase. In the most recently published data, 1973’s measured productivity
growth is greater than its initially published
value—removing entirely any “slowdown”
during the year.26
• For 1995 and 1996, the most recently published values differ sharply from initial
estimates. For 1995, the most recent value
is much lower than the initial estimate; see
Figure 6. For 1996, the most recent figure is
much higher than the initial estimate; see
26. In this vein, it appears that the switch to chain weights from fixed
weights in 1996 (see Table 2) was particularly significant. See
Gullickson and Harper (2002).


Figure 4
Revisions to Real-Time Estimates of Labor Productivity Growth, 1959-2004
[Four panels (percentage points): 1959-69, 1970-79, 1980-89, and 1990-2004. For each year, a vertical line spans the lowest and highest published estimates and a dot marks the current estimate.]
SOURCE: Economic Report of the President, annual issues, 1959-2004.

Figure 7. The revision patterns for 1995 and
1996 made it difficult to recognize, during
1995 and 1996, that a change in trend productivity growth was occurring. Although
the initially published estimates for the first
three quarters of 1995 suggested a productivity acceleration, by mid-1996 these estimates had been revised downward to less
than 1 percent. For 1996, initial estimates
for all four quarters were between approximately 0.5 and 1.5 percent, hardly supportive of acceleration. Not until the third
quarter of 1997 did revised estimates suggest an acceleration, and not until mid-1998
was its extent clearly visible in the revised
data.
Differences between first-published and most
recently published productivity figures for 1985
to 2005 are summarized in Table 4 and Figures 8
and 9. The principal conclusion to be drawn from
Table 4 is that, although mean revisions are small,
mean absolute revisions are large, in some cases
approximately equal to the estimated annual
growth rate itself. Revisions to four-quarter growth
rates are smaller than revisions to one-quarter
growth rates, although this is due, in part, to the

Figure 5
Real-Time Estimates of 1973 Labor Productivity Growth
[Chart: percent, plotted by publication date of the Economic Report of the President, 1974-2004.]

Figure 6
Labor Productivity Growth, 1995
[Year-over-year percent change for 1995:Q1-1995:Q4, as estimated in monthly vintages, Jan 1995–Dec 2000.]


Figure 7
Labor Productivity Growth, 1996
[Year-over-year percent change for 1996:Q1-1996:Q4, as estimated in monthly vintages, Jan 1996–Dec 2000.]

Figure 8
Nonfarm Business Sector Labor Productivity Growth Estimates (four-quarter growth rate)
[Scatter of the published value as of 2005 against the initial published value; fitted line y = 0.4917x + 1.2572, R2 = 0.331.]


Figure 9
Nonfinancial Corporate Business Labor Productivity Growth Estimates (four-quarter growth rate)
[Scatter of the published value as of 2005 against the initial published value; fitted line y = 0.5566x + 1.0938, R2 = 0.5079.]

arithmetic of expressing all changes—including
those for one quarter—as annualized growth rates.
Note that revisions to output growth rates are
smaller than those for productivity and that revisions to hours worked are smaller than revisions
to output—suggesting that hours worked may be
measured, at least in the near-term, with less error
than output. Among the aggregate business sectors,
durable goods manufacturing has the largest mean
absolute revision. The larger revision likely reflects
the better near-term precision with which this
sector is measured, including more timely incoming revised data.
Two similar conclusions are suggested by
Figures 8 and 9. First, there are large differences
between first-published data and revised data.
Second, more-accurate measurement matters:
Revisions for the narrower and somewhat better-measured nonfinancial corporate business sector are smaller than for the broader and less well-measured nonfarm private business sector.
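The lines drawn through the scatters in Figures 8 and 9 are ordinary least-squares fits of the growth rates published as of 2005 on the initially published growth rates. A minimal sketch of such a fit, using placeholder data rather than the underlying BLS series:

def ols_fit(x, y):
    # Least-squares slope and intercept of y on x.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

initial = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]    # placeholder initial estimates
revised = [1.4, 1.9, 2.3, 2.6, 3.3, 3.8]    # placeholder estimates as of 2005
print(ols_fit(initial, revised))            # slope well below 1, as in Figures 8 and 9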

CONCLUSIONS
Since 1995, estimates of the economy's long-run, or structural, rate of labor productivity
growth have increased significantly. After having
increased at about a 1.4 percent annual rate from
1973 to 1994, the current sustainable pace of labor
productivity growth in the nonfarm business
sector is widely believed to be from one-half to 1
percentage point higher.
Recognition during the mid-1990s of the acceleration of productivity was delayed by weaknesses
in measuring productivity. Initial aggregate data
for 1995 and 1996, for example, showed little
increase in measured productivity. Although these
productivity measurements were at odds with
both anecdotal observations at individual firms
and available data on business investment spending (which suggested that rapidly falling semiconductor and computer prices were encouraging
significant capital deepening), not until mid-1997

Table 4
Initially Published vs. Most Recently Published Growth Rates of Nonfarm Labor Productivity, 1985:Q3 to 2005:Q4

                            Output per hour            Output                  Hours
                            Mean      Mean abs.        Mean      Mean abs.     Mean      Mean abs.
                            revision  revision         revision  revision      revision  revision
Growth from preceding period (quarterly, percent annual rate)
  Business sector            0.40      1.78             0.37      1.55         –0.02      1.07
  Nonfarm                    0.41      1.80             0.34      1.54         –0.07      1.09
  Manufacturing              0.03      2.12            –0.06      1.65         –0.06      1.28
    Durable                 –0.08      2.77            –0.04      2.21          0.05      1.42
    Nondurable               0.08      2.16            –0.18      1.73         –0.22      1.38
  Nonfinancial corporate    –0.12      2.01            –0.12      1.82         –0.02      1.04
Growth from corresponding period 1 year earlier (quarterly, percent annual rate)
  Business sector            0.26      1.03             0.24      0.81         –0.02      0.62
  Nonfarm                    0.25      1.00             0.19      0.80         –0.05      0.62
  Manufacturing             –0.16      1.37            –0.18      0.89         –0.01      0.77
    Durable                 –0.28      1.89            –0.19      1.43          0.11      0.83
    Nondurable              –0.11      1.13            –0.32      0.71         –0.18      0.79
  Nonfinancial corporate    –0.04      1.12            –0.04      1.04         –0.02      0.70

NOTE: Each figure is equal to the initially published growth rate minus the most recently published growth rate for the span indicated.
SOURCE: BLS, Productivity and Cost.

did revised data for 1995 and 1996 display gains
in productivity growth. Our analysis suggests that
such measurement delays and revisions are not
uncommon.

REFERENCES
Anderson, Richard G. and Kliesen, Kevin L.
“Productivity Measurement and Monetary
Policymaking during the 1990s.” Working Paper
2005-067, Federal Reserve Bank of St. Louis,
October 2005.
Basu, Susanto; Fernald, John G.; Oulton, Nicholas
and Srinivasan, Sylaja. “The Case of the Missing
Productivity Growth: Or, Does Information
Technology Explain Why Productivity Accelerated
in the US but Not in the UK?” NBER Working Paper
No. 10010, National Bureau of Economic Research,
October 2003, published in Mark Gertler and
Kenneth Rogoff, eds, NBER Macroeconomics
Annual, 2003. Cambridge, MA: MIT Press, 2004.
Baumol, William J. “Macroeconomics of Unbalanced
Growth: The Anatomy of Urban Crisis.” American
Economic Review, June 1967, 57(3), pp. 415-26.
Cohen, Linda and Young, Allie. Multisourcing:
Moving Beyond Outsourcing to Achieve Growth
and Agility. Cambridge, MA: Harvard Business
School Press, 2005.
Corrado, Carol and Slifman, Lawrence. "Decomposition
of Productivity and Unit Costs.” American Economic
Review, May 1999, 89(2), pp. 328-32.
Dean, Edwin R. and Harper, Michael J. “The BLS
Productivity Measurement Program,” in Charles R.
Hulten, Edwin R. Dean, and Michael J. Harper, eds.,
New Developments in Productivity Analysis.

Chicago: University of Chicago Press for the NBER,
2001, pp. 55-84.
Doms, Mark. "Communications Equipment: What Has Happened to Prices?" in Carol Corrado, John Haltiwanger, and Daniel Sichel, eds., Measuring Capital in the New Economy. Chicago: University of Chicago Press for the NBER, 2005, pp. 323-62.

Edge, Rochelle M.; Laubach, Thomas and Williams, John C. "Learning and Shifts in Long-Run Productivity Growth." Working Paper 2004-21, Board of Governors of the Federal Reserve System, April 2004.

Friedman, Thomas L. The World Is Flat. New York: Farrar, Straus, and Giroux, 2005.

Gordon, Robert J. "Does the 'New Economy' Measure Up to the Great Inventions of the Past?" Journal of Economic Perspectives, 2000, 14(4), pp. 49-74.

Gordon, Robert J. "Productivity Growth and the New Economy: Comments." Brookings Papers on Economic Activity, 2002, 0(2), pp. 245-53.

Gordon, Robert J. "Exploding Productivity Growth: Context, Causes and Implications." Brookings Papers on Economic Activity, 2003, 0(2), pp. 207-98.

Greenwood, Jeremy; Hercowitz, Zvi and Krusell, Per. "Long-Run Implications of Investment-Specific Technological Change." American Economic Review, 1997, 87(3), pp. 342-62.

Griliches, Zvi. Output Measurement in the Service Sectors. Chicago: University of Chicago Press, 1992.

Griliches, Zvi. "Productivity, R&D, and the Data Constraint." American Economic Review, March 1994, 84(1), pp. 1-23.

Grimm, Bruce T. "Price Indexes for Selected Semiconductors 1974–96." Survey of Current Business, February 1998, 78(2), pp. 8-24.

Gullickson, William and Harper, Michael J. "Bias in Aggregate Productivity Trends Revisited." Monthly Labor Review, March 2002, 125(3), pp. 32-40.

Haltom, Nicholas L.; Mitchell, Vanessa D. and Tallman, Ellis W. "Payroll Employment Data: Measuring the Effects of Annual Revisions." Federal Reserve Bank of Atlanta Economic Review, Second Quarter 2005, pp. 1-23.

Himmelberg, Charles P.; Mahoney, James M.; Bang, April and Chernoff, Brian. "Recent Revisions to Corporate Profits: What We Know and When We Knew It." Federal Reserve Bank of New York Current Issues in Economics and Finance, March 2004, 10(3), pp. 1-7.

Holdway, Mike. "An Alternative Methodology: Valuing Quality Change for Microprocessors in the PPI." Presentation to the Advisory Committee to the Bureau of Economic Analysis, May 11, 2001.

Hulten, Charles R. "The Measurement of Capital," in Ernst R. Berndt and Jack E. Triplett, eds., Fifty Years of Economic Measurement. Chicago: University of Chicago Press for the NBER, 1990, pp. 119-52.

Hulten, Charles R. "Growth Accounting When Technical Change Is Embodied in Capital." American Economic Review, September 1992, 82(4), pp. 964-80.

Hulten, Charles R. "Total Factor Productivity: A Short Biography," in Charles R. Hulten, Edwin R. Dean, and Michael J. Harper, eds., New Developments in Productivity Analysis. Chicago: University of Chicago Press for the NBER, 2001.

Jones, Charles I. “Sources of U.S. Economic Growth
in a World of Ideas.” American Economic Review,
March 2002, 92(1), pp. 220-39.
Jorgenson, Dale W.; Gollop, Frank M. and Fraumeni,
Barbara M. Productivity and U.S. Economic Growth.
Harvard Economic Studies. Volume 159. Cambridge,
MA: Harvard University Press, 1987.
Jorgenson, Dale W.; Ho, Mun S. and Stiroh, Kevin J.
“Projecting Productivity Growth: Lessons from the
U.S. Growth Resurgence.” Federal Reserve Bank of
Atlanta Economic Review, Third Quarter 2002,
pp. 1-13.
Jorgenson, Dale W.; Ho, Mun S. and Stiroh, Kevin J.
“Lessons from the U.S. Growth Resurgence.” Paper
prepared for the First International Conference on
the Economic and Social Implications of Information
Technology, Washington, DC, January 27-28, 2003;
www.si.umich.edu/~kahin/hawk/htdocs/
jorgensonpaper.doc.
Jorgenson, Dale W.; Ho, Mun S. and Stiroh, Kevin J.
“Will the U.S. Productivity Resurgence Continue?”
Federal Reserve Bank of New York Current Issues
in Economics and Finance, December 2004, 10(13).
Jorgenson, Dale W.; Ho, Mun S. and Stiroh, Kevin J.
“Projecting Productivity Growth: Lessons from the
U.S. Growth Resurgence,” in William H. Dutton et
al., eds., Transforming Enterprises: The Economic
and Social Implications of Information Technology.
Cambridge, MA: MIT Press, 2005, pp. 49-75.
Jorgenson, Dale W.; Landefeld, J. Steven and Nordhaus,
William D. A New Architecture for the U.S. National
Accounts. Chicago: University of Chicago Press for
the NBER, 2006 (forthcoming).
Kozicki, Sharon. “The Productivity Growth Slowdown:
Diverging Trends in the Manufacturing and Service
Sectors.” Federal Reserve Bank of Kansas City
Economic Review, First Quarter 1997, pp. 31-46.
Kozicki, Sharon. “How Do Data Revisions Affect the
Evaluation and Conduct of Monetary Policy?”
Federal Reserve Bank of Kansas City Economic
Review, First Quarter 2004, 89(1), pp. 5-38.
Landefeld, J. Steven and Grimm, Bruce T. “A Note
on the Impact of Hedonics and Computers on Real
GDP.” Survey of Current Business, December 2000,
pp. 17-22.
Oliner, Stephen D. and Sichel, Daniel E. “Information
Technology and Productivity: Where Are We Now
and Where Are We Going?” Federal Reserve Bank
of Atlanta Economic Review, Third Quarter 2002,
87(2), pp. 1-13.
Orphanides, Athanasios and van Norden, Simon.
“The Reliability of Inflation Forecasts Based on
Output Gap Estimates in Real Time.” Journal of
Money, Credit, and Banking, June 2005, 37(3), pp.
583-601.
Pakko, Michael R. “The High-Tech Investment Boom
and Economic Growth in the 1990s: Accounting
for Quality.” Federal Reserve Bank of St. Louis
Review, March/April 2002a, 84(2), pp. 3-18.
Pakko, Michael R. “What Happens When the
Technology Growth Trend Changes? Transition
Dynamics, Capital Growth and the ‘New Economy’.”
Review of Economic Dynamics, April 2002b, 5(2),
pp. 376-407.
Pakko, Michael R. “Changing Technology Trends,
Transition Dynamics, and Growth Accounting.”
Contributions to Macroeconomics, 2005, 5(1),
Article 12.
Runkle, David E. “Revisionist History: How Data
Revisions Distort Economic Policy Research.”
Federal Reserve Bank of Minneapolis Quarterly
Review, Fall 1998, 22(4), pp. 3-12.
Sherwood, Mark K. “Difficulties in the Measurement
of Service Outputs.” Monthly Labor Review, March
1994, 117(3), pp. 11-19.
Solow, Robert. “Technical Change and the Aggregate
Production Function.” Review of Economics and
Statistics, 1957, 39(3), pp. 312-20.
Solow, Robert. “Investment and Technological
Progress,” in Kenneth Arrow, Samuel Karlin, and
Patrick Suppes, eds., Mathematical Methods in the
Social Sciences. Stanford, CA: Stanford University
Press, 1960, pp. 89-104.
Solow, Robert. “After ‘Technical Progress and the
Aggregate Production Function’,” in Charles R.
Hulten, Edwin R. Dean, and Michael J. Harper, eds.,
New Developments in Productivity Analysis.
Chicago: University of Chicago Press for the NBER,
2001, pp.173-78.
Stiroh, Kevin J. “Information Technology and the
U.S. Productivity Revival: What Do the Industry
Data Say?” American Economic Review, December
2002, 92(5), pp. 1559-76.
Triplett, Jack E. and Bosworth, Barry P. “Productivity
in the Services Sector,” in Robert M. Stern, ed.,
Services in the International Economy. Ann Arbor,
MI: University of Michigan Press, 2001;
www.brookings.edu/views/papers/triplett/
20000112.htm.

Triplett, Jack E. and Bosworth, Barry P. “Productivity
Measurement Issues in Services Industries:
‘Baumol’s Disease’ Has Been Cured.” Federal
Reserve Bank of New York Economic Policy
Review, September 2003, 9(3), pp. 23–33.
Triplett, Jack E. and Bosworth, Barry P. Productivity
in the U.S. Services Sector. Washington, DC:
Brookings Institution, 2004.
Triplett, Jack E. and Bosworth, Barry P. “Baumol’s
Disease Has Been Cured: IT and Multifactor
Productivity Growth in the U.S. Services Industries,”
in Dennis Jansen, ed., The New Economy and
Beyond: Past, Present and Future. Cheltenham:
Edward Elgar, 2006 (forthcoming);
www.brookings.edu/es/research/projects/
productivity/workshops/20020517_triplett.pdf.
Yuskavage, Robert E. “Improved Estimates of Gross
Product by Industry, 1959-94.” Survey of Current
Business, August 1996, 76(8), pp. 133-55.
Yuskavage, Robert E. and Pho, Yvon H. “Gross
Domestic Product by Industry for 1987-2000.”
Survey of Current Business, November 2004, pp.
33-53.


The Learnability Criterion and Monetary Policy
James B. Bullard
Expectations of the future play a large role in macroeconomics. The rational expectations assumption, which is commonly used in the literature, provides an important benchmark, but may be too
strong for some applications. This paper reviews some recent research that has emphasized methods
for analyzing models of learning, in which expectations are not initially rational but which may
become rational eventually provided certain conditions are met. Many of the applications are in
the context of popular models of monetary policy. The goal of the paper is to provide a largely nontechnical survey of some, but not all, of this work and to point out connections to some related
research.
Federal Reserve Bank of St. Louis Review, May/June 2006, 88(3), pp. 203-17.

INTRODUCTION
Overview

In a number of recent papers, economists
have begun to analyze the stability of rational
expectations equilibria under learning in
microfounded models of monetary policy. Most
of these analyses have been in versions of the
New Keynesian macroeconomics, as presented
most prominently by Woodford (2003a). The goal
of this paper is to provide a brief, largely nontechnical survey of some, but not all, of this work
and to point out connections to some related
research.

Origins
Learning has been an issue in macroeconomics
since the rational expectations revolution swept
the field in the 1970s and 1980s. Rational expectations has long been understood as a modeling
device: When studying economic outcomes, we
economists should think of them as equilibria

only if expectations are consistent with actual
outcomes. But, how is it that economic actors
could come to possess rational expectations if
they do not initially possess detailed knowledge
concerning the nature of equilibrium in the
economy or economic situation in which they
find themselves?1
Several key papers in the 1980s, including
Bray (1982), Evans (1985), Lucas (1987), and
Marcet and Sargent (1989a,b), explored an idea
concerning one resolution of this question. The
idea was that, indeed, economic actors cannot be
expected to initially know the nature of the equilibrium of the economy in which they operate.
Instead, they have a perception of the equilibrium
law of motion, and they use available data generated by the economy itself to update their perceived law of motion using recursive algorithms,
1. Some of the tenor of the earlier, feisty debate on this question is
conveyed by the following quote from an influential paper by
Stephen DeCanio (1979, p. 52, italics in original): “Thus, direct
computation of rational expectations by flesh-and-blood agents in
an actual market situation is impossible in practice.”

James B. Bullard is a vice president and economist at the Federal Reserve Bank of St. Louis. This paper is a revised and extended version of
remarks originally prepared for the conference, “Heterogeneous Information and Modelling of Monetary Policy,” held in Helsinki, Finland,
October 2-3, 2003. The author thanks the Bank of Finland and the Center for Economic Policy Research for sponsoring this event, and Seppo
Honkapohja, Massimo Guidolin, and Michael Owyang for helpful comments. Deborah Roisman provided research assistance.

© 2006, The Federal Reserve Bank of St. Louis. Articles may be reprinted, reproduced, published, distributed, displayed, and transmitted in
their entirety if copyright notice, author name(s), and full citation are included. Abstracts, synopses, and other derivative works may be made
only with prior written permission of the Federal Reserve Bank of St. Louis.


such as recursive least squares. Should the perceived law of motion come to coincide with the
actual law of motion of the economy, a rational
expectations equilibrium will have been attained.
The economic actors will have “learned” the
rational expectations equilibrium.
This idea also has an appealing practical interpretation. In an actual macroeconomic environment, expectations of all of the key players are
influenced by the expectations of the forecasting
community. The forecasting community uses
econometric models of the economy, recursively
updated. Thus, it is not too far-fetched to think
that a dynamic like the one described is powerful
and at work in observed macroeconomies.
The question of whether such a process will
actually converge or not is technically demanding
because, in economic models, beliefs concerning
the future help determine actual values of key
variables; but, under learning, these same values
are used in the recursive updating and so feed
back into the generation of updated beliefs. It is
not at all clear how such a system should be
expected to behave. The findings of Marcet and
Sargent (1989a,b) on this question were revised,
extended, and explored in a series of papers by
George Evans and Seppo Honkapohja during the
1990s. Much of that effort is discussed in the landmark book by Evans and Honkapohja (2001),
where they present a complete theory of the effects
of recursive learning in macroeconomic environments. One theme of their theory is that local convergence in such systems can often be assessed
by calculating a certain expectational stability
(E-stability) condition, viewing the mapping from
the perceived law of motion to the actual law of
motion as a differential equation in notional
time. They show the conditions under which the
stability of this differential equation governs the
stability of the system under real-time recursive
learning.2 These conditions are generally quite
weak, and so many authors now routinely calculate expectational stability conditions as a means
of assessing stability under recursive learning in
models of interest.
2. The systems under real-time learning are stochastic difference equations with time-varying coefficients.

A Minimal Criterion
It is important to stress that the idea of stability
under recursive learning—learnability—just outlined can be viewed as a “minimal deviation from
rational expectations” approach to this question.
The agents in the model are endowed with a perceived law of motion which, in most cases, corresponds in form to the equilibrium law of motion
for that economy. Thus, the agents are given the
correct specification for their recursively estimated
vector autoregressions that they use to forecast
the future. In addition, the theorems are local in
nature, so that we think of the systems as initially
quite near the rational expectations equilibrium.
And, the agents are passive updaters—they simply
update the coefficients in their model as new data
are produced. Convergence hinges on whether
initially small expectational errors are damped
or magnified as the economy evolves. One interpretation of this is that the situation is very favorable to allowing the agents to learn the rational
expectations equilibrium. If the equilibrium cannot be learned even under these very favorable
conditions, then one might be quite pessimistic
about the possibility of observing such an equilibrium in an actual economy. Thus, the learnability criterion can be viewed as a minimal stability
condition that any reasonable equilibrium should
be required to meet.

What Has Been Learned So Far?
The main messages of the learning literature
to date are not difficult to summarize. First and
foremost, it is possible in many macroeconomic
environments that recursive learning as described
above can produce a dynamic that converges to
a rational expectations equilibrium. So, some
rational expectations equilibria are indeed learnable in this sense. Some initial thinking on this
issue suggested that a general case could be made
for nonconvergence, and thus that rational expectations equilibrium was not a useful concept. But
that argument has been dispelled.
A second message, however, is that not all
rational expectations equilibria are learnable.
Some, in fact, are unstable under the recursive
learning dynamic. Furthermore, because this conception of recursive learning involves a minimal
deviation from rational expectations, the unlearnable equilibria are particularly suspect as descriptions of actual economies. One certainly has the
impression from much of the economics profession that all rational expectations equilibria are
somehow learnable,3 but it turns out not to be true.
It is perhaps not hard to imagine now that, for
systems like this, the feedback could be too strong
and expectational errors could be amplified.
The state of affairs is thus that some rational
expectations equilibria are learnable while others
are not. Furthermore, convergence will in general
depend on all the economic parameters of a given
system, including the policy parameters (that is,
it depends on the entire economic structure).
Therefore, an important additional message is that
policy can have an impact on whether a targeted
rational expectations equilibrium is learnable or
not. Policymakers therefore may wish to take into
account how a particular policy choice might
influence the stability of a targeted equilibrium.
This feature of the recent literature has generated
considerable interest.
One additional message is that there appears
to be no clear, general relation between conditions
for learnability and conditions for determinacy
of rational expectations equilibrium. I will discuss
this issue briefly below.

Alternative Formulations of Learning
In a recent after-dinner speech, eminent
economist Charles Goodhart remarked that, in his
opinion, most learning in a large macroeconomy
comes not from statistical regression of any kind,
but from information passed from person to person. Goodhart said, “You ask your uncle.”4 That
comment certainly rings true and echoes a longstanding criticism of the learning literature as I
have described it. But learning along this line has
also been pursued in the macroeconomics and
finance literature.
3. This seems to be the message in Lucas (1987).

4. I am paraphrasing a portion of the remarks by Goodhart (2003).

A key aspect of the Goodhart comment is that important economic judgment travels from
person to person, leaving different people in the
population with different beliefs most of the time.
As an example, consider an individual decision
that has important implications for macroeconomics: How much should a household save out
of current income, and how should savings be
allocated among available assets? It seems undeniable that in actual economies, households obtain
information to help them answer these questions
by asking those around them and by obtaining
professional advice. Households with similar
characteristics often have very different savings
strategies. This would seem to conflict with most
models, in which behavior and expectations are
homogeneous.
The artificial intelligence literature has produced some models that can address some of
these issues.5 The ones that have been investigated
in economic contexts are often variants of genetic
algorithms. Some prominent examples in the literature include Marimon, McGrattan, and Sargent
(1990) and Arifovic (1996). In these models, a
standard economic environment is assumed, but
agents are allowed to hold initially diverse beliefs
concerning a key future variable, such as an
expected rate of return on an asset. Agents then
make optimal decisions given their expectations,
which, aggregated over all of the agents in the
economy, produces some actual outcomes—prices
and quantities—for the current period. Agent
beliefs are then updated using genetic operators.
These operators draw on evolutionary principles.
First, beliefs that deliver low utility to their owners
tend to get replaced with beliefs that deliver higher
utility. In addition, agents experiment with alternative beliefs, either ones that are mixes of their
own and those of other agents in the economy,6
called crossover, or simply by means of a random
change in belief, called mutation. With a new set
of beliefs in place, new decisions are made, and
new outcomes are produced. The question is then:
Will such a process converge to a rational expectations equilibrium of the model?
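The flavor of these genetic operators can be conveyed with a short sketch (Python). The toy market, the operator details, and all parameter values below are illustrative assumptions of mine and are not meant to reproduce Marimon, McGrattan, and Sargent (1990) or Arifovic (1996).

import numpy as np

# A population of agents holds heterogeneous beliefs about the price. The actual
# price depends on the average belief (a self-referential toy market), and beliefs
# are updated by selection on forecast accuracy, crossover, and mutation.
rng = np.random.default_rng(1)
mu, alpha, sigma = 2.0, 0.5, 0.5
n_agents, n_periods = 200, 2000
beliefs = rng.uniform(0.0, 10.0, n_agents)       # initially diverse beliefs

for t in range(n_periods):
    price = mu + alpha * beliefs.mean() + sigma * rng.standard_normal()
    fitness = -(beliefs - price) ** 2            # proxy for realized utility

    # Selection: in pairwise tournaments, the better forecaster's belief survives.
    i, j = rng.integers(n_agents, size=(2, n_agents))
    winners = np.where(fitness[i] >= fitness[j], beliefs[i], beliefs[j])

    # Crossover: mix each surviving belief with another agent's ("ask your uncle").
    partners = winners[rng.permutation(n_agents)]
    weights = rng.random(n_agents)
    children = weights * winners + (1.0 - weights) * partners

    # Mutation: occasional random experimentation with a new belief.
    mutate = rng.random(n_agents) < 0.01
    children[mutate] += rng.normal(0.0, 1.0, mutate.sum())
    beliefs = children

print("mean belief:", round(beliefs.mean(), 2))
print("rational expectations price:", mu / (1.0 - alpha))

In runs of this kind the population of beliefs tends to cluster near the rational expectations value, while the mutation operator keeps a little experimentation alive.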
5. Heterogeneity and learning have been addressed outside the artificial intelligence literature as well. See, for instance, Branch and Evans (2006), Giannitsarou (2003), and Guse (2005).

6. This operator relates to Goodhart's comment.

The papers in the evolutionary learning literature for macroeconomics tend to be computational, as few analytical results are available. The
short answer is that, yes, processes like the one I
have described can converge to rational expectations equilibria of well-defined models. And again,
not all rational expectations equilibria are stable
under this type of learning dynamic.7 The genetic
algorithm approach departs from the “minimal
deviation from rational expectations” ideal of
the recursive learning literature and asks the
learning dynamic to describe a global search for
equilibrium from initial agent behavior that might
be nearly random. In this sense, the approach is
much more ambitious. It is also more attractive
as a model of the type of social learning that seemingly takes place every day in observed macroeconomies. The genetic algorithm approach also
puts heavy emphasis on how information diffuses
across households in an economy. The nature of
the information diffusion is based on the properties of the genetic operators that are assumed.8

7. The Arifovic (1996) paper, for instance, describes a process that does not converge and instead produces endogenously fluctuating exchange rates.

8. For a survey of this literature, see Arifovic (2000).

Relation to Behavioral Finance

Sometimes learning is mentioned in conjunction with the burgeoning behavioral finance literature.9 The behavioral finance approach draws on psychology, especially experiments with human subjects, to document behavior patterns. Subjects observed in psychological studies may, for example, appear persistently pessimistic over the course of a study. The literature would then seek to postulate these behaviors in models to see whether apparent anomalies in financial data can then be explained.10 The behavioral finance approach, then, is quite different from the learning literature as I have described it. The macroeconomics learning literature asks how rational expectations could come about, allowing that agents behave optimally given their expectations. The behavioral finance literature seeks to understand the empirical implications of postulating certain types of seemingly irrational, but laboratory-documented, behavior on the part of market participants. A natural question, and one that is sometimes asked, is whether the seemingly irrational behavior can survive over a long period of time or whether instead market participants would learn the rational behavior. Thus, learning is often mentioned in conjunction with behavioral finance, and this seems to be a fruitful area of future research.

9. For one summary of work in behavioral finance, see Vissing-Jorgensen (2004).

10. These ideas are not so new; see the volume by Hogarth and Reder (1987).

LEARNABILITY IN MONETARY POLICY MODELS
Taylor-Type Policy Rules
Consider a small, closed, New Keynesian
economy described by Woodford (1999 and
2003a) and Clarida, Galí, and Gertler (1999):
(1)  $z_t = \hat{E}_t z_{t+1} - \sigma^{-1}\left[\,r_t - \hat{E}_t \pi_{t+1}\right] + r_t^n$,

(2)  $\pi_t = \kappa z_t + \beta \hat{E}_t \pi_{t+1}$.

These equations are derived from a model in
which each infinitely lived member of a continuum of household-firms produces a differentiated
good using labor alone, but consumes an aggregate
of all goods in the economy. The household-firms
price their good under a constraint on the frequency of price change. The first-order conditions
for the consumption problem yield equation (1)
while those for the pricing problem yield equation
(2). The variable $\pi_t$ is the percentage-point time-t deviation of inflation from a fixed target value; $z_t$ is the output gap, also in percentage points; $r_t^n$ is an exogenous shock, usually thought of as being serially correlated; and $r_t$ is the deviation of the
short-term nominal interest rate from the value
consistent with inflation at target and which is
under the control of the monetary authority. The
parameter β is the common discount factor of the
households, σ relates to the elasticity of intertemporal substitution in consumption of the
household, and κ relates to the degree of price
stickiness in the economy. These parameters are
argued to be invariant to contemplated changes
in policy. Bullard and Mitra (2002) view the inflation target and the long-run level of output as zero.
The notation Êt is meant to indicate a possibly
nonrational expectation taken using information
available at date t, so that Et without the hat is
the normal expectations operator.11 To close the
model, one might postulate a simple monetary
policy feedback rule of the type discussed by
Taylor (1993) and analyzed in the large literature
since that paper was published. One could write
such a rule as
(3)  $r_t = \varphi_\pi \pi_t + \varphi_z z_t$,

where ϕπ and ϕz are nonnegative and not both
equal to zero. The parameters in the policy rule
are particularly interesting as they may have an
impact on the nature of the rational expectations
equilibrium of the model, and they may also have
an impact on the ability of the private sector agents
to learn a rational expectations equilibrium.
11. The microfoundations of the model were developed assuming rational expectations. Preston (2005) has argued that these equations would change under some interpretations of the microfoundations when agents are learning. But Evans, Honkapohja, and Mitra (2003) have argued that, under some reasonable assumptions, these equations would remain unaltered.

One interesting feature of this model is that
expectations enter on the right-hand side of equations (1) and (2). This is a consequence of the
microfoundations, in which the household-firms
are forward-looking in deciding today’s consumption and today’s prices. This would seem to be an
inescapable consequence of the microfounded
approach; therefore, we might expect all monetary
policy models to have this feature in some form,
and thus that the type of analysis discussed below
should apply to a wide variety of models of monetary policy and not only to the simple example
given here.
Bullard and Mitra (2002) studied the model
(1)-(3) under both a rational expectations assumption and under a learning assumption using the
approach of Evans and Honkapohja (2001). Under
rational expectations, a key question is whether
rational expectations equilibrium is unique, a.k.a.
determinate. To calculate determinacy properties,
substitute (3) into (1) and write the resulting
system in matrix form as
(4)  $y_t = \alpha + B\hat{E}_t y_{t+1} + \aleph r_t^n$,

with $y_t = [z_t, \pi_t]'$, $\alpha = 0$, $\aleph$ a conformable matrix that is not needed in the calculations below, and

(5)  $B = \dfrac{1}{\sigma + \varphi_z + \kappa\varphi_\pi}\begin{pmatrix} \sigma & 1 - \beta\varphi_\pi \\ \kappa\sigma & \kappa + \beta(\sigma + \varphi_z) \end{pmatrix}$.

Both zt and πt are free variables in this system,
and so both eigenvalues of B need to be inside
the unit circle for determinacy to hold.12 Bullard
and Mitra (2002) show that the condition for
determinacy is
(6)  $\varphi_\pi + \dfrac{(1-\beta)}{\kappa}\,\varphi_z > 1$.

This condition is a statement of the Taylor principle, as discussed by Woodford (2001 and 2003a). From equation (2), a permanent one-percentage-point increase in inflation raises the output gap by (1 − β)/κ percentage points. Then, given equation (3), the left-hand side of (6) can be interpreted as the long-run increase in the nominal interest rate in response to a permanent one-percentage-point increase in inflation. Condition (6) states that this response must be greater than 1; that is, nominal interest rates must rise more than one-for-one with inflation to achieve determinacy of equilibrium.

12. Blanchard and Kahn (1980).
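Spelling that arithmetic out (my own restatement using the steady-state versions of equations (2) and (3)):

$\bar{\pi} = \kappa \bar{z} + \beta \bar{\pi} \;\Longrightarrow\; \bar{z} = \dfrac{1-\beta}{\kappa}\,\bar{\pi}, \qquad \Delta r = \varphi_\pi \Delta\pi + \varphi_z \Delta z = \left[\varphi_\pi + \dfrac{(1-\beta)\,\varphi_z}{\kappa}\right]\Delta\pi,$

so condition (6) is precisely the requirement that this long-run interest rate response exceed one.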
Even when determinacy obtains, however, the
question of learnability still needs to be decided.
To calculate learnability, Bullard and Mitra (2002)
postulated a perceived law of motion for the
private sector given by
(7)  $y_t = a + c\, r_t^n$,

where a is a 2 × 1 vector and c is a 2 × 2 matrix.
This perceived law of motion corresponds in form
to the minimal state variable solution to equation
(4) and thus endows the agents with the correct
specification of the rational expectations equilibrium. Under this perceived law of motion, agent
expectations are given by
(8)  $\hat{E}_t y_{t+1} = a + c\rho\, r_t^n$,

where ρ is the serial correlation parameter for the shock $r_t^n$. Substituting equation (8) into equation (4) yields the actual law of motion given the perceptions in equation (7), namely,

(9)  $y_t = Ba + (Bc\rho + \aleph)\, r_t^n$.

Equations (7) and (9), the perception and the
reality, respectively, together define a map, T, as
(10)  $T(a, c) = (Ba,\; Bc\rho + \aleph)$.

Expectational stability is determined by the matrix
differential equation
(11)  $\dfrac{d}{d\tau}(a, c) = T(a, c) - (a, c)$.

If the differential equation (11) is asymptotically stable at the fixed point $(\bar{a}, \bar{c})$, the system is said to be expectationally stable.
A key result in Bullard and Mitra (2002) is to
show that the condition for expectational stability
in this system is exactly the inequality (6). As has
been argued, this condition corresponds exactly
to the Taylor principle applied to this system.
Thus, the Taylor principle delivers both determinacy and learnability for a standard New
Keynesian model.13 It would seem to be good
advice to give to policymakers, both from the
point of view of uniqueness of equilibrium and
from the point of view of achievability of that
equilibrium, that they adopt the Taylor principle
in selecting a particular policy rule—values for
ϕπ and ϕz—in this model.

13. For some further discussion of the connections between the conditions for determinacy and those for learnability in this model, see Woodford (2003a,b).
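This correspondence is easy to verify numerically from equations (5), (6), (10), and (11). In the sketch below (Python), the deep-parameter values are placeholders of my own choosing rather than any published calibration, and the shock is treated as a scalar AR(1) with coefficient ρ, in which case the E-stability differential equation (11) is stable when the eigenvalues of B, and of ρB, have real parts less than one.

import numpy as np

# Placeholder parameter values (illustrative only).
beta, sigma, kappa, rho = 0.99, 0.157, 0.024, 0.35

def B_matrix(phi_pi, phi_z):
    """Matrix B from equation (5)."""
    scale = 1.0 / (sigma + phi_z + kappa * phi_pi)
    return scale * np.array([[sigma, 1.0 - beta * phi_pi],
                             [kappa * sigma, kappa + beta * (sigma + phi_z)]])

def diagnose(phi_pi, phi_z):
    eig = np.linalg.eigvals(B_matrix(phi_pi, phi_z))
    determinate = bool(np.all(np.abs(eig) < 1.0))          # both eigenvalues of B inside the unit circle
    e_stable = bool(np.all(eig.real < 1.0) and np.all((rho * eig).real < 1.0))
    taylor = phi_pi + (1.0 - beta) * phi_z / kappa > 1.0   # condition (6)
    return determinate, e_stable, taylor

for phi_pi, phi_z in [(1.5, 0.5), (0.8, 0.1), (0.5, 2.0), (3.0, 0.0)]:
    print((phi_pi, phi_z), diagnose(phi_pi, phi_z))

For policy-rule pairs satisfying (6), all three flags come back True; for pairs violating (6), all three come back False, in line with the regions of Figure 1.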
[Figure 1. Policy Rules with Contemporaneous Data. The (ϕπ, ϕz) plane, with ϕπ on the horizontal axis (0 to 10) and ϕz on the vertical axis (0 to 4), is divided into two regions: "Indeterminate and E-unstable" and "Determinate and E-stable." NOTE: Regions of determinacy and expectational stability for the class of policy rules using contemporaneous data. Parameters other than ϕπ and ϕz are set at baseline values. Reprinted with permission from Bullard and Mitra (2002).]

This key result is summarized in Figure 1
from Bullard and Mitra (2002), where parameter
values other than those in the policy rule have
been set at the calibration values recommended
by Woodford (1999). The message of Figure 1 is
that, so long as the monetary authority chooses a
pair of values, ϕπ and ϕz , that are sufficiently large,
or “aggressive,” then the economy will possess
an equilibrium that is both unique and learnable.
Should the policymaker choose values in such a
way that the Taylor principle (6) is violated, then
determinacy does not obtain and unexpected
outcomes may arise. Among the pairs of ϕπ and
ϕz that deliver determinacy and learnability,
policymakers can apply other criteria, such as
the expected utility of the representative household, to decide on an optimal policy.
More information can be gleaned from
Figure 1, however. Under rational expectations,
once one demonstrates that a determinate equilibrium exists, there is little further to discuss, other
than the quantitative nature of the equilibrium
itself. Under learning, however, there is more to
the story, because even within the determinate
and learnable region, the choice of the parameters
in the policy rule will influence the speed with
which the private sector can learn the rational
expectations equilibrium. This issue has been
analyzed in Ferrero (2004). Some policy choices
may involve learning times that are extremely
long, and hence policymakers may wish to think
twice about adopting them.
Figure 1 would seem to suggest that determinacy and learnability go hand in hand, but this
is not the case. Consider the alternative policy
rule defined by
(12)  $r_t = \varphi_\pi \pi_{t-1} + \varphi_z z_{t-1}$.

Here the monetary authority reacts to last period’s
values of inflation deviations and the output gap,
perhaps because of realistic information lags. As
McCallum (1999) has emphasized, central banks
do not observe inflation or the output gap in the
same quarter in which they must make decisions
regarding their short-term nominal interest rate
target. Bullard and Mitra (2002) show that this
case is more complicated and in fact that the
conditions for determinacy and learnability do
not align. This result is shown in Figure 2. The
conclusion is that determinacy does not imply
learnability. The darkest region in the figure indicates a situation where the policy rule generates
determinacy, but not learnability.

[Figure 2. Policy Rules with Lagged Data. The (ϕπ, ϕz) plane shows four regions: "Determinate and E-stable," "Determinate and E-unstable," "Indeterminate and E-unstable," and "Explosive." NOTE: Determinacy and learnability for rules responding to lagged data, with parameters other than ϕπ and ϕz set at baseline values. Determinate equilibria may or may not be E-stable. Reprinted with permission from Bullard and Mitra (2002).]
The policy rules that have been considered
so far have the monetary authority reacting to
current or past developments concerning key
economic variables. But one might imagine that
central banks are forward-looking, so that they
react not to current or past data directly, but to
their own forecast of future developments, say,
one period in the future. This case can also be
analyzed, assuming that both the private sector
and the central bank learn in exactly the same way.
Bullard and Mitra (2002) calculate determinacy
and learnability conditions in this case and find
that the two criteria do not coincide when central
banks are forward-looking.14
In a closely related paper, Bullard and Mitra
(2006) consider the more complicated, but more
realistic, situation when the central bank also
includes a lagged interest rate in its policy rule,
(13)  $r_t = \varphi_\pi \pi_{t-1} + \varphi_z z_{t-1} + \varphi_r r_{t-1}$,

with ϕr > 0. They come to the conclusion that
policy inertia tends to improve the prospects for
both determinacy and learnability. This might
provide some part of an explanation as to why
empirical estimates of actual central bank behavior
put important weight on the lagged value of the
short-term nominal interest rate.15

14. For a recent analysis of the related issue of constant interest rate forecasts on the part of central banks, see Honkapohja and Mitra (2005).

15. Typical estimates in the literature put the value of ϕr at 0.7 or even 0.9, depending on the country and the time period.

Optimal Policy Rules
Svensson (2003) has argued that postulating
Taylor-type monetary policy rules, even with open
coefficients16 as in Rotemberg and Woodford
(1999), is not a satisfactory practice. Instead, the
monetary authority should be modeled as having
an objective that they wish to accomplish as best
they can with the instruments at their disposal
and under the constraints imposed on them by
the economic environment. Such an approach
would imply “a more complex reduced-form
reaction function” (Svensson, 2003, p. 14). One
could argue with this conception. By specifying
a class of linear policy feedback rules, the analysis
can isolate conditions for determinacy and learnability for rules within the class—and then calculate an optimal rule from among the ones that
satisfy the determinacy and learnability conditions according to any criterion one wishes to
ascribe to the policymaker. By specifying policymaker behavior according to a given objective first,
one risks specifying a policy rule that generates
indeterminacy, unlearnability, or both.

16. That is, without assigning specific numerical values. Rotemberg and Woodford (1999) indeed found optimal policy rules, but within classes of possible rules that look like the ones Taylor discussed, such as (13).
One example of this phenomenon occurs in
Evans and Honkapohja (2006 and 2003a,b). They
considered the economy described by equations
(1) and (2) but replaced (3) with an explicit optimization problem for the monetary authorities to
solve. This problem can be viewed as policymakers attempting to minimize

(14)  $E_t \sum_{s=0}^{\infty} \beta^s \left[\pi_{t+s}^2 + \alpha z_{t+s}^2\right]$,
where β is the discount factor used by policymakers (assumed to be the same as the discount
factor used by the private sector) and the relative
weight on output versus inflation is given by α ,
with α = 0 corresponding to the “strict inflation
targeting” case.17 The inflation target is again
assumed to be zero. It is well known that the first-order conditions for this problem differ depending
on whether one assumes a discretionary central
bank or one that is able to commit to a superior
policy by taking a timeless perspective.18 Under
discretion the first-order condition is
(15)  $\kappa\pi_t + \alpha z_t = 0$,

whereas under commitment it is
(16)  $\kappa\pi_t + \alpha(z_t - z_{t-1}) = 0$.

17. Woodford (2003a) has argued that objective (14) approximates the utility of the representative household, in which α takes on a specific value.

18. See Woodford (2003a).

Evans and Honkapohja (2006 and 2003a,b)
stress that one still needs an interest rate reaction
function to implement the policy, and, importantly, there are many such functions that will
implement the optimal policy under rational
expectations. Do all of these possible reaction
functions induce equilibria with the same determinacy and learnability properties? In fact, they
do not. One might consider the "fundamentals-based" optimal policy—that is, an interest rate rule
that calls for instrument adjustments directly in
response to the fundamental shocks.19 One can
write down such a rule for either the discretionary
or the commitment case. The startling result of
Evans and Honkapohja (2003a) is that interest rate
reaction functions of this type invariably imply
that the equilibrium is unstable in the learning
dynamics. Equilibrium is also always indeterminate. Evans and Honkapohja (2003b) label this
finding “deeply worrying,” and, indeed, the analysis shows the dangers of proceeding naively from
the objective (14) to an implementable policy without considering the effects of that policy on the
nature of equilibrium or the stability of the equilibrium in the face of small expectational errors.
However, equilibrium can be rendered both
determinate and learnable with an alternative
interest rate feedback rule, as Evans and
Honkapohja (2003a) show. This alternative rule
still implements the optimal policy according to
the objective (14), but it does so in a way that
creates a determinate and learnable equilibrium.
The key is to augment the set of variables included
on the right-hand side of the feedback rule to
include private sector expectations of key variables
(the output gap and inflation) as well as the fundamental shocks of the model. This alternative
representation of the optimal policy rule is successful in generating determinacy and learnability
because it does not assume the private sector has
rational expectations, instead allowing the central
bank to react to small expectational errors. Of
course, for this type of policy rule to be of importance in actual economies, one has to assume that
private sector expectations are observable.20

19. For one discussion, see Clarida, Galí, and Gertler (1999).

20. The Evans and Honkapohja (2006 and 2003a,b) results are sensitive to the specification of the objective function. If one includes interest rate deviations in the objective, E-stability can be achieved without requiring the monetary authority to react to private sector expectations. See Duffy and Xiao (2005).

Learning Sunspots
With the rational expectations revolution
came the idea of sunspot, or nonfundamental,
equilibria, in which homogeneous expectations
are conditioned on purely extrinsic uncertainty that would not matter for the economy except
that agents do condition their expectations on it.
This idea has had considerable success as interpretations of many macroeconomic events seem
to be consistent with the idea of self-fulfilling
expectations. A general finding in the theory literature is that sunspot equilibria exist when equilibrium is indeterminate, so that indeterminacy can
imply both the existence of multiple, fundamental
equilibria and also the existence of additional,
nonfundamental sunspot equilibria. But could
agents actually learn such equilibria, in the sense
we have described here? To do so, the agents
would need to have a perceived law of motion
that is consistent with the possibility of a sunspot
variable playing an important role.
In a classic paper, Woodford (1990)
addressed this question and argued that, indeed,
a simple recursive learning dynamic might lead
agents to coordinate on a sunspot equilibrium.
His environment was a version of the overlapping
generations model. Honkapohja and Mitra (2004)
carry out an analysis of the learnability of nonfundamental equilibria in models like the one
described in equations (1)-(3). They find that the
Taylor principle continues to play an important
role in the learnability of nonfundamental equilibria. In their analysis, violations of the Taylor
principle tend to imply indeterminacy, and none
of the equilibria are learnable in those cases. Thus,
violation of the Taylor principle would seem to
imply that the private sector cannot coordinate
on a rational expectations equilibrium of any kind
in the context of the New Keynesian model.21
This idea turns out not to completely characterize
the situation, however. Evans and McGough (2005)
show that sunspot equilibria may indeed be learnable if one focuses on common factor representations of the sunspot solution.
21. Similar results occur in a real business cycle context with indeterminacy. The sunspot equilibria that exist there are generally not learnable, as shown by Duffy and Xiao (2006).

The tendency in the monetary policy literature, and indeed in the macroeconomics theory
literature generally, has been to regard the case
of indeterminacy and possible sunspot equilibria
as a situation to be avoided at all costs. If a particular policy generates indeterminacy, then in
the eyes of most authors the policy is not a desirable one, quite apart from any question concerning learnability of equilibrium. A dissenter from
this view is McCallum (2003), who argues that
when multiple equilibria exist, only fundamental,
minimal state variable solutions are likely to be
observed in practice, and thus arguments based
on the mere existence of many nonfundamental
equilibria should be given less weight in the literature. A portion of his argument is that nonfundamental equilibria are unlikely to be learnable. In
discussing McCallum, Woodford (2003b) argues
that, because in the indeterminate cases the
minimal state variable solution is also often not
learnable, as in the Honkapohja and Mitra (2004)
analysis, one should not rely solely on the minimal
state variable criterion in generating a “prediction”
from a given model.

LEARNABILITY IN RELATED MODELS
Liquidity Traps
The fact that Japan has experienced zero or
near-zero short-term nominal interest rates for
several years has rekindled ideas about liquidity
trap equilibria originally discussed in the 1930s.
Benhabib, Schmitt-Grohé, and Uribe (2001) presented an influential analysis of this situation.
They argued that the combination of a zero bound
on nominal interest rates, commitment of the
monetary authority to an active Taylor rule (that
is, one that follows the Taylor principle) at a targeted level of inflation, and a Fisher relation
generally implies the existence of a second steady-state equilibrium. This second steady state is characterized by low inflation (lower than the target level) and low nominal interest rates in a wide class of monetary policy models currently in use. The Taylor principle does not hold at the low-inflation steady state. They also showed, in the
context of a specific economy, the existence of
equilibria in which interest rates and inflation
are initially in the neighborhood of the targeted
inflation rate, but which leave that neighborhood
and converge to the low-inflation steady state.
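The coexistence of the two steady states is easy to see in a small computation (Python). The linearized Fisher relation, the kinked policy rule, and the numbers below are illustrative simplifications of mine, not the Benhabib, Schmitt-Grohé, and Uribe (2001) specification.

from scipy.optimize import brentq

# Net rates: a linearized Fisher relation i = r + pi and a Taylor rule with a
# zero lower bound that is active (phi > 1) at the targeted inflation rate.
r, pi_target, phi = 0.02, 0.02, 1.5

def fisher(pi):
    return r + pi

def policy(pi):
    return max(0.0, r + pi_target + phi * (pi - pi_target))

def gap(pi):
    return policy(pi) - fisher(pi)

targeted = brentq(gap, pi_target - 0.005, pi_target + 0.005)   # intended steady state
trap = brentq(gap, -0.10, pi_target - 0.005)                   # unintended low-inflation steady state

print("targeted steady-state inflation:", round(targeted, 4))
print("liquidity-trap steady-state inflation:", round(trap, 4))

The first root sits at the inflation target; the second sits at an inflation rate equal to minus the real rate, where the zero bound binds and the Taylor principle no longer holds.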
From the perspective of the literature on expectational stability, a natural question is, Which of
the steady-state equilibria presented by Benhabib,
Schmitt-Grohé, and Uribe (2001) are learnable?
Based on the results presented so far, in which
the Taylor principle governs convergence under
recursive learning, one might expect that the targeted, high-inflation equilibrium (in which the
Taylor principle holds) would be stable under
recursive learning, while the low-inflation equilibrium would not be. Evans and Honkapohja
(2005) analyze versions of the Benhabib et al.
(2001) economy in which this logic generally
holds. The monetary authority in Evans and
Honkapohja (2005) can switch to an aggressive
money supply rule at low rates of inflation, and
this switch can support a third steady state characterized by an even lower inflation rate. This
steady state can be learnable in their analysis,
and in this sense they find a learnable liquidity
trap. But if the monetary authority switches to
the money supply rule in support of an inflation
rate that is sufficiently high, then the economy is
left with only the targeted, relatively high-inflation
steady state as a learnable equilibrium.
Another analysis of this issue is by Eusepi
(2005), who also finds some instances of a learnable liquidity trap in a model with a forecast-based
interest rate rule. Eusepi (2005) also provides an
analysis of the nonlinear dynamics of this model
under learning. As a border of a stable region of
the parameter space is approached (say, as a particular policy parameter is increased), an eigenvalue crosses the unit circle, which is normally a
defining feature of a local bifurcation. The system
can then display cycles and other stationary
behavior in a neighborhood of the steady state.
Eusepi (2005) finds that this type of outcome
can occur in versions of the model studied by
Benhabib, Schmitt-Grohé, and Uribe (2001) under
learning.22
22. Models with multiple steady states are a natural laboratory for the study of learning issues, independent of questions about liquidity traps. A recent example is Adam, Evans, and Honkapohja (2006).

The Role of Escape Dynamics
An alternative approach to low nominal
interest rate outcomes is studied in Bullard and
Cho (2005). Their model is linear and possesses
a unique equilibrium in which inflation is near
target at all times. To explain persistently, and
unintentionally, low nominal interest rates, they
design their model to produce an “escape” from
the unique equilibrium toward a nonequilibrium
focal point, which is characterized by low nominal
interest rates and low inflation. The systems they
study tend to return to the unique equilibrium
following these episodes of “large deviations.”
Thus, the Bullard and Cho (2005) approach to
low nominal interest rate outcomes does not
involve the economy being permanently stuck in
a liquidity trap. To generate the escape dynamics,
Bullard and Cho (2005) rely on the following features: (i) The private sector has a certain misspecified perceived law of motion for the economy;
(ii) there is feedback from the beliefs of the private
sector to the actions of the monetary authority;
and (iii) the private sector uses a constant gain
learning algorithm, which puts more weight on
recent observations and less weight on past observations when obtaining key estimates of parameters by means of recursive learning.
Students of escape dynamics will recognize
the elements just described from themes in Sargent
(1999), Cho, Williams, and Sargent (2002), Kasa
(2004), Sargent and Williams (2005), and Williams
(2001). The escape dynamics in a learning model
are interesting because they describe a situation
in which the economy is at or near rational expectations equilibrium most of the time, but in which
rare events can endogenously push the economy
away from the equilibrium toward persistent nonequilibrium outcomes. This may be quite valuable
in helping economists understand unusual, but
important, macroeconomic events, such as market
crashes or depressions.
One aspect of this type of analysis is that a
rare or unusual event precipitates the escape
episode. How rare is this event? In some analyses,
it may seem implausible to wait for such a rare
event to explain an important macroeconomic
outcome. However, McGough (2006) suggests that
in models that have escape dynamics, one may
not have to wait for the rare precipitating event
to occur to observe the escape dynamics. Instead,
the escape can be triggered by a shock to the underlying fundamentals of the economy. In a version
of Sargent’s (1999) economy, the shock is a plausible shift in the natural rate of unemployment.23
The models with escape dynamics therefore have
a certain instability, which might be activated by
events other than the precise combination of
shocks within the model necessary to generate
an escape.

23. See Ellison and Yates (2006) for an alternative explanation of the timing of the escape dynamics described by Sargent (1999).

Learning and Structural Change
It has long been emphasized in economics
that for one-time, unanticipated developments,
learning makes a great deal of sense and rational
expectations is inappropriate. That is, for structural change or other important, one-time shocks,
the most appropriate analysis would include
transitional learning dynamics as private sector
and government officials learn the new equilibrium. The empirical evidence on the existence of
structural change in macroeconomic time series is
quite strong. For instance, most macroeconomic
time series display a reduction in volatility after
1984, according to standard tests.
There is a rational expectations approach that
one can take to study problems of this kind, such
as the one used by Andolfatto and Gomme (2003).
One can postulate that a key feature of the economy follows a regime-switching process, with
given transition probabilities. One can then compute optimal behavior of the agents in the economy, given that underlying fundamentals may
switch between two regimes. A full-information,
rational expectations approach would endow the
agents with knowledge of the current state along
with the probability transition matrix and allow
them to make optimal decisions given the uncertainty they face. A more realistic approach, and
the one used by Andolfatto and Gomme (2003),
asks the agents to infer the regime using available
data and knowledge of the transition probabilities.
The agents can solve this signal extraction problem optimally using Bayesian methods, and this is sometimes thought of as a type of "learning"
is sometimes thought of as a type of “learning”
analysis. However, in the context of the macroeconomic learning literature, this approach is
really one of rational expectations given information available to the agents in the model.24

24. There has been recent work that draws tighter connections between classical and Bayesian approaches to learning. See, for instance, Evans, Honkapohja, and Williams (2005) and Cogley and Sargent (2005).
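A minimal sketch of this kind of inference problem (Python) is given below; the two-regime setup and all numbers are illustrative assumptions of mine rather than the Andolfatto and Gomme (2003) model.

import numpy as np

# A fundamental (say, trend growth) switches between two regimes according to a
# Markov chain. Agents observe growth with noise, know the regime means and the
# transition probabilities, and update the probability of the high regime by
# Bayes' rule each period.
rng = np.random.default_rng(2)
mu = np.array([1.0, 3.0])                 # regime means (low, high)
sigma = 1.5                               # observation noise
P = np.array([[0.95, 0.05],               # transition probabilities
              [0.05, 0.95]])

state = 0
prob = np.array([0.5, 0.5])               # prior over regimes
for t in range(200):
    state = rng.choice(2, p=P[state])     # true regime evolves
    y = mu[state] + sigma * rng.standard_normal()

    # Predict with the transition matrix, then reweight by the likelihood of y
    # under each regime and renormalize.
    predicted = P.T @ prob
    likelihood = np.exp(-0.5 * ((y - mu) / sigma) ** 2)
    prob = predicted * likelihood
    prob = prob / prob.sum()

print("true regime:", state, " filtered probability of high regime:", round(prob[1], 3))

Because the transition probabilities and regime means are known, this is optimal filtering under rational expectations rather than learning in the sense used elsewhere in this paper, which is exactly the distinction drawn above.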
The rational expectations regime-switching
approach is interesting, even brilliant, because it
transforms an otherwise nonstationary problem
into a stationary one, allowing the researcher to
maintain a form of the rational expectations
assumption. But I do not think this method is the
right one for most types of structural change. Most
of the shocks we think we observe are one-time
permanent events, widely unexpected, such as
the productivity slowdown from the 1970s to the
1990s in the United States. The nature of the event
is that the current status quo changes permanently,
but not to any well-defined alternative status quo.
The new reality is learned only after the event
has occurred. For this reason, I think subjecting
available models to one-time permanent shocks,
and allowing the agents in the model to learn the
new equilibrium following the shock, is a better
model of the nonstationarity we observe in the
data. Of course, for recursive learning to tend to
lead the economy toward the new equilibrium, the
new equilibrium must be expectationally stable,
and this expectational stability must extend to a
wide enough neighborhood that the permanent
shock does not destabilize the economy completely.
To implement this type of learning the literature has turned to constant-gain learning, inspired
by the discussion in Sargent (1999). Most learning
algorithms have today’s perceptions as yesterday’s
perceptions plus a linear adjustment that is a
function of the forecast error from the previous
period. The coefficient multiplying the forecast
error would typically be 1/t, to give equal weight
to all past forecast errors. But an agent suspicious
of structural change may wish to downweight past
forecast errors and put more weight on more
recent forecast errors. A simple method of doing
this is to change the gain from 1/t to a small positive constant. A more sophisticated method is to
use a Kalman filter or a nonlinear filter.25 The
agent is then able to track changes in the environment without knowing exactly what the nature
of those changes may be. Productivity growth
may not simply be switching between high and
low, but may visit many other regimes, some of
which may never have been observed. The tracking idea equips agents with methods of coping
in such an environment. It may well be a better
model of structural change in the types of problems macroeconomists try to analyze.

25. McCulloch (2005) provides an analysis of the connections between constant-gain algorithms and the Kalman filter.
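The difference between the two gain sequences shows up clearly in a small sketch (Python, with illustrative numbers of my own): when the mean of the observed series shifts permanently, the decreasing-gain estimate keeps averaging over both regimes, while the constant-gain estimate tracks the new level.

import numpy as np

# Decreasing gain (1/t) versus a small constant gain when the underlying mean
# shifts permanently halfway through the sample.
rng = np.random.default_rng(3)
T, constant_gain = 2000, 0.02
truth = np.where(np.arange(T) < T // 2, 1.0, 3.0)   # one-time permanent shift
data = truth + rng.standard_normal(T)

a_decreasing, a_constant = 0.0, 0.0
for t, y in enumerate(data, start=1):
    a_decreasing += (1.0 / t) * (y - a_decreasing)  # weights all past errors equally
    a_constant += constant_gain * (y - a_constant)  # downweights old errors

print("post-shift truth: 3.0")
print("decreasing-gain estimate:", round(a_decreasing, 2))  # ends up between the old and new means
print("constant-gain estimate:", round(a_constant, 2))      # ends up close to the new mean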
For examples of economies with structural
change and learning dynamics of the kind I have described, see Bullard and Duffy (2004), Bullard and
Eusepi (2005), Lansing (2002), Milani (2005),
Orphanides and Williams (2005), and
Giannitsarou (2006).

RESOURCES ON THE WEB
In this paper, I have provided a limited survey
of some of the issues and recent results in the
macroeconomics learning literature. Much of this
literature has provided commentary on monetary
policy issues. The learnability criterion is just
beginning to be widely used to assess key aspects
of policy that have been difficult to address under
a pure rational expectations approach.
This survey is far from comprehensive.
There are many closely related issues that I have
not attempted to address here. As of this writing,
interested readers can consult the web page maintained by Chryssi Giannitsarou and Eran Guse at
Cambridge University, “Adaptive Learning in
Macroeconomics,” which provides a more complete bibliography with up-to-date links:
www.econ.cam.ac.uk/research/learning/.

REFERENCES
Adam, Klaus; Evans, George W. and Honkapohja, Seppo. "Are Hyperinflation Paths Learnable?" Journal of Economic Dynamics and Control, 2006 (forthcoming).
Andolfatto, David and Gomme, Paul. “Monetary
Policy Regimes and Beliefs.” International Economic
Review, February 2003, 44(1), pp. 1-30.
Arifovic, Jasmina. “The Behavior of the Exchange
Rate in the Genetic Algorithm and Experimental
Economies.” Journal of Political Economy, June
1996, 104(3), pp. 510-41.
Arifovic, Jasmina. “Evolutionary Algorithms in
Macroeconomic Models.” Macroeconomic
Dynamics, September 2000, 4(3), pp. 373-414.
Benhabib, Jess; Schmitt-Grohé, Stephanie and Uribe,
Martin. “The Perils of Taylor Rules.” Journal of
Economic Theory, January/February 2001, 96(1-2),
pp. 40-69.
Blanchard, Olivier J. and Kahn, Charles M. “The
Solution of Linear Difference Models under Rational
Expectations.” Econometrica, July 1980, 48(5), pp.
1305-11.
Branch, William and Evans, George W. “Intrinsic
Heterogeneity in Expectation Formation.” Journal of
Economic Theory, March 2006, 127(1), pp. 264-95.
Bray, Margaret. “Learning, Estimation, and the Stability
of Rational Expectations.” Journal of Economic
Theory, April 1982, 26(2), pp. 318-39.
Bullard, James B. and Cho, In-Koo. “Escapist Policy
Rules.” Journal of Economic Dynamics and Control,
November 2005, 29(11), pp. 1841-65.

Bullard, James B. and Mitra, Kaushik. “Learning,
Determinacy, and Monetary Policy Inertia.” Journal
of Money, Credit, and Banking, 2006 (forthcoming).
Clarida, Richard; Galí, Jordi and Gertler, Mark. “The
Science of Monetary Policy: A New Keynesian
Perspective.” Journal of Economic Literature,
December 1999, 37(4), pp. 1661-707.
Cho, In-Koo; Williams, Noah and Sargent, Thomas J.
“Escaping Nash Inflation.” Review of Economic
Studies, January 2002, 69(1), pp. 1-40.
Cogley, Timothy and Sargent, Thomas J. “Anticipated
Utility and Rational Expectations as Approximations
of Bayesian Decision Making.” Unpublished manuscript, New York University, March 2005.
DeCanio, Stephen. “Rational Expectations and
Learning from Experience.” Quarterly Journal of
Economics, February 1979, 93(1), pp. 47-57.
Duffy, John and Xiao, Wei. “The Value of Interest
Rate Stabilization Policies When Agents Are
Learning.” Working Paper, University of Pittsburgh,
November 2005.
Duffy, John and Xiao, Wei. “Instability of Sunspot
Equilibria in Real Business Cycle Models Under
Adaptive Learning.” Journal of Monetary Economics,
2006 (forthcoming).
Ellison, Martin and Yates, Tony. “Escaping Volatile
Inflation.” Working paper, University of Warwick
and the Bank of England, January 2006.

Bullard, James B. and Duffy, John. “Learning and
Structural Change in Macroeconomic Data.”
Working paper 2004-016A, Federal Reserve Bank
of St. Louis, 2004.

Eusepi, Stefano. “Comparing Forecast-Based and
Backward-Looking Taylor Rules: A ‘Global’
Analysis.” Staff Report No. 198, Federal Reserve
Bank of New York, January 2005.

Bullard, James B. and Eusepi, Stefano. “Did the Great
Inflation Occur Despite Policymaker Commitment
to a Taylor Rule?” Review of Economic Dynamics,
April 2005, 8(2), pp. 324-59.

Evans, George W. “Expectational Stability and the
Multiple Equilibria Problem in Linear Rational
Expectations Models.” Quarterly Journal of
Economics, November 1985, 100(4), pp. 1217-33.

Bullard, James B. and Mitra, Kaushik. “Learning
about Monetary Policy Rules.” Journal of Monetary
Economics, September 2002, 49(6), pp. 1105-29.

Evans, George W. and Honkapohja, Seppo. Learning
and Expectations in Macroeconomics. Princeton,
NJ: Princeton University Press, 2001.

Evans, George W. and Honkapohja, Seppo.
“Expectations and the Stability Problem for Optimal
Monetary Policies.” Review of Economic Studies,
October 2003a, 70(4), pp. 807-24.
Evans, George W. and Honkapohja, Seppo. “Adaptive
Learning and Monetary Policy Design.” Journal of
Money, Credit, and Banking, December 2003b,
35(6, Part 2), pp. 1045-72.
Evans, George W. and Honkapohja, Seppo. “Policy
Interaction, Expectations and the Liquidity Trap.”
Review of Economic Dynamics, April 2005, 8(2),
pp. 303-23.
Evans, George W. and Honkapohja, Seppo. “Monetary
Policy, Expectations, and Commitment.”
Scandinavian Journal of Economics, 2006 (forthcoming).
Evans, George W.; Honkapohja, Seppo and Mitra,
Kaushik. “Notes on Agents’ Behavioral Rules under
Adaptive Learning and Recent Studies of Monetary
Policy.” Unpublished manuscript, University of
Oregon, 2003.
Evans, George W.; Honkapohja, Seppo and Williams,
Noah. “Generalized Stochastic Gradient Learning.”
Working Paper 2005-17, University of Oregon, 2005.
Evans, George W. and McGough, Bruce. “Monetary
Policy, Indeterminacy and Learning.” Journal of
Economic Dynamics and Control, November 2005,
29(11), pp. 1809-40.
Ferrero, Giuseppe. “Monetary Policy and the
Transition to Rational Expectations.” Paper No. 19,
Society for Computational Economics, Computing in
Economics and Finance, August 2004.
Giannitsarou, Chryssi. “Heterogeneous Learning.”
Review of Economic Dynamics, October 2003, 6(4),
pp. 885-906.
Giannitsarou, Chryssi. “Supply-side Reforms and
Learning Dynamics.” Journal of Monetary
Economics, March 2006, 53(2), pp. 291-309.
Goodhart, Charles A.E. Remarks made at the conference “Expectations, Learning, and Monetary Policy,”
sponsored by the Deutsche Bundesbank and the
Center for Financial Studies, Eltville, Germany,
August 30-31, 2003.
Guse, Eran A. “Stability Properties for Learning with
Heterogeneous Expectations and Multiple
Equilibria.” Journal of Economic Dynamics and
Control, October 2005, 29(10), pp. 1623-42.
Hogarth, Robin M. and Reder, Melvin W., eds.
Rational Choice: The Contrast Between Economics
and Psychology. Chicago: University of Chicago
Press, 1987.
Honkapohja, Seppo and Mitra, Kaushik. “Are Nonfundamental Equilibria Learnable in Models of
Monetary Policy?" Journal of Monetary Economics,
November 2004, 51(8), pp. 1743-70.
Honkapohja, Seppo and Mitra, Kaushik. “Performance
of Inflation Targeting Based on Constant Interest
Rate Projections.” Journal of Economic Dynamics
and Control, November 2005, 29(11), pp. 1867-92.
Kasa, Kenneth. “Learning, Large Deviations, and
Recurrent Currency Crises.” International Economic
Review, February 2004, 45(1), pp. 141-73.
Lansing, Kevin. “Learning About a Shift in Trend
Output: Implications for Monetary Policy and
Inflation.” Working Paper 2000-16, Federal Reserve
Bank of San Francisco, July 2002.
Lucas, Robert E. Jr. “Adaptive Behavior and Economic
Theory,” in Robin M. Hogarth and Melvin W. Reder,
eds., Rational Choice: The Contrast Between
Economics and Psychology. Chicago: University of
Chicago Press, 1987, pp. 217-42.
Marcet, Albert and Sargent, Thomas J. “Convergence
of Least-Squares Learning Mechanisms in Self-referential Linear Stochastic Models.” Journal of
Economic Theory, August 1989a, 48(2), pp. 337-68.
Marcet, Albert and Sargent, Thomas J. “Convergence
of Least-Squares Learning in Environments with
Hidden State Variables and Private Information.”
Journal of Political Economy, December 1989b,
97(6), pp. 1306-22.
Marimon, Ramon; McGrattan, Ellen and Sargent,
Thomas J. “Money as a Medium of Exchange in an
Economy with Artificially Intelligent Agents.”
Journal of Economic Dynamics and Control, May
1990, 14(2), pp. 329-73.
McCallum, Bennett T. “Issues in the Design of
Monetary Policy Rules,” in John B. Taylor and
Michael Woodford, eds., Handbook of Macroeconomics. Volume 1C. New York: Elsevier, 1999,
pp. 1483-530.
McCallum, Bennett T. “Multiple-Solution
Indeterminacies in Monetary Policy Analysis.”
Journal of Monetary Economics, July 2003, 50(5),
pp. 1153-75.
McCulloch, J. Huston. “The Kalman Foundations of
Adaptive Least Squares, with Applications to U.S.
Inflation.” Unpublished manuscript, Ohio State
University, August 2005.
McGough, Bruce. “Shocking Escapes.” Economic
Journal, 2006 (forthcoming).
Milani, Fabio. “Learning, Monetary Policy Rules,
and Macroeconomic Stability.” Unpublished manuscript, University of California, Irvine, July 2005.
Orphanides, Athanasios and Williams, John C. “The
Decline of Activist Stabilization Policy: Natural
Rate Misperceptions, Learning, and Expectations.”
Journal of Economic Dynamics and Control,
November 2005, 29(11), pp. 1927-50.
Preston, Bruce. “Learning about Monetary Policy
Rules When Long-Horizon Expectations Matter.”
International Journal of Central Banking, September
2005, 1(2), pp. 81-126.
Rotemberg, Julio J. and Woodford, Michael. “Interest
Rate Rules in an Estimated Sticky Price Model,” in
John Taylor, ed., Monetary Policy Rules. NBER
Conference Report series. Chicago: University of
Chicago Press, 1999, pp. 57-119.

Sargent, Thomas J. and Williams, Noah. “Impacts of
Priors on Convergence and Escapes from Nash
Inflation.” Review of Economic Dynamics, April
2005, 8(2), pp. 360-91.
Svensson, Lars E.O. “Monetary Policy and Learning.”
Federal Reserve Bank of Atlanta Economic Review,
Third Quarter 2003, 88(3), pp. 11-16.
Taylor, John B. “Discretion versus Policy Rules in
Practice.” Carnegie-Rochester Conference Series on
Public Policy, December 1993, 39(0), pp. 195-214.
Vissing-Jorgensen, Annette. “Perspectives on
Behavioral Finance: Does ‘Irrationality’ Disappear
with Wealth? Evidence from Expectations and
Actions,” in Mark Gertler and Kenneth Rogoff, eds.,
NBER Macroeconomics Annual 2003. Cambridge,
MA: MIT Press, 2004.
Williams, Noah. “Escape Dynamics in Learning
Models.” Ph.D. Dissertation, University of Chicago,
2001.
Woodford, Michael. “Learning to Believe in Sunspots.”
Econometrica, March 1990, 58(2), pp. 277-307.
Woodford, Michael. “Optimal Monetary Policy
Inertia.” Working Paper 7261, National Bureau of
Economic Research, 1999.
Woodford, Michael. “The Taylor Rule and Optimal
Monetary Policy.” American Economic Review,
May 2001, 91(2), pp. 232-37.
Woodford, Michael. Interest and Prices: Foundations
of a Theory of Monetary Policy. Princeton, NJ:
Princeton University Press, 2003a.
Woodford, Michael. “Multiple-Solution
Indeterminacies in Monetary Policy Analysis:
Comment.” Journal of Monetary Economics, July
2003b, 50(5), pp. 1177-88.

Sargent, Thomas J. The Conquest of American
Inflation. Princeton, NJ: Princeton University Press,
1999.
