Understanding the Term Structure of Interest Rates
William Poole
This article was originally presented as a speech to the Money Marketeers, New York, New York,
June 14, 2005.
Federal Reserve Bank of St. Louis Review, September/October 2005, 87(5), pp. 589-95.

William Poole is the president of the Federal Reserve Bank of St. Louis. The author appreciates comments provided by colleagues at the Federal Reserve Bank of St. Louis. Edward Nelson provided special assistance. The views expressed are the author’s and do not necessarily reflect official positions of the Federal Reserve System.

A topic much discussed in recent
months is the relationship over the
past year or so between long-term
and short-term interest rates. Some
observers have argued that the failure of long rates
to trend up as the Fed has increased its target
federal funds rate is a puzzle. Others have argued
that Fed policy is ineffective because increasing
the rising short rate is not affecting the long rate.
I’ll not say much about the policy issue, but I
do want to address the puzzle.
However, I’m going to define the puzzle somewhat narrowly. I’ll not address the current low
level of the real rate of interest on long-term bonds.
That same puzzle existed a year ago, although it
may not have been so obvious at the time. What
I’ll discuss is the issue of why the long rate has
not increased as the Fed has raised the target
federal funds rate.
I thank my colleagues at the Federal Reserve
Bank of St. Louis—especially Ed Nelson—for their
assistance and comments.

THE RECENT TERM STRUCTURE
PUZZLE
Since June 2004, the Federal Open Market
Committee (FOMC) has increased the target federal

funds rate by 25 basis points every time they have
met, including the recent meeting on May 3.
Moreover, the federal funds futures market predicted that the Committee would raise the target
funds rate by another 25 basis points at its June
meeting. On the other hand, a key long-term
interest rate, the yield on 10-year U.S. Treasury
securities, has shown little persistent tendency
to change, either up or down, over the same
period. I refer to this discrepancy in interest rate
patterns as the recent term structure puzzle.
The eight increases in the target funds rate
took it from 1 percent to 3 percent as of May 3,
2005. The 10-year Treasury bond rate, however,
has exhibited a different pattern. If we look at
monthly average data, which I’ll use throughout
unless indicated otherwise, we can see that the
rate has not had a persistent trend since mid-2002,
when the rate was about 4½ percent (a rate that
also prevailed at the end of 2003 and again this
spring). The monthly average level of the bond
rate increased by about 90 basis points from March
to June 2004, mostly in response to evidence of
stronger economic growth and the beginning of
Fed tightening. The June 2004 level of 4.73 percent on the bond rate was the highest since June
2002 and has not been exceeded since.
Some observers like to emphasize that the
long rate has declined since the Fed first started
raising rates in June 2004, but I think the right
observation, given the variability of the rate, is
to say that the long rate has fluctuated around
roughly 4½ percent since mid-2002. June 2004
is not the best month to begin the analysis because
the Fed’s rate increases were foreseen some
months in advance. Based on the July 2004 federal
funds futures contract, in late 2003 the market
anticipated a funds rate of 1.25 percent or above,
but then the expected rate for July fell to nearly 1
percent (i) as the FOMC maintained its 1 percent
target funds rate at its January and March 2004
meetings and (ii) as a consequence of somewhat
weak economic data. When the FOMC introduced
the “measured pace” language at its meeting of
May 4, 2004, the market priced-in a policy target
of 1.25 percent for the June 2004 FOMC meeting.
In any event, I’ll frame this puzzle as the failure of
long-term interest rates to increase as short-term
interest rates have risen since the late winter and
spring of 2004.
Two phenomena deserve to be distinguished:
the level of long-term rates and the change in long
rates as short rates have risen. Low long-term rates
were already in place before the recent term structure puzzle, and some major factors behind low
long-term rates do not necessarily help in explaining the term structure puzzle, which concerns
changes in rates. Most notably, Fed Governor
Ben Bernanke (2005) has convincingly argued
that the “global saving glut” has been a depressing
factor on U.S. real and nominal interest rates since
2000. Yet this factor does not solve the term structure puzzle, for two important reasons. First, as
noted, the glut has been in force throughout this
decade, whereas the term-structure puzzle refers
to the period since early 2004. Second, the glut
is a source of downward pressure on real interest
rates at all maturities since 2001, whereas the term
structure puzzle instead refers to the recent flat
trend of the long rate despite a significant increase
in the short rate.

AVERAGE HISTORICAL BEHAVIOR
That there is a puzzle is a consequence of
just how atypical the recent behavior of the term
structure is. The funds rate and bond rate do typically move in the same direction. A linear regression of the first difference of the bond rate on the
first difference of the federal funds rate provides
a simple description of the average relationship
between the bond rate and funds rate. The regressions indicate that the contemporaneous relationship between the two series is positive and
statistically significant. For the entire period from
May 1954 to March 2005, the regression coefficient is a bit below 0.2; for the period from January
1984 to March 2005, the coefficient is a bit above
0.3. Using the period from 1984, what the coefficient means is that on average a 100-basis-point
change in the funds rate has been associated
with a 32-basis-point change in the bond rate in
the same direction. Thus, over the past year, as
the funds rate rose by 200 basis points, we should
have seen an increase of the bond rate of about
65 basis points. Depending on how you eyeball
your favorite chart of the 10-year bond rate, instead
of increasing, the bond rate has been about flat,
or down somewhat, over the past year.
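
For readers who want to reproduce this kind of calculation, here is a minimal sketch of the first-difference regression described above. It assumes the monthly FRED series FEDFUNDS (effective federal funds rate) and GS10 (10-year Treasury constant-maturity yield) as stand-ins for the series used in the text; the estimated slope will depend on the exact sample and data vintage.

```python
# Sketch: regress monthly changes in the 10-year Treasury yield on monthly changes
# in the federal funds rate, as in the average relationship discussed above.
# FEDFUNDS and GS10 are FRED series used here as assumed stand-ins for the article's data.
import statsmodels.api as sm
from pandas_datareader import data as pdr

rates = pdr.DataReader(["GS10", "FEDFUNDS"], "fred", "1954-05-01", "2005-03-31")
diffs = rates.diff().dropna()                 # first differences, percentage points

X = sm.add_constant(diffs["FEDFUNDS"])        # funds-rate change plus an intercept
fit = sm.OLS(diffs["GS10"], X).fit()
print(fit.params["FEDFUNDS"])                 # text reports roughly 0.2-0.3, sample dependent
```
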

THE EXPECTATIONS THEORY OF
THE TERM STRUCTURE
To decide whether there really is a puzzle, or
to make sense of the puzzle, we’ll need to call on
economic theory. According to economic theory,
a key reason why the contemporaneous relationship between the funds rate and the bond rate is
far from one-for-one is that changes in the bond
rate should be closely linked not to today’s change
in the funds rate but to revisions in expectations
of the future path of the funds rate. The theory
will provide a framework for an analysis of the
recent term structure puzzle.
The essential message of the expectations
theory of the term structure is that market forces
should make longer-term interest rates a weighted
average of the short-term interest rates expected
to prevail over the life of the bond. The investor
should be indifferent between making N consecutive investments in 1-period securities and
investing in an N-period bond. Or at least enough
investors should be indifferent to force the N-period bond to trade in the market at the weighted average of the next N 1-period bonds. To take a
simple example, letting time be quarters, the
expectations theory says that the 2-quarter interest
rate should be equal to the average of today’s 1-quarter interest rate and the expected 1-quarter
rate next quarter. We assume that today’s expectation of next quarter’s 1-quarter rate is based
rationally on all information available today.
The argument applies to bond rates of any
maturity. The simple expectations theory implies
that the 10-year bond rate reflects the expected
path over the next ten years of the short-term rate.
The 10-year bond rate at the beginning of June
2004 incorporated the 1-year rate and the next
nine expected 1-year rates, the last of which was
a 1-year rate on a security that would be issued
in June 2013 and mature in June 2014.
Similarly, the 10-year rate prevailing at the
beginning of June this year incorporated the current 1-year rate and the next nine expected 1-year
rates, the last of which was a 1-year rate on a
security that would be issued in June 2014 and
mature in June 2015. After comparing the 10-year
bond from a year ago with the one today, we see
that nine of the ten 1-year periods are the same.
Today’s 10-year bond does not include the 1-year
rate prevailing in June 2004—that security has
matured. Today’s 10-year bond does include the
expected 1-year rate on a security maturing in
June 2015. Thus, the difference in the yields on the
two 10-year bonds—last June’s and this June’s—
reflects substitution of (i) the expected 1-year
rate for a security to be issued in June 2014 for
(ii) the 1-year rate in the market in June 2004 for
the security that has just matured in June 2005,
plus revisions in the expected 1-year rates to prevail every year from 2005 through 2013. The key
to understanding changes in the 10-year rate is
to understand revisions in those nine expected
1-year rates.
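
As a concrete illustration of the accounting just described, the sketch below treats the 10-year rate as a simple average of the current 1-year rate and nine expected future 1-year rates. The two rate paths are invented numbers used only to show the mechanics, not estimates of what markets actually expected.

```python
# The simple expectations theory, ignoring term premiums and compounding:
# the 10-year yield is approximately the average of ten (current and expected) 1-year rates.
june_2004_path = [2.0, 2.8, 3.4, 3.8, 4.2, 4.5, 4.8, 5.0, 5.1, 5.2]   # illustrative, percent
june_2005_path = [3.3, 3.9, 4.2, 4.4, 4.6, 4.7, 4.8, 4.9, 5.0, 5.0]   # illustrative, percent

def ten_year_yield(one_year_path):
    # Simple averaging is a linear approximation to exact geometric compounding.
    return sum(one_year_path) / len(one_year_path)

print(round(ten_year_yield(june_2004_path), 2))   # 4.08
print(round(ten_year_yield(june_2005_path), 2))   # 4.48
# The year-over-year change reflects dropping the matured 1-year rate, adding a new
# distant expected 1-year rate, and revising the nine overlapping expected 1-year rates.
```
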
To understand the process by which expected
future 1-year rates are revised, it is useful to partition the 1-year rate into a real rate and an inflation premium. How might we anticipate far-off
expected real short rates to behave? This variable
should respond to new information about the real
shocks likely to be facing the economy several
years in the future. It would be tempting to think
that such new information arises so infrequently
that the distant short-term real rate could be
treated as constant.
There is considerable evidence against this
presumption, however. For example, Laubach
(2003) finds that expectations of short-term nominal interest rates beyond five years in the future
fluctuate in response to the changes in multiyear
budget deficit projections, and some of this fluctuation may reflect revisions to expected real rates.
It is not hard to imagine other information that
might rationally affect investor expectations about
distant real rates. Ultimately, the issue is an empirical one and it does appear that the expected real
short rate fluctuates considerably in practice.
Historically, expected future nominal short
rates have often fluctuated in response to changes
in inflation expectations. Over the past year,
distant inflation expectations, as measured by
the spread between conventional and inflation-protected bonds, have not changed markedly.
Thus, we can proceed by assuming that long-term
expectations of inflation have remained roughly
constant in the past year because of confidence
in Federal Reserve policies and, in the absence
of information to the contrary, that there is no new
information about far-off real rates. With these
assumptions, the change in the long rate is driven
by new information about the medium-term path
of short-term real interest rates.
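
The inflation-expectations gauge mentioned above can be approximated from public data. A minimal sketch, assuming the daily FRED series DGS10 (nominal 10-year yield) and DFII10 (10-year TIPS yield) as proxies for the conventional and inflation-protected bonds Poole refers to:

```python
# Breakeven inflation: spread between conventional and inflation-protected 10-year yields.
# DGS10 and DFII10 are FRED series used here as assumed proxies for the yields in the text.
from pandas_datareader import data as pdr

yields = pdr.DataReader(["DGS10", "DFII10"], "fred", "2004-06-01", "2005-06-01").dropna()
breakeven = yields["DGS10"] - yields["DFII10"]       # percentage points
print(breakeven.resample("M").mean().round(2))       # monthly averages of the 10-year breakeven
```
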
For example, if newly published data suggest
greater pressure on aggregate demand in the years
immediately ahead, agents will expect a greater
degree of offsetting pressure from the Federal
Reserve in the form of higher real interest rates,
and the expectation of future real rates will be
higher than the expectation based on the prior
period’s information set. My emphasis in this
discussion is that new information about the state
of the economy drives changes in long-term
interest rates.

A DETAILED LOOK AT
JANUARY 2004–MAY 2005
Consider the behavior of bond rates since
the beginning of 2004 from the perspective of
the expectations theory of the term structure. In
January 2004, the 10-year bond rate was 4.15
percent; in January 2005, it was 4.22 percent. I’ll
concentrate on information that has created revisions to future expected short rates.
Consider revisions to expected real short rates
in immediately coming years. In past tightenings,
such as in 1994, policy-induced increases in real
rates led to sharp contemporaneous increases in
bond rates. The past year has not repeated this
phenomenon because the Federal Reserve indicated its tightening intentions well in advance
and because the economy has performed about
as expected.
An indication of what markets were expecting as of January 2004 is given by the Blue Chip
Consensus forecast for real gross domestic product
(GDP) growth in 2004 of 4.6 percent. In the event,
U.S. real GDP growth in 2004 was 4.4 percent. In
2004, the economy performed as close to expected
as we will find in the historical record. Events
have not much changed the outlook for 2005
either. In January 2004, the Blue Chip Consensus
forecast for 2005 real growth was 3.7 percent; the
latest (June 10, 2005) Blue Chip forecast is for real
growth of 3.5 percent, an extremely small downward revision from the expectation prevailing in
January 2004.
To study this matter more carefully, I’ve
examined large daily movements of the 10-year
bond rate since January 2004. These are listed in
Table 1. The criterion for determining a “large”
movement is a change of 10 basis points or more
in the bond rate.
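
The screening rule behind Table 1 is easy to replicate in outline. A minimal sketch, assuming the daily FRED series DGS10 as a stand-in for the 10-year yield; the exact set of flagged dates will depend on the data vintage and on rounding:

```python
# Flag daily changes in the 10-year Treasury yield of 10 basis points or more,
# the "large movement" criterion used for Table 1. DGS10 is an assumed stand-in series.
from pandas_datareader import data as pdr

dgs10 = pdr.DataReader("DGS10", "fred", "2004-01-01", "2005-05-31")["DGS10"].dropna()
change_bp = dgs10.diff() * 100                        # percentage points -> basis points
large_moves = change_bp[change_bp.abs() >= 10]
print(large_moves.round(0))                           # candidate dates for a table like Table 1
```
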
See the table for details; I will provide here
the flavor of major financial news that occurred
on some of the “large change” days. The sluggish
recovery of employment during this expansion
was reflected in weak payroll data that surprised
the market on January 9, 2004, and March 5, 2004,
leading to declines in the bond rate of 16 basis
points and 19 basis points, respectively. These
employment reports led to revisions of market
expectations toward a slower expected withdrawal
by the Fed of its accommodative policy stance,
and, accordingly, expectations of real short rates
over the next few years declined.
As another example, the oil price spike on
March 9, 2005, was associated with an increase
in the bond rate of 14 basis points. Such bond
rate increases can be interpreted two ways. One
interpretation is that markets did not revise
upward their expectations of future inflation but
did revise upward their expectations of the Fed
monetary policy required to keep inflation stable.
Alternatively, the bond rate increase may have
reflected expectations that the Fed would accommodate a temporary increase in inflation in the
wake of the oil shock.
Expectations of future monetary policy have
affected the bond rate significantly from time to
time. A recent study by Gürkaynak, Sack, and
Swanson (2005), covering a period earlier than
that considered here, finds that news about likely
future FOMC actions on the funds rate has an
important effect on the bond rate, distinct from
FOMC actions on the current funds rate. This
finding is, of course, in line with the expectations
theory. In the period considered here, news about
future policy increased bond rates by 11 basis
points on January 28, 2004, when the FOMC
dropped from its press release the phrase that it
expected policy accommodation to prevail “for
a considerable period.” Once this phrase was
dropped, markets revised their expectations of
short rates to a higher path than previously, and
bond rates accordingly were immediately revised
upward.
Although certain data releases did surprise
the market, over the period as a whole the data
came in about as expected, contributing to the
absence of a trend in the bond rate over the period
at issue. Likely policy responses to economic data
were also known in advance; and, in the absence
of economic surprises, FOMC decisions on the
funds rate were much as expected. Thus, there
was no particular reason over this period for the
market to revise its expectations of future interest
rates continuously in one direction; the bond rate
fluctuated in response to arriving information,
but ended up about where it started.
The argument I am making is not a new one.
There is a huge literature on the expectations
theory of the term structure of interest rates, and
policymakers have long been aware of the basic
ideas.

Table 1
Selected Changes in the 10-Year Treasury Bond Rate, January 2004–May 2005

Date       | Bond-yield change (basis points) | Main news item                                                                  | Source
1/6/2004   | –12 | Weaker-than-expected growth in services sector                                 | Reuters
1/9/2004   | –16 | Weaker-than-expected payroll data                                              | DJNW
1/28/2004  | +11 | Federal Reserve drops “for a considerable period” language from FOMC statement | NYT
3/5/2004   | –19 | Weaker-than-expected payroll data                                              | WSJ
4/2/2004   | +24 | Higher payroll data                                                            | WSJ
4/13/2004  | +10 | Weaker-than-expected retail sales for March 2004                               | DJNW
5/7/2004   | +16 | Better-than-expected payroll data                                              | WSJ
6/15/2004  | –20 | Better-than-expected May inflation; reaction to Greenspan Senate testimony     | FT
7/16/2004  | –12 | Better-than-expected June inflation                                            | DJNW
7/27/2004  | +13 | Better-than-expected July consumer confidence                                  | DJNW
8/6/2004   | –19 | Lower-than-expected payroll data                                               | WSJ
10/8/2004  | –11 | Weaker-than-expected payroll data                                              | DJNW
10/27/2004 | +10 | Higher oil prices                                                              | DJNW
11/5/2004  | +11 | Better-than-expected payroll data                                              | WSJ
12/3/2004  | –13 | Weaker-than-expected payroll data                                              | DJNW
12/16/2004 | +10 | Continuing reaction to FOMC statement                                          | DJNW
3/9/2005   | +14 | Concern about spike in oil prices                                              | NYT
4/15/2005  | –10 | Continued rise in energy prices. Disappointing reports from Ford and GM        | Bloomberg
4/21/2005  | +10 | Better-than-expected manufacturing report and jobless claims data              | Bloomberg

NOTE: DJNW, Dow Jones News Wire; FT, Financial Times; NYT, New York Times; WSJ, Wall Street Journal. Dates refer to the date of the interest rate change; sources refer to same-day wire reports and next-day newspaper reports on the principal economic news accompanying the bond rate movement.

For example, the Radcliffe Committee, a U.K. inquiry into monetary policy in the late 1950s, noted that “It is generally agreed that the more temporary a rise in short rates is expected to be, the less it will cause long rates to rise; correspondingly, the more temporary a drop is expected to be, the less will long rates fall.”1
Arthur Burns, then Federal Reserve Chairman,
observed in 1977 that “Long-term interest rates,
of course, are of much larger significance to the
economy than short-term rates; but the long-term
rates are also especially sensitive to inflationary
expectations.”2 In a 1976 paper, I studied the implications for monetary policy of the expectations theory and concluded that the “implications of the rational expectations hypothesis for macro modeling are profound... This point is of greatest importance for the auction markets in financial assets” because the expectations theory tells us that “long-term interest rates adjust immediately and fully in response to new information.”3

1 Radcliffe Committee (1959, paragraph 447).

2 Burns (1977, p. 724).

3 Poole (1976, pp. 471, 503).

The expectations theory of the term structure
has been severely criticized on a number of
grounds, but for the problem at hand I believe
that the theory tells the basic story correctly. In
sum, economic surprises have been minimal over
the past year and there has been no reason for
significant revision in expected future short-term
interest rates. Thus, there has been no reason for
a significant trend in long-term interest rates.

FULL CIRCLE
I began by discussing the average term structure relationship, in which long rates change by
about 30 basis points for every 100-basis-point
change in short rates. Now I’ll circle back to that
topic.
The average relationship reflects average
business cycle experience in which information
surprises change expectations about future short
rates. But a casual glance at the data will show
how variable these periods have been. In some
cases, long rates rose by much more than 30 basis
points for every 100-basis-point increase in short
rates, and in some cases much less. For example,
over the 12 months ending July 1987, the bond
rate rose by 115 basis points while the federal
funds rate was rising by only 2 basis points. In
contrast, over the 24 months ending in July 1963,
the 10-year bond rate rose by only 10 basis points
while the federal funds rate was rising by 185
basis points. Clearly, I’ve picked out particular
cases to serve as examples; but I can assure you
that, if you look at the data systematically, you
will find that the average term structure relationship of about 30 basis points on the bond rate for
every 100 basis points on the funds rate is the
average of very diverse experience. If I were writing a Ph.D. thesis, I could explore in great detail
the flow of information and how both short and
long rates responded as new information changed
expectations about inflation, real growth, and
Fed policy.
Because the role of changes in inflation
expectations has been so important historically,
but not very important over the past decade or
so, consider an example from the 1980s. The 10-year bond rate declined sharply over 1984-86,
from 11.67 percent in January 1984 to 7.11 percent
in December 1986. Kozicki and Tinsley (2005,
p. 427) suggest that this decline reflected continued
adjustment of 10-year-ahead expectations of inflation in the wake of the Volcker disinflation. They
argue that the decline in consumer price index
(CPI) inflation to about 4 percent in 1983 was not
accepted as a lasting change until the mid-1980s,
whereupon it became more fully reflected in
long-term bond yields.
An episode that more closely resembles the
2004 experience is the period 1987-89. Here the
FOMC raised the target federal funds rate sharply,
but the long rate was fairly trendless. Kozicki and
Tinsley (2005, Figure 1) show that the late 1980s
was a period where 10-year-ahead expectations
of inflation continued to decline, even though
1-year-ahead expectations rose. The rise in 1-year-ahead expectations probably reflected inflation
already in the pipeline. Actual Fed policy over
this period was, by contrast, disinflationary. It
seems that this episode corresponds to one where
the Fed adjusted down its long-run inflation objective. The long-term bond market understood this
change and discounted the rise in CPI inflation
as not reflecting the long-term direction of monetary policy.

FINAL THOUGHTS
It should be clear by now that I do not believe
that there is a term structure puzzle reflected in
interest rate behavior over the past year or so.
Recent experience is unusual but far from unprecedented. The real economy has performed very
close to expectations formed at the beginning of 2004. The
major surprise has been the large increase in
energy prices. The market has interpreted this
increase as a relative price change and not a sign
of higher long-run inflation. The spread between
conventional and inflation-protected bonds has
increased over the near-term horizon but not over
the period 5 to 10 years out.
The fact that the 10-year bond has not exhibited a persistent trend over the past 18 months or
so while the Fed has been increasing the target
federal funds rate by 200 basis points is not evidence that something is awry with monetary
policy. Think of the issue this way. At the beginning of a planning period the Fed has in mind a
probable course for the economy and expectations
about the policy adjustments that will be consistent with long-run policy objectives. Suppose the
market has the same understanding as the Fed.
Suppose also that events turn out largely as
expected. Then, everything goes according to plan,
including policy adjustments and the course of
bond rates. In fact, in January 2004 the eurodollar
futures contract for June 2005 traded at an average
rate of 2.81 percent, which was not far off the
target federal funds rate of 3.0 percent set by the
FOMC on May 3, 2005.
I am not claiming that the Fed had a firm plan
in mind in January 2004 to reach a target federal
funds rate of 3 percent in May 2005, but rather
that events have simply worked out that way,
corresponding rather closely to the market’s best
guess as to how events would unfold. In any event,
the fact that everything goes about as expected is
certainly not evidence of a policy problem.
I would be delighted, as would professional
forecasters, for the string of accurate forecasts to
continue. But we would be well advised not to
forget those forecast standard errors. They have
not vanished. With respect to forecast errors, the
future is more likely to be like the past several
decades than like the past year. If real growth
and/or inflation depart significantly from current
expectations, then we will see a persistent trend
in the bond rate. I hope we do not see such an
outcome, for I believe that the current outlook
for the economy is quite favorable. I hope that
current expectations are realized.

REFERENCES
Bernanke, Ben S. “The Global Saving Glut and the
U.S. Current Account Deficit.” Homer Jones Lecture,
Federal Reserve Bank of St. Louis, St. Louis,
Missouri, April 14, 2005; http://www.federalreserve.gov/boarddocs/speeches/2005/20050414/default.htm.
Burns, Arthur F. Statement before the Committee on
Banking, Finance and Urban Affairs, U.S. House of
Representatives. Federal Reserve Bulletin, August
1977, 63, pp. 721-28.
Gürkaynak, Refet S.; Sack, Brian and Swanson, Eric.
“Do Actions Speak Louder Than Words? The
Response of Asset Prices to Monetary Policy Actions
and Statements.” International Journal of Central
Banking, May 2005, 1(1), pp. 55-93.
Kozicki, Sharon and Tinsley, P.A. “What Do You
Expect? Imperfect Policy Credibility and Tests of
the Expectations Hypothesis.” Journal of Monetary
Economics, March 2005, 52(2), pp. 421-47.
Laubach, Thomas. “New Evidence on the Interest
Rate Effects of Budget Deficits and Debt.” Finance
and Economics Discussion Series Paper No. 2003-12,
Federal Reserve Board, April 2003.
Poole, William. “Rational Expectations in the Macro
Model.” Brookings Papers on Economic Activity,
April 1976, 2(76), pp. 463-505.
Radcliffe Committee. Report: Committee on the
Working of the Monetary System. London: Her
Majesty’s Stationery Office. Command Paper No. 827,
August 1959.

Targeting versus Instrument Rules
for Monetary Policy
Bennett T. McCallum and Edward Nelson
Svensson (2003) argues strongly that specific targeting rules—first-order optimality conditions
for a specific objective function and model—are normatively superior to instrument rules for the
conduct of monetary policy. That argument is based largely on four main objections to the latter,
plus a claim concerning the relative interest-instrument variability entailed by the two approaches.
The present paper considers the four objections in turn and advances arguments that contradict
all of them. Then, in the paper’s analytical sections, it is demonstrated that the variability claim
is incorrect, for a neo-canonical model and also for a variant with one-period-ahead plans used by
Svensson, providing that the same decisionmaking errors are relevant under the two alternative
approaches. Arguments relating to general targeting rules and actual central bank practice are also
included.
Federal Reserve Bank of St. Louis Review, September/October 2005, 87(5), pp. 597-611.

Bennett T. McCallum is a professor of economics at Carnegie Mellon University and a research associate of the National Bureau of Economic Research. Edward Nelson is a research officer at the Federal Reserve Bank of St. Louis and a research affiliate of the Centre for Economic Policy Research. An earlier version of this paper was presented at a conference at the Board of Governors of the Federal Reserve System, Washington, D.C., in March 2004 and will also be published in a Federal Reserve Board volume, Models and Monetary Policy: Research in the Tradition of Dale Henderson, Richard Porter and Peter Tinsley (J. Faust, A. Orphanides, D.L. Reifschneider, eds.). The authors thank Lars Svensson, James Bullard, Mark Gertler, Ricardo Rovelli, and Javier Vallés for comments on earlier drafts.

1 INTRODUCTION

In the recent literature on monetary policy
analysis, several writers have emphasized
the distinction between instrument rules—
i.e., formulae for setting controllable instrument
variables in response to current conditions—and
targeting rules, as proposed by Svensson (1997,
1999).1 In a major contribution, Svensson (2003)
has presented a sophisticated and comprehensive
case for the use of targeting rules, arguing that
“monetary-policy practice is better discussed in
terms of targeting rules than instrument rules”
(2003, p. 429).2 The superiority of targeting rules
is, moreover, claimed to pertain to both normative and positive perspectives (pp. 428-30). Svensson’s paper is rich in both analytical and practical content and provides insights that can be usefully pondered by all students of monetary policy analysis.

1 See, for example, Svensson (1997, 1999, 2003), Svensson and Woodford (2005), Rudebusch and Svensson (1999), Clarida, Galí, and Gertler (1999), Cecchetti (2000), Giannoni and Woodford (2003a,b), Jensen (2002), Walsh (2003), and Woodford (2003).

2 In what follows, quotations with page-number citations but no author or year indication refer to that paper, i.e., Svensson (2003).

It is our belief, nevertheless, that the paper
seriously overstates the relative attractiveness of
targeting rules, from both normative and positive
perspectives, and describes inaccurately the properties of instrument rules. The purpose of the
present paper is to develop this argument. As a
major part of our argument, we study in detail
one concrete and important claim of Svensson’s
regarding interest rate variability induced by
instrument rules with strong feedback. In the wide
variety of cases considered, we find all results to
be inconsistent with the claim.
The outline of the present paper is as follows.
Section 2 presents explanations of the basic concepts and an introduction to the issues. Section 3
then takes up, and disputes, four particular criticisms of instrument rules that are central to the
argument in Svensson (2003), after which
Section 4 does the same for two additional criticisms. In Sections 5 and 6, the paper turns to the
precise analytical claim mentioned above and
develops results in a number of settings that show
it to be incorrect. Finally, Section 7 provides a
brief recapitulation.

2 BASIC IDEAS AND
TERMINOLOGY
What is the distinction between instrument
and targeting rules? A rule of the former type
refers, quite simply, to some formula prescribing
settings for the monetary policymaker’s instrument as a function of currently observed variables. Well-known examples include the Taylor
rule (1993), several interest rate rules studied by
Henderson and McKibbin (1993a,b), and the
activist monetary base rules of McCallum (1988)
and Meltzer (1987). Precisely which variables
are observable is, of course, a matter that can be
debated in practical analyses, but is one on which
the analyst has to take some explicit position.
Note that expectations (based on current information) of present or future variables may be
among the variables that the instrument in the
rule responds to.3
The definition of targeting rules is somewhat
more complex. There has been some evolution
since Svensson’s (1997, 1999) introduction of the
concept,4 but his current terminology recognizes
both general and specific variants. Basically, a
general targeting rule is the specification of a
central bank objective function,5 whereas a specific targeting rule is an optimality condition
implied by an objective function together with a specified model of the economy (pp. 448-60).6
Initially, optimization was presumed to be of the
discretionary type, with period-by-period reoptimization based on prevailing initial conditions,
but in Svensson (2003) the possibility of optimization from a “timeless perspective” (see Woodford,
1999) is also considered.
It is not our intention to argue that analysis
with instrument rules is in all respects preferable
to the use of targeting rules. Even if we held that
belief, moreover, we would not think it socially
desirable for all researchers to employ the same
approach. Nevertheless, we are more attracted to
analysis with instrument rules than with targeting rules and believe that a few words should be
included to indicate why—especially since
Svensson’s numerous writings argue so strongly
in favor of the targeting rule position.
As a matter of terminology, it seems inappropriate to refer to the specification of the policymaker’s objective function as a rule. Obviously,
for a given objective function, desirable instrument settings—i.e., policy actions—can be very
different under the same prevailing conditions,
depending on the policymaker’s preferred model
or models of the economy. There are words available to describe policymakers’ objectives—for
example, “policymakers’ objectives”—so there is
nothing analytical to be gained by referring to them
as “general targeting rules.” It is terminologically
useful, rather, for objectives and rules to be clearly
distinguished. Also, from the substantive perspective, the adoption of an objective function is
innocuous if the function accurately represents
the central bank’s true preferences. But if it does
not represent the true preferences and is made
public, as in the scheme suggested in Svensson’s
Section 5.3.3, then the central bank will be describing its objectives dishonestly to the public, a
practice that seems inconsistent with Svensson’s emphasis on transparency.7

3 In cases in which expectations are based on current-period information, however, Svensson refers to this type of policy rule as an “implicit instrument rule.”

4 In particular, only specific (not general) targeting rules were considered in Svensson (1997) and they were called “target rules.”

5 Svensson (2003, p. 430) further requires that these be “operational objectives” (italics in original), i.e., numeric targets for particular variables, rather than a general concept such as “price stability.”

6 Svensson has explained to us that he does not require that a specific targeting rule necessarily expresses an optimality condition, as he has in the past (1997, p. 1136), and his definition on p. 429 conforms to that explanation. On p. 430, however, he states that “specific targeting rules essentially specify operational Euler equations.” Also, on p. 455 Svensson states that “a specific targeting rule specifies a condition…[that] may be an optimal first-order condition, or an approximate first-order condition.” In the remainder of this paper, accordingly, we shall follow Svensson’s practice by typically treating specific targeting rules as first-order optimality conditions.

7 Svensson has informed us that he would have the central bank explain the discrepancy between its objective function and preferences to the public. We consider that such a need reflects a substantial degree of nontransparency.

8 The existence of model dependency is recognized by Svensson (p. 450).

The most critical problem with specific targeting rules—i.e., first-order optimality conditions—
is that they are obviously model-dependent.8 By
construction, the coefficients and variables that
appear in these rules are always closely related to
the precise specification of private sector behavior in the associated model—and thus to the
assumptions made regarding the parameters and
dynamics of the model’s IS, Phillips curve, and
any other key structural equations. It is unclear
which portions of today’s macroeconomic models
are most questionable, but it is entirely clear that
there is much dispute among leading scholars
concerning the proper specification of several
of the crucial relationships. Yet a condition that
implies policy optimality in one model may be
highly inappropriate under other specifications.
Consequently, an attractive approach to policy
design, promoted, for example, by McCallum
(1988, 1999), is to search for an instrument rule
that performs at least moderately well—avoiding
disasters—in a variety of plausible models. In
other words, it is our belief that it is unwise to
restrict policy analysis to optimal-policy exercises,
which will typically be optimal only for the single
model being used. Yet such analysis is precisely
what is contemplated by focus on specific targeting rules.
A good illustration of the model-dependence
of optimality conditions is provided in a recent
paper by Levin and Williams (2003), which is a
follow-up to the robustness study of Levin,
Wieland, and Williams (1999). The initial experiments of Levin and Williams (2003) calculate the
consequences of using a policy rule, designed to
be optimal in one model, in other models. The
three models in their introductory example are
(i) a “New Keynesian” baseline model (NKB) that
is highly prominent in recent theoretical research,
(ii) an alternative specification (denoted FHP) with
more sources of inertia used by Fuhrer (2000), and
(iii) the empirically oriented model of Rudebusch
and Svensson (1999, RS hereafter). Suppose a
specific targeting rule is optimal in a calibrated
version of the NKB model, with a loss function
that assigns output gap variability a weight of λ
(as in Section 5) and also gives interest rate variability a weight of 0.1, both in relation to inflation
variability relative to target. If that optimality
condition is used instead in the FHP model, the
loss values are 95 or 150 percent higher (for λ
values of 0.0 and 0.5, respectively) than the minimum loss in that model. Even more strikingly, if
this NKB optimality condition is transferred to the
RS model, the combination generates explosive
oscillations—an “infinite” percentage deterioration. Next, a specific targeting rule that is optimal
in the FHP model produces losses that are 173
percent or 130 percent greater than the minimum
loss in the NKB model and explosive oscillations
in the RS model. Finally, a rule that is optimal in
the RS model generates analogous loss increases
of 219 percent or 254 percent in the NKB model
and 146 percent or 128 percent in the FHP model.
As an extension of our position, we would
suggest that it is not desirable always to limit
analysis to cases in which an explicit objective
function has been specified. Explicitness is itself
a virtue, of course, other things equal. But it is
unclear what terms actually appear in central
banks’ objective functions and what weights each
term receives. It is also unclear what weights and
terms should appear, since there is professional
disagreement over proper model specification.9
Accordingly, it can be useful to explore the way
in which different properties of a modeled economy (e.g., variances of key endogenous variables)
are related to policy rule parameters, leaving it
to actual policymakers to assign the relevant
weights. Examples of this approach appear in
some of our previous papers (e.g., McCallum and
Nelson, 1999a,b), as well as in Bryant, Hooper,
and Mann (1993).
9 Our position does not deny the attractiveness in principle of basing policymaker objective functions on the preferences of individual agents.

3 FOUR MAIN OBJECTIONS
After some preliminary discussion, Svensson
considers the case of central bank commitment to
an optimal instrument rule (which he terms an
implicit reaction function when the rule includes
any current endogenous variables) and concludes
that the implied approach is “completely impractical.” Indeed, Svensson states that “commitment
to an optimal instrument rule has no advocates,
as far as I know” (p. 439). With this particular
judgment we have no serious disagreement; see
McCallum (1999, pp. 1490-95), for example. Consequently, Svensson moves on to consideration
of simple instrument rules (pp. 439-41), with one
subsection entitled “Problems of Commitment to
a Simple Instrument Rule” (pp. 441-44). We now
examine that subsection’s arguments in some
detail, since they evidently constitute the most
important ingredients of Svensson’s position.
In the subsection in question, there are four
main objections to instrument rules that are identified and discussed. The first is “(1) the simple
instrument rule may be far from optimal in some
circumstances” (p. 441). In particular, “[a] first
obvious problem for a Taylor-style rule…is that,
if there are other important state variables than
inflation and the output gap, it will not be optimal…For a smaller and more open economy
[than the U.S.], the real exchange rate, the terms
of trade, foreign output, and the foreign interest
rate seem to be the minimal essential state variables that have to be added” [for the rule to be
optimal] (p. 442). But Taylor rules do not comprise the entire class of simple instrument rules;
nominal income growth rules provide just one
obvious counterexample. Thus, the foregoing is
not actually an argument against simple instrument rules, but merely an objection to one particular class. Furthermore, it is not clear that the
supposed departure from optimality resulting
from the absence of the other state variables, pertaining to open economies, is quantitatively or
even qualitatively important. Indeed, in Clarida,
Galí, and Gertler’s (2001) small open-economy
model there are no additional terms in the welfare
function beyond the two Taylor-rule state variables—inflation and the output gap—provided
that the former is defined in terms of domestic-goods price inflation. Similarly, the McCallum-Nelson (1999a, 2000b) open-economy model can
be formulated entirely in terms of consumer price
index (CPI) inflation, output, and the real interest
rate, with openness changing only the interpretation of the model parameters.
“A second problem,” Svensson states, “is that
a commitment to an instrument rule does not
leave any room for judgmental adjustments and
extra-model information…” (p. 442). This claim
is difficult for us to understand, since there seem
to be various ways in which judgmental adjustments to instrument rule prescriptions could be
made. For example, the interest rate instrument
could be set above (or below) the rule-indicated
value when policymaker judgments indicate that
conditions, not adequately reflected in the central
bank’s formal quantitative models, imply different
forecasts and consequently call for additional
policy tightening (or loosening). This way of incorporating judgment is not the same as the one
proposed by Svensson, which he represents by
the inclusion in the structural equations of the
central bank’s macroeconomic model of an unobservable exogenous stochastic variable that is not
generated by a simple process, such as “an exogenous autoregressive process” (p. 433). These
exogenous deviations appear in the model’s structural equations. “Judgment” is then the central
bank’s estimate of these deviation variables. But
it is unclear that this approach reflects the only,
or even the best, way of representing the role of
judgment in policymaking.10 Thus the fact that
the above-mentioned way of incorporating judgment is different from Svensson’s seems to be
beside the point—that is, it does not justify his
quoted statement.11 What is crucial is that judgment can be incorporated into instrument rules
as well as targeting rules.
10 Svensson also states that “a commitment to a simple instrument rule does not provide any rules for when discretionary departures from the simple instrument rule are warranted” (p. 442). But a procedure that did do this would hardly seem to reflect what most analysts would think of as “judgment.” It would be, rather, a complex rule.

11 We do not mean to deny that Svensson has insightful and constructive observations to make regarding incorporation of judgment; our objection is to the asymmetry that he paints with respect to such incorporation by means of targeting and instrument rules.

Svensson suggests that “a third problem with
simple instrument rules would seem to be that a
once-and-for-all commitment to an instrument
rule would not allow any improvement of the…
rule when new information about the transmission mechanism, the variability of shocks, or the
source of shocks arrives” (p. 442). But the words
“would seem” appear in the foregoing quotation
because Svensson does not actually make the
foregoing argument. After mentioning it, he goes
on to recognize that Woodford’s (1999) “timeless
perspective” type of commitment does permit
modification of rules when new information is
developed.12 Such rules can, in a manner that is
indicated below, be implemented by means of an
instrument rule. Furthermore, the implied type
of commitment—to a procedure rather than a
formula—could also be applied to other types of
instrument rules.
Finally, switching from a normative to a positive point of view, Svensson states that “an obvious fourth problem is that commitment to a simple
instrument rule is far from an accurate description of current monetary policy” as practiced by
inflation-targeting or other central banks. He continues: “No central bank has (to my knowledge)
announced and committed itself to an explicit
instrument rule” (p. 444). But, as we have argued
previously (McCallum and Nelson, 2000a, p. 15),
no actual central bank has announced or committed itself to an explicit objective function, which
is a necessary condition for either the general or
specific type of targeting rule promoted by
Svensson.13 Indeed, commitment to an optimal
specific targeting rule would in addition entail
commitment to be bound by the output of a new
optimal control exercise, conducted with a particular quantitative macroeconomic model, each
decision period (e.g., each month). Such exercises
could, Svensson says, be modified by judgment.
But are they actually conducted by the central
banks that he identifies as the world’s leaders in
this regard, those of the United Kingdom, New
Zealand, and Sweden? If so, what is the value of
the weight λ on output gap variability announced
and used by each of these central banks? What is
the specification of the model used?
In short, it seems appropriate to conclude
that all four of the objections to instrument rules
emphasized by Svensson are equally applicable—
or equally inapplicable—to targeting rules.

12 For discussions, see Woodford (1999) and Svensson and Woodford (2005).

13 Note that at a minimum it would be necessary for the central bank to state explicitly its value for the objective function parameter labeled λ below and in Svensson’s equation (2.2).

4 ADDITIONAL OBJECTIONS
Two other debatable points deserve some brief
attention before we turn to a major analytical
issue in Sections 5 and 6. One of these concerns
Svensson’s argument against the view that “simple
instrument rules fit actual central-bank behavior
well” (p. 444). In opposition to this idea, Svensson
states that “even the best empirical fits leave one
third or more of the variance of changes in the
[interest instrument] rate unexplained.” In this
regard it is important to note that the statement
pertains to the variability of first differences of
the interest rate, as found in the study by Judd
and Rudebusch (1998). In terms of levels, the fraction of the variance that is unexplained is approximately 0.02 (i.e., about 2 percent).14 Neither of
these measures is conceptually “correct” or
“incorrect,” of course, but to put matters in perspective, we note that 33 percent would be a comparatively small unexplained variance fraction
for the first difference of most important variables
in typical quarterly macroeconometric models.
In the well-known Rudebusch and Svensson
(1999) model, for example, the unexplained variance fractions for changes in inflation and the
output gap are about 71 percent and 87 percent,
respectively.15
14 Judd and Rudebusch (1998, p. 14) report a residual standard deviation of 0.27 for the Greenspan period 1987:Q3–1997:Q4. Over that span, the standard deviation of the quarterly average funds rate is 1.93 (annual percentage units). Thus, the unexplained fraction of variability is (0.27/1.93)² = 0.0196.

15 These figures pertain to the model’s “inflation equation” and “output [gap] equation,” for which the reported residual standard errors are, respectively, 1.009 and 0.819 (Rudebusch and Svensson, 1999, p. 208). The sample standard deviations for first differences of the relevant inflation and output gap series over the 1961:Q1–1996:Q2 sample period are, respectively, 1.197 and 0.877, so we have (1.009/1.197)² = 0.711 and (0.819/0.877)² = 0.872.

Our second point concerns Svensson’s contention that actual central banks noted for their
inflation-targeting regimes, including the Reserve
Bank of New Zealand, the Bank of Canada, and
the Bank of England, use in practice procedures
that are more reasonably characterized by the
notion of a targeting rule rather than an instrument
rule. We have already mentioned that none of
these central banks has publicly adopted an
explicit objective function. But, furthermore, we
find that descriptions of their policy procedures
provided by officials and economists of these
central banks read more like instrument rules
than specific targeting rules.
As a first example, there are several short
articles describing the policy procedures of the
Bank of Canada that appear in the Summer 2002
issue of the Bank of Canada Review. These do not
refer to targeting rules or optimal control exercises,
but discuss instrument rules quite explicitly—
see, e.g., Cote et al. (2002). Another relevant reference to the use of instrument rules in Canadian
policy is provided by Longworth and O’Reilly
(2002). At the risk of being excessively repetitive,
let it be said explicitly that we do not claim that
the Bank of Canada—or any actual central bank—
strictly follows an instrument rule, but rather that
its practices are closer to the analytical representation of an instrument rule than to the analytical
representation of a targeting rule.
For the Bank of England, a natural starting
place is a publication by Bean and Jenkinson
(2001) entitled “The Formulation of Monetary
Policy at the Bank of England,” which describes
the role of forecasts in policy decisions of the
Bank’s Monetary Policy Committee. Their paper’s
discussion explains that a variety of models and
techniques are used in the process, but recognizes
the special status of the “MM” quarterly macroeconometric model. In the publication Economic
Models at the Bank of England: September 2000
Update, there are several examples of policy
experiments with MM involving alternative
instrument rules (Bank of England, 2000, pp.
13-20). The more recent discussion by Allsopp
(2002, p. 489) suggests that “the broad features
of the reaction function in place in the United Kingdom increasingly seem to be publicly understood and built into expectations.”
A still more recent discussion of the U.K.
policy framework is that in a document prepared
by the U.K. Treasury (2003). This study uses a
comparison of “interest rate decisions [with] those
that a Taylor rule would suggest” as one measure
of whether “the current frameworks…have allowed
monetary policy to perform a stabilizing role”
(pp. 33, 35). By contrast, there is no attempt to
evaluate policy using a numerically specified
loss function or Euler equation. The study does
note criticisms of the instrument rule approach,
citing Svensson (2003) in that regard. But it characterizes the deviation of actual policy from the
Taylor rule as reflecting discretionary adjustments:
“[Prescriptions from] Taylor rules…are typically
different from the actual rates chosen by central
banks, which use discretion to determine rates
based on a wider range of information” (2003,
p. 36). In addition, in a speech accompanying
the release of this study, the Chancellor of the
Exchequer (who sets the target for monetary policy
in the United Kingdom and appoints several of
the members of the Monetary Policy Committee)
was explicit in characterizing actual policy in a
Taylor-rule-like manner: “For a 1 per cent rise in
British inflation, the British interest rate would,
other things being equal, tend to rise by 1.5 per
cent” (Brown, 2003, p. 410).
In the case of New Zealand, descriptions of the
Reserve Bank’s policy procedures (e.g., Hampton,
2002) make no mention of optimal control exercises, but clearly refer to a role for an instrument
rule in their Forecasting and Policy System. In
addition, it is interesting to note that Svensson’s
own extensive and authoritative independent
review of New Zealand monetary policy (2001,
p. 66) suggests that “the Reserve Bank may want
to consider some further developments of its
Forecasting and Policy System. Alternative interest
rate reaction functions and alternative interest
rate paths could be used and presented systematically to the MPC [Monetary Policy Committee]
to provide a larger menu of policy choices for
discussions and consideration.”

5 VOLATILITY FROM
INSTRUMENT RULES?
We now turn to our main analytical discussion. Svensson’s subsection 5.5 expresses sharp
and specific disagreement with a crucial argument
made by McCallum (1999, p. 1493) and McCallum
and Nelson (2000a) concerning the relationship
between targeting and instrument rules. In particular, these two papers argue that an instrument
rule can be written so as to entail instrument
responses that would tend to bring about the satisfaction of any specific target rule (which usually
amounts to a first-order condition for the maximization of the central bank’s objective function).
By increasing the response coefficient attached to
the discrepancy between the relevant prevailing
conditions and the desired first-order condition,
the average discrepancy can be made arbitrarily
small.16 Thus, in a sense, one can accomplish with
an instrument rule anything that can be accomplished with a specific targeting rule, according
to our argument. Svensson (p. 461) has objected
to this argument, however, on the grounds that
“this is a dangerous and completely impracticable
idea. It is completely inconceivable in practical
monetary policy to have reaction functions with
very large response coefficients, since the slightest
mistake in calculating the argument of the reaction
function would have grave consequences and
result in extreme instrument-rate volatility.” A
similar objection is expressed by Svensson and
Woodford (2005).
Our intuition was that embedding a first-order
condition in an instrument rule with a large but
finite reaction coefficient (such as µ1 below) would
typically entail less-severe instrument movements
than would imposition of the relevant specific
targeting rule, because the latter is equivalent to
use of an “infinite” reaction coefficient. In other
cases, large µ1 values might entail somewhat
greater interest volatility, but in such cases the
magnitude of this volatility would approach that
obtained with the targeting rule as µ1 grows without bound. It is important to note that—in contrast to Svensson's suggestion on p. 461—we actually do not recommend the adoption of a large reaction coefficient; see McCallum and Nelson (2000a, pp. 20-24). Our point, instead, is that an instrument rule with a large reaction coefficient is less open to Svensson's objection than is its associated specific targeting rule. In our paper (2000a) we did not, however, explore the effects of mistakes in calculating the argument of the reaction function. In the following paragraphs we shall, accordingly, investigate the validity of Svensson's conjecture.

16. The sign of the response coefficient must, of course, be appropriate—so that policy is tightened when aggregate demand needs to be reduced, etc.
For this exercise, suppose initially that the
economy is represented by the following model,
which is a version of the neo-canonical specification used by Bullard and Mitra (2002), Clarida,
Galí, and Gertler (1999), Jensen (2002), Woodford
(1999, 2003), McCallum and Nelson (1999b,
2000a), and many others:
(1)  $x_t = E_t x_{t+1} + \beta_r (i_t - E_t \pi_{t+1}) + \eta_t, \qquad \beta_r < 0,$

(2)  $\pi_t = \alpha_x x_t + \delta E_t \pi_{t+1} + \varepsilon_t, \qquad \alpha_x > 0,\ 0 < \delta < 1.$

Here, xt is the output gap, πt is the inflation rate, δ is a discount factor, and it is the one-period nominal interest rate. Equation (1) is the now-familiar expectational IS function and (2) is the
Calvo price-adjustment relation—both consistent
under well-known assumptions with optimizing
behavior by individuals in the economy (e.g.,
Woodford, 2003).
Supposing that the central bank wishes at t to minimize the loss function17

$E_t \sum_{j=0}^{\infty} \delta^{j}\left(\pi_{t+j}^{2} + \lambda x_{t+j}^{2}\right),$

the optimum first-order condition in the absence of commitment is πt = –(λ/αx)xt, or

(3)  $\pi_t + (\lambda/\alpha_x)\, x_t = 0.$¹⁸

This is the specific targeting rule that is implied for this model, assuming the absence of commitment, by Svensson's approach. The corresponding instrument rule proposed in McCallum and Nelson (2000a) is

(4)  $i_t = (1-\mu_2)\left\{\bar{r} + \pi_t + \mu_1\left[\pi_t + (\lambda/\alpha_x)\, x_t\right]\right\} + \mu_2 i_{t-1},$

where r̄ is the average long-run real rate of interest. The term r̄, which is included along with πt so as to express (4) in a Taylor-style form, is normalized to zero by expressions (1) and (2). For present purposes the interest-rate-smoothing coefficient, µ2, may also be set equal to zero, yielding it = πt + µ1[πt + (λ/αx)xt].

17. Up to a scaling term, this is the same objective function as in Svensson (2003), whose notation we follow.

18. See the papers cited in the previous paragraph.
To incorporate mistakes of the type contemplated by Svensson, we modify (3) and (4) to become

(3′)  $\pi_t + (\lambda/\alpha_x)\, x_t + e_t = 0$

and

(4′)  $i_t = (1-\mu_2)\left\{\bar{r} + \pi_t + \mu_1\left[\pi_t + (\lambda/\alpha_x)\, x_t + e_t\right]\right\} + \mu_2 i_{t-1},$

where et represents a stochastic mistake term.
We have included the same mistake term, et, in
both the targeting and instrument rules, a step
that seems necessary to provide a reasonable basis
for comparison. Because the issue is whether use
of an instrument rule (with a large µ1 parameter)
leads to excessive variability (when there are
policy errors) in comparison with the corresponding targeting rule, it would make no sense to
omit the errors from the targeting rule.
In our experiments, we shall treat et as a first-order autoregressive (AR(1)) process—usually as
white noise—with AR parameter ρe and innovation ωt (standard deviation σω ). Various values for
σω and ρe are considered. Behavioral parameter
values for the model are taken to be βr = –0.5,
αx = 0.03, and δ = 0.99. Also, the stochastic shock
term, ηt, in (1) includes a term, ȳt − Et ȳt+1, where ȳt is log potential output. This term forms part of ηt—in addition to a white noise preference shock, vt—because (1) and (2) are expressed in terms of the output gap rather than output. The natural rate value, ȳt, is assumed to follow a first-order autoregressive process with AR parameter 0.95 and
innovation standard deviation 0.007. The white
noise preference shock has standard deviation
0.02, and the shock term, εt, in the price adjustment equation (2) is taken to be white noise with
standard deviation 0.005. For the results given
hereafter, the value of the central bank preference
parameter, λ, is set at 0.1.
We begin by reporting in Table 1 results of
using different values for the feedback parameter
µ1 (setting µ2 = 0 here and in subsequent cases).
The first column of results pertains to the µ1 value
of 0.5, as suggested by Taylor (1993). Successive
columns then use values of 5.0 and 50.0. Finally,
the last column includes results for “µ1 = ∞,”
that is, for the targeting rule (3′). In each cell, two
values are reported. The first is the unconditional
expected value of the loss function, which is (with
δ = 0.99) 100 times the unconditional expectation
of the single-period loss. The second is the standard deviation of it , the interest rate instrument.
These values are based on analytical expressions
for the unconditional variances of πt, xt , and it
implied by the model-plus-rule systems.
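To make concrete how such model-plus-rule systems map into a loss value and an interest rate standard deviation, here is a minimal illustrative sketch in Python (not the authors' code). It specializes to the case in which every disturbance, including ηt, is pure white noise, so that the minimum-state-variable solution has Et πt+1 = Et xt+1 = 0 and the model-plus-rule system is a static linear map from shocks to (xt, πt, it). Because the authors' ηt contains a persistent natural-rate component and their reported losses are rescaled, the numbers produced below are only qualitatively comparable to Table 1; the shock standard deviations assigned to ηt and et are assumptions for illustration.

import numpy as np

# Illustrative sketch only: model (1)-(2) with mu_2 = 0 and ALL shocks white noise,
# so E_t x_{t+1} = E_t pi_{t+1} = 0 and the equilibrium solves a static 3x3 system.
beta_r, alpha_x, lam = -0.5, 0.03, 0.1          # calibration used in the text
sig_eta, sig_eps, sig_e = 0.02, 0.005, 0.02     # assumed, illustrative shock std. deviations

def loss_and_istd(mu1=None, targeting=False):
    """Unconditional E[pi^2 + lam*x^2] and std(i) under rule (4') or targeting rule (3')."""
    # Unknowns z = (x, pi, i), shocks s = (eta, eps, e):  A z = B s,  so z = (A^-1 B) s.
    A = np.zeros((3, 3))
    B = np.zeros((3, 3))
    A[0] = [1.0, 0.0, -beta_r]; B[0, 0] = 1.0      # (1): x = beta_r*i + eta
    A[1] = [-alpha_x, 1.0, 0.0]; B[1, 1] = 1.0     # (2): pi = alpha_x*x + eps
    if targeting:                                  # (3'): pi + (lam/alpha_x)*x + e = 0
        A[2] = [lam / alpha_x, 1.0, 0.0]; B[2, 2] = -1.0
    else:                                          # (4'), mu_2 = 0: i = pi + mu1*[pi + (lam/alpha_x)*x + e]
        A[2] = [-mu1 * lam / alpha_x, -(1.0 + mu1), 1.0]; B[2, 2] = mu1
    C = np.linalg.solve(A, B)                      # z = C s
    V = C @ np.diag([sig_eta**2, sig_eps**2, sig_e**2]) @ C.T
    return V[1, 1] + lam * V[0, 0], float(np.sqrt(V[2, 2]))

for mu1 in (0.5, 5.0, 50.0, 500.0):
    print("mu1 =", mu1, "->", loss_and_istd(mu1))
print("targeting rule (3') ->", loss_and_istd(targeting=True))

In this special case too, the instrument-rule statistics approach the targeting-rule statistics as µ1 grows.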
The first row of cells in Table 1 gives results for
the reference case in which there is no et mistake
term. The pattern is similar to those in McCallum
and Nelson (2000a, Table 4) in that the value of
the loss function with the instrument rule (4′)
approaches the value with the target-rule first-order condition (3′). Here, however, the it standard
deviation values are also reported. Not surprisingly, they also show the instrument rule values
approaching the targeting rule value smoothly as
µ1 grows without bound. In the second row, the
mistake or error term, et , is included as white noise
with a standard deviation of 0.002. With this small
variability, the results are not much affected. Then,
in the third row, the standard deviation of et is
increased to a magnitude that is similar to that of
the other model shocks. Nevertheless, there is
again no tendency in this case for the large µ1
values to generate poor performance. Indeed, the
variability of it is slightly smaller, with µ1 = 50,
than with the targeting rule holding exactly. (The
same remains true if we set µ1 = 500.) For more-stringent tests, we increase the standard deviation
of the error term by a factor of ten in the fourth
row and then, in the fifth row, revert to 0.02 for
the innovation standard deviation but with an
autoregressive parameter of ρe = 0.8. In both cases,
the standard deviation of the interest rate increases
slightly as we switch from a large µ1 coefficient
Table 1
Results with Model (1)-(2), Discretionary Policy, λ = 0.1

                          Instrument rule (4′)    Instrument rule (4′)    Instrument rule (4′)    Target rule (3′)
                          µ1 = 0.5                µ1 = 5.0                µ1 = 50                 µ1 = ∞
σω = 0.0,   ρe = 0.0      3.70    0.0191          2.52    0.0360          2.48    0.0397          2.48    0.0402
σω = 0.002, ρe = 0.0      3.70    0.0191          2.53    0.0360          2.48    0.0397          2.48    0.0402
σω = 0.02,  ρe = 0.0      3.77    0.0198          2.81    0.0375          2.83    0.0414          2.83    0.0419
σω = 0.20,  ρe = 0.0      11.02   0.0572          30.93   0.1121          37.31   0.1240          38.16   0.1255
σω = 0.02,  ρe = 0.80     4.42    0.0192          3.58    0.0361          3.58    0.0398          3.59    0.0403

NOTE: Entries are loss times 10³ and standard deviation of it.

Table 2
Results with Model (1)-(2), Timeless Perspective Policy, λ = 0.1

                          Instrument rule (6)     Instrument rule (6)     Instrument rule (6)     Target rule (5)
                          µ1 = 0.5                µ1 = 5.0                µ1 = 50                 µ1 = ∞
σω = 0.0,   ρe = 0.0      11.26   0.0336          2.83    0.0403          2.30    0.0401          2.31    0.0401
σω = 0.002, ρe = 0.0      11.27   0.0336          2.86    0.0403          2.34    0.0401          2.33    0.0401
σω = 0.02,  ρe = 0.0      11.71   0.0337          5.99    0.0403          5.88    0.0401          5.92    0.0401
σω = 0.20,  ρe = 0.0      55.62   0.0449          319.47  0.0417          359.99  0.0428          364.75  0.0430
σω = 0.02,  ρe = 0.80     54.37   0.0396          60.70   0.0463          59.17   0.0460          59.05   0.0460

NOTE: Entries are loss times 10³ and standard deviation of it.

value of 50 in the instrument rule to the analogous
targeting rule.
Table 2 repeats the same experiments as in
Table 1, but with the first-order targeting rule and
its analogous instrument rule pertaining to policy
behavior of the “timeless perspective” type of
commitment, rather than discretion.19 In this case,
the optimality condition is
(5)  $\pi_t + (\lambda/\alpha_x)(x_t - x_{t-1}) + e_t = 0$

and the analogous instrument rule (with µ2 = 0) is

(6)  $i_t = \bar{r} + \pi_t + \mu_1\left[\pi_t + (\lambda/\alpha_x)(x_t - x_{t-1}) + e_t\right]$

when the mistake terms, et, are included. Here the values and patterns are quite different from those in Table 1, but the same finding vis-à-vis Svensson's conjecture is obtained. There is, in other words, no tendency for large µ1 values in (6) to lead to high it volatility or to poor performance, in comparison with the specific targeting rule results of condition (5).

19. This is the type of rule recommended by Woodford (1999, 2003) and by Svensson and Woodford (2005).
Thus, there appears to be little to choose from
between targeting rules and instrument rules on
the criteria of interest rate volatility and welfare
performance; consistently, the results obtained
from targeting rules emerge as a limiting case of
those obtained from instrument rules. Two other
criteria that we have not considered here for discriminating between policy rules are whether the
rule produces a determinate and learnable rational
expectations equilibrium. Other work, however,
suggests that these criteria do not appear to provide
grounds for favoring targeting rules over instrument rules. Studying these issues with a basic
canonical model and no policy mistakes, Evans
and Honkapohja (2004, p. 19) find that instrument
rules of the kind studied in this section “lead to
both determinacy and stability under learning.”

6 MODEL WITH PREDETERMINED OUTPUT AND INFLATION

There are various modifications to the model (1)-(2) that could be examined20 to determine whether the foregoing results obtain generally, but one in particular is of special relevance. This modification stems from recognition that the examples in Svensson's (2003) paper are worked out in terms of models (pp. 432-35) in which agents' actions in period t have no effect on output or inflation until period t+1. Accordingly, we now modify our model (1)-(2) so as to possess that property. Thus, consider the following specification, in which symbols are the same as previously noted21:

(7)  $x_t = E_{t-1} x_{t+1} + \beta_r (E_{t-1} i_t - E_{t-1}\pi_{t+1}) + \eta_t, \qquad \beta_r < 0,$

(8)  $\pi_t = \alpha_x E_{t-1} x_t + \delta E_{t-1}\pi_{t+1} + \varepsilon_t, \qquad \alpha_x > 0,\ 0 < \delta < 1.$

Here we have used the law of iterated expectations, for example, $E_{t-1}(E_t X_{t+1}) = E_{t-1}X_{t+1}$. With this modification, the optimal discretionary first-order condition imposed in period t—that is, the specific targeting rule—becomes

(9)  $E_t \pi_{t+1} + (\lambda/\alpha_x)\, E_t x_{t+1} = 0$

instead of (3). (See Svensson, p. 452.) Accordingly, the implied instrument rule with µ2 = 0 and r̄ = 0 is

(10)  $i_t = E_{t-1}\pi_t + \mu_1\left[E_{t-1}\pi_t + (\lambda/\alpha_x)\, E_{t-1}x_t\right].$

Again the relevant experiment, designed to compare these two approaches in the presence of policy mistakes, entails specifications with random error terms included in both rules. The model to be solved, then, consists of equations (7), (8), and either

(11)  $E_{t-1}\pi_t + (\lambda/\alpha_x)\, E_{t-1}x_t + e_{t-1} = 0$

or

(12)  $i_t = E_{t-1}\pi_t + \mu_1\left[E_{t-1}\pi_t + (\lambda/\alpha_x)\, E_{t-1}x_t + e_{t-1}\right].$

Here the random mistake terms are dated t−1 so as to respect the notion that output and inflation in t are predetermined.

20. We have verified that inclusion of serial correlation in the εt shock process does not alter our basic result.

21. Our specification is equivalent to Svensson's, in which t+1 is used wherever we use t, etc.
Before turning to more-complex cases, we
consider an analytical solution for the simple
special case in which discretion obtains and the
three disturbance terms are all white noise. Then
the minimum state variable solution to the system
(7), (8), and (11) is of the form
(13a)  $\pi_t = \phi_{11}\varepsilon_t + \phi_{12}\eta_t + \phi_{13}e_{t-1}$

(13b)  $x_t = \phi_{21}\varepsilon_t + \phi_{22}\eta_t + \phi_{23}e_{t-1}$

(13c)  $i_t = \phi_{31}\varepsilon_t + \phi_{32}\eta_t + \phi_{33}e_{t-1}.$

With this specification, we have Et –1πt = φ13et –1,
Et –1πt +1 = 0, Et –1xt = φ23et –1, and Et –1xt +1 = 0.
Undetermined coefficient calculations then
yield φ11 = 1, φ12 = 0, φ13 = –αx /[αx + (λ /αx )], φ21 = 0,
φ22 = 1, φ23 = –1/[αx + (λ /αx )], φ31 = 0, φ32 = 0, and
φ33 = –1/βr [αx + (λ /αx )].
For comparison, we need to solve with the
instrument rule (12) in place of the targeting rule
(11). The solution is again of the form (13), and
Table 3
Results with Model (7)-(8), Discretionary Policy, λ = 0.1

                                    Instrument rule (12)   Instrument rule (12)   Instrument rule (12)   Target rule (11)
                                    µ1 = 0.5               µ1 = 5.0               µ1 = 50                µ1 = ∞
σω = 0.0,  ρe = 0.0,  ρε = 0.0      7.03    0.0025         6.99    0.0022         6.99    0.0021         6.99    0.0021
σω = 0.02, ρe = 0.0,  ρε = 0.0      7.10    0.0060         7.27    0.0108         7.34    0.0119         7.35    0.0121
σω = 0.2,  ρe = 0.0,  ρε = 0.0      14.4    0.0539         35.4    0.1061         41.8    0.1175         42.7    0.1189
σω = 0.02, ρe = 0.0,  ρε = 0.9      772     0.0841         779     0.0847         780     0.0848         780     0.0849
σω = 0.02, ρe = 0.8,  ρε = 0.9      773     0.0840         780     0.0841         780     0.0841         780     0.0841

NOTE: Entries are loss times 10³ and standard deviation of it.

Table 4
Results with Model (7)-(8), Timeless Perspective Policy, λ = 0.1

                                    Instrument rule (15)   Instrument rule (15)   Instrument rule (15)   Target rule (14)
                                    µ1 = 0.5               µ1 = 5.0               µ1 = 50                µ1 = ∞
σω = 0.0,  ρe = 0.0,  ρε = 0.0      8.58    0.0058         7.01    0.0025         6.99    0.0022         6.99    0.0021
σω = 0.02, ρe = 0.0,  ρε = 0.0      9.02    0.0065         10.2    0.0027         10.6    0.0026         10.6    0.0026
σω = 0.2,  ρe = 0.0,  ρε = 0.0      52.9    0.0304         324     0.0113         365     0.0152         369     0.0156
σω = 0.02, ρe = 0.0,  ρε = 0.9      446     0.0392         308     0.0098         306     0.0128         306     0.0131
σω = 0.02, ρe = 0.8,  ρε = 0.9      488     0.0444         362     0.0249         360     0.0260         360     0.0261

NOTE: Entries are loss times 10³ and standard deviation of it.

now the undetermined coefficient calculations
yield φ11 = 1, φ12 = 0, φ13 = αx βr µ1/[1 – (1 + µ1)αx βr
– (λ /αx )µ1βr ], φ21 = 0, φ22 = 1, φ23 = βr µ1/[1 – (1 +
µ1)αx βr – (λ /αx )µ1βr ], φ31 = 0, φ32 = 0, and φ33 = µ1/
[1 – (1 + µ1)αx βr – (λ /αx )µ1βr ] > 0. Then to compare
the variability of it under the two types of policy
behavior, we need only to calculate the magnitude
of φ33 for the two cases, since Var(it) = φ33²σe² in both cases (where σe denotes the standard deviation of et). But with µ1 > 0, it is just a matter of
algebra to verify that φ33 is smaller in the second
case (i.e., the instrument rule). So again we find
that mistakes involving the first-order optimality
condition are less serious (in terms of interest
rate variability) when the instrument rule, rather
than the corresponding targeting rule, is used.
Also, it is straightforward to verify that, as µ1 → ∞,
the instrument rule expression for φ33 approaches
the targeting rule expression.
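The comparison is easy to check numerically. The following short Python sketch simply evaluates the two φ33 expressions reported above at the paper's parameter values (βr = –0.5, αx = 0.03, λ = 0.1); the grid of µ1 values is arbitrary.

# Compare |phi_33|, the response of i_t to the lagged mistake e_{t-1}, under the
# targeting rule (11) and the instrument rule (12), using the formulas in the text.
# Since Var(i_t) = phi_33^2 * sigma_e^2 in both cases, the smaller |phi_33| implies
# the smaller interest rate variability.
beta_r, alpha_x, lam = -0.5, 0.03, 0.1
k = alpha_x + lam / alpha_x

phi33_targeting = -1.0 / (beta_r * k)

def phi33_instrument(mu1):
    return mu1 / (1.0 - (1.0 + mu1) * alpha_x * beta_r - (lam / alpha_x) * mu1 * beta_r)

print("targeting rule (11):", abs(phi33_targeting))
for mu1 in (0.5, 5.0, 50.0, 500.0, 5e5):
    print("instrument rule (12), mu1 =", mu1, ":", abs(phi33_instrument(mu1)))

For these parameter values, |φ33| under the instrument rule rises monotonically toward the targeting-rule value of roughly 0.59 as µ1 increases, consistent with the two statements above.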
The case just examined is, however, excessively special. Indeed, inspection of the solutions
given above shows that, for the discretionary case
with all white noise shocks, there is no effect of
different µ1 values on the mean value (unconditional expectation) of the objective function. In
other words, with no source of serial correlation in
the model, and with the existence of an information lag, the discretionary policy rule has no stabilizing properties for πt and xt in the model (7)-(8).
Thus we need to consider cases with autocorrelated disturbances and/or with timeless perspective optimization. For the latter case we find, from
Svensson’s equation (5.28), that the relevant targeting and instrument rules are, respectively,

(14)  $E_{t-1}\pi_t + (\lambda/\alpha_x)\left[E_{t-1}x_t - E_{t-2}x_{t-1}\right] + e_{t-1} = 0$

and

(15)  $i_t = E_{t-1}\pi_t + \mu_1\left[E_{t-1}\pi_t + (\lambda/\alpha_x)(E_{t-1}x_t - E_{t-2}x_{t-1}) + e_{t-1}\right].$

In Tables 3 and 4 we report numerical results with the model (7)-(8). Again we report standard deviations based on analytical covariances. In most of the cases, the standard deviation of the innovations to the policy errors is kept at σω = 0.02. In Table 3, which pertains to discretionary behavior, the policy specifications are (11) and (12) for the targeting and instrument rules, whereas, in Table 4, with timeless perspective behavior, the relevant rules are (14) and (15). In both tables the first three rows apply to cases with white noise shocks, so we see that, as in the analytical solution just given, policy activism is not helpful in achieving policy objectives. Indeed, when policy errors are included, as in rows 2 and 3, the activist rules tend to be harmful. This should not be greatly surprising, because there are no general optimality results pertaining to the formulations being considered. In the final two rows of each table, serially correlated shocks are present, however, so policy activism can potentially be helpful.22 Indeed, in Table 4 we see that larger values of µ1 lead to reduced values of the loss function.

Be that as it may, with regard to the issue at hand the results are clear-cut: There is no tendency for the variability of it to grow alarmingly with large values of µ1. Indeed, in most cases the variability of it is smaller with large values of µ1 used in the instrument rule than it is with the associated specific targeting rule. In addition, the results provided by the targeting rules (11) and (14) are, as before, very closely approximated by those of the instrument rules (12) and (15) for large values of µ1.

22. Where autocorrelation is included in the εt process, the innovation variance is kept at 0.005².

7 CONCLUSION

Svensson (2003) argues strongly that general and specific targeting rules, which amount to commitments to specified objective functions and first-order conditions (respectively), are normatively superior to instrument rules for the conduct of monetary policy. By contrast, we suggest that it is unhelpful, terminologically, to refer to “general targeting rules” as policy rules and that, substantively, their adoption is either innocuous or else represents a departure from transparency. Most of the present paper’s discussion is focused, accordingly, on specific targeting rules—i.e., the first-order optimality conditions implied by the combination of a specific objective function and a specific model. We argue in Section 2 of this paper that a key problem with targeting rules is that they are inevitably fine-tuned to the model chosen to describe private sector behavior; so, they may perform poorly in the event that the chosen model is misspecified. In that respect, instrument rules, which may rely on more-generic properties of models used for monetary policy analysis, may be preferable.

Svensson’s argument that, instead, specific targeting rules are superior to instrument rules is based largely on four main objections to the latter plus a claim concerning the relative interest-instrument variability entailed by the two approaches. Our Section 3 considers the four objections in turn and advances arguments that contradict all of them. Then, in the paper’s analytical sections (5 and 6), we demonstrate that the variability claim is incorrect for a neo-canonical model and also for a variant with one-period-ahead plans used by Svensson, providing that the same decisionmaking errors are relevant under the two alternative approaches.

We suggest, then, that despite its large quantity of meticulous analysis, Svensson (2003) does not develop any compelling reasons for preferring targeting rules over instrument rules, from a normative perspective. We also suggest, regarding the positive perspective, that no actual central bank has expressed explicitly the magnitude of objective function parameters that are essential for the utilization of a targeting rule.

REFERENCES
Allsopp, Christopher. “Macroeconomic Policy Rules
in Theory and in Practice.” Bank of England
Quarterly Bulletin, Winter 2002, 42(4), pp. 485-504.
Bank of England. Economic Models at the Bank of
England: September 2000 Update. London: Bank
of England, 2000.
Bean, Charles and Jenkinson, Nigel. “The Formulation
of Monetary Policy at the Bank of England.” Bank
of England Quarterly Bulletin, Winter 2001, 41(4),
pp. 434-41.
Brown, Gordon. “Economic and Monetary Union:
Statement by the Chancellor of the Exchequer on
U.K. Membership of the Single Currency.” House
of Commons Debates, London, June 9, 2003, pp.
407-15.
Bryant, Ralph C.; Hooper, Peter and Mann, Catherine
L., eds. Evaluating Policy Regimes: New Research
in Empirical Macroeconomics. Washington, DC:
Brookings Institution, 1993.
Bullard, James and Mitra, Kaushik. “Learning About
Monetary Policy Rules.” Journal of Monetary
Economics, September 2002, 49(6), pp. 1105-29.
Cecchetti, Stephen G. “Making Monetary Policy:
Objectives and Rules.” Oxford Review of Economic
Policy, Winter 2000, 16(4), pp. 43-59.
Clarida, Richard; Galí, Jordi and Gertler, Mark. “The
Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature,
December 1999, 37(4), pp. 1661-707.
Clarida, Richard; Galí, Jordi and Gertler, Mark.
“Optimal Monetary Policy in Open versus Closed
Economies: An Integrated Approach.” American
Economic Review (Papers and Proceedings), May
2001, 91(2), pp. 248-52.
Cote, Denise; Lam, Jean-Paul; Liu, Ying and St-Amant,
Pierre. “The Role of Simple Rules in the Conduct
of Canadian Monetary Policy.” Bank of Canada
Review, Summer 2002, pp. 27-35.
Evans, George W. and Honkapohja, Seppo. “Monetary
Policy, Expectations and Commitment.” Unpublished
manuscript, University of Oregon, February 2004.
Fuhrer, Jeffrey C. “Habit Formation in Consumption
and Its Implications for Monetary-Policy Models.”
American Economic Review, June 2000, 90(3), pp.
367-90.
Giannoni, Marc P. and Woodford, Michael. “Optimal
Interest-Rate Rules: I. General Theory.” NBER
Working Paper No. 9419, National Bureau of
Economic Research, January 2003a.
Giannoni, Marc P. and Woodford, Michael. “Optimal
Interest-Rate Rules: II. Applications.” NBER
Working Paper No. 9420, National Bureau of
Economic Research, January 2003b.
Hampton, Tim. “The Role of the Reserve Bank’s
Macro-Model in the Formation of Interest Rate
Projections.” Reserve Bank of New Zealand
Bulletin, June 2002, 65(2), pp. 5-11.
Henderson, Dale W. and McKibbin, Warwick J. “A
Comparison of Some Basic Monetary Policy Regimes
for Open Economies: Implications of Different
Degrees of Instrument Adjustment and Wage
Persistence.” Carnegie-Rochester Conference Series
on Public Policy, 1993a, 39, pp. 221-317.
Henderson, Dale W. and McKibbin, Warwick J. “An
Assessment of Some Basic Monetary-Policy Regime
Pairs: Analytical and Simulation Results from
Simple Multiregion Macroeconomic Models,” in
Ralph C. Bryant, Peter Hooper, and Catherine L. Mann, eds., Evaluating Policy Regimes: New
Research in Empirical Macroeconomics.
Washington, DC: Brookings Institution, 1993b, pp.
45-218.
Jensen, Henrik. “Targeting Nominal Income Growth
or Inflation?” American Economic Review,
September 2002, 92(4), pp. 928-56.
Judd, John P. and Rudebusch, Glenn D. “Taylor’s
Rule and the Fed: 1970-1997.” Federal Reserve Bank
of San Francisco Economic Review, 1998, 24(3),
pp. 3-16.
Levin, Andrew T.; Wieland, Volker and Williams,
John C. “Robustness of Simple Monetary Policy
Rules under Model Uncertainty,” in John B. Taylor,
ed., Monetary Policy Rules. Chicago: University of
Chicago Press, 1999, pp. 263-99.
Levin, Andrew T. and Williams, John C. “Robust
Monetary Policy with Competing Reference Models.”
Journal of Monetary Economics, July 2003, 50(5),
pp. 945-75.
Longworth, David, and O’Reilly, Brian. “The Monetary
Policy Transmission Mechanism and Policy Rules
in Canada,” in Norman Loayza and Klaus Schmidt-Hebbel, eds., Monetary Policy: Rules and
Transmission Mechanisms. Santiago: Central Bank
of Chile, 2002, pp. 357-92.
McCallum, Bennett T. “Robustness Properties of a
Rule for Monetary Policy.” Carnegie-Rochester
Conference Series on Public Policy, Autumn 1988,
29, pp. 173-203.
McCallum, Bennett T. “Issues in the Design of
Monetary Policy Rules,” in John B. Taylor and
Michael Woodford, eds., Handbook of
Macroeconomics. Volume 1C. Amsterdam: North
Holland, 1999, pp. 1483-530.
McCallum, Bennett T. and Nelson, Edward.
“Nominal Income Targeting in an Open-Economy
Optimizing Model.” Journal of Monetary
Economics, June 1999a, 43(3), pp. 553-78.
McCallum, Bennett T. and Nelson, Edward.
“Performance of Operational Policy Rules in an Estimated Semi-Classical Structural Model,” in
John B. Taylor, ed., Monetary Policy Rules. Chicago:
University of Chicago Press, 1999b, pp. 15-45.
McCallum, Bennett T. and Nelson, Edward. “Timeless
Perspective vs. Discretionary Monetary Policy in
Forward-Looking Models.” NBER Working Paper
No. 7915, National Bureau of Economic Research,
September 2000a. (Revised version published in
Federal Reserve Bank of St. Louis Review, March/
April 2004, 86(2), pp. 43-56.)
McCallum, Bennett T. and Nelson, Edward. “Monetary
Policy for an Open Economy: An Alternative
Framework with Optimizing Agents and Sticky
Prices.” Oxford Review of Economic Policy, Winter
2000b, 16(4), pp. 74-91.
Meltzer, Allan H. “Limits of Short-Run Stabilization
Policy.” Presidential address to the Western
Economic Association, July 3, 1986. Economic
Inquiry, January 1987, 25(1), pp. 1-13.
Rudebusch, Glenn D. and Svensson, Lars E.O.
“Policy Rules for Inflation Targeting,” in John B.
Taylor, ed., Monetary Policy Rules. Chicago:
University of Chicago Press, 1999, pp. 203-46.
Svensson, Lars E.O. “Inflation Forecast Targeting:
Implementing and Monitoring Inflation Targets.”
European Economic Review, June 1997, 41(6), pp.
1111-46.
Svensson, Lars E.O. “Inflation Targeting as a Monetary
Policy Rule.” Journal of Monetary Economics, June
1999, 43(4), pp. 607-54.
Svensson, Lars E.O. Independent Review of the
Operation of Monetary Policy in New Zealand:
Report to the Minister of Finance. Wellington, New
Zealand, February 2001.
Svensson, Lars E.O. “What Is Wrong with Taylor
Rules? Using Judgment in Monetary Policy through
Targeting Rules.” Journal of Economic Literature,
June 2003, 41(2), pp. 426-77.
Svensson, Lars E.O. and Woodford, Michael.
“Implementing Optimal Policy through Inflation-Forecast Targeting,” in Ben S. Bernanke and
Michael Woodford, eds., The Inflation Targeting
Debate. Chicago: University of Chicago Press,
2005, pp. 19-83.
Taylor, John B. “Discretion versus Policy Rules in
Practice.” Carnegie-Rochester Conference Series on
Public Policy, 1993, 39, pp. 195-214.
U.K. Treasury. Policy Frameworks in the U.K. and
EMU. London: HM Treasury, June 2003.
Walsh, Carl E. “Speed Limit Policies: The Output Gap
and Optimal Monetary Policy,” American Economic
Review, March 2003, 93(1), pp. 265-78.
Woodford, Michael. “Commentary: How Should
Monetary Policy Be Conducted in an Era of Price
Stability?” in New Challenges for Monetary Policy:
A Symposium Sponsored by the Federal Reserve
Bank of Kansas City. Kansas City, MO: Federal
Reserve Bank of Kansas City, 1999, pp. 277-316.
Woodford, Michael. Interest and Prices: Foundations
of a Theory of Monetary Policy. Princeton, NJ:
Princeton University Press, 2003.


Targeting versus Instrument Rules
for Monetary Policy:
What Is Wrong with McCallum and Nelson?
Lars E.O. Svensson
In their paper “Targeting versus Instrument Rules for Monetary Policy,” McCallum and Nelson
critique targeting rules for the analysis of monetary policy. Their arguments are rebutted here.
First, McCallum and Nelson’s preference to study the robustness of simple monetary policy rules
is no reason at all to limit attention to simple instrument rules; simple targeting rules may have
more desirable properties. Second, optimal targeting rules are a compact, robust, and structural
description of goal-directed monetary policy, analogous to the compact, robust, and structural consumption Euler conditions in the theory of consumption. They express the very robust condition
of equality of the marginal rates of substitution and transformation between the central bank’s target
variables. Indeed, they provide desirable micro foundations of monetary policy. Third, under
realistic information assumptions, the instrument rule analog to any targeting rule that McCallum
and Nelson have proposed results in very large instrument rate volatility and is also, for other
reasons, inferior to a targeting rule.
Federal Reserve Bank of St. Louis Review, September/October 2005, 87(5), pp. 613-25.

Lars E.O. Svensson is a professor of economics at Princeton University, a research fellow of the Centre for Economic Policy Research, and a research associate of the National Bureau of Economic Research. The author thanks James Bullard, Bennett McCallum, Edward Nelson, and Michael Woodford for comments and discussions and Kathleen Hurley for editorial and secretarial assistance.

1 INTRODUCTION

My good friends Ben McCallum and
Ed Nelson have written a paper,
McCallum and Nelson (2005), with
arguably a somewhat destructive purpose. They
attempt to contradict the arguments in favor of
targeting rules, rather than instrument rules, in
positive and normative analysis of monetary
policy that I have presented in Svensson (2003b)
and previous papers (for instance, Svensson, 1997
and 1999). In their concluding section, they suggest that Svensson (2003b) “does not develop any
compelling reasons for preferring targeting rules
over instrument rules.” They seem to believe that
the concept of targeting rules is unnecessary and
that instrument rules are all that is needed in
monetary policy analysis.

In their struggle against targeting rules, however, McCallum and Nelson seem to face an uphill
battle. There is now a rapidly growing literature by
many authors that successfully applies targeting
rules to monetary policy analysis. This literature
includes recent contributions by Benigno and
Benigno (2003), Benigno and Woodford (2004a,b),
Cecchetti (1998, 2000), Cecchetti and Kim (2004),
Evans and Honkapohja (2004), Giannoni and
Woodford (2003a,b and 2004), Kuttner (2004),
Mishkin (2002), Onatski and Williams (2004),
Preston (2004), Walsh (2003 and 2004a,b),
Woodford (2004), and others. In the first drafts of
Woodford’s (2003) book, there were no targeting
rules; in the final, published version, targeting
rules are prominent. In 1998, at a distinguished
National Bureau of Economic Research (NBER) conference on monetary policy rules (Taylor,
1999), Rudebusch and Svensson (1999) was the
only paper to use targeting rules; in 2003, at an
equally distinguished NBER conference on inflation targeting (Bernanke and Woodford, 2004),
several papers used targeting rules and no paper
used a simple instrument rule as a model of inflation targeting. A Google search with the string
‘“targeting rules” AND monetary’ gave about 1,700
results in April 2004, about 2,100 in August 2004,
and about 5,700 in June 2005. There are, hence,
more papers than mine—indeed, some books—
that McCallum and Nelson may want to take
issue with.1
To be clear: An instrument rule is a formula
for setting the central bank’s instrument rate as a
given function of observable variables. A simple
instrument rule makes the instrument rate a simple
function of a few observable variables. The best-known example of a simple instrument rule is
the Taylor rule, where the instrument rate is a
linear function of the inflation gap (between inflation and an inflation target) and the output gap
(between output and potential output). Another
example is a formula for adjusting the monetary
base proposed by McCallum (1988) and Meltzer
(1987).2
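As a concrete, schematic illustration of the two definitions (the instrument rule above and the targeting rule defined in the next paragraph), consider the following Python sketch. The 0.5 response coefficients follow Taylor's (1993) parameterization; the 2 percent neutral real rate and 2 percent inflation target are assumptions added only for the illustration, not values taken from this article.

def taylor_instrument_rule(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Simple instrument rule: the instrument rate is an explicit formula in observables."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

def targeting_rule_gap(two_year_inflation_forecast, pi_target=2.0):
    """Simple targeting rule: a condition on a target variable (here, a forecast).
    The instrument is set at whatever level drives this gap to (approximately) zero."""
    return two_year_inflation_forecast - pi_target

print(taylor_instrument_rule(inflation=3.0, output_gap=1.0))    # prescribes i_t directly
print(targeting_rule_gap(two_year_inflation_forecast=2.4))      # a condition to be satisfied

The first function delivers a number for the instrument rate; the second only states a condition, leaving open how the central bank finds the instrument setting that satisfies it.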
A (specific) targeting rule specifies a condition
to be fulfilled by the central bank’s target variables
(or forecasts thereof). A real-world example of a
simple targeting rule is the one that has been
applied by the Bank of England, Sweden’s
Riksbank, and the Bank of Norway (Goodhart,
2001; Svensson, 2003a; Svensson et al., 2002):
The two-year-ahead inflation forecast shall equal
the inflation target. More precisely, the instrument
rate shall be set such that the two-year-ahead
inflation forecast equals the inflation target.3 An optimal targeting rule is a first-order condition for optimal monetary policy. But, importantly, not all targeting rules are optimal targeting rules.4

1. Sims (1980) and Aizenman and Frenkel (1986) provide early discussions of targeting rules (the former without using the term).

2. Svensson (2005) provides a compact and general definition of targeting rules and instrument rules. An explicit instrument rule is an instrument rule where the instrument is a function of predetermined variables only. An implicit instrument rule is an instrument rule where the instrument is related to a non-predetermined variable. An implicit instrument rule is an equilibrium condition, where several variables are simultaneously determined. This makes the practical implementation of implicit instrument rules more complicated than that of explicit instrument rules (see footnote 12). Any given equilibrium is consistent with a continuum of implicit instrument rules.

3. Strangely, McCallum and Nelson seem to believe that no central bank is using a targeting rule and that a central bank needs to announce an explicit loss function to use a targeting rule. Obviously, neither of these beliefs is correct, as this paragraph shows.

4. Although McCallum and Nelson seem to want to restrict the discussion of targeting rules to optimal targeting rules, that makes no more sense than to restrict the discussion of instrument rules to optimal instrument rules.
McCallum and Nelson explain that “we are
more attracted to analysis with instrument rules
than with targeting rules” (p. 598). They imply
that the main reason is that “an attractive approach
to policy design...is to search for an instrument
rule that performs at least moderately well—
avoiding disasters—in a variety of plausible
models” (p. 599). Thus, McCallum and Nelson
are attracted to simple and robust instrument
rules; they agree with Svensson (2003b) that a
complex optimal instrument rule is not practical.
The idea of a robust and simple instrument rule
is further developed in McCallum (1988 and 1999).
A simple and robust monetary policy rule is
indeed an attractive idea. There is always some
uncertainty about the true model of the transmission mechanism of monetary policy, and monetary
policy is always conducted under considerable
uncertainty of different kinds. A simple and robust
monetary policy rule gives the central bank an
option that it can fall back on in difficult times.
A central bank that knows nothing except current
inflation and some estimate of the current output
gap can always fall back on a Taylor rule. If the
bank does not trust its information about inflation
and the output gap, but data on monetary aggregates are more easily accessible or more reliable,
the central bank can fall back further on Friedman’s
rule of k-percent money growth.
But several facts stand in the way of McCallum
and Nelson’s attraction to simple instrument rules.
First, the fact is that nothing says that a simple
and robust monetary policy rule must be an
instrument rule. For instance, Friedman’s k-percent
rule is a targeting rule! The k percent refers to a
broad monetary aggregate, such as M2 or M3.
This is an (intermediate) target variable, not an
instrument. It reacts with a lag of a quarter or so
to changes in the central bank’s instrument (the
instrument rate or the monetary base). The way
to implement Friedman’s k-percent rule, then, is
to make forecasts of broad money growth for the
next quarter and set the instrument such that the
one-quarter-ahead money-growth forecast equals
k percent (Svensson, 1999). Thus, the targeting
rule: “Set the instrument such that the forecast
of money growth equals k percent.”5 The simple
monetary policy rule used by the Bank of England,
the Riksbank, and the Bank of Norway—already
mentioned above—is also a targeting rule. Walsh
(2004b) has recently demonstrated an equivalence
between the robust-control policies of Hansen
and Sargent (2003 and 2005) and the optimal targeting rules derived by Giannoni and Woodford
(2003a,b).6

5. A broad monetary aggregate such as M2 or M3 is to a large extent endogenously determined by demand and supply of broad money and an endogenous multiplier between broad money and the monetary base. It reacts with a lag of a quarter or so to central bank adjustments of the instrument rate or the monetary base and is subject to various intervening shocks during that lag. Hence, the central bank does not have complete control over broad money; therefore, it is not an instrument of monetary policy. Even if the money growth forecast is on target, actual money growth will ex post deviate from k percent due to unanticipated shocks and imperfections in the forecasts.

6. In some of the literature mentioned above, the instrument rate is also a target variable (that is, an argument of the loss function). In such cases, the instrument rate appears in the targeting rule, and the targeting rule is also an implicit instrument rule. Some of the literature, for instance, Walsh (2004b), follows Giannoni and Woodford (2003a,b) and frequently refers to such targeting rules as instrument rules, which is a source of some confusion. A good test of whether a rule is fundamentally a targeting rule or an instrument rule is to let the weight on the instrument rate in the loss function go to zero. If the instrument rate then vanishes from the rule, it is better to call it a targeting rule.
Second, the fact is that central banks normally
do not use the fallback options of the simple
instrument rules of Taylor or McCallum and
Meltzer or even the simple targeting rule of
Friedman’s k percent. With improved understanding of the transmission mechanism of monetary
policy, increased experience, and better-designed
objectives for monetary policy, central banks
believe that they can do better than follow these
mechanical simple rules. They have developed
complex decision processes, where huge amounts
of data are collected, processed, and analyzed
(see Brash, 2001, and Svensson, 2001). They construct forecasts of their target variables, typically
inflation and the output gap, conditional on their
view of the transmission mechanism, their estimate of the current state of the economy and the
development of a number of exogenous economic
variables, and alternative instrument rate paths.
They select and implement an instrument rate or
an instrument rate path such that the corresponding forecasts of the targeting variables “look good”
relative to the objectives of the central bank. I have
called this monetary policy process “forecast targeting.” It is a decision process and implementation of monetary policy that is very different from
the mechanical application of the simple instrument rules that McCallum and Nelson favor.
Advanced central banks attempt to do better, to
fulfill their objectives as well as possible, to optimize. I am advocating targeting rules as a better
way to describe and prescribe this kind of monetary policy than the simple instrument rules. Targeting rules are one way to make the “look good”
concept precise. Bernanke (2004) endorses this
view of practical monetary policy, although he
uses the term “forecast-based policies” rather than
“forecast targeting.”7
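A minimal sketch of such a forecast-targeting decision process might look as follows. The two-equation projection model, its coefficients, and the menu of constant rate paths are purely illustrative assumptions; they are not taken from this article or from any central bank's system.

import numpy as np

def project(rate_path, pi0=3.0, x0=0.5):
    """Toy projection model: inflation and output-gap forecasts given an instrument-rate path."""
    pi, x, path = pi0, x0, []
    for i in rate_path:
        x = 0.8 * x - 0.3 * (i - pi - 2.0)    # ad hoc aggregate-demand dynamics
        pi = 0.7 * pi + 0.2 * x + 0.6         # ad hoc Phillips-curve dynamics
        path.append((pi, x))
    return path

def forecast_loss(rate_path, pi_target=2.0, lam=0.1):
    """How 'good' the projections look relative to the objectives."""
    return sum((pi - pi_target) ** 2 + lam * x ** 2 for pi, x in project(rate_path))

# Choose, from a coarse menu of constant instrument-rate paths, the one whose
# forecasts of the target variables "look good" (i.e., minimize the loss).
menu = [np.full(8, r) for r in np.arange(0.0, 8.0, 0.25)]
chosen = min(menu, key=forecast_loss)
print("instrument-rate setting chosen by forecast targeting:", chosen[0])

The point of the sketch is only that the decision variable is the whole forecast path relative to the objectives, not a fixed formula in a few current observables.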
Third, since central banks in a number of
countries have developed this approach of forecast targeting to monetary policy (essentially the
implementation of inflation targeting that started
in a few countries in the early 1990s and has since
spread to a large number of countries), the monetary policy outcome in those countries has been
extremely good. The past decade has seen unprecedented monetary and real stability with low inflation in a number of countries. This makes it even
more important, I believe, to develop the tools and
definitions through which this kind of monetary
policy can be best understood.8

7. McCallum and Nelson note (in Section 4) that many central bank publications refer to simple instrument rules. But this merely demonstrates how the concept of simple instrument rules has previously dominated the monetary policy debate (for instance, as noted, in Taylor, 1999). It does not imply that central banks conduct monetary policy by implementing simple instrument rules. They also note that the Reserve Bank of New Zealand (RBNZ) has used a particular instrument rule in generating forecasts in the so-called Forecasting and Policy System (Black et al., 1997). But, as far as I know, the instrument path generated by the instrument rule is subject to considerable judgmental adjustment, especially for the first few quarters. Furthermore, the instrument rate path and the inflation and output gap forecasts generated can be seen as reference paths and forecasts, used as an input in the policy decision, in the same way other central banks use forecasts conditional on a constant interest rate. They are not necessarily the central bank’s optimal instrument rate plan and optimal inflation and output gap forecasts (although I am advocating improvements in that direction; see Svensson, 2001, 2003a). Thus, the RBNZ’s use of an instrument rule in generating its forecasts does not imply that the RBNZ is actually following that instrument rule in setting its instrument rate.

8. McCallum and Nelson disagree with my statement that one of the problems with a commitment to an instrument rule as a description and prescription of monetary policy “is that a commitment to an instrument rule does not leave any room for judgmental adjustments and extra-model information” (Svensson, 2003b, p. 442). They state (on p. 600): “This claim is difficult for us to understand, since there seem to be various ways in which judgmental adjustments to instrument rule prescriptions could be made. For example, the interest rate instrument could be set above (or below) the rule-indicated value when policymaker judgments indicate that conditions, not adequately reflected in the central bank’s formal quantitative models, imply different forecasts and consequently call for additional policy tightening (or loosening).” McCallum and Nelson seem to believe that a commitment is consistent with discretionary adjustments, an obvious contradiction.
McCallum and Nelson have one somewhat
constructive contribution in their paper. They
provide further analysis of the proposition, previously put forward in McCallum (1999, p. 1493)
and McCallum and Nelson (2000), that there is a
useful instrument rule analog, with a very large
response coefficient, to any targeting rule. In particular, they maintain that this large response
coefficient, counter to what is argued in Svensson
and Woodford (2005), Svensson (2003b), and, in
a related case, in Bernanke and Woodford (1997),
does not imply higher volatility of the instrument
rate, even if the central bank makes some realistic
errors in determining the arguments for the instrument rule. However, as we shall see, under reasonable information assumptions, McCallum and
Nelson are wrong. A large response coefficient
does indeed make the instrument rate very volatile.
Only under very strange information assumptions
is there no extra volatility. Even if they were right
on this volatility issue, there still seems to be no
point to their proposed instrument rule analog.
As we shall see, it simply adds unnecessary complexity to the monetary policy rule for no apparent
gain. It is conceptually and numerically inferior
to the targeting rule, and it is not neutral from a
determinacy point of view. In summary, the idea
of instrument rules with very large response
coefficients is both impractical and pointless.
Section 2 shows a useful analogy between the
development of Euler conditions as structural
descriptions of consumption choice in the theory
of consumption and the development of targeting
rules as a structural description of monetary policy
in the theory of monetary policy. Section 3 gives
an example of an optimal targeting rule and discusses some of its properties, including its robustness. Section 4 shows that the instrument rule
analog proposed by McCallum and Nelson indeed
brings high instrument rate volatility under reasonable information assumptions. Section 5 discusses McCallum and Nelson’s criticism of my
definition of “general” targeting rules. I concede
that another term, Walsh’s (2003) “targeting
regimes,” may be preferable. Consequently, in
future work, I am inclined to use the term “targeting regime” rather than “general targeting rule”
and to let “targeting rules,” as in this introduction,
refer to what I have also called “specific” targeting
rules.

2 AN ANALOGY WITH CONSUMPTION THEORY
To view the issue of targeting rules versus
instrument rules from a broader descriptive perspective, it is useful to compare this issue with
the modeling of consumption in macroeconomics.
Several decades ago, it was common to model
consumption in period t, Ct, as a given function
of income, Yt , the real rate of interest, Rt , and
possibly other variables,

(1)  $C_t = f(R_t, Y_t, \ldots).$

In the past 25 years, especially after Hall
(1978), it has become common to model consumption as fulfilling an Euler condition—a first-order
condition for optimal consumption choice, which,
for an additively separable utility function of a
representative consumer, has the simple form,
(2)  $E_t\, \dfrac{\delta\, U_C(C_{t+1})}{U_C(C_t)} = \dfrac{1}{1+R_t}.$

Here, the left side of (2) is the representative
consumer’s expected marginal rate of substitution
of period-t consumption for period-t+1 consumption (0 < δ < 1 is a discount factor and UC(Ct)
denotes the marginal utility of consumption).
The right side is the consumer’s marginal rate of
transformation of period-t +1 consumption into
period-t consumption, when the consumer can
borrow or lend; that is, the period-t consumption
value of consumption in period t+1. A log-linear approximation to (2) is

(3)  $c_t = c_{t+1|t} - \sigma\,(r_t - \rho),$

where ct ≡ lnCt, ct+1|t ≡ Et ct+1, σ is the intertemporal elasticity of substitution, rt ≡ ln(1 + Rt) is the continuously compounded real interest rate, and ρ ≡ –lnδ > 0 is the rate of time preference.
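For readers who want the intervening step, the following is a brief sketch of how (3) can be obtained from (2). The isoelastic utility function is an assumption introduced only for this illustration (the text requires only additive separability), and constant Jensen-inequality terms are dropped in the log-linearization.

% Sketch: from (2) to (3) under the assumed isoelastic utility U(C) = C^{1-1/\sigma}/(1-1/\sigma),
% so that U_C(C) = C^{-1/\sigma}; constant (Jensen) terms are ignored.
\begin{align*}
E_t\!\left[\delta \left(C_{t+1}/C_t\right)^{-1/\sigma}\right] &= \frac{1}{1+R_t}
   && \text{(condition (2))} \\
\ln\delta - \tfrac{1}{\sigma}\left(c_{t+1|t} - c_t\right) &\approx -\ln(1+R_t) = -r_t
   && \text{(take logs and linearize)} \\
c_t &\approx c_{t+1|t} - \sigma\,(r_t - \rho)
   && \text{(rearrange, using } \rho \equiv -\ln\delta\text{),}
\end{align*}

which is condition (3).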
As is well known, a serious problem with
modeling consumption as a given consumption
function is that this function is not a structural
relation but a reduced form. Its properties and
parameters depend on the whole model of the
economy, including the existing shocks and their
stochastic properties, the monetary and fiscal
policy pursued, and so forth.
In contrast, the consumption Euler condition
(2) or (3) is more structural, independent of the rest
of the model, and independent of the monetary
and fiscal policy pursued. It is a robust, compact,
and therefore practical description of optimizing
consumption behavior. Indeed, this development
of a more microfounded modeling of consumption
is an integral part of the rational expectations
revolution in macroeconomics.
The consumption function can be seen as
an instrument rule for consumption behavior,
whereas the Euler condition (2) or (3) can be seen
as a targeting rule for consumption. When I argue
for the adoption of targeting rules rather than
instrument rules in modeling monetary policy, I
am arguing for a development in the theory of
monetary policy that already happened, a long
time ago, in the theory of consumption.
McCallum and Nelson are attracted to modeling monetary policy with instrument rules rather
than targeting rules also for descriptive purposes
(see Section 4). If they were consistent, they
should also prefer to model consumption with
consumption functions rather than Euler conditions. But they are not consistent. Indeed, it is a
great irony that one of McCallum and Nelson’s
important contributions to macroeconomics is
precisely the introduction of Euler conditions in
modeling aggregate demand (for instance, in
McCallum and Nelson, 1999) and, with other
New Keynesian pioneers, the use of a condition
such as (3) to derive the New Keynesian aggregate-demand relation.
Do McCallum and Nelson really believe that
a modern central bank is less rational and goal-directed and a worse optimizer than the average
consumer? At least they must admit that policymakers in modern central banks have the advantage over the average consumer of being advised
by a staff with an increasing number of Ph.D.
economists with training in modern macroeconomics and intertemporal optimization. Indeed,
an increasing proportion of policymakers themselves are Ph.D. economists with such training!
A structural description of consumption
choice is essential in estimating meaningful and
robust empirical representations of consumption
behavior. In the same way, a structural description
of monetary policy is essential in estimating meaningful and robust representations of monetary
policy—for instance, parameters of a monetary
policy loss function. Furthermore, a structural
description of consumption choice is essential
in generating correct predictions in macro models
of the consequences of changes in the policy
regime. In the same way, a structural description
of monetary policy is essential in generating correct predictions in macro models of consequences
of changes in the monetary policy regime (in the
form of changes in parameters of the monetary
policy loss function), changes in the fiscal policy
regime, changes in the policy regime of other countries, or other changes in the relevant economic
or political environment.9
Indeed, microfoundations of policy are often
as helpful as microfoundations of private sector
behavior.
9. See Benigno and Benigno (2003) and Svensson (2004) for examples of the use of targeting rules in discussing international monetary cooperation and transmission of shocks.

3 AN EXAMPLE OF AN OPTIMAL TARGETING RULE
To present an example of a targeting rule, let
me consider a variant of the New Keynesian
model, a variant used in Svensson and Woodford
(2005) and Svensson (2003b), where inflation and
the output gap are predetermined.10 This variant
will also be used in discussing McCallum and
Nelson’s instrument rule analog in Section 4.
Private sector “plans” made in period t for
inflation and the output gap in period t +1, πt+1|t
and xt+1|t , are determined in period t by
(4)  $\pi_{t+1|t} - E[\pi_t] = \delta\left(\pi_{t+2|t} - E[\pi_t]\right) + \alpha_x x_{t+1|t} + \alpha_z z_{t+1|t},$

(5)  $x_{t+1|t} = x_{t+2|t} - \beta_r\left(i_{t+1|t} - \pi_{t+2|t} - r^{*}_{t+1|t}\right) + \beta_z z_{t+1|t}.$
The aggregate-supply relation, (4), follows
from the first-order condition for Calvo-style profit-maximizing price-setting firms. The firms are
assumed to index prices to the long-run average
inflation, E[πt], between the times of optimal price-setting, which implies that the long-run Phillips
curve is vertical. The parameter δ (0 < δ < 1) is a
discount factor, and αx > 0 is the slope of the short-run Phillips curve. The expression αz zt+1 is the
inner product of a vector of coefficients, αz , and
a vector of exogenous random variables, zt+1 (the
“deviation” in period t +1), such that αz zt+1 is a
simple representation of the difference between
this simple model and the true model of the transmission mechanism. The deviation may also
include any “cost-push” and other shocks. Then,
zt+1|t ≡ Et zt+1, where Et denotes expectations
conditional on information available in period t,
is the private sector’s estimate of the deviation—
the private sector’s “judgment” in period t. Thus,
the one-period-ahead inflation plan depends on
expected future inflation, πt+2|t ≡ Et πt+2, the output
gap plan, xt+1|t , and the private sector judgment,
zt+1|t .
10. A predetermined variable depends on the current period’s realizations of exogenous variables and previous periods’ realizations of endogenous and exogenous variables. Equivalently, a predetermined variable has exogenous one-period-ahead forecast errors (cf. Klein, 2000).
The aggregate-demand relation, (5), follows
from the first-order condition for optimal consumption choice by households. Here, it+1 is the
instrument rate set by the central bank in period
t +1, r*t+1 is an exogenous Wicksellian natural
interest rate (the real interest rate in a hypothetical
flexible-price economy with zero deviation), and
βr is a positive constant (in the simplest case, the
intertemporal elasticity of substitution in consumption). Thus, the one-period-ahead output
gap plan depends on the expected future output
gap, xt+2|t , the expected one-period-ahead real
interest-rate gap, it+1|t – πt+2|t – r*t+1|t , and the private sector judgment, zt+1|t (through the inner
product βz zt+1|t ).
Actual inflation and the output gap in period
t +1 will then differ from the plans because of
unanticipated shocks to the deviation and natural
interest rate:

$\pi_{t+1} - \pi_{t+1|t} = \alpha_z\left(z_{t+1} - z_{t+1|t}\right),$

$x_{t+1} - x_{t+1|t} = \beta_r\left(r^{*}_{t+1} - r^{*}_{t+1|t}\right) + \beta_z\left(z_{t+1} - z_{t+1|t}\right).$

Suppose the central bank conducts flexible
inflation targeting and has an intertemporal loss
function in period t,
(6)  $E_t \sum_{\tau=0}^{\infty} (1-\delta)\,\delta^{\tau} L_{t+\tau},$

where the period loss is

(7)  $L_t = \tfrac{1}{2}\left[\left(\pi_t - \pi^{*}\right)^{2} + \lambda x_t^{2}\right],$
weight on output gap stabilization relative to
inflation stabilization.
An equilibrium that minimizes the central
bank’s intertemporal loss function (under commitment in a timeless perspective) will fulfill the
first-order condition
(8)  $\pi_{t+1|t} - \pi^{*} + \dfrac{\lambda}{\alpha_x}\left(x_{t+1|t} - x_{t|t-1}\right) = 0$

for all periods t (Svensson and Woodford, 2005,
and Svensson, 2003b). This condition is the central
bank’s optimal targeting rule for private sector
inflation and output gap plans.
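To indicate where a condition of this form comes from, here is a compact sketch of the timeless-perspective first-order conditions for the textbook version of the problem, with π* = 0 and the simpler Phillips-curve timing πt = δ Et πt+1 + αx xt; it is not Svensson's derivation verbatim, and the dating of the plans in (8) reflects the predetermined timing in (4).

% Sketch: minimize E_t \sum_{j\ge 0} \delta^j L_{t+j} subject to \pi_{t+j} = \delta\,\pi_{t+j+1} + \alpha_x x_{t+j},
% with multiplier \varphi_{t+j} on the period-(t+j) constraint (textbook timing, \pi^* = 0).
\begin{align*}
\partial \pi_{t+j}: &\quad \pi_{t+j} + \varphi_{t+j} - \varphi_{t+j-1} = 0, \\
\partial x_{t+j}:   &\quad \lambda x_{t+j} - \alpha_x \varphi_{t+j} = 0
   \;\Longrightarrow\; \varphi_{t+j} = (\lambda/\alpha_x)\, x_{t+j}, \\
\text{combined:}    &\quad \pi_{t+j} + (\lambda/\alpha_x)\left(x_{t+j} - x_{t+j-1}\right) = 0.
\end{align*}

Imposing this condition in every period, including the initial one, is the timeless-perspective commitment; it is the same first-difference form that appears in condition (5) of McCallum and Nelson's paper above.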
Thus, optimal price-setting and consumption
choice by the private sector is described by the
first-order conditions (4) and (5), and optimal
monetary policy is characterized by the first-order
condition (8), the central bank’s targeting rule.
The behavior of the agents of the model—the firms, the households, and the central bank—is in each case described by a first-order condition, an attractive
symmetry. The central bank’s targeting rule is a
robust, compact, and, therefore, practical way to
describe the optimal monetary policy. In particular, it is robust to the central bank’s estimate of the
deviation—the central bank’s “judgment”—and
any additive shocks and their stochastic properties, in the sense that neither the judgment nor any
shocks enter into the targeting rule. The targeting
rule (8) is a structural representation of monetary
policy to the same extent that the aggregate-supply
and aggregate-demand relations are structural
representations of private sector behavior.
As discussed in some detail in Svensson
(2003b), the optimal targeting rule is simply, and
fundamentally, a restatement of the standard
efficiency condition of equality between the
marginal rates of substitution and transformation
between the target variables. The target variables—
the variables that enter into the loss function—
are inflation and the output gap. The marginal
rate of substitution between inflation and the
output gap follows from the form of the loss function, including the relative weight, λ. The marginal
rate of transformation between inflation and the
output gap follows from the form of the aggregate-supply relation, including the slope of the short-run Phillips curve, αx. Thus, these two parameters
appear in the targeting rule. Because the marginal
rate of transformation between inflation and the
output gap is completely determined by the
aggregate-supply relation, the aggregate-demand
relation and its parameters do not affect the targeting rule; the targeting rule is, in this case, robust
to the aggregate-demand relation.
Thus, fundamentally, the optimal targeting rule is simply the very robust and intuitive relation

MRS = MRT,

where MRS and MRT refer, respectively, to the marginal rates of substitution and transformation between the target variables. This relation holds regardless of the particulars of the model and is,
in this sense, model independent. Consider the
following instruction: “From your loss function,
find the marginal rate of substitution between your
target variables. From your view of the transmission mechanism of monetary policy, find your
marginal rate of transformation between the target
variables. Find and implement an instrument rate,
or instrument rate plan, that makes these marginal
rates of substitution and transformation equal.
Optimal monetary policy is, in principle, as easy
as that.” What more robust description of optimal
monetary policy can you find?
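A short worked version of this instruction for the simple model above may help (a sketch in LaTeX notation; the timeless-perspective timing, with x_{t+1|t} − x_{t|t−1} replacing the output gap, is taken from (8) rather than derived here):

    \text{MRS:}\quad \left.\frac{d\pi}{dx}\right|_{L_t\ \text{constant}} = -\frac{\lambda x}{\pi - \pi^{*}},
    \qquad
    \text{MRT:}\quad \left.\frac{d\pi}{dx}\right|_{\text{aggregate supply}} = \alpha_x .

    \text{Setting MRS} = \text{MRT:}\quad -\frac{\lambda x}{\pi - \pi^{*}} = \alpha_x
    \;\Longrightarrow\;
    \left( \pi - \pi^{*} \right) + \frac{\lambda}{\alpha_x}\, x = 0,

which is the form of the targeting rule (8) once π and x are replaced by the plan π_{t+1|t} and the quasi-difference x_{t+1|t} − x_{t|t−1}.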
The optimal equilibrium can be solved for
by combining the targeting rule, (8), with the
aggregate-supply relation, (4). This results in a
second-order difference equation that can be
solved for the optimal inflation and output gap
plans. Substitution of these plans into the
aggregate-demand relation, (5), gives the corresponding optimal instrument rate plan. Svensson
and Woodford (2005) and Svensson (2003b) discuss in some detail how the central bank can
implement (8) for private sector plans by “forecast
targeting”—constructing and announcing inflation
and output gap projections and a corresponding
instrument rate plan that “look good” in the sense
of fulfilling the analog of (8) for inflation and
output gap projections. McCallum and Nelson
do not go into those details.
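To make this solution procedure concrete, the following minimal Python sketch (not the authors' code) solves a finite-horizon, perfect-foresight version of the problem: it stacks an aggregate-supply relation of the form described above together with the targeting rule (8) over T periods, solves the resulting linear system for the inflation and output gap plans, and then backs out an instrument rate path from an aggregate-demand relation of the form described above. The functional forms, parameter values, judgment path, and terminal conditions are all illustrative assumptions.

    import numpy as np

    # Illustrative parameter values and judgment path (not the article's calibration).
    delta, alpha_x, alpha_z = 0.99, 0.1, 1.0
    beta_r, beta_z, lam = 0.5, 1.0, 0.5
    r_star, pi_star = 2.0, 2.0
    T = 40
    x_lag = 0.0                                # predetermined output gap plan x_{t|t-1}
    z = 0.8 ** np.arange(1, T + 1)             # decaying path for the judgment z_{t+1|t}

    # Unknowns: inflation-gap plans pihat[0..T-1] (pi - pi*) and output gap plans x[0..T-1].
    A = np.zeros((2 * T, 2 * T))
    b = np.zeros(2 * T)
    for t in range(T):
        # Aggregate supply: pihat_t = delta*pihat_{t+1} + alpha_x*x_t + alpha_z*z_t,
        # with the terminal approximation pihat_{T+1} = 0.
        A[t, t] = 1.0
        if t + 1 < T:
            A[t, t + 1] = -delta
        A[t, T + t] = -alpha_x
        b[t] = alpha_z * z[t]
        # Targeting rule (8): pihat_t + (lam/alpha_x)*(x_t - x_{t-1}) = 0.
        A[T + t, t] = 1.0
        A[T + t, T + t] = lam / alpha_x
        if t > 0:
            A[T + t, T + t - 1] = -lam / alpha_x
        else:
            b[T + t] = (lam / alpha_x) * x_lag

    plans = np.linalg.solve(A, b)
    pi_plan = pi_star + plans[:T]
    x_plan = plans[T:]

    # Back out the instrument rate plan from an aggregate-demand relation of the form
    # x_t = x_{t+1} - beta_r*(i_t - pi_{t+1} - r*) + beta_z*z_t.
    x_next = np.append(x_plan[1:], 0.0)
    pi_next = np.append(pi_plan[1:], pi_star)
    i_plan = r_star + pi_next + (x_next - x_plan + beta_z * z) / beta_r

    print(np.round(pi_plan[:5], 2), np.round(x_plan[:5], 2), np.round(i_plan[:5], 2))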

4 VOLATILITY FROM
INSTRUMENT RULES?
Instead, McCallum and Nelson provide a
more precise analysis of their previous claim (in
McCallum, 1999, p. 1493, and McCallum and
Nelson, 2000) that there is a useful instrument
rule analog of any targeting rule. They discuss
two alternatives: The central bank implements a
targeting rule, such as (8), directly; and the central
bank replaces the targeting rule (8) with an instrument rule such as

(9)   i_{t+1} - r^{*} - \pi_{t+1|t} = \mu \left[ \pi_{t+1|t} - \pi^{*} + \frac{\lambda}{\alpha_x} \left( x_{t+1|t} - x_{t|t-1} \right) \right],

where µ is a large positive number. The idea behind (9) is that, for a large µ, there would be an equilibrium fulfilling (4), (5), and (9), where the term
in the bracket on the right side of (9) is close to
zero and the instrument rate on the left side is
close to the optimal instrument rate. Therefore,
this instrument rule would result in an equilibrium close to the optimal equilibrium.
This is indeed the case, under some circumstances. But what is the point of McCallum and Nelson's instrument rule? First, for any finite µ, the
corresponding equilibrium is no longer optimal
but only close to optimal. Everything else equal,
optimal is better. Second, equation (9) is a more
complex equilibrium condition than (8). Everything else equal, simplicity is better than complexity. Third, the targeting rule (8) has the attractive
conceptual property of corresponding to a standard efficiency condition, the equality of the marginal rates of substitution and transformation
between the target variables. The instrument rule
(9) has no such intuitive interpretation. Hence,
there is a conceptual disadvantage to (9). Fourth,
it is no longer possible to solve for the optimal
inflation and output gap plans by combining (9)
only with the aggregate-supply relation, (4).
Because the instrument rate enters, (9) must now
be combined also with the aggregate-demand
relation, (5), leading to a higher-order system of
difference equations. Hence, there is a computational disadvantage to (9).11 Fifth, as discussed
in some detail in Svensson and Woodford (2005),
modifying targeting or instrument rules in this
way often affects the determinacy properties of
forward-looking models and is therefore not
innocuous.
Finally, as pointed out in Svensson and
Woodford (2005) and Svensson (2003b), a high
response coefficient, µ, can lead to instrument rate volatility under realistic information assumptions in which the central bank makes mistakes or even just rounding errors. From a practical perspective, a
11. More precisely, (8) can be combined with only (4) to solve for the optimal inflation and output gap plans. These can then be substituted into (5) to find the optimal instrument rate. If (9) is used instead of (8), it has to be combined with both (4) and (5) to solve for the optimal inflation and output gap plans.


very high response coefficient is a bizarre idea
and would cause serious problems, except under
very strange circumstances, as we shall see.
Thus, for several reasons, the instrument
rule (9) is inferior to the targeting rule (8). I have
not found any arguments by McCallum and
Nelson in favor of (9). McCallum and Nelson
might have thought that (9) would be easier to
implement than (8). But a more precise discussion
of the implementation reveals that this is not so:
Aside from the issue of volatility, they are equally
difficult or easy to implement.12
To examine the case of central bank mistakes,
McCallum and Nelson consider the targeting rule
with a random error, et ,
(10)   \pi_{t+1|t} - \pi^{*} + \frac{\lambda}{\alpha_x} \left( x_{t+1|t} - x_{t|t-1} \right) + e_t = 0,

and the alternative instrument rule,
(11)   i_{t+1} = r^{*} + \pi_{t+1|t} + \mu \left[ \pi_{t+1|t} - \pi^{*} + \frac{\lambda}{\alpha_x} \left( x_{t+1|t} - x_{t|t-1} \right) + e_t \right].

We can (in a simpler discussion of implementation than in footnote 12) interpret the instrument
rule as the central bank attempting to observe private sector plans πt+1|t and xt+1|t in period t, using
its previous observation of xt|t–1 in period t–1, to
calculate the expression
12. The instrument rule (9) is an implicit instrument rule, meaning that it is an equilibrium condition, where the variables on the right side depend on the instrument rate; there is a simultaneity aspect that needs to be handled. In contrast, an explicit instrument rule makes the instrument a function of predetermined variables, which are hence independent of the instrument. Hence, the implementation of an explicit instrument rule is simply a matter of observing the predetermined variables and calculating and announcing the corresponding instrument value. Implicit instrument rules and targeting rules are both equilibrium conditions, with variables that are simultaneously determined. Hence, their implementation is different from, and more complicated than, that of an explicit instrument rule. As discussed in detail in Svensson and Woodford (2005) and Svensson (2003b), their implementation requires the central bank to use its model of the transmission mechanism, make projections of the variables included in the target rule or implicit instrument rule, and find the combination of instrument and target-variable projections that fulfill the target rule or implicit instrument rule. Announcing these projections and implementing the instrument rate path will then induce the private sector to behave according to the desired equilibrium.


(12)   \pi_{t+1|t} - \pi^{*} + \frac{\lambda}{\alpha_x} \left( x_{t+1|t} - x_{t|t-1} \right)

for use in (9). In doing this, the central bank
introduces a random error, et .
McCallum and Nelson then actually calculate
the rational expectations equilibrium under the
implicit assumption that the error, et , is immediately observed and known to both the central bank
and the private sector in period t, before the instrument rate it+1 is announced. Suppose that the error
is positive, et > 0. Everything else equal, it would
raise the instrument rate by µet > 0, where µ is a
large number. The private sector, realizing this,
immediately responds by lowering their inflation
and output gap plans, πt+1|t and xt+1|t , according
to (4) and (5). Indeed, the private sector is assumed
to instantaneously adjust their plans so as to bring
about the rational expectations equilibrium for a
known error, et . Furthermore, the central bank is
then assumed to observe the adjusted plans, and
then calculate and implement the equilibrium
instrument rate according to (11). The result is
that the equilibrium instrument rate increases by
much less than µet . Indeed, with a large µ, (10) is
approximately fulfilled, so the equilibrium resulting from (11) ends up being similar to the equilibrium resulting from (10) (disregarding any
determinacy issues). In particular, the error introduces no more volatility for the instrument rule
(11) than for the targeting rule (10).
But the idea that the central bank and the
private sector immediately observe the error in
period t is strange, to say the least. If the central
bank observes the error, why does it not immediately correct the sum (12) so as to eliminate the
error and instead implement (9) without any error?
Assume, more realistically, that the error is
not immediately observed by the central bank or
the private sector. Instead, the private sector first
forms its plans under the assumption of an
expected central bank error equal to zero (assuming that the error is i.i.d. and has a zero mean).
The central bank then imperfectly observes those
plans, introduces the (measurement) error, and
announces the corresponding instrument rate, it+1,
for period t +1. Assume, realistically, that the
instrument rate can be announced only once in

each period. In this case, the error hits the instrument rate with the full force of µet . If the private
sector knows its own plans and how the central
bank calculates the instrument rate, the private
sector will be able to infer the error when it learns
it+1. If the announcement is early—in period t
rather than in period t +1—the private sector may
be able to adjust its plans after the announcement,
and the error will have an impact on the plans. If
the announcement is late—in period t +1—the
private sector plans cannot be adjusted and the
plans for inflation and the output gap are unaffected by the error. But, in either case, the error
still affects the instrument rate with the full magnitude µet . Under this realistic information assumption of the error not being immediately observed
by the central bank and the private sector, a large µ
will indeed introduce high volatility of the instrument rate, precisely as argued in Svensson and
Woodford (2005) and Svensson (2003b). Central
bankers, beware of McCallum and Nelson’s
instrument rule!
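The contrast between the two information assumptions can be reduced to a few lines. A minimal Python sketch (stylized: the inflation plan is held at the target in both cases and the output gap terms are set to zero, so only the bracket in (11) changes; the numbers are illustrative):

    mu, r_star = 50.0, 2.0
    pi_plan, pi_target = 2.0, 2.0     # plans at the inflation target; output gap terms set to zero
    e = 0.25                          # a 25-basis-point central bank error (percentage points)

    core = (pi_plan - pi_target) + 0.0          # pi_{t+1|t} - pi* + (lambda/alpha_x)*(x terms)

    # Error observed by everyone: plans adjust so that (10) holds, i.e., core = -e, and the
    # bracket in (11), core + e, is zero; the instrument rate is then independent of mu.
    i_observed = r_star + pi_plan + mu * ((-e) + e)

    # Error unobserved: plans are formed expecting a zero error, so core = 0 and the bracket
    # equals e; the error hits the instrument rate with the full force mu*e.
    i_unobserved = r_star + pi_plan + mu * (core + e)

    print(i_observed, i_unobserved)   # 4.0 versus 16.5: a 12.5-percentage-point swing from mu*e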
Even something as trivial as a small rounding
error could be problematic. Suppose that the
central bank rounds off its calculation of (12) to one decimal place in percentage points—that is, to the nearest 10 basis
points. This would introduce a uniformly distributed absolute error with a mean of 2.5 basis points.
With µ = 50, the corresponding mean absolute
error of the instrument rate is 125 basis points—
a sizeable error, especially because instrument
changes are seldom larger than 50 basis points.
In real-world monetary policy, the error, et , could
be substantially larger—say, a mean absolute
error of 50 basis points (0.5 percent) or more.
With µ = 50, this would lead to a huge mean
absolute instrument rate error of 2,500 basis points
or more.
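This arithmetic is easy to verify by simulation. A minimal Python sketch (the normal distribution assumed for the underlying calculation is arbitrary; only the rounding grid and µ matter for the result):

    import numpy as np

    rng = np.random.default_rng(0)
    mu = 50.0
    # Draw "true" values of the bracketed expression (12), in percentage points.
    true_vals = rng.normal(0.0, 1.0, size=1_000_000)
    rounded = np.round(true_vals, 1)           # round to 0.1 percentage point (10 basis points)
    e = rounded - true_vals                    # rounding error, approximately uniform on +/- 5 bp

    print("mean |e| (bp):", 100 * np.mean(np.abs(e)))           # about 2.5 basis points
    print("mean |mu*e| (bp):", 100 * mu * np.mean(np.abs(e)))   # about 125 basis points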
McCallum and Nelson (2005) defend their
informational assumptions by pointing out, in
their reply (“Commentary,” pp. 627-31), that
Svensson and Woodford (2005) and Svensson
(2003b) make information assumptions that imply
that any error would be immediately revealed. But
Svensson and Woodford (2005) and Svensson
(2003b) do not attempt to provide any detailed
discussion of such central bank errors and related
realistic information assumptions. This detail is

provided here, instead. One might have wished that McCallum and Nelson had considered
more realistic information assumptions on their
own, because these assumptions are so crucial to
their proposition. Indeed, realistic assumptions
completely contradict their proposition.
Thus, the criticism in Svensson and Woodford
(2005) and Svensson (2003b) of McCallum and
Nelson’s proposed instrument rule stands up to
scrutiny: An instrument rule such as (9) with a
very large response coefficient is a purely academic construction and completely impractical
for any real-world monetary policy. The first five
items in the list in the beginning of this section
provide additional reasons why such instrument
rules are inferior to targeting rules.

5 GENERAL TARGETING RULES?
The discussion here has so far concerned
“specific” targeting rules, in the terminology of
Svensson and Woodford (2005) and Svensson
(2003b). Those papers also define “general” targeting rules for monetary policy as an operational
formulation of the objectives for monetary policy—
for instance, in the form of listing the target variables and the corresponding target levels and
specifying the loss function to be minimized.
McCallum and Nelson clearly find this definition
confusing and not useful. My idea behind the
definition is that the instruction to “specify your
loss function in an operational way, construct forecasts of the target variables, and select and implement an instrument rate or an instrument rate path
such that the forecasts minimize the loss function”
is such a specific instruction to a central bank that
it deserves to be called a “rule,” in the common
(and dictionary, see Merriam-Webster, 1996) sense
of a rule being “a prescribed guide for conduct or
action.”13 Perhaps it would have been better, and
caused less confusion, to refer to this as “general
targeting” instead of a “general targeting rule.”14
13. This is the idea behind the word "rule" in the title of Svensson (1999), "Inflation Targeting as a Monetary Policy Rule."

14. It should not be necessary to state that "targeting," in the sense of "achieving a target," is best seen as equivalent to minimizing a loss function that is increasing in the deviation between the target variables and the target levels. That is, targeting and target variables refer to a loss function to be minimized and the arguments in that loss function. Previously, the literature has, by "targeting variable X," sometimes meant putting variable X in the instrument rule. To avoid confusion, it is better to call this "responding to variable X." Generally, the best way to target variable X, in the sense of minimizing a loss function increasing in deviations of variable X from its target level, is to respond, in the explicit instrument rule, to all the determinants of variable X. Even if inflation and the output gap are the only target variables, there are usually many more variables determining future inflation and the output gap, and it is optimal to respond to all of those. Generally, the mapping from a loss function to the optimal reaction function, the optimal explicit instrument rule, is quite complex, and the response coefficients of the optimal explicit instrument rule are complicated and sometimes nonmonotonic functions of the parameters of the loss function and the whole model. The size of the response coefficient of a variable is not an indicator of the weight of the variable in the loss function.

Walsh (2003) uses the term "targeting regime," which arguably is better.15
The point of a particular terminology and particular definitions is, of course, that it should contribute to more useful and precise discussion and analysis. I am inclined to concede that the term "general targeting rule" has not been successful and that Walsh's term "targeting regime" is better. Consequently, I am inclined to use that terminology in the future and to let "targeting rules" refer only to what I have previously called "specific" targeting rules.16

15. In any case, there is always a close relation between a (specific) targeting rule in the form of some scalar expression T_t(π_t, x_t) = 0 and a loss function of the form L_t = [T_t(π_t, x_t)]^2, because the former is a first-order condition for a minimum of the latter.

16. For a situation when a commitment to an optimal (specific) targeting rule is not possible, Svensson and Woodford (2005) and Svensson (2003b) discuss a "commitment to continuity and predictability," which involves minimizing the central-bank loss function while taking into account the cost of deviating from previously announced forecasts. This will make optimization under discretion result in the optimal outcome under commitment. Strangely, McCallum and Nelson describe this mechanism that induces the central bank to keep previous promises as "the central bank describing its objectives dishonestly to the public" (p. 598).

6 CONCLUSION
Counter to what McCallum and Nelson seem to take for granted, there is no reason at all to limit a study of robust simple monetary policy rules to instrument rules; simple targeting rules may have more desirable properties. Furthermore, targeting rules are a compact, robust, structural, and, therefore, practical representation of goal-directed monetary policy. From a descriptive


point of view, they amount to the same development in the theory of monetary policy as the
consumption Euler conditions in the theory of
consumption. Optimal targeting rules express the
intuitive optimality condition of equality between
the marginal rates of substitution and transformation of the target variables. They provide microfounded monetary policy, in the same way Euler
conditions provide microfounded private sector
behavior. Regardless of McCallum and Nelson’s
skepticism in McCallum and Nelson (2005), targeting rules for the analysis of monetary policy
have arrived and are, as indicated by the long list
of papers and books mentioned in the introduction, likely to stay. In particular, McCallum and
Nelson’s proposed instrument rule analog to any
targeting rule will, under realistic information
assumptions, lead to very high instrument rate
volatility; for other reasons, it is also inferior to
the targeting rule.

REFERENCES
Aizenman, Joshua and Frenkel, Jacob A. “Targeting
Rules for Monetary Policy.” Economics Letters,
1986, 21(2), pp. 183-87.
Benigno, Pierpaolo and Benigno, Gianluca. “Designing
Targeting Rules for International Monetary Policy
Cooperation.” ECB Working Paper No. 279,
European Central Bank, October 2003.
Benigno, Pierpaolo and Woodford, Michael. “Optimal
Monetary and Fiscal Policy: A Linear-Quadratic
Approach” in Mark Gertler and Kenneth Rogoff, eds.,
NBER Macroeconomics Annual 2003. Volume 18.
Cambridge, MA: MIT Press, 2004a, pp. 271-333.

Bernanke, Ben S. and Woodford, Michael. “Inflation
Forecasts and Monetary Policy.” Journal of Money,
Credit, and Banking, November 1997, 29(4, Part 2),
pp. 654-84.
Bernanke, Ben S. and Woodford, Michael. The
Inflation-Targeting Debate. Chicago: University of Chicago Press, 2004.
Black, Richard; Cassino, Vincenzo; Drew, Aaron;
Hansen, Eric; Hunt, Benjamin; Rose, David and
Scott, Alasdair. “The Forecasting and Policy System:
The Core Model.” Research Paper No. 43, Reserve
Bank of New Zealand, 1997.
Brash, Donald T. “Making Monetary Policy: A Look
behind the Curtains.” Speech given before the
Canterbury Employers’ Chamber of Commerce,
Christchurch, January 26, 2001.
Cecchetti, Stephen G. “Central Bank Policy Rules:
Conceptual Issues and Practical Considerations,”
in Helmut Wagner, ed., Current Issues in Monetary
Economics. Heidelberg: Physica-Verlag, 1998, pp.
121-40.
Cecchetti, Stephen G. “Making Monetary Policy:
Objectives and Rules.” Oxford Review of Economic
Policy, 2000, 16(4), pp. 43-59.
Cecchetti, Stephen G. and Kim, Junhan. “Inflation
Targeting, Price-Path Targeting, and Output
Variability,” in Ben S. Bernanke and Michael
Woodford, eds., The Inflation-Targeting Debate.
Chicago: University of Chicago Press, 2004, pp. 173-95.
Evans, George W. and Honkapohja, Seppo. “Monetary
Policy, Expectations and Commitment.” Unpublished
manuscript, 2004a (forthcoming in Scandinavian
Journal of Economics).

Benigno, Pierpaolo and Woodford, Michael.
“Optimal Stabilization Policy When Wages and
Prices are Sticky: The Case of a Distorted Steady
State.” Unpublished manuscript, 2004b (forthcoming
in Journal of the European Economic Association).

Giannoni, Marc P. and Woodford, Michael. “Optimal
Interest-Rate Rules: I. General Theory.” NBER
Working Paper No. 9419, National Bureau of
Economic Research, 2003a.

Bernanke, Ben S. “The Logic of Monetary Policy.”
Speech given before the National Economists Club,
Washington, DC, December 2, 2004;
www.federalreserve.gov.

Giannoni, Marc P. and Woodford, Michael. “Optimal
Interest-Rate Rules: II. Applications.” NBER Working
Paper No. 9420, National Bureau of Economic
Research, 2003b.


Giannoni, Marc P. and Woodford, Michael. “Optimal
Inflation Targeting Rules,” in Ben S. Bernanke and
Michael Woodford, eds., The Inflation-Targeting
Debate. Chicago: University of Chicago Press, 2004,
pp. 93-171.

McCallum, Bennett T. and Nelson, Edward. “Timeless
Perspective vs. Discretionary Monetary Policy in
Forward-Looking Models.” Federal Reserve Bank
of St. Louis Review, March/April 2000, 86(2), pp.
43-56.

Goodhart, Charles A.E. “Monetary Transmission Lags
and the Formulation of the Policy Decision on
Interest Rates.” Federal Reserve Bank of St. Louis
Review, July/August 2001, 83(4), pp. 165-81.

McCallum, Bennett T. and Nelson, Edward. “Targeting
vs. Instrument Rules for Monetary Policy.” Federal
Reserve Bank of St. Louis Review, September/
October 2005, 87(5), pp. 597-611.

Hall, Robert E. “Stochastic Implications of the Life
Cycle-Permanent Income Hypothesis: Theory and
Evidence.” Journal of Political Economy, December
1978, 86(6), pp. 971-87.
Hansen, Lars Peter and Sargent, Thomas J. “Robust
Control of Forward-Looking Models.” Journal of
Monetary Economics, April 2003, 50, pp. 581-604.
Hansen, Lars Peter and Sargent, Thomas J. “Robust
Control and Model Uncertainty.” Unpublished
manuscript, 2005; homepages.nyu.edu/~ts43/.
Klein, Paul. “Using the Generalized Schur Form to
Solve a Multivariate Linear Rational Expectations
Model.” Journal of Economic Dynamics and Control,
September 2000, 24, pp. 1405-23.
Kuttner, Kenneth N. “The Role of Policy Rules in
Inflation Targeting.” Federal Reserve Bank of St.
Louis Review, July/August 2004, 86(4), pp. 89-111.
McCallum, Bennett T. “Robustness Properties of a
Rule for Monetary Policy.” Carnegie-Rochester
Conference Series on Public Policy, 1988, 29, pp.
173-204.
McCallum, Bennett T. “Issues in the Design of
Monetary Policy Rules,” in John Taylor and Michael
Woodford, eds., Handbook of Macroeconomics.
Volume 1C. New York: North-Holland, 1999, pp.
1483-530.
McCallum, Bennett T. and Nelson, Edward. “An
Optimizing IS-LM Specification for Monetary
Policy and Business Cycle Analysis.” Journal of
Money, Credit, and Banking, August 1999, 31(3),
pp. 296-316.


Meltzer, Allan H. “Limits of Short-Run Stabilization
Policy.” Presidential address to the Western
Economic Association, July 3, 1986. Economic
Inquiry, January 1987, 25, pp. 1-13.
Merriam-Webster’s Tenth New Collegiate Dictionary.
Springfield, MA: Merriam-Webster, 1996.
Mishkin, Frederic S. “The Role of Output Stabilization
in the Conduct of Monetary Policy.” International
Finance, Summer 2002, 5(2), pp. 213-27.
Onatski, Alexei and Williams, Noah. “Empirical and
Policy Performance of a Forward-Looking Monetary
Model.” Unpublished manuscript, 2004;
www.princeton.edu/~noahw/forward15.pdf.
Preston, Bruce. “Adaptive Learning and the Use of
Forecasts in Monetary Policy.” Working paper,
Columbia University, 2004.
Rudebusch, Glenn D. and Svensson, Lars E.O.
“Policy Rules for Inflation Targeting,” in John
Taylor, ed., Monetary Policy Rules. Chicago:
University of Chicago Press, 1999, pp. 203-46.
Sims, Christopher A. “Macroeconomics and Reality.”
Econometrica, January 1980, 48(1), pp. 1-48.
Svensson, Lars E.O. “Inflation Forecast Targeting:
Implementing and Monitoring Inflation Targets.”
European Economic Review, June 1997, 41(6), pp.
1111-46.
Svensson, Lars E.O. “Inflation Targeting as a Monetary
Policy Rule.” Journal of Monetary Economics,
1999, 43(3), pp. 607-54.
Svensson, Lars E.O. “Independent Review of the
Operation of Monetary Policy in New Zealand:


Report to the Minister of Finance.” Unpublished
manuscript, 2001;
www.princeton.edu/~svensson/NZ/RevNZMP.htm.
Svensson, Lars E.O. “The Inflation Forecast and the
Loss Function,” in Paul Mizen, ed., Central Banking,
Monetary Theory and Practice: Essays in Honour
of Charles Goodhart. Volume 1. New York: Edward
Elgar, 2003a, pp. 135-52.
Svensson, Lars E.O. “What is Wrong with Taylor
Rules? Using Judgment in Monetary Policy through
Targeting Rules.” Journal of Economic Literature,
2003b, 41(2), pp. 426-77.
Svensson, Lars E.O. “The Magic of the Exchange
Rate: Optimal Escape from a Liquidity Trap in
Small and Large Open Economies.” Working paper,
Princeton University, 2004;
www.princeton.edu/~svensson.
Svensson, Lars E.O. “Optimization under Discretion
and Commitment, and Targeting Rules and
Instrument Rules.” Lecture notes. Princeton
University, 2005; www.princeton.edu/~svensson.
Svensson, Lars E.O; Houg, Kjetil; Solheim, Haakon
and Steigum, Erling. “An Independent Review of
Monetary Policy and Institutions in Norway.”
Norges Bank Watch 2002, Centre for Monetary
Economics, Norwegian School of Management BI,
2002; www.princeton.edu/~svensson.


Svensson, Lars E.O. and Woodford, Michael.
“Implementing Optimal Policy through Inflation-Forecast Targeting,” in Ben S. Bernanke and Michael Woodford, eds., The Inflation-Targeting Debate. Chicago: University of Chicago Press, 2005,
pp. 19-83.
Taylor, John B., ed. Monetary Policy Rules. Chicago:
University of Chicago Press, 1999.
Walsh, Carl E. Monetary Theory and Policy. Second
edition. Cambridge, MA: MIT Press, 2003.
Walsh, Carl E. “Parameter Misspecification and
Robust Monetary Policy Rules.” Working paper,
University of California–Santa Cruz, 2004a;
econ.ucsc.edu/~walshc.
Walsh, Carl E. “Robustly Optimal Instrument Rules
and Robust Control: An Equivalence Result.”
Journal of Money, Credit, and Banking, December
2004b, 36(6), pp. 1105-13; econ.ucsc.edu/~walshc.
Woodford, Michael. Interest and Prices: Foundations
of a Theory of Monetary Policy. Princeton, NJ:
Princeton University Press, 2003.
Woodford, Michael. “Inflation Targeting and Optimal
Monetary Policy.” Federal Reserve Bank of St. Louis
Review, July/August 2004, 86(4), pp. 15-41.


Commentary
Bennett T. McCallum and Edward Nelson
The following are comments in response to Lars Svensson’s “Targeting versus Instrument Rules
for Monetary Policy: What Is Wrong with McCallum and Nelson?”
Federal Reserve Bank of St. Louis Review, September/October 2005, 87(5), pp. 627-31.

We are very pleased that Lars Svensson refers to us as “good friends,” for we certainly view
him in that manner. We therefore
regret that we have little agreement with the
manner in which he has represented the arguments in our paper (McCallum and Nelson, 2005).
To begin with, to characterize our paper as
“destructive” is, we believe, not justified by the
content of the paper. One of its main purposes
is to recognize and emphasize that there is no
single approach to policy rule analysis that is
uniquely legitimate; targeting rules are appropriate and convenient for some problems, whereas
instrument rules are for others. That this is our
position should be clear from our previous writings, from the explicit passage on our page 598,1
and from the fact that over half of our paper—
Sections 5 and 6—is devoted to analysis showing that instrument rules can be used to approximate targeting rules as closely as desired. In
what sense is any of this “destructive,” rather
than merely expressing a somewhat different,
more eclectic, approach to policy rule analysis?
Also, to suggest that we are engaged in a “struggle
1. “It is not our intention to argue that analysis with instrument rules is in all respects preferable to the use of targeting rules. Even if we held that belief, moreover, we would not think it socially desirable for all researchers to employ the same approach.”

against targeting rules” is to suggest something
that we could not imagine that Lars would
believe, especially because we use targeting rules
in our own work—e.g., McCallum and Nelson
(2004) and Jensen and McCallum (2002).
On his p. 613, Svensson emphasizes that
“there is now a rapidly growing literature by
many authors that successfully applies targeting
rules to monetary policy analysis” and hints that
historical inevitability is on his side (page 613,
paragraph 2). We agree that an increasing fraction
of monetary policy rule analysis is based on targeting rules, but this fact does not settle any of
the actual issues. In Svensson’s passages, for
example, there is a good bit of appealing rhetoric
but no indication of how a study is judged to be
“successful.” Besides, there are many types of
contemporary phenomena that seem inevitable
yet highly undesirable.
In his footnote 3, Svensson says that we “seem
to believe that no central bank is using a targeting
rule and that a central bank needs to announce
an explicit loss function to use a targeting rule,”
which he denies. But in this regard, it is important to note that our paper interprets a targeting
rule as definitionally given by optimality conditions with respect to a particular objective function and particular model. Our justification for
this stated limitation is based on Svensson’s

Bennett T. McCallum is a professor of economics at Carnegie Mellon University and a research associate of the National Bureau of Economic
Research. Edward Nelson is a research officer at the Federal Reserve Bank of St. Louis and a research affiliate of the Centre for Economic
Policy Research.


practice as well as his several writings on the
subject prior to his Journal of Economic Literature
paper (2003a). It is adopted explicitly in our
paper—see footnote 6.

INVALID ANALOGY WITH
CONSUMPTION THEORY
In his Section 2, Svensson makes the observation, with which we agree fully, that it is desirable
to model consumption decisions—and, for that
matter, all other private sector spending and pricing decisions—as reflecting optimizing behavior
by private agents in the economy. But Svensson’s
conclusions about the implications of this observation for modeling central bank behavior constitute a non sequitur. Dynamic general equilibrium
theory implies that valid policy analysis—for
example, working out the implications for inflation or output gap variability of a particular
monetary policy rule—always requires modeling
the private sector as optimizing. By contrast, how
central bank behavior should be modeled depends
on the purpose of the analysis. If the intention is
to work out the effects of a constant money growth
rule, then the central bank should be modeled as
following a constant money growth rule. If the
intention is to work out the effects of a fixed
exchange rate regime, the central bank should be
modeled as pursuing a fixed exchange rate. And
if the intention is to work out the effects of the
regimes that we observe in practice, the analyst
should strive to model central bank behavior
realistically.
Svensson, of course, argues that the most
realistic characterization of inflation targeting is
as a targeting rule. We have presented evidence
that casts doubt on this characterization and have
argued that an instrument rule characterization
of actual central bank behavior is preferable. To
emphasize, we argued that this was a valid characterization of the manner in which some inflationtargeting central banks actually carried out their
policy decisions. We rested our argument not on
the “descriptive” grounds Svensson attributes to
us—i.e., on the ex post reduced-form relationships
between the monetary policy instrument and

other variables—but on documentation produced
by these inflation-targeting central banks of their
practices and on the support that that evidence
provides for an instrument rule interpretation of
policy.2 If our claim is valid, then the appropriate
means of carrying out a structural analysis of inflation targeting is to use a model that combines the
private sector’s optimality conditions with an
instrument rule (possibly including expectational
terms) estimated over the period of inflation targeting. There is no internal inconsistency, or irony,
in following this procedure. Rather, the procedure
takes into account the necessary condition for a
valid structural model (i.e., private sector optimizing behavior), while also using the policy rule
specification that is the best approximation of
actual practice.

FRIEDMAN’S k-PERCENT RULE
In his latest discussion, Svensson goes beyond
his argument that targeting rules closely describe
the practice of inflation-targeting central banks,
to claim that even “Friedman’s k-percent rule is
a targeting rule!” (2005, p. 614). A more careful
consideration of Friedman’s own description of
his proposed rule, however, rules out a targeting
rule interpretation.
Svensson argues that, because Friedman’s
proposal involves targeting growth in a definition
of the money supply that includes commercial
bank deposits, the targeted variable is necessarily
out of direct control of the central bank. Therefore,
he contends, the effort of the central bank to target
a monetary aggregate can be characterized as a
targeting rule. But the specifics of Friedman’s
proposal clearly contradict targeting rule practice.
Consider first the specific proposal for the k-percent money growth rule outlined in Friedman
2. This documentation included evidence that inflation-targeting countries viewed discretionary adjustments to policy as adjustments to the settings implied by an instrument rule. The implication of this for our discussion of Svensson is that, contrary to the suggestions of Svensson (2003a), central banks’ use of “judgment” is not evidence in favor of targeting rules over instrument rules as a characterization of inflation targeting. Svensson’s (2005) footnote 8 muddies the waters by focusing on the discretion-vs.-commitment issue rather than the targeting-vs.-instrument rules issue that is at the heart of our debate.


(1960). The 1960 proposal included a list of
reforms to be undertaken prior to implementing
the rule, including the introduction of 100 percent
reserve requirements on those commercial banks
whose deposits were included in the proposed
target aggregate. This reform would make the
target identical to the monetary base—immediately
making the k-percent rule an instrument rule.
More frequently, Friedman has set out a k-percent money growth proposal without suggesting the major overhaul of the financial system
implied by a 100 percent reserve requirement. In
that case, the definition of money targeted, if it
includes commercial bank deposits, will not be
subject to exact central bank control. Does this
rule proposal correspond to a targeting rule?
Clearly not. Consider the following specifics of
the proposal as given by Friedman (1982, p.117):
Set a target for several years ahead for a single
aggregate—for example, M2 or the base…
Estimate the change over an extended
period, say three to six months, in the Fed’s
holdings of securities that would be necessary
to approximate the target path over that period.
Divide that estimate by 13 or 26. Let the Fed
purchase precisely that amount every week…
Finally, announce in advance and in full
detail the proposed schedule of purchases and
stick to it.

Friedman’s proposal here refers to targeting
either “M2 or the base.” The latter again corresponds simply to a constant-growth instrument
rule for the base. In the case of M2 targeting,
denoting the log of the money multiplier by
mu = log(M2) – h, with h the log of the monetary base, this rule is given by ∆h_t = (k/400) – 1.0·E_{t–1}∆mu_t, that is, a simple instrument rule with
an intercept term and one further argument, the
expected change in the money multiplier.3 Importantly, Friedman’s proposal explicitly specifies
the policy instrument (the monetary base) with
3. Note that Friedman (1982) explicitly disavows using period-t information in pursuing the monetary target. His proposal therefore cannot correspond to a targeting rule because an optimal-control approach to targeting M2 would utilize period-t information helpful in hitting the target. Friedman is clearly willing to forfeit possible extra precision in hitting the target in favor of making the target one that can be pursued by a fully predictable instrument rule for the monetary base.


which to pursue the target. A targeting rule, by
contrast, generally does not explicitly refer to the
policy instrument.
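To see in what sense this is an explicit instrument rule, the following minimal Python sketch (our notation; the AR(1) process and parameter values for the money multiplier are illustrative assumptions) simulates the base rule ∆h_t = (k/400) – E_{t–1}∆mu_t: the base responds only to period-(t–1) information, and M2 growth equals the target plus the unforecastable multiplier surprise, so it averages roughly k percent at an annual rate.

    import numpy as np

    rng = np.random.default_rng(1)
    k = 4.0                    # annual percent target for M2 growth
    rho, sigma = 0.5, 0.002    # AR(1) process for the quarterly change in the log multiplier
    T = 400

    d_mu = np.zeros(T)         # quarterly change in the log money multiplier
    d_h = np.zeros(T)          # quarterly change in the log monetary base
    d_m2 = np.zeros(T)         # quarterly change in log M2
    for t in range(1, T):
        d_mu[t] = rho * d_mu[t - 1] + sigma * rng.standard_normal()
        expected_d_mu = rho * d_mu[t - 1]       # E_{t-1} of the multiplier change
        d_h[t] = k / 400.0 - expected_d_mu      # Friedman-style base rule
        d_m2[t] = d_h[t] + d_mu[t]              # log M2 = h + mu

    print("mean annualized M2 growth (%):", 400 * d_m2[1:].mean())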
While we disagree with Svensson’s characterization of Friedman’s rule, his surrounding discussion does indicate that his perspective is coming
closer to ours. Whereas Svensson once devoted
considerable effort to arguing that “[i]nflation-targeting central banks should specify explicit
loss functions…[including] a specific relative
weight on output-gap stabilization” (Svensson,
2003b, p. 148), Svensson (2005) goes so far as to
say that a “simple and robust monetary policy
rule is indeed an attractive idea,” especially if
the central bank “does not trust its information
about…the output gap” and in light of uncertainty
about “the true model of the transmission mechanism of monetary policy.” These are, of course,
long-standing arguments of those who argue for
instrument rules. A targeting rule is hardly an
ideal way of treating these problems. The lack of
information about the output gap that Svensson
acknowledges would make it hard for central
bank committee members to settle on a way of
estimating the gap, let alone follow the Svensson
(2003b) proposal of announcing a welfare function
with an explicit output gap weight. Proceeding
with such an announcement in the face of uncertainty about the output gap would hardly be the
way to create a “robust” rule and so would be
unattractive by Svensson’s own standard. As we
emphasized in McCallum and Nelson (2005), the
more general dilemma for targeting rules is that
they are especially vulnerable to robustness problems because of their model dependency. Levin
and Williams’s (2003) results graphically depict
the bloodbath that can result from imposing targeting rules derived from one model specification
on models that come from other areas of the
specification space.

VOLATILITY ANALYSIS
Let us now consider Svensson’s discussion
of our analytical contribution concerning interest
rate variability. We are, of course, quite pleased
that he acknowledges that our claims regarding
volatility are correct, under the information

assumptions utilized in Svensson (2003a) and
Svensson and Woodford (2005). We had been
under the impression that these assumptions
reflected careful consideration, as is typically the
case in the work of both Svensson and Woodford.
But now Lars goes on to propose new assumptions
as representing “realistic” information conditions.
We find the particulars of his specification to be
unclear—e.g., concerning “early” versus “late”
in a given time period and especially the notion
that the central bank would “observe” its own
error; so, rather than attempting a new discussion,
let us state our position regarding information
assumptions that we believe to be appropriate
for monetary policy analysis. In previous work
(e.g., McCallum and Nelson, 2004), we have suggested that, when setting it (the one-period instrument interest rate in period t ), the central bank
does not know the values of πt or xt (the inflation
rate and output gap, respectively, during period
t ). Let us now provisionally agree with Svensson
that private agents also do not know πt or xt when
making decisions in period t. But they do know
it, for financial market prices are observable day
by day (or hour by hour), so it rather than Et–1it
appears in equation (7). Then, under the assumption of rational expectations and with common
information sets—except that private agents do
not know et –1, the central bank error made in setting it—private agents will be able to infer et –1
from the central bank’s policy rule together with
the specification of the economy using equation
(12) or (15). Therefore, expectations formed in
period t of any variable for period t or the future
will be the same for the central bank and private
agents. The foregoing is, however, equivalent to
the assumption used in our paper (as well as in
Svensson, 2003a, and Svensson and Woodford,
2005). So the analysis as presented in our Section 6
seems to be realistically appropriate, as well as
consistent with the two just-cited papers.
In the section of his comment that discusses
volatility, Svensson also presents five claims
(“first,” “second,” etc.) that are logically irrelevant
to the discussion—of course his equation (9) is
an approximation to (8)!—except for the fourth
item. This one is basically incorrect, however,
because to implement Svensson’s (8) requires

use of his (5); the function of (9) in this context
is to constitute one way of implementing (8).

CONCLUSION
In conclusion, we note that on p. 621
Svensson warns as follows: “Central bankers,
beware of McCallum and Nelson’s instrument
rule!” But the rule he is referring to—with a very
large value of µ1—is one that we say (explicitly)
that we have not recommended (please see our
discussion on p. 603). It was used in our 2004
paper as an implementation device; in our current
paper, it serves to illustrate our analytical claim,
namely, that our instrument rule (actually, class
of rules) is usually superior in performance, with
respect to Lars’s own criterion, to the targeting
rule that it approximates.
Finally, we turn to Svensson’s featured question: “What is wrong with McCallum and Nelson?”
In terms of personal characteristics, we would
admit to a multitude of flaws, weaknesses, and
fundamental defects. In terms of the arguments
of our paper, however, we believe that the correct
answer is: “Nothing.”

REFERENCES
Friedman, Milton. A Program for Monetary Stability.
New York: Fordham University Press, 1960.
Friedman, Milton. “Monetary Policy: Theory and
Practice.” Journal of Money, Credit, and Banking,
February 1982, 14(1), pp. 98-118.
Jensen, Christian, and McCallum, Bennett T. “The
Non-Optimality of Proposed Monetary Policy
Rules under Timeless-Perspective Commitment.”
Economics Letters, 2002, 77(2), pp. 163-68.
Levin, Andrew T. and Williams, John C. “Robust
Monetary Policy with Competing Reference Models.”
Journal of Monetary Economics, 2003, 50(5), pp.
945-75.
McCallum, Bennett T. and Nelson, Edward. “Timeless
Perspective vs. Discretionary Monetary Policy in
Forward-Looking Models.” Federal Reserve Bank


of St. Louis Review, March/April 2004, 86(2), pp.
43-56.
McCallum, Bennett T. and Nelson, Edward. “Targeting
vs. Instrument Rules for Monetary Policy.” Federal
Reserve Bank of St. Louis Review, September/
October 2005, 87(5), pp. 597-611.
Svensson, Lars E.O. “What Is Wrong with Taylor
Rules? Using Judgment in Monetary Policy through
Targeting Rules.” Journal of Economic Literature,
2003a, 41(2), pp. 426-77.
Svensson, Lars E.O. “The Inflation Target and the
Loss Function,” in Paul Mizen, ed., Central Banking,
Monetary Theory and Practice: Essays in Honour
of Charles Goodhart. Volume 1. Cheltenham:
Edward Elgar, 2003b, pp. 135-52.
Svensson, Lars E.O. “Targeting Rules vs. Instrument
Rules for Monetary Policy: What Is Wrong with
McCallum and Nelson?” Federal Reserve Bank of
St. Louis Review, September/October 2005, 87(5),
pp. 613-25.
Svensson, Lars E.O. and Woodford, Michael.
“Implementing Optimal Policy through Inflation-Forecast Targeting,” in Ben S. Bernanke and
Michael Woodford, eds., The Inflation-Targeting
Debate. Chicago: University of Chicago Press, 2005,
pp. 19-83.


The Monetary Instrument Matters
William T. Gavin, Benjamin D. Keen, and Michael R. Pakko
This paper revisits the debate over the money supply versus the interest rate as the instrument of
monetary policy. Using a dynamic stochastic general equilibrium framework, the authors examine
the effects of alternative monetary policy rules on inflation persistence, the information content
of monetary data, and real variables. They show that inflation persistence and the variability of
inflation relative to money growth depend on whether the central bank follows a money growth
rule or an interest rate rule. With a money growth rule, inflation is not persistent and the price
level is much more volatile than the money supply. Those counterfactual implications are eliminated by the use of interest rate rules whether prices are sticky or not. A central bank’s use of
interest rate rules, however, obscures the information content of monetary aggregates and also
leads to subtle problems for econometricians trying to estimate money demand functions or to
identify shocks to the trend and cycle components of the money stock.
Federal Reserve Bank of St. Louis Review, September/October 2005, 87(5), pp. 633-58.

Central banks around the world have
long settled on the use of interest rates
as instruments to implement monetary
policy; but, until recently, there was
no sound theory supporting this choice. The
intuition for why interest rate rules dominate is
straightforward in a world with sticky prices and
interest-elastic money demand (see boxed insert).
When the demand for real money balances is
interest elastic, any shock that affects the path
for expected inflation or the real interest rate
causes money demand to shift. When the central
bank follows a money growth rule, this shift
causes the price level to jump. If price adjustment
is costly, this jumping can create real distortions.
When the central bank follows an interest rate
rule, on the other hand, the money stock is
endogenous and absorbs the adjustment. The
central bank can accommodate this jump in the
money stock almost instantaneously and with
little cost.


This article explains the theory behind this
intuition by comparing and contrasting the properties of four monetary general equilibrium models.
The four models differ along two dimensions: the
monetary authority’s policy rule and the nature
of price adjustments. We examine two monetary
policy rules—an exogenous money growth rule
and an interest rate rule based on Taylor (1993)—
and two price adjustment mechanisms—flexible
prices found in a typical real business cycle (RBC)
model and sticky prices found in a typical New
Keynesian model. The closest work to this article
is Kim (2003), which looks at how the cyclical
nature of the real economy depends on the specification of the policy rule and the form of the
nominal frictions. The author concludes that getting the policy rule right is at least as important
as getting the nominal frictions right. Our paper
emphasizes the behavior of money and prices,
but also reports results for real variables that are
consistent with Kim’s findings.

William T. Gavin is a vice president and economist and Michael R. Pakko is a senior economist at the Federal Reserve Bank of St. Louis.
Benjamin D. Keen is an assistant professor of economics at the University of Oklahoma. The authors thank Dick Anderson and Ed Nelson for
helpful comments. Michelle Armesto provided research assistance.


MONEY DEMAND AND INTEREST RATE RULES
The intuition for the difference between an interest rate and a monetary aggregate instrument
can be gleaned from the analysis of money demand in Friedman (1969). The demand for real money
balances is a function of a scale variable, such as income (entering positively), and an opportunity cost variable, such as the nominal interest rate (entering negatively), such that M_t^D/P_t = H(Y_t, R_t). Panel A in the accompanying figure is based
on Figure 3 from Friedman’s illustration of the response of money demand and the price level following the central bank’s surprise decision to permanently raise the money growth trend (inflation)
from zero to a positive number—that is, 2 percent in Panel A. The money supply and the price
level are indexed to 100 and remain fixed before the policy change. With the 2 percent rise of
inflation, the nominal interest rate rises by 2 percent and the demand for money drops immediately.
Because the central bank has exogenously fixed the money growth rate, the price level must rise
to accommodate the fall in real balances. In an economy where the long-run expected inflation
trend is subject to shocks, the inflation rate is highly variable relative to the money growth rate.
Panel B illustrates what happens if the central bank uses the interest rate as the monetary
policy instrument. In that case, the credible announcement of 2 percent inflation requires raising
the nominal interest rate target by 2 percent. The increase also leads to an immediate drop in the
demand for real money balances. With a nominal interest rate rule, however, the money supply
is endogenous and inflation is fixed by the policy rule. It is the money stock, rather than the price
level, that responds by shifting downward to clear the money market. Hence, in an economy with
stochastic inflation and an interest rate rule for monetary policy, the money growth rate is much
more variable than the inflation rate. That result is consistent with our observations from modern
economies where central banks generally use the nominal interest rate to implement policy.

Figure B1
Monetary Policy Rules and a Change in the Inflation Target
A. Money Growth Rule*
B. Interest Rate Rule*
[Each panel plots Log P and Log M (the price level and the money stock) against periods 0 through 100; the vertical axis runs from 4 to 6.]
*There is a shift from 0 to 2 percent in the inflation objective in period 50.
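A minimal Python sketch of the experiment behind Figure B1 (a stylized calculation, not the authors' model: the log money demand function m − p = constant − η·R and all numbers, including the semi-elasticity η, are illustrative assumptions):

    import numpy as np

    T, t0 = 100, 50
    eta = 5.0                                      # interest semi-elasticity of money demand
    periods = np.arange(T)
    infl = np.where(periods < t0, 0.0, 0.02)       # inflation objective: 0 percent, then 2 percent
    R = 0.03 + infl                                # nominal rate rises one-for-one with expected inflation
    rb = 5.0 - eta * R                             # log real balances demanded (m - p)

    # Panel A: money growth rule -- the money path is exogenous and the price level adjusts.
    m_A = 5.0 + np.concatenate([np.zeros(t0), 0.02 * np.arange(T - t0)])
    p_A = m_A - rb

    # Panel B: interest rate rule -- the price path is pinned down and the money stock adjusts.
    p_B = p_A[0] + np.concatenate([np.zeros(t0), 0.02 * np.arange(1, T - t0 + 1)])
    m_B = p_B + rb

    print("price jump at t0 under the money rule:", round(p_A[t0] - p_A[t0 - 1], 3))
    print("money jump at t0 under the interest rate rule:", round(m_B[t0] - m_B[t0 - 1], 3))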


Early dynamic stochastic general equilibrium
models that featured money as the policy instrument also included flexible prices—and hence
implied small effects of monetary shocks on real
variables and unrealistically high price-level variability with low inflation persistence; examples
include Cooley and Hansen (1989, 1995), Lucas
(1990), Fuerst (1992), and Christiano and
Eichenbaum (1992).1 Later, models with sticky
prices came to dominate the literature; Cho and
Cooley (1995), Kimball (1995), King and Wolman
(1996), and Yun (1996) are representative of this
approach.2
Kimball (1995), for example, examined a
sticky-price model that assumed a constant velocity of money and an exogenous money supply
rule. This article demonstrates that two distinct
elements omitted from Kimball’s model are crucial
for understanding price dynamics. The first is an
interest-sensitive money demand function, and
the second is a monetary policy reaction function
based on an interest rate rule. King and Wolman
(1996) present a model with a shopping-time role
for money demand that is interest elastic, but most
of their analysis assumes that the central bank
either controls inflation directly or follows an
exogenous money growth rule. They include
only a very brief analysis of money growth rules
versus interest rate rules.
We extend the methodology of King and
Wolman (1996) to analyze more thoroughly the
important distinctions between flexible-price
and sticky-price models on the one hand and
between interest rate rules and money supply
rules on the other. Even though central banks do
not use money growth rules in practice, we compare that regime to interest rate regimes because
much of our conventional wisdom about money
and monetary policy comes from analysis using
models with money growth rules. We also emphasize a distinction between the steady-state inflation

rate and the inflation target. Historically, most central banks have not had constant inflation targets; rather, their targets have evolved over time. Here
the expected inflation target converges to the
steady state in the long run, but it can deviate for
a considerable period. Consequently, we consider
two types of policy shocks: a highly persistent
inflation target shock and a relatively short-lived
liquidity shock.

THE MODEL
In this model framework, agents are infinitely
lived. Households get utility from consumption
and leisure but need to spend time shopping for
consumption goods; they can reduce the shopping
time for a given level of consumption by holding
higher money balances. The interest elasticity of
money demand is a key parameter for determining
the nature of inflation dynamics. Households
consume a composite good that is a combination
of outputs from monopolistically competitive
firms. Sticky prices are introduced using a Calvo
(1983) specification that allows for the possibility
of perfect price flexibility as a nested special case.3
Thus, it is straightforward to hold all other model
features constant when comparing sticky-price
and flexible-price specifications. Monetary policy
is conducted through lump-sum monetary transfers that are determined by the central bank’s
monetary policy rule. Our focus is on the differences implied by policy rules that use the short-term interest rate as an instrument versus those that target money growth directly.

Households
Each period, households maximize the discounted present value of the expected utility they
get from consumption and leisure:
(1)    U = E_0 Σ_{t=0}^∞ β^t u(c_t, l_t),

where β is the household's discount factor, c_t is the consumption bundle, and l_t is leisure time.

1. See Dittmar, Gavin, and Kydland (2005) and Dressler (2003) for recent examples with flexible-price models with interest rate rules. Ireland (2003) examines the role of policy in estimated versions of both flexible- and sticky-price models.

2. See also influential papers by Ireland (1996), Rotemberg and Woodford (1997), and McCallum and Nelson (1999). This basic sticky-price model is developed rigorously by Woodford (2003, Chap. 3).

3. The version with Calvo-style pricing and a money growth rule was presented by Keen (2004). Appendix A provides complete details of our model specification and solution procedures.


The momentary utility function is assumed to
take the form
u(c_t, l_t) = ln(c_t) + χ l_t^{1−σ_1} / (1 − σ_1),

where the values of the preference parameters σ1
and χ are positive.
The household maximizes (1) subject to a
budget constraint
(2)    P_t c_t + P_t i_t + M_t + B_t = P_t w_t n_t + P_t q_t k_t + D_t + M_{t−1} + R_{t−1} B_{t−1} + T_t,

where Pt is the nominal goods price; it is investment; kt is the capital stock, which evolves following the capital accumulation process, kt+1 = it +
(1 – δ )kt, and depreciates at rate δ. Mt and Bt are
stocks of money and bonds, wt is the real wage
rate, qt is the real rental price of capital, and Rt –1
is the gross nominal interest rate on bonds purchased at time t –1. The household also receives
monetary transfers, Tt, and distributed profits
from the goods-producing sector, Dt .
The household also faces a time constraint,
which specifies that total time (normalized to
unity) can be allocated to leisure, labor, and time
spent in transactions-related activities, st:
(3)    l_t + n_t + s_t = 1.

The amount of time households spend shopping, st,
can be reduced by holding larger money balances
relative to nominal consumption expenditures:
(4)    s_t = ζ (P_t c_t / M_t)^γ.

Money-demand elasticities are determined by the
curvature parameter, γ > 0, and ζ > 0 is a scale
parameter used to calibrate s.
As discussed by Lucas (2000), this type of
shopping-time specification implies a set of general equilibrium relationships that resemble a
standard money-demand function. In particular,
after combining some of the first-order conditions from the household’s utility maximization
problem, optimal real money balances can be
expressed as
(5)    M_t / P_t = [ ζ γ w_t c_t^γ / ((R_t − 1)/R_t) ]^{1/(1+γ)}.
With the calibration γ = 1, this implies an interest
elasticity of –1/2. Note also that the real wage rate
and consumption spending enter this relationship
in such a way that their combined relationship
with real money balances is one-for-one; that is,
so long as productivity and consumption move
together (as they do on the steady-state path), the
scale elasticity of this money “demand” function
is unity.
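To fix ideas, the sketch below evaluates the money demand relationship in equation (5) numerically. The parameter values, including the scale parameter ζ, are illustrative rather than the paper's calibration, and the function name is ours.

```python
import math

# Numerical sketch of equation (5): M/P = [zeta*gamma*w*c**gamma / x]**(1/(1+gamma)),
# where x = (R - 1)/R is the opportunity cost of holding money.
def real_balances(w, c, x, gamma=1.0, zeta=0.01):
    return (zeta * gamma * w * c ** gamma / x) ** (1.0 / (1.0 + gamma))

w, c, R = 1.0, 0.75, 1.01          # illustrative values; R is a gross quarterly rate
x = (R - 1.0) / R

# Elasticity with respect to the opportunity-cost term: -1/(1+gamma) = -1/2 when gamma = 1.
dx = 1e-6
elasticity = (math.log(real_balances(w, c, x + dx)) - math.log(real_balances(w, c, x))) / (
    math.log(x + dx) - math.log(x)
)
print(round(elasticity, 3))        # approximately -0.5

# Scale elasticity: doubling w and c together doubles M/P, i.e., a unit scale elasticity.
print(round(real_balances(2 * w, 2 * c, x) / real_balances(w, c, x), 3))   # approximately 2.0
```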

Firms
The composite consumption good is a combination of outputs, yj,t , produced in period t by
monopolistically competitive firms. Each firm’s
output comes from a production function,
(6)    y_{j,t} = Z_t f(k_{j,t}, n_{j,t}),

where j indicates the number of periods since
the firm last adjusted its price, nj,t is the firm’s
demand for labor, kj,t is the firm’s demand for
capital, and Zt is an economywide productivity
factor. The productivity factor is assumed to follow
a stationary autoregressive process,
(7)    ln(Z_t) = ρ_Z ln(Z_{t−1}) + (1 − ρ_Z) ln(Z) + ε_{Z,t},

where Z is the steady-state value of Zt and εZt is a
mean-zero, independently and identically distributed (i.i.d.) shock. Every period, each firm must
determine (i) the cost-minimizing combination
of nj,t and kj,t given its output level, the real wage
rate, wt, and the real rental rate of capital, qt ; and
(ii) whether or not it can adjust its price. Sticky
prices are introduced using Calvo’s (1983) model
of random price adjustment. Specifically, the
probability that a firm can set a new price, Pt*, is
η and the probability that a firm must keep the
price that it set j periods ago is (1 – η).
Each period, firms seek to minimize their
costs,
(8)    w_t n_{j,t} + q_t k_{j,t},

subject to the production function (6). Market-clearing conditions require that an individual
firm’s labor and capital demand must sum to the
economy aggregates, nt and kt . Our goal here is
merely to understand the workings of a simple
model, so we have omitted capital adjustment
and other frictions that are often included in this
type of model.
Cost minimization by households yields the
following demand equation facing each firm:
(9)    y_{j,t} = (P*_{t−j} / P_t)^{−ε} y_t,

where –ε is the price elasticity of demand. Aggregate output, yt , is given by
(10)    y_t = [ Σ_{j=0}^∞ η(1−η)^j y_{j,t}^{(ε−1)/ε} ]^{ε/(ε−1)},

and the aggregate price level is a nonlinear combination of current and past prices,
(11)    P_t = [ Σ_{j=0}^∞ η(1−η)^j (P*_{t−j})^{1−ε} ]^{1/(1−ε)}.

Appendix A describes in more detail the implications of this pricing structure for the evolution
of the aggregate price level.
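As a concrete illustration of how (11) behaves when the adjustment probability is constant, the short sketch below iterates the recursive form of the price index derived in Appendix A (equation (A24)). The initial price level and the path of newly set prices are illustrative.

```python
# Recursive form of the Calvo price index implied by (11) with a constant
# adjustment probability (equation (A24) in Appendix A):
#   P_t = [eta*Pstar_t**(1-eps) + (1-eta)*P_{t-1}**(1-eps)]**(1/(1-eps)).
def update_price_level(P_prev, P_star, eta=0.25, eps=6.0):
    return (eta * P_star ** (1.0 - eps) + (1.0 - eta) * P_prev ** (1.0 - eps)) ** (1.0 / (1.0 - eps))

P = 1.0
for _ in range(4):                       # four quarters of adjusters resetting 2 percent higher
    P = update_price_level(P, P_star=1.02)
    print(round(P, 4))                   # the index drifts up toward 1.02 as more firms adjust
```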

Policy Rules
Two classes of monetary policy regimes are
considered: a regime in which the central bank
follows an exogenous money growth rule and a
regime with a nominal interest rate rule in which
money growth is endogenous. Both policy rules
have two sources of disturbance: One is a shock
to the inflation target and the other is a shock to
the liquidity position.4 The shocks are identified
only by how long they persist. A shock to the liquidity position is not expected to affect inflation
expectations except at very high frequencies.
Historical examples of extreme liquidity shocks
would be the Fed’s responses to the 1987 stock
market crash and the September 11th attacks. In
contrast, a shock to the inflation target is expected to be highly persistent, almost permanent. In our model, we assume that the Fed has full credibility and the inflation target is known.

4. Both Ireland (2005) and Kozicki and Tinsley (2003) identify the inflation target shock by assuming that this component has a unit root.

In the United States today, the Fed does not have an explicit inflation target such that the public could distinguish perfectly between shocks to liquidity and those to the inflation target. (In extreme cases, this is not a problem; but it probably matters for less-extreme cases.) For example, it is not clear whether the Federal Open Market Committee (FOMC) and/or the public have been able to make this distinction during periods of countercyclical policy.5

The money growth rule is given as

(12)    ∆M̂_t = µ̂*_t + (υ_t − υ_{t−1}),

where the hat over a variable indicates the percent
(or log) deviation from the steady state and µ̂t* is a
stochastic money growth target, µ̂t* = ρµ µ̂t*–1 + εµ t ,
where εµ t is a mean-zero, i.i.d. shock to the nominal growth trend. The second disturbance in (12),
υt, represents a transitory policy disturbance that
follows its own AR(1) process, υt = ρυυt –1 + ευt ,
with a mean-zero, i.i.d. shock, ευt . Entering the
money-growth rule in first differences, the υ-shock
represents a transitory disturbance to the money
stock that leaves the long-run growth path
unchanged.
In the alternative regime, the central bank
operates with a Taylor-type interest rate rule that
is given by
(13)    R̂_t = π̂_t + θ_π (π̂_t − π̂*_t) + θ_y ŷ_t + u_t,

where the inflation target follows a stochastic
AR(1) process, πˆt* = ρπ πˆ t*–1 + επ t , and the transitory policy shock, ut, follows an AR(1) process,
ut = ρuut –1 + εut . Both error processes, επ t and εut ,
are mean-zero, i.i.d. shocks.
The inflation target shock in equation (13)
plays the same role as the money growth shock
in (12); both disturbances have a persistent effect
on the nominal growth path of the economy. That
is, the expected inflation target converges to the
steady state in the long run, but the actual target
may deviate for long periods. Thus, inflation in
period t has three components: the steady-state inflation rate, the stochastic component of the inflation target (trend), and the transitory component, which is due to other shocks.

5. See Goodfriend (1993) and Erceg and Levin (2003) for analysis of the Fed's credibility.
It is not clear how to define a common transitory policy shock or liquidity shock under the
alternative regimes. We define a transitory policy
shock to the money growth rule as a deviation of
the money stock that leaves the long-run growth
path unchanged. In the case of the interest rate
rule, we define a temporary liquidity shock in a
straightforward way—as a temporary shock to the
short-term interest rate. An expansionary liquidity
shock is a positive shock to money growth, υt , or
a negative shock to the nominal interest rate equation, ut. An inflation target shock, πˆt*, and a nominal interest rate shock, ut, have qualitatively
identical effects on the model’s dynamics. They
differ only by a scaling factor and, in our parameterization, by their persistence.6
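A minimal sketch of the two rules, (12) and (13), is given below. It simply generates the two AR(1) policy disturbances and evaluates each rule, with shock persistences and reaction coefficients taken from the baseline calibration reported in the next section; the inflation and output-gap paths are placeholders (zeros), since producing them requires solving the full model.

```python
import numpy as np

def ar1_path(rho, impulse, T=20):
    # AR(1) process hit by a one-time shock of size `impulse` in period 1.
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + (impulse if t == 1 else 0.0)
    return x

T = 20
# Money growth rule (12): dM_hat_t = mu_star_hat_t + (upsilon_t - upsilon_{t-1}).
mu_star = ar1_path(0.95, 1.0, T)                 # persistent nominal growth (trend) shock
upsilon = ar1_path(0.30, 1.0, T)                 # transitory liquidity shock
dM_hat = mu_star + np.r_[upsilon[0], np.diff(upsilon)]

# Interest rate rule (13): R_hat_t = pi_hat_t + theta_pi*(pi_hat_t - pi_star_hat_t)
#                                    + theta_y*y_hat_t + u_t.
theta_pi, theta_y = 0.5, 0.125
pi_star = ar1_path(0.95, 1.0, T)                 # inflation target shock
u = ar1_path(0.30, 1.0, T)                       # transitory interest rate shock
pi_hat = y_hat = np.zeros(T)                     # placeholders for model-determined paths
R_hat = pi_hat + theta_pi * (pi_hat - pi_star) + theta_y * y_hat + u
```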

CALIBRATION
To the extent possible, the parameters are
calibrated to generally accepted values for all the
experiments. Table 1 shows the baseline calibration used. In the utility function, the value of σ1
is set at 7/9. The steady-state labor share is 0.3
and shopping time is 1 percent of that value. That
calibration implies a labor supply elasticity of
real wages approximately equal to 3.7 The household discount factor is 0.99, so that the annual
real interest rate is 4 percent. The shopping-time
parameter, γ, is set to unity, implying an interest
rate elasticity of money demand equal to –0.5.
The capital share of output is set to 0.33, and the
capital stock is assumed to depreciate at 2.5 percent per quarter. The price elasticity of demand
is set equal to 6, implying a steady-state markup
of 20 percent. We set the probability of price
adjustment equal to 1 for the flexible-price case
and equal to 0.25 for the sticky-price case. For the
sticky-price case, this implies that firms change
prices on average once per year. The model is
calibrated so that the steady-state inflation rate is zero.8

6. The scaling is determined by the weight on deviations of inflation from target.

7. The elasticity of labor supply with respect to the real wage equals ((1 − n̄)/n̄)(1/σ_1).
The policy rule is calibrated to match Taylor’s
(1993) values. The coefficient on the deviation of
inflation from target is set at 0.5, and the response
of the interest rate—specified as a quarterly
return—to the output gap is 0.125. Shocks to the
nominal growth trend are assumed to be highly
persistent, ρµ = ρπ = 0.95, whereas the transitory
policy shocks have a lower value for their AR
parameter, ρυ = ρu = 0.3. The shocks to technology
are calibrated to be highly persistent, ρZ = 0.95.9
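For readers who want to reproduce the derived magnitudes quoted above, the baseline calibration can be collected in a small dictionary as sketched below; the key names are ours, and the checks simply restate numbers already given in the text and in Table 1.

```python
# Baseline calibration from the text (see also Table 1), with a few derived magnitudes.
calib = {
    "sigma1": 7 / 9,      # utility curvature on leisure
    "n": 0.30,            # steady-state market labor share
    "beta": 0.99,         # household discount factor
    "gamma": 1.0,         # shopping-time curvature
    "alpha": 0.33,        # capital share of output
    "delta": 0.025,       # quarterly depreciation rate
    "eps": 6.0,           # price elasticity of demand
    "eta": 0.25,          # probability of price adjustment (1.0 for flexible prices)
    "theta_pi": 0.5,      # interest rate response to the inflation gap
    "theta_y": 0.125,     # interest rate response to the output gap (quarterly rate)
    "rho_trend": 0.95,    # persistence of nominal growth / inflation target shocks
    "rho_liq": 0.30,      # persistence of transitory policy shocks
    "rho_Z": 0.95,        # persistence of technology shocks
}

annual_real_rate = (1 / calib["beta"]) ** 4 - 1          # about 0.04 (4 percent)
markup = calib["eps"] / (calib["eps"] - 1)               # 1.2, i.e., a 20 percent markup
avg_price_duration = 1 / calib["eta"]                    # 4 quarters: prices change about once a year
interest_elasticity = -1 / (1 + calib["gamma"])          # -0.5, from money demand function (5)
labor_supply_elasticity = ((1 - calib["n"]) / calib["n"]) / calib["sigma1"]   # about 3 (footnote 7)
```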

MONETARY POLICY SHOCKS
As described previously, two types of monetary policy shocks are considered for each policy
rule: a shock to the nominal growth trend that
displays high persistence (near a random walk)
and a transitory policy shock with little persistence. To gain insight about those shocks, we
consider their effects separately, comparing their
impact on the economy under flexible-price and
sticky-price specifications.

Shocks to the Nominal Growth Trend
Figure 1 illustrates the response of the economy to a persistent money growth shock in a
model where the central bank follows a money
growth rule. Panels in the left column display
the impulse responses produced by the flexible-price model, and the panels in the right column
reflect the responses from the sticky-price model.
The top row shows what happens to the price
level and the money stock. In both models, the
price level jumps immediately after the shock.
While the money supply moves identically in
both models, the initial price level increase in
the sticky-price model is a bit more than half the
14 percent rise in the flexible-price case. The growth rates of money and prices in both models eventually converge back to their steady-state rates, but their levels remain permanently higher.

8. This is necessary to prevent the nonadjusting firms' prices from becoming too far out of line with the flexible-price benchmark. The same model dynamics result if there is positive steady-state inflation and the nonadjusting firms can index prices to rise each period by the steady-state inflation rate.

9. See Appendix A for details about the nonlinear model, the equilibrium, the steady state, and the linear approximation around the steady state that are used to calculate the model dynamics. The solution method is based on King and Watson (1998, 2002).

Table 1
Baseline Calibration

Model Parameters                        Symbol        Value
Utility                                 σ_1           7/9
Steady-state market labor share         n             0.3
Household discount factor               β             0.99
Shopping time                           γ             1
Capital share of output                 α             0.33
Depreciation rate                       δ             0.025
Price elasticity of demand              ε             6
Probability of price adjustment         η             0.25 → sticky prices; 1 → flexible prices

Policy reaction
  Inflation                             θ_π           0.5
  Output                                θ_y           0.125

Persistence
  Technology shock                      ρ_Z           0.95
  Nominal growth shock                  ρ_π = ρ_µ     0.95
  Liquidity shock                       ρ_υ = ρ_u     0.3

Standard deviation
  Technology shock                      σ_Z           0.0075
  Nominal growth shock                  σ_π           0.004
The second row of Figure 1 shows the impulse
responses of the inflation rate to the money growth
shock. The inflation spike in period 1 essentially
reflects the immediate rise in the price level after
the policy shock. In both cases, inflation is persistent following a money growth shock, but the
persistence is masked by the surge in prices that
occurs contemporaneously with the shock.
The third row shows the responses of the
nominal and real interest rates to this shock.
Because that initial jump in the price level is
unanticipated, it does not affect nominal interest
rates. In the flexible-price model, the nominal
interest rate increases by about 0.3 percent, which
is approximately equal to the expected inflation
rate for period 2. The effect on the real rate is near
zero in the flexible-price model. With sticky
prices, the nominal interest rate rises about 0.75
percent above the steady state, reflecting the
higher expected inflation, which is associated
with the more gradual response in the price level
to the policy shock. The real rate declines for two
periods and then gradually returns to the steady
state as the effects of the shock dissipate.
The bottom two panels in Figure 1 display
the responses of output and hours worked. The
higher inflation rate acts as a tax on real money
balances, which leads households to spend more
time shopping and less time working. With price
flexibility, hours worked falls about 0.1 percent
below the steady state. With sticky prices and an
exogenous money growth rule, neither money nor
prices are free to accommodate the jump in money
demand. Therefore, the adjustment occurs in real
variables. In our sticky-price model, the spike in
output is over 80 percent. Given that such a large
output response is highly counterfactual, sticky-price models typically incorporate additional frictions that limit the adjustment of capital and/or labor after a monetary policy shock.10

[Figure 1: Responses to a Persistent Money Growth Shock with a Money Growth Rule. Flexible-price (left) and sticky-price (right) panels plot the price level (P), money supply (M), inflation, nominal interest rate (R), real interest rate (r), output (Y), and hours worked (N) over 20 periods.]
Shifting our analysis to models with an interest
rate policy rule, Figure 2 shows how our models
respond to an inflation target shock when the Fed
is using the Taylor rule. The price level does not
jump after the inflation target shock. The higher
rate of inflation causes money demand to shift
down; but, with an interest rate rule, the money
stock declines in order to clear the money market.
The size of the fall in the money supply depends
on the interest elasticity of money demand. As
money demand becomes more interest elastic,
the size of the shift needed to clear the money
market gets larger. Price level, inflation rate, and
interest rate responses are very similar under both
the flexible- and sticky-price specifications.
Output responses are much different under
the alternative price specifications. In the flexible-price model, there are small negative effects on
output associated with the inflation tax on money
holdings. In the sticky-price case, output and
hours worked rise, but the effects are much more
reasonable than with a money supply rule.

Transitory Policy Shocks
Figure 3 shows the response of the economy
to a transitory money shock. The immediate
impact on the price level is smaller in the sticky-price model than in the flexible-price model.
This is easiest to see in the second row of panels,
which show that the brief spike in inflation is
smaller in the sticky-price specification. In neither
of these cases, however, does inflation exhibit
any measurable persistence. In the third row, we
see that the effect on interest rates is small. In the
flexible-price case, all of the effect is on the nominal interest rate, which declines temporarily as
the price level returns to the original steady-state
path. In the sticky-price case, the real return falls
by less than in the flexible-price case because the
price level never strays far from its steady-state
path. Output and hours worked both rise, but the size of the effect is an order of magnitude larger with sticky prices.

10. For example, by adding investment adjustment costs to this sticky-price model, one can get reasonable-looking changes in output and larger changes in the real interest rate.
Note that, for the sticky-price case, we have a
pattern of dynamics that corresponds to the textbook description of a liquidity effect. The decline
in the nominal interest rate is associated with a
corresponding reduction in the real interest rate
and a brief surge in output; the relatively large
response of output is due to the presence of frictions restricting movement in both the price level
and the money supply. Again, the shift in money
demand requires large shifts in the real variables.
Figure 4 shows the effect of a transitory liquidity shock when the central bank is using an interest rate rule. In general equilibrium, a 1 percent
expansionary (negative) shock actually raises the
nominal interest rate by 25 basis points. In both
models, we see that the price level rises and the
money supply declines. The price increase is
large and permanent in the flexible-price model.
In the sticky-price case, where only a subset of
the firms can react to the shock, its transitory
nature causes a smaller adjustment and an eventual decline in the price level below the initial
equilibrium path. The third row of panels shows
the response of real and nominal interest rates.
The nominal rate rises by more in the flexible-price model because the expected inflation rate in periods 3 and beyond is larger. In the sticky-price model, the real interest rate rises slightly.
The output effect is small and negative in the
flexible-price case. In the sticky-price model, a 1
percent shock to the short-term interest rate raises
output 5 percent on impact, but the effect dissipates quickly. Note that this type of transitory
policy shock—which is standard in the literature
on interest rate rules—does not display a textbook
liquidity effect under either the sticky-price or
flexible-price specification.

Technology Shocks
The technology shock variable, zt, affects the
production function directly and therefore engenders a direct effect on output regardless of the
nature of the policy rule or whether prices are
sticky or flexible. However, the nature of the
central bank’s policy rule affects the endogenous
responses of inputs to the production function—which, in turn, affects the overall response of output. In this regard, the nature of the monetary policy reaction function is quantitatively important for the evolution of real variables only when prices are assumed to be sticky.

[Figure 2: Responses to an Inflation Objective Shock with an Interest Rate Rule. Flexible-price (left) and sticky-price (right) panels plot the price level (P), money supply (M), inflation, nominal interest rate (R), real interest rate (r), output (Y), and hours worked (N) over 20 periods.]

[Figure 3: Responses to a Transitory Policy Shock in a Money Growth Rule. Flexible-price (left) and sticky-price (right) panels plot P, M, inflation, R, r, Y, and N over 20 periods.]

[Figure 4: Responses to a Transitory Policy Shock in an Interest Rate Rule. Flexible-price (left) and sticky-price (right) panels plot P, M, inflation, R, r, Y, and N over 20 periods.]
The left column of Figure 5 shows the
response of the flexible-price, money growth rule
economy. This setting serves as a convenient baseline for our comparison, because it most closely
approximates an RBC model in which there are
no monetary distortions at all. As is typical of
this type of shock, the temporary but persistent
increase in output that results from the direct effect
of the disturbance is enhanced by an increase in
the real wage rate and employment. Consequently,
the initial rise in output is about 50 percent larger
than the direct effect that the technology itself
would imply. The increase in factor productivity
also engenders an investment response that serves
as a propagation mechanism.
However, as widely noted in the RBC literature, this mechanism is rather weak: If there were
no persistence in the technology shock, there
would be little persistence in output. Because
monetary policy does not respond in any way to
the shock under a money growth rule, the increase
in output implies that the price level falls below
trend; the ensuing anticipated disinflation requires
an upward adjustment to real money balances,
which takes place through a downward jump in
the price level. As we saw with shocks to the
inflation trend, the shifts in money demand are
accommodated by jumps in the price level.
When prices are sticky and the central bank
implements a money growth rule, as shown in
the right column of Figure 5, the responses of
the model to a technology disturbance are dramatically different: The initial downward jump
in the price level is only half the size of the jump
with flexible prices. Inflation is not persistent in
either case. In both cases, the real rate responds
as predicted in the benchmark RBC model. The
responses of nominal rates are small, but in opposite directions. The most dramatic effects occur
in output and hours worked, which decline
sharply in the sticky-price model.11 There is also an initial decline in investment (not shown), such that the endogenous propagation channel of capital accumulation is even less quantitatively important.

11. Dotsey (1999) shows that changes in interest rate smoothing can have large real effects in a sticky-price model.
The key to understanding the responses of
this version of the model to productivity shocks
is in the nature of the Calvo pricing process: With
a majority of firms unable to lower prices in
response to the shock, relative demand for their
products drops off, moving the firms back along
their marginal cost curves. With higher costs and
lower final demand, firms dramatically scale
back their demand for factors of production until
after they have an opportunity to adjust prices.
When a larger proportion of firms is assumed to
change prices each period, with η = 1/2, for example, the initial negative response of output and
work does not occur. After prices have adjusted
further—after four periods or more in the present
calibration—the model economy has adjusted to
a trajectory that resembles that of the flexible-price specification.
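To put a number on this mechanism, the demand curve (9) implies that a firm stuck with a relative price 1 percent above the index loses roughly ε percent of its relative demand; a short check under the baseline ε = 6 is sketched below.

```python
# Relative demand from equation (9): y_j/y = (P_j/P)**(-eps).
eps = 6.0
relative_price = 1.01                      # price 1 percent above the aggregate index
relative_demand = relative_price ** (-eps)
print(round(1.0 - relative_demand, 3))     # about 0.058: roughly a 6 percent demand loss
```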
The pattern of responses shown in the lower
right-hand panel of Figure 5 demonstrates the
limitations of a recent influential assertion by
Galí (1999). Using a long-run identifying assumption in a vector autoregression model, Galí found
that a permanent shock to technology is associated
with an initial decline in work effort. From this
finding, he argues that sticky prices must play a
role in the propagation of technology shocks.
But while our model predicts this type of
response when the central bank is following a
money growth rule, the response does not occur
when policy follows an interest rate rule. As
shown in Figure 6, the interest rate rule effectively
eliminates the difference between the sticky-price
and flexible-price models. The price response is
muted by the interest rate rule, compared with
the jump that is illustrated in Figure 5. Because
interest rate targeting smooths price changes,
the gap that develops between the firms that can
change prices and those that cannot remains small.
The model responses are nearly identical. In other
words, the use of an interest rate rule, by eliminating large price-level swings, insulates the real
responses of the model from the sticky-price distortions that arise under a money growth rule.
[Figure 5: Responses to a Technology Shock with a Money Growth Rule. Flexible-price (left) and sticky-price (right) panels plot the price level (P), money supply (M), inflation, nominal interest rate (R), real interest rate (r), output (Y), and hours worked (N) over 20 periods.]

[Figure 6: Responses to a Technology Shock with an Interest Rate Rule. Flexible-price (left) and sticky-price (right) panels plot P, M, inflation, R, r, Y, and N over 20 periods.]

TIME-SERIES PROPERTIES OF
MONEY AND PRICES
This section documents how the time-series
properties of money and inflation differ under
the alternative monetary policy regimes.12 There
is a large seasonal element in money but not in
prices or interest rates. Mankiw, Miron, and Weil
(1987) showed that a strong seasonal component
in U.S. interest rates disappeared after the creation
of the Federal Reserve in 1913. Barsky and Miron
(1989) showed that there are large seasonal components in quantities but not in prices. Both these
empirical regularities are consistent with our
model of a central bank that uses an interest rate
procedure to implement monetary policy.
We begin with a brief look at data for the G7
countries for the period from 1980:Q2 to 1998:Q4.
We use data on interest rates, consumer price
index (CPI) inflation, and M1 growth, which are
not seasonally adjusted.13 To calculate the relative
persistence in M1 growth and inflation, we calculated the largest root of each series using an augmented Dickey-Fuller equation. The relative
volatility is measured by the standard error of
the regression. We included five lags of quarterly
data to account for the remaining serial correlation
and the predictable seasonal component. The
results are shown in the top panel of Table 2. The
first column reports the standard error of the equation for the augmented Dickey-Fuller regression
for CPI inflation. The second column reports the
largest root in the CPI regression. The third and
fourth columns report the results for M1.
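The sketch below illustrates one way to operationalize these two statistics; it is our reading of the stated method, not the authors' code. It regresses the first difference of the growth-rate series on a constant, the lagged level, and five lagged differences, and reports the regression's standard error together with one plus the coefficient on the lagged level as the largest-root measure. The input series is a random placeholder, not the OECD data.

```python
import numpy as np

def adf_largest_root_and_see(x, lags=5):
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                                   # dx[t] = x[t+1] - x[t]
    rows, y = [], []
    for t in range(lags, len(dx)):
        # Dependent variable: dx[t]; regressors: constant, lagged level x[t],
        # and the previous five differences dx[t-1], ..., dx[t-5].
        rows.append([1.0, x[t]] + [dx[t - i] for i in range(1, lags + 1)])
        y.append(dx[t])
    X, y = np.array(rows), np.array(y)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    see = np.sqrt(resid @ resid / (len(y) - X.shape[1]))    # standard error of the equation
    largest_root = 1.0 + beta[1]                            # 1 + coefficient on the lagged level
    return largest_root, see

rng = np.random.default_rng(0)
placeholder_series = 0.5 * rng.standard_normal(76) + 2.0    # stand-in for a quarterly growth rate
print(adf_largest_root_and_see(placeholder_series))
```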
The standard error of the equation for M1 is
always larger than that for the CPI. On average it
is almost four times larger. For the G7 average,
the largest root in CPI inflation is 0.67 and the
largest root in M1 is 0.36. As the results show,
there is a large dispersion across countries in the
estimates of persistence of M1 growth.14
12. See Dressler (2003) and Sustek (2004) for models that include both inside and outside money and attempt to account for the dynamic behavior of the monetary aggregates.

13. We used the International Financial Statistics measure of currency outside banks for the United Kingdom. We did not include 1998 for the United Kingdom because of a large break in the series in 1998:Q2.

The bottom panel in Table 2 (Panel B)
reports results from our four alternative models
under the baseline calibration shown in Table 1.
Included are the standard deviations and the
first-order autoregressive coefficients for inflation
and money growth. We compute statistics for our
model economies subject to technology shocks
and persistent money growth shocks (with a
money growth rule) or persistent inflation target
shocks (with the Taylor rule). Those experiments
do not include the short-run liquidity shocks. The
first two rows of Panel B report the results for the
money growth rule and the next two rows report
the results for the Taylor rule. The policy rule
makes a much bigger difference than does the
degree of price flexibility. With a money growth
rule, the standard deviation of the inflation rate
is always greater than the standard deviation of
the money growth rate. The first-order autocorrelation for inflation is near zero. The first-order autocorrelation coefficient for money growth reflects
(but is substantially smaller than) the persistence in the shock to the money growth trend.
The model generates data that more closely
resemble observed economic data when we use
an interest rate rule. Money growth is more variable than inflation. Under both pricing regimes,
the first-order autocorrelation of inflation is near
0.7, but money growth exhibits no autocorrelation.15
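For the model-based entries in Panel B, the statistics are just the standard deviation and first-order autocorrelation of the simulated growth-rate series; a minimal helper is sketched below, with the input array standing in for model output.

```python
import numpy as np

def sd_and_ar1(series):
    x = np.asarray(series, dtype=float)
    sd = x.std(ddof=1)
    dev0, dev1 = x[:-1] - x.mean(), x[1:] - x.mean()
    ar1 = (dev0 @ dev1) / (dev0 @ dev0)              # first-order autocorrelation coefficient
    return sd, ar1
```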
The last two rows in Panel B show that our
results do not depend on persistent shocks to the
nominal growth trend. When there are only technology shocks, inflation and money growth are
about half as volatile; but the relative variability
of money growth and inflation is approximately
unchanged. With interest rate rules, inflation persistence can be driven by persistent shocks to technology.

14. If we use data on currency and reserves, we find that the money growth rates are much more volatile than when we use M1. The autocorrelation functions of currency and reserve aggregates are dominated by negatively correlated seasonal components. For the United States, seasonally adjusting the data adds persistence to the time series for both money growth and inflation. The adjustment reduces the variability of money growth, but not inflation.

15. Our models imply that univariate models will underestimate the persistence of shocks to the money growth trend when using data generated under interest rate regimes. For example, Cooley and Hansen (1989, 1995) estimate the autocorrelation of monetary shocks to be just 0.5. Using cross-section price and output data and long-run monetary neutrality to achieve identification, Balke and Wynne (2004) estimate that the permanent component in M2 growth is highly persistent—matching broad movements in inflation.

Table 2
Statistical Properties of Nominal Variables (1980:Q2 to 1998:Q4)

A. Country results
                          CPI                       M1
                  SEE     Largest root      SEE     Largest root
Canada            0.47    0.76              2.52     0.15
France            0.34    0.82              1.22     0.69
Germany           0.45    0.71              2.12    –0.33
Italy             0.41    0.91              1.67     0.54
Japan             0.50    0.43              1.37     0.44
United Kingdom    0.62    0.64              1.39    –0.68
United States     0.36    0.47              1.18     0.77
G7 average        0.45    0.68              1.64     0.36

B. Model results
                          CPI                       M1
                  SD      AR(1)             SD      AR(1)
Money growth rule
  Sticky prices   2.22     0.03             0.51     0.70
  Flexible prices 4.36    –0.07             0.51     0.69
Taylor rule
  Sticky prices   0.60     0.72            10.55    –0.06
  Flexible prices 0.55     0.70             9.56    –0.08
Technology shocks only
  Sticky prices   0.34     0.73             5.41    –0.04
  Flexible prices 0.28     0.71             4.62    –0.07

NOTE: An augmented Dickey-Fuller equation with five lags of quarterly data was used to measure the largest root in M1 and CPI growth rates. The standard error of the equation (SEE) was used to measure the volatility in these series. In other cases, we report the standard deviation (SD) and first autocorrelation (AR(1)) in the growth rate series. The baseline calibration was used in the model calculations.
SOURCE: CPI and M1 data come from the OECD Main Economic Indicators. For the United Kingdom, M1 was not available, so we used currency outside banks ending in 1997:Q4. Exact sources and data definitions are listed in Appendix B.
The high volatility of the money supply that
accompanies interest rate targeting obscures the
information content of monetary aggregates.
Cooley and LeRoy (1981) document problems
that econometricians have faced trying to estimate
money demand functions. One of the ironic
characteristics of the New Keynesian paradigm
is that the model embeds the quantity theory of
money as a long-run proposition, but money rarely
appears in the policy rule. McCallum (2001)
explores the reasons why money does not appear
in the policy rule and concludes with support for
the notion “that policy analysis in models without money, based on interest rate policy rules, is
not fundamentally misguided.”

CONCLUSION
A comparison of flexible- and sticky-price
models with both money growth and interest rate
policy rules leads to the following conclusions.
First, interest rate rules rather than money growth
rules can capture the degree of inflation persistence and the relative volatility of the price level
observed in the data.
Second, with sticky prices the real effects of
transitory policy shocks differ under the different
policy rules. When the central bank uses a money
growth rule, the real effects are much too large to
be plausible in the sticky-price model unless
other frictions, such as investment adjustment
costs, are also included. But central banks do not
use money growth rules, so this counterfactual
implication does not seem important. It does,
however, suggest a reason why central banks
choose to implement monetary policy using an
interest rate instrument.
Third, and most importantly for model
builders, when shocks are highly persistent, the
distinction between monetary policy rules is more
important for price dynamics than is the choice
of the price-adjustment assumption. The reason
for this can be seen in how money demand
adjusts under an interest rate rule. In this case,
desired price changes are relatively smooth and
there is not much difference between flexible- and sticky-price equilibria. A corollary of this result is that the responses of nominal variables such as inflation and the money supply are very similar in both flexible- and sticky-price models
when the central bank uses an interest rate rule.
An important implication of this result is that it
will be difficult to use information about firms’
actual pricing policies to distinguish between
macro theories.
Finally, a central bank’s use of interest rate
rules obscures the information content of monetary aggregates and leads to subtle problems for
econometricians trying to estimate money demand
functions or to identify shocks to the trend and
cycle components of the money stock. Highly
persistent money shocks will be masked by the
high-frequency volatility associated with keeping
the interest rate relatively constant in the short
run.
REFERENCES
Balke, Nathan S. and Wynne, Mark A. “Sectoral Effects
of Monetary Shocks.” Unpublished manuscript,
Federal Reserve Bank of Dallas, November 2004.
Barsky, Robert B. and Miron, Jeffrey A. “The Seasonal
Cycle and the Business Cycle.” Journal of Political
Economy, June 1989, 97(3), pp. 503-34.
Calvo, Guillermo A. “Staggered Prices in a Utility
Maximizing Framework.” Journal of Monetary
Economics, September 1983, 12(3), pp. 383-98.
Cho, Jang-Ok, and Cooley, Thomas F. “The Business
Cycle with Nominal Contracts.” Economic Theory,
1995, 6(1), pp. 13-33.
Christiano, Lawrence J. and Eichenbaum, Martin.
“Liquidity Effects and the Monetary Transmission
Mechanism.” American Economic Review, 1992,
82(2), pp. 346-53.
Cooley, Thomas F. and Hansen, Gary D. “The Inflation
Tax in a Real Business Cycle Model.” American
Economic Review, September 1989, 79(4), pp. 733-48.
Cooley, Thomas F. and Hansen, Gary D. “Money and
the Business Cycle?” in Thomas F. Cooley, ed.,
Frontiers of Business Cycle Research. Chap. 7.
Princeton: Princeton University Press, 1995, pp.
175-216.
Cooley, Thomas F. and LeRoy, Stephen F.
“Identification and Estimation of Money Demand.”
American Economic Review, December 1981, 71(5),
pp. 825-44.
Dittmar, Robert D.; Gavin, William T. and Kydland,
Finn E. “Inflation Persistence and Flexible Prices.”
International Economic Review, February 2005,
46(1), pp. 245-61.
Dotsey, Michael. “The Importance of Systematic
Monetary Policy for Economic Activity.” Federal
Reserve Bank of Richmond Economic Quarterly,
Summer 1999, 85(3), pp. 41-59.
Dressler, Scott. “Monetary Policy Regimes and
Causality.” Unpublished manuscript, University of
Texas, October 2003.

F E D E R A L R E S E R V E B A N K O F S T . LO U I S R E V I E W

Gavin, Keen, Pakko

Erceg, Christopher J. and Levin, Andrew T. “Imperfect
Credibility and Inflation Persistence.” Journal of
Monetary Economics, May 2003, 50(4), pp. 915-44.
Friedman, Milton. The Optimum Quantity of Money—
And Other Essays. Chicago: Aldine Publishing,
1969.
Fuerst, Timothy S. “Liquidity, Loanable Funds, and
Real Activity.” Journal of Monetary Economics,
February 1992, 29(1), pp. 3-24.
Galí, Jordi. “Technology, Employment, and the
Business Cycle: Do Technology Shocks Explain
Aggregate Fluctuations?” American Economic
Review, March 1999, 89(1), pp. 249-71.
Goodfriend, Marvin. “Interest Rate Policy and the
Inflation Scare Problem: 1979-1992.” Federal
Reserve Bank of Richmond Economic Quarterly,
Winter 1993, 79(1), pp. 1-24.
Ireland, Peter N. “The Role of Countercyclical
Monetary Policy.” Journal of Political Economy,
August 1996, 104(4), pp. 704-23.
Ireland, Peter N. “Endogenous Money or Sticky
Prices?” Journal of Monetary Economics, November
2003, 50(8), pp. 1623-48.
Ireland, Peter N. “Changes in the Federal Reserve’s
Inflation Target: Causes and Consequences.”
Unpublished manuscript, Boston College, January
2005.
Keen, Benjamin D. “In Search of the Liquidity Effect
in a Modern Monetary Model.” Journal of
Monetary Economics, October 2004, 51(7), pp.
1467-94.
Kim, Soyoung. “Monetary Policy Rules and Business
Cycles.” Scandinavian Journal of Economics, 2003,
105(2), pp. 221-45.
Kimball, Miles S. “The Quantitative Analytics of the
Basic Neomonetarist Model.” Journal of Money,
Credit, and Banking, November 1995, 27(4, Part 2),
pp. 1241-77.
King, Robert G. and Watson, Mark W. “The Solution
of Singular Linear Difference Systems Under

Rational Expectations.” International Economic
Review, 1998, 39(4), pp. 1015-26.
King, Robert G. and Watson, Mark W. “System
Reduction and Solution Algorithms for Singular
Linear Difference Systems Under Rational
Expectations.” Computational Economics, October
2002, 20(1-2), pp. 57-86.
King, Robert G. and Wolman, Alexander L. “Inflation
Targeting in a St. Louis Model of the Twenty-First
Century.” Federal Reserve Bank of St. Louis
Review, May/June 1996, 78(3), pp. 83-107.
Kozicki, Sharon and Tinsley, Peter A. “Permanent
and Transitory Policy Shocks in an Empirical
Macro Model with Asymmetric Information.” CFS
Working Paper No. 2003/41, Center for Financial
Studies, October 2003.
Lucas, Robert E. Jr. “Liquidity and Interest Rates.”
Journal of Economic Theory, April 1990, 50(2), pp.
237-64.
Lucas, Robert E. Jr. “Inflation and Welfare.”
Econometrica, March 2000, 68(2), pp. 247-74.
Mankiw, N. Gregory; Miron, Jeffrey A. and Weil,
David N. “The Adjustment of Expectations to a
Change in Regime: A Study of the Founding of the
Federal Reserve.” American Economic Review,
June 1987, 77(3), pp. 358-74.
McCallum, Bennett T. “Monetary Policy Analysis in
Models Without Money.” Federal Reserve Bank of
St. Louis Review, July/August 2001, 84(4), pp. 145-60.
McCallum, Bennett T. and Nelson, Edward. “An
Optimizing IS-LM Specification for Monetary Policy
and Business Cycle Analysis.” Journal of Money,
Credit, and Banking, August 1999, 31(3, Part 1),
pp. 296-316.
Rotemberg, Julio J. and Woodford, Michael. “An
Optimization-Based Econometric Framework for
the Evaluation of Monetary Policy,” in Ben S.
Bernanke and Julio J. Rotemberg, eds., NBER
Macroeconomics Annual 1997. Cambridge, MA:
MIT Press, 1997, pp. 297-345.


Sustek, Roman. “Monetary Aggregates and Structural
Shocks.” Unpublished manuscript, Carnegie
Mellon University, August 2004.
Taylor, John B. “Discretion versus Policy Rules in
Practice.” Carnegie-Rochester Conference Series on
Public Policy, 1993, 39, pp. 195-214.

Woodford, Michael. Interest and Prices. Princeton: Princeton University Press, 2003.
Yun, Tack. “Nominal Price Rigidity, Money Supply
Endogeneity, and Business Cycles.” Journal of
Monetary Economics, April 1996, 37(2), pp. 345-70.

APPENDIX A
TECHNICAL NOTES
This appendix provides detailed information on the sticky- and flexible-price models. It outlines
the relevant equations in the model, determines the steady state, and linearizes the model. Furthermore,
this appendix provides the necessary information to replicate the simulations of this paper, using the
solution methods outlined in King and Watson (1998, 2002).

The Equilibrium
These equations describe the equilibrium for the households’ problem. Households are infinitely
lived agents who seek to maximize their expected utility from consumption, c_t, and leisure, l_t,

E_0 [ Σ_{t=0}^∞ β^t ( ln(c_t) + χ l_t^{1−σ_1} / (1 − σ_1) ) ],
subject to the following budget constraint, time constraint, and capital accumulation equation:
P_t (c_t + i_t) + M_t + B_t = P_t w_t n_t + P_t q_t k_t + D_t + M_{t−1} + R_{t−1} B_{t−1} + T_t,

(A1)    l_t + n_t + s_t = 1,

(A2)    k_{t+1} = i_t + (1 − δ) k_t,

where Bt is government bonds, Pt is the price level, it is investment, Mt is the nominal money stock, wt is
the real wage rate, nt is labor, qt is the rental rate on capital, kt is the capital stock, Dt is the firms’ profits
remitted to the households, Rt is the gross nominal interest rate, Tt is a transfer from the monetary authority, δ is the depreciation rate, and st represents the shopping-time costs of holding money balances,
(A3)    s_t = ζ (P_t c_t / M_t)^γ.

Utility maximization by the households yields the following first-order conditions for ct , lt , nt , Mt ,
Bt , it , and kt :
(A4)    ∂u/∂c_t = P_t λ_t + γ (s_t/c_t) τ_{1,t},

(A5)    ∂u/∂l_t = τ_{1,t},

(A6)    τ_{1,t} = P_t w_t λ_t,

(A7)    λ_t − γ (s_t/M_t) τ_{1,t} = β E_t[λ_{t+1}],

(A8)    λ_t = β E_t[R_t λ_{t+1}],

(A9)    λ_t P_t = τ_{2,t}, and

(A10)   τ_{2,t} = β E_t[τ_{2,t+1}(1 − δ) + λ_{t+1} P_{t+1} q_{t+1}],

where λ_t, τ_{1,t}, and τ_{2,t} are the Lagrangian multipliers of the budget constraint, time constraint, and the capital accumulation equation, respectively. By substituting (A5) into (A4), (A6), and (A7), the first-order conditions for c_t, n_t, and M_t become

(A11)    ∂u/∂c_t = P_t λ_t + γ (s_t/c_t) ∂u/∂l_t,

(A12)    ∂u/∂l_t = P_t w_t λ_t, and

(A13)    λ_t − γ (s_t/M_t) ∂u/∂l_t = β E_t[λ_{t+1}].

By substituting (A9) into (A10), the first-order condition for kt+1 becomes
(A14)    τ_{2,t} = β E_t[τ_{2,t+1}(1 − δ) + τ_{2,t+1} q_{t+1}].

The marginal utilities of ct and lt are

∂u/∂c_t = 1/c_t    and    ∂u/∂l_t = χ l_t^{−σ_1}.
As a result, the households’ problem is described by equations (A1), (A2), (A3), (A8), (A11), (A12),
(A13), and (A14).
The next set of equations comes from the firms. The firms are monopolistically competitive producers
of output, yj,t , according to
(A15)    y_{j,t} = Z_t (k_{j,t})^α (n_{j,t})^{1−α},

where j indicates the number of periods since the firm last adjusted its price, nj,t is firm labor demand,
kj,t is firm demand for capital, and Zt is an economywide productivity factor. This productivity factor
evolves in the following manner:
(A16)

ln ( Zt ) = ρZ ln ( Zt −1 ) + (1 − ρZ ) ln ( Z ) + ε Z ,t ,

where Z is the steady-state value of Zt and εZ,t is a mean-zero, independently and identically distributed (i.i.d.) shock. Each period, every firm must make two decisions. First, firms determine the cost-minimizing combination of kj,t and nj,t given their output level, the wage rate, and the rental rate of
capital services. Second, they make pricing decisions. In particular, the probability a firm can set a
new price, Pt*, is η and the probability a firm must charge the price that it last set j periods ago, P t*– j , is
(1 – η).
Each period, firms seek to minimize their costs,

w t nj ,t + qt k j ,t ,
subject to (A15). This cost minimization implies the following two-factor demand equations:

(A17)    ψ_t α Z_t [n_{j,t} / k_{j,t}]^{1−α} = q_t,

(A18)    ψ_t (1 − α) Z_t [k_{j,t} / n_{j,t}]^α = w_t,

where ψt is the Lagrangian multiplier from the production function and accordingly is interpretable
as the real marginal cost of output. The market-clearing conditions for capital, kt , and labor given the
conditional probability of price adjustment (η) are

(A19)    k_t = Σ_{j=0}^∞ η(1−η)^j k_{j,t}    and    n_t = Σ_{j=0}^∞ η(1−η)^j n_{j,t}.

Because the real wage and user cost of capital are economywide costs, the real marginal cost and capital
services/labor ratio will be the same for all firms (i.e., kj,t /nj,t = kt /nt ).
The composition of output purchased by households is
`

j
y t =  ∑ η (1 − η ) y (jε,t−1)/ε 
 j =0


(A20)

ε /(ε −1)

,

where

y t = ct + it .

(A21)

Cost minimization by households yields the following product-demand equation:

(A22)    y_{j,t} = ( P*_{t−j} / P_t )^{−ε} y_t,

where Pt is a nonlinear price index such that
`

j
Pt =  ∑ η (1 − η ) Pt*−(1j − ε ) 
 j =0


(A23)

1/(1− ε )

.

Because the probability of price adjustment is constant, (A23) can be reduced to

Pt = ηPt*(1− ε ) + (1 − η ) Pt(−11− ε ) 

(A24)



1/(1− ε )



.

Furthermore, when (A19) is used to aggregate capital and labor over all firms, (A15) becomes

(A25)    ȳ_t = Σ_{j=0}^∞ η(1−η)^j y_{j,t} = Z_t (k_t)^α (n_t)^{1−α}.

Recall that the aggregate level of output is (A20). The relationship of yt to y–t is shown by substituting
(A22) into (A25) to get
(A26)    ȳ_t = Σ_{j=0}^∞ η(1−η)^j ( P*_{t−j} / P_t )^{−ε} y_t.

To eliminate the infinite number of lags of Pt* in (A26), an auxiliary price index is defined as

Pt = ηPt*− ε + (1 − η ) Pt−−ε1 

(A27)

−1/ε

.

Given this price index, (A27) is substituted into (A26) to produce

(A28)    ȳ_t = ( P̄_t / P_t )^{−ε} y_t.

The fraction η of firms that are able to adjust their price seek to maximize the expected value of their profits:

(A29)    Σ_{j=0}^∞ β^j E_t { λ_{t+j} (1 − η)^j [ P*_t y_{0,t+j} − P_{t+j} w_{t+j} n_{0,t+j} − P_{t+j} q_{t+j} k_{0,t+j} ] },

subject to (A15). Using the factor-demand equations, (A17) and (A18), the production function, (A15),
and the firm-demand equation, (A22), the firms’ maximization problem, (A29), is rewritten as
(A30)    Σ_{j=0}^∞ β^j E_t { λ_{t+j} (1 − η)^j [ P*_t − P_{t+j} ψ_{t+j} ] ( P*_t / P_{t+j} )^{−ε} y_{t+j} }.

Maximizing (A30) with respect to Pt* yields
Σ_{j=0}^∞ β^j E_t { λ_{t+j} (1 − η)^j [ (1 − ε) P_{t+j}^ε y_{t+j} (P*_t)^{−ε} + ε P_{t+j}^{1+ε} ψ_{t+j} y_{t+j} (P*_t)^{−ε−1} ] } = 0.

Thus, the profit-maximizing price is

(A31)    P*_t = V_{C,t} / V_{R,t},
where

V_{R,t} = Σ_{j=0}^∞ β^j E_t[ (1 − η)^j (ε − 1) λ_{t+j} P_{t+j}^ε y_{t+j} ]    and    V_{C,t} = Σ_{j=0}^∞ β^j E_t[ (1 − η)^j ε λ_{t+j} P_{t+j}^{1+ε} ψ_{t+j} y_{t+j} ].

Furthermore, the evolution of VR,t and VC,t can be written in the following manner:
(A32)    V_{R,t} = (ε − 1) λ_t P_t^ε y_t + β(1 − η) E_t[V_{R,t+1}], and

(A33)    V_{C,t} = ε λ_t P_t^{1+ε} ψ_t y_t + β(1 − η) E_t[V_{C,t+1}].

Therefore, the firms’ problem is summarized by (A16), (A17), (A18), (A21), (A24), (A25), (A27), (A28),
(A31), (A32), and (A33).

The Steady State
These are the steady-state equations for the households. To begin, the steady-state equations for the
time constraint, (A1), the capital accumulation equation, (A2), and the shopping-time costs, (A3), are
n + l + s = 1,    i = δk,    and    s = ζ (Pc/M)^γ.

The steady-state first-order conditions for the households’ problem, (A8), (A11), (A12), (A13), and
(A14), are
π = βR,

1/c = Pλ + γ(s/c) χ l^{−σ_1},

χ l^{−σ_1} = P w λ,

λ − γ(s/M) χ l^{−σ_1} = βλ/π, and

1 = β[(1 − δ) + q].
Next are the steady-state equations for the firms. The first two equations are the steady states of the
factor-demand equations, (A17) and (A18):

ψ α Z [n_j / k_j]^{1−α} = q    and    ψ(1 − α) Z [k_j / n_j]^α = w.
Recall that kj,t /nj,t = kt /nt , so that kj /nj = k/n in the steady state. The steady-state aggregate production
function, (A25), is

ȳ = Z (k)^α (n)^{1−α}.

The steady-state relationship between y and ȳ from (A28) is

ȳ = (P̄/P)^{−ε} y,

where the steady-state value of P̄ from (A27) is

P̄ = [ η(P*)^{−ε} + (1 − η) π^ε P̄^{−ε} ]^{−1/ε},

and where π is the steady-state inflation rate. The steady-state profit-maximizing price from (A31) is

P* = V_C / V_R,

where the steady-state values of V_R and V_C in (A32) and (A33) are

V_R = (ε − 1) λ P^ε y + β(1 − η) π^{ε−1} V_R    and    V_C = ε λ P^{1+ε} ψ y + β(1 − η) π^ε V_C.
Finally, the steady-state identity equations for y and P from (A21) and (A24) are

y = c + i    and    P = [ η(P*)^{1−ε} + (1 − η)(P/π)^{1−ε} ]^{1/(1−ε)}.
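Under the zero-inflation calibration (π = 1), a few of these steady-state objects can be computed directly from the conditions above; the sketch below does so, treating the Table 1 values as given. With π = 1, the price identities imply P* = P = P̄, so the profit-maximizing price condition reduces real marginal cost to ψ = (ε − 1)/ε, the inverse of the 20 percent markup. Variable names are ours.

```python
# Sketch of a few steady-state objects under the zero-inflation baseline (pi = 1).
beta, delta, eps, eta = 0.99, 0.025, 6.0, 0.25

pi = 1.0                              # zero steady-state inflation (gross rate)
R = pi / beta                         # from pi = beta*R: about 1.0101 per quarter (~4 percent annually)
q = 1.0 / beta - (1.0 - delta)        # from 1 = beta*[(1 - delta) + q]: rental rate of capital
psi = (eps - 1.0) / eps               # real marginal cost when P* = P (inverse of the markup)
print(round(R, 4), round(q, 4), round(psi, 4))
```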

Linearization Around the Steady State
This section linearizes the model around its steady state. A hat is used to signify percent deviation
from the steady state. Thus, n̂t is the percent deviation of labor from its steady state.
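As a small check on this convention, the sketch below perturbs the shopping-time relation (A3) and verifies its linearized counterpart numerically; the steady-state levels used are illustrative.

```python
import math

# The linearized shopping-time condition is P_hat + c_hat - M_hat - (1/gamma)*s_hat = 0.
zeta, gamma = 0.01, 1.0
P, c, M = 1.0, 0.75, 1.2                                  # illustrative steady-state levels
s = zeta * (P * c / M) ** gamma

P1, c1, M1 = P * 1.01, c * 0.995, M * 1.02                # small percentage perturbations
s1 = zeta * (P1 * c1 / M1) ** gamma

hat = lambda new, old: math.log(new / old)                # percent (log) deviation
residual = hat(P1, P) + hat(c1, c) - hat(M1, M) - (1.0 / gamma) * hat(s1, s)
print(abs(residual) < 1e-12)                              # True: (A3) is exactly log-linear
```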
Beginning with households, the linearized equations for the time constraint, (A1), the capital accumulation equation, (A2), and the shopping-time costs, (A3), are

$l \hat{l}_t + n \hat{n}_t + s \hat{s}_t = 0,$

$\hat{k}_{t+1} = (i/k) \hat{i}_t + (1-\delta) \hat{k}_t,$  and

$\hat{P}_t + \hat{c}_t - \hat{M}_t - (1/\gamma) \hat{s}_t = 0.$
The linearized first-order conditions for the households' problem, (A8), (A11), (A12), (A13), and (A14), are

$\hat{\lambda}_t = \hat{R}_t + E_t \left[ \hat{\lambda}_{t+1} \right],$

$-(1/c) \hat{c}_t = P \lambda \left( \hat{\lambda}_t + \hat{P}_t \right) + \gamma (s/c) \chi l^{-\sigma_1} \left( \hat{s}_t - \hat{c}_t - \sigma_1 \hat{l}_t \right),$

$\hat{\lambda}_t + \hat{P}_t + \hat{w}_t + \sigma_1 \hat{l}_t = 0,$

$\lambda \hat{\lambda}_t - \gamma (s/M) \chi l^{-\sigma_1} \left( \hat{s}_t - \hat{M}_t - \sigma_1 \hat{l}_t \right) = \lambda (\beta/\pi) E_t \left[ \hat{\lambda}_{t+1} \right],$  and

$\hat{\tau}_{2,t} = E_t \left[ \hat{\tau}_{2,t+1} + \beta q \hat{q}_{t+1} \right].$
Now, on the firm side, the linearized factor-demand equations for capital and labor, (A17) and (A18), are

$\hat{\psi}_t + \hat{Z}_t + (1-\alpha) \left( \hat{n}_t - \hat{k}_t \right) = \hat{q}_t$  and

$\hat{\psi}_t + \hat{Z}_t + \alpha \left( \hat{k}_t - \hat{n}_t \right) = \hat{w}_t.$
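Because the capital-demand condition $\psi_t \alpha Z_t (n_t/k_t)^{1-\alpha} = q_t$ is multiplicative, its log-linear form holds exactly, which makes it a convenient place to verify the hat-variable coefficients. The sketch below perturbs an arbitrary (assumed) steady state and compares the exact log deviation of $q$ with the linearized expression; the steady-state values and the size of the deviations are illustrative assumptions only.

```python
import numpy as np

alpha = 0.36                                   # assumed capital share
# Arbitrary steady-state values (assumptions for illustration only).
psi0, Z0, n0, k0 = 0.83, 1.0, 0.30, 10.0
q0 = psi0 * alpha * Z0 * (n0 / k0) ** (1 - alpha)

# Small percent deviations from the steady state (hats are log deviations).
psi_hat, Z_hat, n_hat, k_hat = 0.01, -0.005, 0.02, 0.004
psi = psi0 * np.exp(psi_hat)
Z, n, k = Z0 * np.exp(Z_hat), n0 * np.exp(n_hat), k0 * np.exp(k_hat)

q_hat_exact  = np.log(psi * alpha * Z * (n / k) ** (1 - alpha) / q0)
q_hat_linear = psi_hat + Z_hat + (1 - alpha) * (n_hat - k_hat)
print(q_hat_exact, q_hat_linear)   # identical here, because the condition is log-linear in levels
```

Equations that are not multiplicative, such as the price-level identity (A24), hold only to first order; a similar check for that case appears after the linearized identities below.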
The linearizations of (A25) and (A28) are as follows:

$\hat{\bar{y}}_t = \hat{Z}_t + \alpha \hat{k}_t + (1-\alpha) \hat{n}_t,$  and

$\hat{\bar{y}}_t = \varepsilon \left[ \hat{P}_t - \hat{\bar{P}}_t \right] + \hat{y}_t,$

where the linearization of (A27) is

$\hat{\bar{P}}_t = \eta \left[ P^{*} / \bar{P} \right]^{-\varepsilon} \hat{P}_t^{*} + (1-\eta) \pi^{\varepsilon} \hat{\bar{P}}_{t-1}.$
The linearized profit-maximizing price from (A31) is

$\hat{P}_t^{*} = \hat{V}_{C,t} - \hat{V}_{R,t},$

where the linearized values of $V_R$ and $V_C$ in (A32) and (A33) are

$\hat{V}_{R,t} = \dfrac{(\varepsilon-1) \lambda P^{\varepsilon} y}{V_R} \left( \hat{\lambda}_t + \varepsilon \hat{P}_t + \hat{y}_t \right) + \beta (1-\eta) \pi^{\varepsilon-1} E_t \left[ \hat{V}_{R,t+1} \right]$  and

$\hat{V}_{C,t} = \dfrac{\varepsilon \lambda P^{1+\varepsilon} \psi y}{V_C} \left( \hat{\lambda}_t + (1+\varepsilon) \hat{P}_t + \hat{y}_t + \hat{\psi}_t \right) + \beta (1-\eta) \pi^{\varepsilon} E_t \left[ \hat{V}_{C,t+1} \right].$

The linearized versions of the identity equations for aggregate output and the price level, (A21) and
(A24), are

$\hat{y}_t = \dfrac{c}{y} \hat{c}_t + \dfrac{i}{y} \hat{i}_t$  and

$\hat{P}_t = \eta \left( P^{*} / P \right)^{1-\varepsilon} \hat{P}_t^{*} + (1-\eta) \pi^{\varepsilon-1} \hat{P}_{t-1}.$
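The sketch below, referenced earlier, checks that the linearized price-level identity is accurate to first order. It computes the exact price level from the level form of (A24), $P_t = [\eta P_t^{*1-\varepsilon} + (1-\eta) P_{t-1}^{1-\varepsilon}]^{1/(1-\varepsilon)}$ (as implied by its steady-state and linearized versions above), after small perturbations and compares its log deviation with the linearized expression; all numerical values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameter values (assumptions, not the paper's calibration).
eta, eps, pi = 0.25, 6.0, 1.005

# Steady-state relative reset price implied by (A24): 1 = eta*(P*/P)^(1-eps) + (1-eta)*pi^(eps-1).
Pstar_over_P = ((1 - (1 - eta) * pi ** (eps - 1)) / eta) ** (1 / (1 - eps))

# Normalize the steady-state price level at date t to 1, so P_{t-1} = 1/pi along the steady state.
P_ss, Pstar_ss, Plag_ss = 1.0, Pstar_over_P, 1.0 / pi

# Small percent deviations of the reset price and the lagged price level.
Pstar_hat, Plag_hat = 0.01, -0.004
Pstar, Plag = Pstar_ss * np.exp(Pstar_hat), Plag_ss * np.exp(Plag_hat)

# Exact price level from the (A24) identity versus its linearization.
P_exact = (eta * Pstar ** (1 - eps) + (1 - eta) * Plag ** (1 - eps)) ** (1 / (1 - eps))
P_hat_exact  = np.log(P_exact / P_ss)
P_hat_linear = eta * Pstar_over_P ** (1 - eps) * Pstar_hat + (1 - eta) * pi ** (eps - 1) * Plag_hat
print(P_hat_exact, P_hat_linear)   # close but not identical: the identity is linear only to first order
```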
The monetary authority's policy instrument is either money or the nominal interest rate. When money is the instrument, the linearized policy rule is

(A34)    $\Delta \hat{M}_t = \hat{\mu}_t^{*} + \left( \upsilon_t - \upsilon_{t-1} \right),$

where $\mu_t^{*}$ is the target money growth rate, which follows an AR(1) process, $\hat{\mu}_t^{*} = \rho_{\mu} \hat{\mu}_{t-1}^{*} + \varepsilon_{\mu t}$; and $\upsilon_t$ is a transitory shock to the money growth rule, which also follows an AR(1) process, $\upsilon_t = \rho_{\upsilon} \upsilon_{t-1} + \varepsilon_{\upsilon t}$. Both error terms, $\varepsilon_{\mu t}$ and $\varepsilon_{\upsilon t}$, are mean-zero, i.i.d. shocks.
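Because $\upsilon_t$ enters (A34) in first differences while $\hat{\mu}_t^{*}$ enters in levels, the transitory shock affects the level of the money stock only temporarily, whereas a target shock raises money growth persistently. A short simulation sketch of the rule (persistence and volatility values are illustrative assumptions, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# Illustrative persistence and shock volatilities (assumptions, not the paper's estimates).
rho_mu, rho_ups = 0.9, 0.5
sig_mu, sig_ups = 0.003, 0.003

mu_star = np.zeros(T)   # target money growth rate (hat), AR(1)
ups = np.zeros(T)       # transitory shock to the growth rule, AR(1)
dM = np.zeros(T)        # linearized money growth from (A34)
for t in range(1, T):
    mu_star[t] = rho_mu * mu_star[t - 1] + sig_mu * rng.standard_normal()
    ups[t] = rho_ups * ups[t - 1] + sig_ups * rng.standard_normal()
    dM[t] = mu_star[t] + (ups[t] - ups[t - 1])   # equation (A34)

print(dM[:5])
```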
When the nominal interest rate is the instrument, the linearized policy rule is

(A35)    $\hat{R}_t = \hat{\pi}_t + \theta_{\pi} \left( \hat{\pi}_t - \hat{\pi}_t^{*} \right) + \theta_y \hat{y}_t + u_t,$

where $\hat{\pi}_t = \hat{P}_t - \hat{P}_{t-1}$; $\hat{\pi}_t^{*}$ is the target inflation rate, which follows an AR(1) process, $\hat{\pi}_t^{*} = \rho_{\pi} \hat{\pi}_{t-1}^{*} + \varepsilon_{\pi t}$; and $u_t$ is a shock to the interest rate rule, which also follows an AR(1) process, $u_t = \rho_u u_{t-1} + \varepsilon_{u t}$. Both error terms, $\varepsilon_{\pi t}$ and $\varepsilon_{u t}$, are mean-zero, i.i.d. shocks.
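Note that inflation enters (A35) both directly and through the gap term, so the nominal rate responds to inflation deviations with a coefficient of $1 + \theta_{\pi}$, which exceeds one whenever $\theta_{\pi} > 0$. A minimal sketch of the rule as a function (the coefficient values are placeholders, not the paper's calibration):

```python
def interest_rate_rule(pi_hat, pi_star_hat, y_hat, u, theta_pi=0.5, theta_y=0.5):
    """Linearized rule (A35): R_hat = pi_hat + theta_pi*(pi_hat - pi_star_hat) + theta_y*y_hat + u.
    The default coefficients are placeholders, not the paper's calibration."""
    return pi_hat + theta_pi * (pi_hat - pi_star_hat) + theta_y * y_hat + u

# Example: inflation 0.5 percent above target, output 1 percent above steady state, no rule shock.
print(interest_rate_rule(pi_hat=0.005, pi_star_hat=0.0, y_hat=0.01, u=0.0))
```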

APPENDIX B
DATA
The data set contains quarterly time series for the G7 countries on the CPI and a narrow money
measure, usually M1. All the series are available from 1980:Q1 through 1998:Q4. We could not get M1
for the United Kingdom, so we used a measure of currency. For this series, we excluded the 1998 data
because there was a break in the series in the second quarter. All of the series are from the Organisation
for Economic Co-operation and Development’s (OECD) Main Economic Indicators database or the
International Financial Statistics database. The data are not seasonally adjusted, and the quarterly figures
are computed as averages of monthly data. The data were retrieved in mid-October 2004 from the Haver
database, and Haver mnemonics are listed for each variable.
Canada: money supply (M1) is c156fm1n@oecdmei; CPI inflation is c156czn@oecdmei.
France: money supply (M1) is c132fm1n@oecdmei; CPI inflation is c132czn@oecdmei.
Germany: money supply (M1) is c134fm1n@oecdmei; CPI inflation is c134czn@oecdmei.
Italy: money supply (M1) is c136fm1n@oecdmei; CPI inflation is c136czn@oecdmei.
Japan: money supply (M1) is c158fm1n@oecdmei; CPI inflation is c158czn@oecdmei.
United Kingdom: money supply (currency outside of banks) is c112mlc@ifs; CPI inflation is
c112czn@oecdmei.
United States: money supply (M1) is c111fm1n@oecdmei; CPI inflation is c111czn@oecdmei.
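As a practical illustration of the construction described above, the sketch below converts a monthly, not-seasonally-adjusted series into quarterly averages and restricts it to the 1980:Q1-1998:Q4 sample. The file name and column names are hypothetical; any of the Haver series listed above could be substituted.

```python
import pandas as pd

# Hypothetical input: a CSV with a monthly 'date' column and an 'm1' column.
monthly = pd.read_csv("m1_monthly.csv", parse_dates=["date"], index_col="date")["m1"]

# Quarterly figures are computed as averages of the monthly data (no seasonal adjustment).
quarterly = monthly.groupby(monthly.index.to_period("Q")).mean()

# Keep the 1980:Q1-1998:Q4 sample used for the series above.
quarterly = quarterly.loc["1980Q1":"1998Q4"]
print(quarterly.head())
```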

658

S E P T E M B E R / O C TO B E R

2005

F E D E R A L R E S E R V E B A N K O F S T . LO U I S R E V I E W