
Welcome
Robert H. Rasche

Welcome to the Thirty-Second
Annual Policy Conference sponsored by the Federal Reserve Bank
of St. Louis. Our theme this year
is “Monetary Policy Under Uncertainty.” We
chose this topic to coordinate with Bill Poole’s
imminent completion of 10 years of service at
the Bank and his contributions over the years to
the policy debate. Now that Bill’s forthcoming
retirement as president of the Bank is official,
we can plainly say that this conference is being
held in his honor. We have tried to keep our
motivation below Bill’s radar screen, though I
suspect that we have not been completely successful. He has been gracious enough to limit his
inquiries and not spoil our fun.
Monetary policy under uncertainty has been
one of Bill’s professional interests throughout his
career. His 1970 Quarterly Journal of Economics
paper, “Optimal Choice of Monetary Policy
Instruments in a Simple Stochastic Macro Model,”1
is well-known and oft cited. (We have found 364
citations in the Social Sciences Citation Index to
this publication over the years, and citations still
continue 37 years later!) His interest in this subject
has been clear during his service on the Federal
Open Market Committee (FOMC) and in his
speeches and publications on topics such as “A
Policymaker Confronts Uncertainty,”2 “Perfecting
the Market’s Knowledge of Monetary Policy,”3
“The Impact of Changes in FOMC Disclosure
Practices on the Transparency of Monetary Policy:
Are Markets and the FOMC Better ‘Synched’?,”4
“Fed Transparency: How, Not Whether,”5 and
“How Predictable Is Fed Policy?”6 We are not
allowing Bill to sit back completely and consume
during this conference—we have included him
in our panel discussion.
We are very pleased with the distinguished
authors and discussants who have agreed to contribute to this program in honor of Bill, as well as
those of you who have set aside time to attend.
We look forward to an active and stimulating
discussion that will provide ideas for future
research on this topic and possibly even provoke
another speech from Bill before he retires.

1. Poole (1970).
2. Poole (1998).
3. Poole and Rasche (2000).
4. Poole and Rasche (2003).
5. Poole (2003).
6. Poole (2005).

REFERENCES
Poole, William. “Optimal Choice of Monetary Policy Instruments in a Simple Stochastic Macro Model.” Quarterly Journal of Economics, May 1970, 84(2), pp. 197-216.
Poole, William. “A Policymaker Confronts Uncertainty.” Federal Reserve Bank of St. Louis Review, September/October 1998, 80(5), pp. 3-9.
Poole, William. “Fed Transparency: How, Not Whether.” Federal Reserve Bank of St. Louis Review, November/December 2003, 85(1), pp. 1-8.
Poole, William. “How Predictable Is Fed Policy?” Federal Reserve Bank of St. Louis Review, November/December 2005, 87(6), pp. 659-68.
Poole, William and Rasche, Robert H. “Perfecting the Market’s Knowledge of Monetary Policy.” Journal of Financial Services Research, December 2000, 18(2/3), pp. 255-98.
Poole, William and Rasche, Robert H. “The Impact of Changes in FOMC Disclosure Practices on the Transparency of Monetary Policy: Are Markets and the FOMC Better ‘Synched’?” Federal Reserve Bank of St. Louis Review, January/February 2003, 85(1), pp. 1-10.

Robert H. Rasche is a senior vice president and director of research at the Federal Reserve Bank of St. Louis.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 269-70.
© 2008, The Federal Reserve Bank of St. Louis. The views expressed in this article are those of the author(s) and do not necessarily reflect the
views of the Federal Reserve System, the Board of Governors, or the regional Federal Reserve Banks. Articles may be reprinted, reproduced,
published, distributed, displayed, and transmitted in their entirety if copyright notice, author name(s), and full citation are included. Abstracts,
synopses, and other derivative works may be made only with prior written permission of the Federal Reserve Bank of St. Louis.




Optimal Monetary Policy Under Uncertainty:
A Markov Jump-Linear-Quadratic Approach
Lars E.O. Svensson and Noah Williams
This paper studies the design of optimal monetary policy under uncertainty using a Markov jump-linear-quadratic (MJLQ) approach. To approximate the uncertainty that policymakers face, the
authors use different discrete modes in a Markov chain and take mode-dependent linear-quadratic
approximations of the underlying model. This allows the authors to apply a powerful methodology
with convenient solution algorithms that they have developed. They apply their methods to analyze
the effects of uncertainty and potential gains from experimentation for two sources of uncertainty
in the New Keynesian Phillips curve. The examples highlight that learning may have sizable effects
on losses and, although it is generally beneficial, it need not always be so. The experimentation
component typically has little effect and in some cases it can lead to attenuation of policy. (JEL E42,
E52, E58)
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 275-93.

I have long been interested in the analysis of
monetary policy under uncertainty. The problems arise from what we do not know; we must
deal with the uncertainty from the base of what
we do know…
The Fed faces many uncertainties, and must
adjust its one policy instrument to navigate as
best it can this sea of uncertainty. Our fundamental principle is that we must use that one
policy instrument to achieve long-run price
stability…
My bottom line is that market participants
should concentrate on the fundamentals. If
the bond traders can get it right, they’ll do most
of the stabilization work for us, and we at the
Fed can sit back and enjoy life.
—William Poole (1998),
President of the Federal Reserve Bank of St. Louis
(1998-2008)

Early in his tenure as president of the
Federal Reserve Bank of St. Louis,
William Poole laid out some of the
issues that policymakers face when
deciding on policy, as reflected in the quotations
here. In this paper we take up some of these
issues, applying a framework to help policymakers navigate the “sea of uncertainty.” We
focus particularly on the issue of the knowledge
and beliefs of the policymakers and the private
sector—showing how both groups of agents learn
from their observations and how this may or may
not lead to enhanced economic stability. We also
address the extent to which policymakers should
“sit back” or, instead, actively intervene in markets in order to gain knowledge to help mitigate
future uncertainty.
In previous work, Svensson and Williams
(2007a,b), we have developed methods to study
optimal policy in Markov jump-linear-quadratic

Lars E.O. Svensson is deputy governor of the Sveriges Riksbank and a professor of economics at Princeton University. Noah Williams is an
assistant professor of economics at Princeton University. The authors thank James Bullard, Timothy Cogley, Andrew Levin, and William Poole
for comments on this paper. Financial support from the National Science Foundation is gratefully acknowledged.

© 2008, The Federal Reserve Bank of St. Louis. The views expressed in this article are those of the author(s) and do not necessarily reflect the
views of the Federal Reserve System, the Board of Governors, or the regional Federal Reserve Banks, or the Executive Board of Sveriges Riksbank.
Articles may be reprinted, reproduced, published, distributed, displayed, and transmitted in their entirety if copyright notice, author name(s),
and full citation are included. Abstracts, synopses, and other derivative works may be made only with prior written permission of the
Federal Reserve Bank of St. Louis.


(MJLQ) models with forward-looking variables:
models with conditionally linear dynamics and
conditionally quadratic preferences, where the
matrices in both preferences and dynamics are
random. In particular, each model has multiple
modes, a finite collection of different possible
values for the matrices, whose evolution is governed by a finite-state Markov chain. In our previous work, we have discussed how these modes
could be structured to capture many different
types of uncertainty relevant for policymakers.
Here we put those suggestions into practice in a
simple benchmark policy model.
In a first paper, Svensson and Williams
(2007a), we studied optimal policy design in
MJLQ models when policymakers can or cannot
observe the current mode, but we abstracted from
any learning and inference about the current
mode. Although in many cases the optimal policy
under no learning (NL) is not a normatively desirable policy, it serves as a useful benchmark for
our later policy analyses. In a second paper,
Svensson and Williams (2007b), we focused on
learning and inference in the more relevant situation, particularly for the model-uncertainty applications which interest us, in which the modes are
not directly observable. Thus, decisionmakers
must filter their observations to make inferences
about the current mode. As in most Bayesian
learning problems, the optimal policy thus typically includes an experimentation component
reflecting the endogeneity of information. This
class of problems has a long history in economics,
and it is well-known that solutions are difficult
to obtain. We developed algorithms to solve
numerically for the optimal policy.1
1. In addition to the classic literature (on such problems as a
monopolist learning its demand curve), Wieland (2000 and 2006)
and Beck and Wieland (2002) have recently examined Bayesian
optimal policy and optimal experimentation in a context similar
to ours but without forward-looking variables. Tesfaselassie,
Schaling, and Eijffinger (2006) examine passive and active learning
in a simple model with a forward-looking element in the form of
a long interest rate in the aggregate-demand equation. Ellison and
Valla (2001) and Cogley, Colacito, and Sargent (2007) study situations like ours but where the expectational component is as in the Lucas-supply curve (Et–1πt, for example) rather than our forward-looking case (Etπt+1, for example). More closely related to our present paper, Ellison (2006) analyzes active and passive learning in a
New Keynesian model with uncertainty about the slope of the
Phillips curve.


Due to the curse of dimensionality, the Bayesian optimal
policy (BOP) is feasible only in relatively small
models. Confronted with these difficulties, we
also considered adaptive optimal policy (AOP).2
In this case, in each period the policymaker does
update the probability distribution of the current
mode in a Bayesian way, but the optimal policy is
computed each period under the assumption that
the policymaker will not learn in the future from
observations. In our setting, the AOP is significantly easier to compute, and in many cases provides a good approximation to the BOP. Moreover,
the AOP analysis is of some interest in its own
right because it is closely related to specifications
of adaptive learning that have been widely studied
in macroeconomics (see Evans and Honkapohja,
2001, for an overview). Further, the AOP specification rules out the experimentation that some
may view as objectionable in a policy context.3
In this paper, we apply our methodology to
study optimal monetary policy design under
uncertainty in dynamic stochastic general equilibrium (DSGE) models. We begin by summarizing
the main findings from our previous work, leading to implementable algorithms for analyzing
policy in MJLQ models. We then turn to examples
that highlight the effects of learning and experimentation for two sources of uncertainty in the
benchmark New Keynesian Phillips curve. In this
model we compare and contrast optimal policies
under NL, AOP, and BOP. We analyze whether
learning is beneficial—it is not always so, a fact
which at least partially reflects our assumption
of symmetric information between the policymakers and the public—and then quantify the
additional gains from experimentation.4
2. What we call optimal policy under no learning, adaptive optimal
policy, and Bayesian optimal policy have in the literature also
been referred to as myopia, passive learning, and active learning,
respectively.

3. In addition, AOP is useful for technical reasons because it gives
us a good starting point for our more intensive numerical calculations in the BOP case.

4. In addition to our own previous work, MJLQ models have been widely studied in the control-theory literature for the special case when the model modes are observable and there are no forward-looking variables (see Costa, Fragoso, and Marques, 2005, and the references therein); do Val and Başar (1999) provide an application of an adaptive-control MJLQ problem in economics. More recently, Zampolli (2006) has used such an MJLQ model to examine monetary policy under shifts between regimes with and without an asset-market bubble. Blake and Zampolli (2006) provide an extension of the MJLQ model with observable modes to include forward-looking variables and present an algorithm for the solution of an equilibrium resulting from optimization under discretion. Svensson and Williams (2007a) provide a more general extension of the MJLQ framework with forward-looking variables and present algorithms for the solution of an equilibrium resulting from optimization under commitment in a timeless perspective as well as arbitrary time-varying or time-invariant policy rules, using the recursive saddlepoint method of Marcet and Marimon (1998). They also provide two concrete examples: an estimated backward-looking model (a three-mode variant of Rudebusch and Svensson, 1999) and an estimated forward-looking model (a three-mode variant of Lindé, 2005). Svensson and Williams (2007a) also extend the MJLQ framework to the more realistic case of unobservable modes, although without introducing learning and inference about the probability distribution of modes. Svensson and Williams (2007b) focus on learning and experimentation in the MJLQ framework.


We find that the experimentation component is typically
small. Recognizing the informational component
of policy actions often leads policy to be slightly
more aggressive, but, somewhat surprisingly, in
one example here it leads to a less aggressive
optimal policy.
The paper is organized as follows: The next
section presents the MJLQ framework and summarizes our earlier work. The third section presents
our analysis of learning and experimentation in
a simple benchmark New Keynesian model. The
fourth section presents some conclusions and
suggestions for further work.

MJLQ ANALYSIS OF OPTIMAL POLICY
This section summarizes our earlier work,
Svensson and Williams (2007a,b).

An MJLQ Model
We consider an MJLQ model of an economy with forward-looking variables. The economy has a private sector and a policymaker. We let Xt denote an nX-vector of predetermined variables in period t, xt an nx-vector of forward-looking variables, and it an ni-vector of (policymaker) instruments (control variables).5 We let model uncertainty be represented by nj possible modes and let jt ∈ Nj ≡ {1,2,…,nj} denote the mode in period t.
5. The first component of Xt may be unity, in order to allow for mode-dependent intercepts in the model equations.


The model of the economy can then be written

(1)  $X_{t+1} = A_{11,j_{t+1}} X_t + A_{12,j_{t+1}} x_t + B_{1,j_{t+1}} i_t + C_{1,j_{t+1}} \varepsilon_{t+1},$

(2)  $E_t H_{j_{t+1}} x_{t+1} = A_{21,j_t} X_t + A_{22,j_t} x_t + B_{2,j_t} i_t + C_{2,j_t} \varepsilon_t,$

where εt is a multivariate normally distributed
random i.i.d. nε vector of shocks with mean zero
and contemporaneous covariance matrix Inε . The
matrices A11j ,A12j ,…,C2j have the appropriate
dimensions and depend on the mode j. Because
a structural model here is simply a collection of
matrices, each mode can represent a different
model of the economy. Thus, uncertainty about
the prevailing mode is model uncertainty.6
Note that the matrices on the right side of (1)
depend on the mode jt +1 in period t +1, whereas
the matrices on the right side of (2) depend on the
mode jt in period t. Equation (1) then determines
the predetermined variables in period t +1 as a
function of the mode and shocks in period t +1
and the predetermined variables, forward-looking
variables, and instruments in period t. Equation
(2) determines the forward-looking variables in
period t as a function of the mode and shocks in
period t, the expectations in period t of next
period’s mode and forward-looking variables, and
the predetermined variables and instruments in
period t. The matrix A22j is nonsingular for each
j 僆 Nj .
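To make the notation concrete, here is a minimal sketch (ours, not the authors' code) of how the mode-dependent matrices and the predetermined-variable equation (1) can be represented and simulated one period at a time, taking xt and it as given; the forward-looking block (2) requires solving for expectations and is not simulated here.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_next_mode(j, P):
    """Draw j_{t+1} from row j of the mode transition matrix P."""
    return rng.choice(len(P), p=P[j])

def step_X(mats, j_next, X, x, i):
    """Equation (1): X_{t+1} = A11_{j_{t+1}} X_t + A12_{j_{t+1}} x_t
    + B1_{j_{t+1}} i_t + C1_{j_{t+1}} eps_{t+1}, with eps ~ N(0, I).
    `mats[j]` is assumed to hold the tuple (A11, A12, B1, C1) for mode j."""
    A11, A12, B1, C1 = mats[j_next]
    eps = rng.standard_normal(C1.shape[1])
    return A11 @ X + A12 @ x + B1 @ i + C1 @ eps
```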
The mode jt follows a Markov process with
the transition matrix
$P \equiv [P_{jk}].$7

The shocks εt are mean zero and i.i.d. with probability density ϕ, and without loss of generality we assume that εt is independent of jt.8 We also assume that C1jεt and C2kεt are independent for all j, k ∈ Nj.
6. See also Svensson and Williams (2007a), where we show how
many different types of uncertainty can be mapped into our MJLQ
framework.

7. Obvious special cases are $P = I_{n_j}$, when the modes are completely persistent, and $P_{j\cdot} = \bar{p}'$ ($j \in N_j$), when the modes are serially i.i.d. with probability distribution $\bar{p}$.

8. Because mode-dependent intercepts (as well as mode-dependent
standard deviations) are allowed in the model, we can still incorporate additive mode-dependent shocks.


These shocks, along with the modes, are the driving forces in the model. They are not directly observed. For technical reasons, it is convenient but not necessary that they are independent. We let pt = (p1t,…,pnjt)′ denote the true
probability distribution of jt in period t. We let
pt+τ|t denote the policymaker and private sector
estimate in the beginning of period t of the probability distribution in period t +τ. The prediction
equation for the probability distribution is
(3)  $p_{t+1|t} = P' p_{t|t}.$

We let the operator Et[.] in the expression
Et Hjt +1xt +1 on the left side of (2) denote expectations in period t conditional on policymaker and
private sector information in the beginning of
period t, including Xt , it , and pt|t but excluding jt
and εt . Thus, the maintained assumption is symmetric information between the policymaker and
the (aggregate) private sector. Because forward-looking variables will be allowed to depend on jt,
parts of the private sector, but not the aggregate
private sector, may be able to observe jt and parts
of εt . Note that although we focus on the determination of the optimal policy instrument, it , our
results also show how private sector choices as
embodied in xt are affected by uncertainty and
learning. The precise informational assumptions
and the determination of pt|t will be specified
below.
We let the policymaker intertemporal loss
function in period t be
(4)  $E_t \sum_{\tau=0}^{\infty} \delta^{\tau} L(X_{t+\tau}, x_{t+\tau}, i_{t+\tau}, j_{t+\tau}),$

where δ is a discount factor satisfying 0 < δ < 1, and the period loss, L(Xt, xt, it, jt), satisfies

(5)  $L(X_t, x_t, i_t, j_t) \equiv \begin{pmatrix} X_t \\ x_t \\ i_t \end{pmatrix}' W_{j_t} \begin{pmatrix} X_t \\ x_t \\ i_t \end{pmatrix},$

where the matrix Wj (j ∈ Nj) is positive semidefinite. We assume that the policymaker optimizes
under commitment in a timeless perspective. As
explained below, we will then add the term

(6)  $\frac{1}{\delta}\,\Xi_{t-1}' E_t H_{j_t} x_t$

to the intertemporal loss function in period t. As
we shall see below, the nx vector Ξt –1 is the vector
of Lagrange multipliers for equation (2) from the
optimization problem in period t –1. For the special case when there are no forward-looking
variables (nx = 0), the model consists of (1) only,
without the term A12jt+1xt ; the period loss function
depends on Xt , it , and jt only; and there is no role
for the Lagrange multipliers, Ξt –1, or the term (6).
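As a small illustration of the period loss (5) (our own sketch; the matrices below are placeholders, not the paper's calibration), the loss is just a mode-dependent quadratic form in the stacked vector (Xt′, xt′, it′)′.

```python
import numpy as np

def period_loss(W_modes, j, X, x, i):
    """Equation (5): L(X_t, x_t, i_t, j_t) = s' W_{j_t} s with s = (X_t', x_t', i_t')'."""
    s = np.concatenate([X, x, i])
    return s @ W_modes[j] @ s

# Example: one predetermined variable, one forward-looking variable, one
# instrument, with identical placeholder weights in both modes.
W = {j: np.diag([0.0, 1.0, 0.1]) for j in (1, 2)}
loss = period_loss(W, 1, X=np.array([0.0]), x=np.array([2.0]), i=np.array([1.0]))
```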

Approximate MJLQ Models
Although in this paper we start with an MJLQ
model, it is natural to ask where such a model
comes from, as usual formulations of economic
models are not of this type. However, the same
type of approximation methods that are widely
used to convert nonlinear models into their linear
counterparts can also convert nonlinear models
into MJLQ models. We analyze this issue in
Svensson and Williams (2007a) and present an
illustration as well. Here we briefly discuss the
main ideas. Rather than analyze local deviations
from a single steady state as in conventional linearizations, for an MJLQ approximation we analyze
the local deviations from (potentially) separate,
mode-dependent steady states. Standard linearizations are justified as asymptotically valid for small
shocks, as an increasing time is spent in the vicinity of the steady state. Our MJLQ approximations
are asymptotically valid for small shocks and
persistent modes, as an increasing time is spent
in the vicinity of each mode-dependent steady
state. Thus, for slowly varying Markov chains, our
MJLQ model provides accurate approximations
of nonlinear models with Markov switching.
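As a concrete (and purely illustrative) sketch of the idea, the snippet below computes mode-dependent steady states for a hypothetical one-variable law of motion and takes a local linear coefficient around each of them; the function g and its parameters are invented for illustration and are not the authors' model.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical one-variable law of motion X_{t+1} = g(X_t; j) + shock,
# with mode-dependent parameters (a_j, b_j).  Illustrative values only.
params = {1: (0.5, 0.9), 2: (1.5, 0.6)}

def g(X, a, b):
    return a + b * np.log(1.0 + X)

for j, (a, b) in params.items():
    # Mode-dependent steady state: solve X = g(X; j).
    Xbar = brentq(lambda X: g(X, a, b) - X, 0.0, 10.0)
    # Local linear coefficient around that steady state (numerical derivative).
    h = 1e-6
    A_j = (g(Xbar + h, a, b) - g(Xbar - h, a, b)) / (2 * h)
    print(f"mode {j}: steady state {Xbar:.3f}, local slope {A_j:.3f}")
```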

Types of Optimal Policies
We will distinguish three cases: (i) optimal
policy when there is no learning (NL), (ii) adaptive optimal policy (AOP), and (iii) Bayesian optimal policy (BOP). By NL, we refer to a situation
when the policymaker and the aggregate private
sector have a probability distribution pt|t over
the modes in period t and update the probability

distribution in future periods using the transition
matrix only, so the updating equation is
(7)  $p_{t+1|t+1} = P' p_{t|t}.$

That is, the policymaker and the private sector
do not use observations of the variables in the
economy to update the probability distribution.
The policymaker then determines optimal policy
in period t conditional on pt|t and (7). This is a
variant of a case examined in Svensson and
Williams (2007a).
By AOP, we refer to a situation when the
policymaker in period t determines optimal policy
as in the NL case, but then uses observations of
the realization of the variables in the economy to
update the probability distribution according to
Bayes’s theorem. In this case, the instruments will
generally have an effect on the updating of future
probability distributions and through this channel
separately affect the intertemporal loss. However,
the policymaker does not exploit this channel in
determining optimal policy. That is, the policymaker does not do any conscious experimentation.
By BOP, we refer to a situation when the policymaker acknowledges that the current instruments
will affect future inference and updating of the
probability distribution and calculates optimal
policy taking this separate channel into account.
Therefore, BOP includes optimal experimentation, where for instance the policymaker may
pursue policy that increases losses in the short
run but improves the inference of the probability
distribution and therefore lowers losses in the
longer run.
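To make the distinction concrete, the stylized loop below (our illustration, not the authors' code) shows where the first two cases differ: NL propagates beliefs with the transition matrix only, while AOP applies the Bayes update to the data before predicting forward. BOP uses the same updating as AOP but chooses the instrument anticipating its effect on future beliefs, which is not shown here; the function names and signatures are our own placeholders.

```python
import numpy as np

def simulate_beliefs(p0, P, observations, likelihood, case="NL"):
    """Propagate mode beliefs under the NL or AOP convention.

    p0           : initial belief over modes, shape (n_j,)
    P            : Markov transition matrix, P[j, k] = Pr(j_{t+1}=k | j_t=j)
    observations : sequence of observed data y_t
    likelihood   : likelihood(y, j) = density of y conditional on mode j
    """
    p = np.asarray(p0, dtype=float)
    history = [p.copy()]
    for y in observations:
        if case == "AOP":
            # Bayes update with the observed data (an equation-(21)-style step).
            like = np.array([likelihood(y, j) for j in range(len(p))])
            p = like * p
            p = p / p.sum()
        # Both cases then predict forward with the transition matrix,
        # p_{t+1|t} = P' p_{t|t}; under NL this is the *only* updating (7).
        p = P.T @ p
        history.append(p.copy())
    return np.array(history)
```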

Optimal Policy with No Learning
We first consider the NL case. Svensson and
Williams (2007a) derive the equilibrium under
commitment in a timeless perspective for the case
when Xt, xt, and it are observable in period t, jt is
unobservable, and the updating equation for pt|t
is given by (7). Observations of Xt, xt, and it are
then not used to update pt|t .
It will be useful to replace equation (2) with
the two equivalent equations,
(8)  $E_t H_{j_{t+1}} x_{t+1} = z_t,$

(9)  $0 = A_{21,j_t} X_t + A_{22,j_t} x_t - z_t + B_{2,j_t} i_t + C_{2,j_t} \varepsilon_t,$

where we introduce the nx vector of additional
forward-looking variables, zt . Introducing this
vector is a practical way of keeping track of the
expectations term on the left side of (2).
Furthermore, it will be practical to use (9) and
solve xt as a function of Xt , zt , it , jt , and εt :
(10)  $x_t = \tilde{x}(X_t, z_t, i_t, j_t, \varepsilon_t) \equiv A_{22,j_t}^{-1}\left(z_t - A_{21,j_t} X_t - B_{2,j_t} i_t - C_{2,j_t} \varepsilon_t\right).$

We note that, for given jt , this function is linear
in Xt , zt , it , and εt .
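For concreteness, a minimal sketch of equation (10), solving the forward-looking block for xt given the matrices of the prevailing mode; the array names are ours.

```python
import numpy as np

def solve_x(A21, A22, B2, C2, X, z, i, eps):
    """Equation (10): x_t = A22_j^{-1} (z_t - A21_j X_t - B2_j i_t - C2_j eps_t),
    where all matrices are those of the currently prevailing mode j."""
    rhs = z - A21 @ X - B2 @ i - C2 @ eps
    return np.linalg.solve(A22, rhs)
```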
In order to solve for the optimal decisions, we
use the recursive saddlepoint method (see Marcet
and Marimon, 1998, Svensson and Williams,
2007a, and Svensson, 2007, for details of the recursive saddlepoint method). Thus, we introduce
Lagrange multipliers for each forward-looking
equation, the lagged values of which become
state variables and reflect costs of commitment,
while the current values become control variables.
The dual period loss function can be written

$E_t \tilde{L}(\tilde{X}_t, z_t, i_t, \gamma_t, j_t, \varepsilon_t) \equiv \sum_j p_{jt|t} \int \tilde{L}(\tilde{X}_t, z_t, i_t, \gamma_t, j, \varepsilon_t)\,\varphi(\varepsilon_t)\,d\varepsilon_t,$

where X̃t ≡ (Xt′, Ξ′t–1)′ is the (nX + nx)-vector of extended predetermined variables (that is, including the nx-vector Ξt–1), γt is an nx-vector of Lagrange multipliers, ϕ(·) denotes a generic probability density function (for εt, the standard normal density function), and

(11)  $\tilde{L}(\tilde{X}_t, z_t, i_t, \gamma_t, j_t, \varepsilon_t) \equiv L\left[X_t, \tilde{x}(X_t, z_t, i_t, j_t, \varepsilon_t), i_t, j_t\right] - \gamma_t' z_t + \frac{1}{\delta}\,\Xi_{t-1}' H_{j_t} \tilde{x}(X_t, z_t, i_t, j_t, \varepsilon_t).$

As discussed in Svensson and Williams
(2007a), the failure of the law of iterated expectations leads us to introduce the collection of value
functions, V̂共st ,j 兲, that condition on the mode,
whereas the value function Ṽ共st 兲 averages over
these and represents the solution of the dual
optimization problem. The somewhat unusual


Bellman equation for the dual problem can be written

(12)  $\tilde{V}(s_t) \equiv E_t \hat{V}(s_t, j_t) \equiv \sum_j p_{jt|t} \hat{V}(s_t, j)$
$= \max_{\gamma_t} \min_{(z_t, i_t)} E_t \left\{ \tilde{L}(\tilde{X}_t, z_t, i_t, \gamma_t, j_t, \varepsilon_t) + \delta \hat{V}\left[ g(s_t, z_t, i_t, \gamma_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}), j_{t+1} \right] \right\}$
$\equiv \max_{\gamma_t} \min_{(z_t, i_t)} \sum_j p_{jt|t} \int \left\{ \tilde{L}(\tilde{X}_t, z_t, i_t, \gamma_t, j, \varepsilon_t) + \delta \sum_k P_{jk} \hat{V}\left[ g(s_t, z_t, i_t, \gamma_t, j, \varepsilon_t, k, \varepsilon_{t+1}), k \right] \right\} \varphi(\varepsilon_t)\varphi(\varepsilon_{t+1})\, d\varepsilon_t\, d\varepsilon_{t+1},$

where st ≡ (X̃t′, p′t|t)′ denotes the perceived state of the economy (it includes the perceived probability distribution, pt|t, but not the true mode) and (st, jt) denotes the true state of the economy (it
includes the true mode of the economy). As we
discuss in more detail below, it is necessary to
include the mode jt in the state vector because
the beliefs do not satisfy the law of iterated expectations. In the BOP case, beliefs do satisfy this
property, so the state vector is simply st . Also note
that, in the Bellman equation, we require that all
the choice variables respect the information constraints and thus depend on the perceived state,
st , but not the mode jt directly.
The optimization is subject to the transition equation for Xt,

(13)  $X_{t+1} = A_{11,j_{t+1}} X_t + A_{12,j_{t+1}} \tilde{x}(X_t, z_t, i_t, j_t, \varepsilon_t) + B_{1,j_{t+1}} i_t + C_{1,j_{t+1}} \varepsilon_{t+1},$

where we have substituted x̃(Xt, zt, it, jt, εt) for xt; the new dual transition equation for Ξt,

(14)  $\Xi_t = \gamma_t;$

and the transition equation (7) for pt|t. Combining equations, we have the transition for st:

(15)  $s_{t+1} \equiv \begin{pmatrix} X_{t+1} \\ \Xi_t \\ p_{t+1|t+1} \end{pmatrix} = g(s_t, z_t, i_t, \gamma_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}) \equiv \begin{pmatrix} A_{11,j_{t+1}} X_t + A_{12,j_{t+1}} \tilde{x}(X_t, z_t, i_t, j_t, \varepsilon_t) + B_{1,j_{t+1}} i_t + C_{1,j_{t+1}} \varepsilon_{t+1} \\ \gamma_t \\ P' p_{t|t} \end{pmatrix}.$





It is straightforward to see that the solution of the dual optimization problem (12) is linear in X̃t for given pt|t, jt:

(16)  $\begin{pmatrix} z_t \\ i_t \\ \gamma_t \end{pmatrix} \equiv \begin{pmatrix} z(s_t) \\ i(s_t) \\ \gamma(s_t) \end{pmatrix} = F(p_{t|t})\tilde{X}_t \equiv \begin{pmatrix} F_z(p_{t|t}) \\ F_i(p_{t|t}) \\ F_\gamma(p_{t|t}) \end{pmatrix} \tilde{X}_t,$

(17)  $x_t = x(s_t, j_t, \varepsilon_t) \equiv \tilde{x}(X_t, z(s_t), i(s_t), j_t, \varepsilon_t) \equiv F_{x\tilde{X}}(p_{t|t}, j_t)\tilde{X}_t + F_{x\varepsilon}(p_{t|t}, j_t)\varepsilon_t.$

This solution is also the solution to the original primal optimization problem. We note that xt is linear in εt for given pt|t and jt. The equilibrium transition equation is then given by

(18)  $s_{t+1} = \hat{g}(s_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}) \equiv g\left[s_t, z(s_t), i(s_t), \gamma(s_t), j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}\right].$

As can be easily verified, the (unconditional) dual value function Ṽ(st) is quadratic in X̃t for given pt|t, taking the form

$\tilde{V}(s_t) \equiv \tilde{X}_t' \tilde{V}_{\tilde{X}\tilde{X}}(p_{t|t}) \tilde{X}_t + w(p_{t|t}).$

The conditional dual value function, V̂(st, jt), gives the dual intertemporal loss conditional on the true state of the economy, (st, jt). It follows that this function satisfies

$\hat{V}(s_t, j) \equiv \int \left\{ \tilde{L}\left[\tilde{X}_t, z(s_t), i(s_t), \gamma(s_t), j, \varepsilon_t\right] + \delta \sum_k P_{jk} \hat{V}\left[\hat{g}(s_t, j, \varepsilon_t, k, \varepsilon_{t+1}), k\right] \right\} \varphi(\varepsilon_t)\varphi(\varepsilon_{t+1})\, d\varepsilon_t\, d\varepsilon_{t+1} \quad (j \in N_j).$


The function V̂(st, jt) is also quadratic in X̃t for given pt|t and jt:

$\hat{V}(s_t, j_t) \equiv \tilde{X}_t' \hat{V}_{\tilde{X}\tilde{X}}(p_{t|t}, j_t) \tilde{X}_t + \hat{w}(p_{t|t}, j_t).$

It follows that we have

$\tilde{V}_{\tilde{X}\tilde{X}}(p_{t|t}) \equiv \sum_j p_{jt|t} \hat{V}_{\tilde{X}\tilde{X}}(p_{t|t}, j), \qquad w(p_{t|t}) \equiv \sum_j p_{jt|t} \hat{w}(p_{t|t}, j).$

Although we find the optimal policies from the dual problem, in order to measure true expected losses, we are interested in the value function for the primal problem (with the original, unmodified loss function). This value function, with the period loss function EtL(Xt, xt, it, jt) rather than EtL̃(X̃t, zt, it, γt, jt, εt), satisfies

(19)  $V(s_t) \equiv \tilde{V}(s_t) - \frac{1}{\delta}\,\Xi_{t-1}' \sum_j p_{jt|t} H_j \int x(s_t, j, \varepsilon_t)\varphi(\varepsilon_t)\,d\varepsilon_t = \tilde{V}(s_t) - \frac{1}{\delta}\,\Xi_{t-1}' \sum_j p_{jt|t} H_j\, x(s_t, j, 0),$

where the second equality follows because x(st, jt, εt) is linear in εt for given st and jt. It is quadratic in X̃t for given pt|t:

$V(s_t) \equiv \tilde{X}_t' V_{\tilde{X}\tilde{X}}(p_{t|t})\tilde{X}_t + w(p_{t|t})$

(the scalar w(pt|t) in the primal value function is obviously identical to that in the dual value function). This is the value function conditional on X̃t and pt|t after Xt has been observed but before xt has been observed, taking into account that jt and εt are not observed. Hence, the second term on the right side of (19) contains the expectation of Hjtxt conditional on that information.9

Svensson and Williams (2007a,b) present algorithms to compute the solution and the primal and dual value functions for the NL case. For future reference, we note that the value function for the primal problem also satisfies

$V(s_t) \equiv \sum_j p_{jt|t} \breve{V}(s_t, j),$

where the conditional value function, V̆(st, jt), satisfies

(20)  $\breve{V}(s_t, j) = \int \left\{ L\left[X_t, x(s_t, j, \varepsilon_t), i(s_t), j\right] + \delta \sum_k P_{jk} \breve{V}\left[\hat{g}(s_t, j, \varepsilon_t, k, \varepsilon_{t+1}), k\right] \right\} \varphi(\varepsilon_t)\varphi(\varepsilon_{t+1})\, d\varepsilon_t\, d\varepsilon_{t+1} \quad (j \in N_j).$

9. To be precise, the observation of Xt, which depends on C1jεt, allows some inference of εt, εt|t. xt will depend on jt and on εt, but on εt only through C2jεt. By assumption, C1jεt and C2kεt are independent. Hence, any observation of Xt and C1jεt does not convey any information about C2jεt, so EtC2jεt = 0.

Adaptive Optimal Policy
Consider now the case of AOP, where the
policymaker uses the same policy function as in
the NL case but each period updates the probabilities that this policy is conditioned on. This case
is thus simple to implement recursively, as we
have already discussed how to solve for the optimal decisions and below we show how to update
probabilities. However, the ex ante evaluation of
expected loss is more complex, as we show below.
In particular, we assume that C2jt ≢ 0 and that both
εt and jt are unobservable. The estimate pt|t is the
result of Bayesian updating, using all information
available, but the optimal policy in period t is
computed under the perceived updating equation
(7). That is, the fact that the policy choice will
affect future pt+τ|t+τ and that future expected loss
will change when pt+τ|t+τ changes is disregarded.
Under the assumption that the expectations on
the left side of (2) are conditional on (7), the
variables zt, it, γt , and xt in period t are still determined by (16) and (17).
In order to determine the updating equation
for pt|t, we specify an explicit sequence of information revelation as follows, in no less than nine
steps. The timing assumptions are necessary in
order to spell out the appropriate conditioning
for decisions and updating of beliefs.
(i) The policymaker and the private sector enter period t with the prior pt|t–1. They know Xt–1, xt–1 = x(st–1, jt–1, εt–1), zt–1 = z(st–1), it–1 = i(st–1), and Ξt–1 = γ(st–1) from the previous period.
(ii) In the beginning of period t, the mode jt
and the vector of shocks εt are realized. Then the
vector of predetermined variables Xt is realized
according to (1).

(iii) The policymaker and the private sector observe Xt. They then know X̃t ≡ (Xt′, Ξ′t–1)′. They do not observe jt or εt.

(iv) The policymaker and the private sector update the prior pt|t–1 to the posterior pt|t according to Bayes's theorem and the updating equation

(21)  $p_{jt|t} = \frac{\varphi(X_t \mid j_t = j, X_{t-1}, x_{t-1}, i_{t-1}, p_{t|t-1})}{\varphi(X_t \mid X_{t-1}, x_{t-1}, i_{t-1}, p_{t|t-1})}\, p_{jt|t-1} \quad (j \in N_j),$

where again ϕ(·) denotes a generic density function.10 Then the policymaker and the private sector know st ≡ (X̃t′, p′t|t)′.

(v) The policymaker solves the dual optimization problem, determines it = i(st), and implements/announces the instrument setting, it.

(vi) The private sector (and policymaker) expectations,

$z_t = E_t H_{j_{t+1}} x_{t+1} \equiv E\left[H_{j_{t+1}} x_{t+1} \mid s_t\right],$

are formed. In equilibrium, these expectations will be determined by (16). In order to understand their determination better, we look at this in some detail.

These expectations are by assumption formed before xt is observed. The private sector and the policymaker know that xt will in equilibrium be determined in the next step according to (17). Hence, they can form expectations of the soon-to-be determined xt conditional on jt = j,11

(22)  $x_{jt|t} = x(s_t, j, 0).$

The private sector and the policymaker can also infer Ξt from

(23)  $\Xi_t = \gamma(s_t).$

This allows the private sector and the policymaker to form the expectations

(24)  $z_t = z(s_t) = E_t\left[H_{j_{t+1}} x_{t+1} \mid s_t\right] = \sum_{j,k} P_{jk}\, p_{jt|t}\, H_k\, x_{k,t+1|jt},$

where

$x_{k,t+1|jt} = \int x\left( \begin{pmatrix} A_{11k} X_t + A_{12k} x(s_t, j, \varepsilon_t) + B_{1k} i(s_t) \\ \Xi_t \\ P' p_{t|t} \end{pmatrix}, k, \varepsilon_{t+1} \right) \varphi(\varepsilon_t)\varphi(\varepsilon_{t+1})\, d\varepsilon_t\, d\varepsilon_{t+1} = x\left( \begin{pmatrix} A_{11k} X_t + A_{12k} x(s_t, j, 0) + B_{1k} i(s_t) \\ \Xi_t \\ P' p_{t|t} \end{pmatrix}, k, 0 \right),$

where we have exploited the linearity of xt = x(st, jt, εt) and xt+1 = x(st+1, jt+1, εt+1) in εt and εt+1. Note that zt is, under AOP, formed conditional on the belief that the probability distribution in period t+1 will be given by pt+1|t+1 = P′pt|t, not by the true updating equation that we are about to specify.

(vii) After the expectations zt have been formed, xt is determined as a function of Xt, zt, it, jt, and εt by (10).

(viii) The policymaker and the private sector then use the observed xt to update pt|t to the new posterior p+t|t according to Bayes's theorem, via the updating equation

(25)  $p^{+}_{jt|t} = \frac{\varphi(x_t \mid j_t = j, X_t, z_t, i_t, p_{t|t})}{\varphi(x_t \mid X_t, z_t, i_t, p_{t|t})}\, p_{jt|t} \quad (j \in N_j).$

(ix) The policymaker and the private sector then leave period t and enter period t+1, with the prior pt+1|t given by the prediction equation

(26)  $p_{t+1|t} = P' p^{+}_{t|t}.$

10. The policymaker and private sector can also estimate the shocks εt|t as $\varepsilon_{t|t} = \sum_j p_{jt|t}\,\varepsilon_{jt|t}$, where $\varepsilon_{jt|t} \equiv X_t - A_{11j} X_{t-1} - A_{12j} x_{t-1} - B_{1j} i_{t-1}$ (j ∈ Nj). However, because of the assumed independence of C1jεt and C2kεt, j, k ∈ Nj, we do not need to keep track of εjt|t.

11. Note that 0 instead of εjt|t enters above. This is because the inference εjt|t above is inference about C1jεt, whereas xt depends on εt through C2jεt. Because we assume that C1jεt and C2jεt are independent, there is no inference of C2jεt from observing Xt. Hence, EtC2jεt = 0. Because of the linearity of xt in εt, the integration of xt over εt results in x(st, jt, 0).

In the beginning of period t +1, the mode jt +1 and
the vector of shocks εt +1 are realized, and Xt +1 is
determined by (1) and observed by the policymaker and private sector. The sequence of the nine
steps above then repeats itself. For more detail on
the explicit densities in the updating equations
(21) and (25), see Svensson and Williams (2007b).
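To fix ideas, here is a minimal numerical sketch of the belief updating in steps (iv), (viii), and (ix), under the simplifying assumption (ours, for illustration only) that the conditional densities in (21) and (25) are univariate Gaussians with mode-dependent means and a common standard deviation.

```python
import numpy as np
from scipy.stats import norm

def bayes_step(p_prior, y, cond_means, sigma, P):
    """One period of belief updating.

    p_prior    : prior mode probabilities p_{t|t-1}, shape (n_j,)
    y          : scalar observation (stand-in for the observed X_t or x_t)
    cond_means : predicted mean of y conditional on each mode j (assumed)
    sigma      : common conditional standard deviation (assumed)
    P          : Markov transition matrix of the modes
    """
    # Posterior, equation-(21)/(25)-style: reweight by the conditional density.
    like = norm.pdf(y, loc=np.asarray(cond_means), scale=sigma)
    p_post = like * p_prior
    p_post = p_post / p_post.sum()
    # Prediction for next period, equation (26): p_{t+1|t} = P' p^+_{t|t}.
    p_next_prior = P.T @ p_post
    return p_post, p_next_prior

# Example with two modes and a persistent chain.
P = np.array([[0.98, 0.02], [0.02, 0.98]])
p_post, p_prior_next = bayes_step(np.array([0.5, 0.5]), y=1.2,
                                  cond_means=[0.0, 1.0], sigma=0.5, P=P)
```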
The transition equation for pt+1|t+1 can be written

(27)  $p_{t+1|t+1} = Q(s_t, z_t, i_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}),$

where Q(st, zt, it, jt, εt, jt+1, εt+1) is defined by the combination of (21) for period t+1 with (13) and (26). The equilibrium transition equation for the full state vector is then given by

(28)  $s_{t+1} \equiv \begin{pmatrix} X_{t+1} \\ \Xi_t \\ p_{t+1|t+1} \end{pmatrix} = \breve{g}(s_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}) \equiv \begin{pmatrix} A_{11,j_{t+1}} X_t + A_{12,j_{t+1}} x(s_t, j_t, \varepsilon_t) + B_{1,j_{t+1}} i(s_t) + C_{1,j_{t+1}} \varepsilon_{t+1} \\ \gamma(s_t) \\ Q\left[s_t, z(s_t), i(s_t), j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}\right] \end{pmatrix},$
where the bottom block is given by the true
updating equation (27) together with the policy
function (16). Thus, we note that, in this AOP
case, there is a distinction between the perceived
transition and equilibrium transition equations,
(15) and (18), which in the bottom block include
the perceived updating equation, (7), and the
true equilibrium transition equation, (28), which
replaces the perceived updating equation, (7) with
the true updating equation, (27).
Note that V(st) in (19), which is subject to the perceived transition equation, (15), does not give the true (unconditional) value function for the AOP case. This is instead given by

$\bar{V}(s_t) \equiv \sum_j p_{jt|t} \breve{V}(s_t, j),$

where the true conditional value function, V̆(st, jt), satisfies

(29)  $\breve{V}(s_t, j) = \int \left\{ L\left[X_t, x(s_t, j, \varepsilon_t), i(s_t), j\right] + \delta \sum_k P_{jk} \breve{V}\left[\breve{g}(s_t, j, \varepsilon_t, k, \varepsilon_{t+1}), k\right] \right\} \varphi(\varepsilon_t)\varphi(\varepsilon_{t+1})\, d\varepsilon_t\, d\varepsilon_{t+1} \quad (j \in N_j).$

That is, the true value function, V̄(st), takes into account the true updating equation for pt|t, (27), whereas the optimal policy and the perceived value function, V(st), in (19), are conditional on the perceived updating equation, (7), and thereby the perceived transition equation, (15). Note also that V̄(st) is the value function after X̃t has been observed but before xt is observed, so it is conditional on pt|t rather than p+t|t. Because the full transition equation, (28), is no longer linear due to the belief-updating equation, (27), the true value function, V̄(st), is no longer quadratic in X̃t
for given pt|t . Thus, more-complex numerical
methods are required to evaluate losses in the
AOP case, although policy is still determined
simply as in the NL case.
As we discuss in Svensson and Williams
(2007b), the difference between the true updating
equation for pt+1|t+1, (27), and the perceived updating equation, (7), is that, in the true updating equation, pt+1|t+1 becomes a random variable from the
point of view of period t, with mean equal to pt+1|t .
This is because pt+1|t+1 depends on the realization
of jt +1 and εt +1. Thus Bayesian updating induces
a mean-preserving spread over beliefs, which in
turn sheds light on the gains from learning. If the
conditional value function, V̆(st, jt), under NL is
concave in pt|t for given X̃t and jt , then by Jensen’s
inequality the true expected future loss under
AOP will be lower than the true expected future
loss under NL. That is, the concavity of the value
function for beliefs means that learning leads to
lower losses. Although it is likely that V̆ is indeed
concave, as we show in the applications, it need
not be globally so and thus learning need not
always reduce losses. In some cases, the losses
incurred by increased variability of beliefs may
offset the expected precision gains. Furthermore,
under BOP, it may be possible to adjust policy
to further increase the variance of pt|t , that is,
achieve a mean-preserving spread that might

further reduce the expected future loss.12 This
amounts to optimal experimentation.
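The role of concavity can be checked numerically. A hedged sketch (ours, not from the paper) takes a given expected-loss function over beliefs, applies a two-point mean-preserving spread to p1t, and compares the expected loss with the loss at the mean, which is exactly the Jensen's-inequality comparison described above.

```python
def gain_from_learning(V, p_mean, spread, probs=(0.5, 0.5)):
    """Jensen's-inequality check: V(E[p]) - E[V(p)] under a two-point
    mean-preserving spread of the belief p1.  A positive value means the
    spread induced by Bayesian updating lowers expected loss."""
    p_lo, p_hi = p_mean - spread, p_mean + spread
    expected_loss = probs[0] * V(p_lo) + probs[1] * V(p_hi)
    return V(p_mean) - expected_loss

# A concave loss-in-beliefs profile (learning helps) versus a slightly
# convex one (learning hurts), both invented for illustration.
concave_V = lambda p: 10.0 - 4.0 * (p - 0.5) ** 2
convex_V = lambda p: 10.0 + 4.0 * (p - 0.5) ** 2

print(gain_from_learning(concave_V, 0.5, 0.2))   # > 0: beneficial
print(gain_from_learning(convex_V, 0.5, 0.2))    # < 0: detrimental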

Bayesian Optimal Policy
Finally, we consider the BOP case, when
optimal policy is determined while taking the
updating equation, (27), into account. That is, we
now allow the policymaker to choose it taking into
account that his actions will affect pt+1|t+1, which
in turn will affect future expected losses. In particular, experimentation is allowed and is optimally chosen. For the BOP case, there is hence
no distinction between the perceived and true
transition equation.
The transition equation for the BOP case is
(30)  $s_{t+1} \equiv \begin{pmatrix} X_{t+1} \\ \Xi_t \\ p_{t+1|t+1} \end{pmatrix} = g(s_t, z_t, i_t, \gamma_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}) \equiv \begin{pmatrix} A_{11,j_{t+1}} X_t + A_{12,j_{t+1}} \tilde{x}(X_t, z_t, i_t, j_t, \varepsilon_t) + B_{1,j_{t+1}} i_t + C_{1,j_{t+1}} \varepsilon_{t+1} \\ \gamma_t \\ Q(s_t, z_t, i_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}) \end{pmatrix}.$
Then the dual optimization problem can be written as (12) subject to the above transition equation
(30). However, in the Bayesian case, matters simplify somewhat, as we do not need to compute the
conditional value functions, V̂(st, jt), which we
recall were required because of the failure of the
law of iterated expectations in the AOP case. We
note now that the second term on the right side
of (12) can be written as
$E_t \hat{V}(s_{t+1}, j_{t+1}) \equiv E\left[\hat{V}(s_{t+1}, j_{t+1}) \mid s_t\right].$



Because, in the Bayesian case, the beliefs do satisfy the law of iterated expectations, this is then
the same as
Ε Vˆ ( st +1 , j t +1 ) st  = Ε V% (st +1 ) st  .



12. Kiefer (1989) examines the properties of a value function, including
concavity, under Bayesian learning for a simpler model without
forward-looking variables.


See Svensson and Williams (2007b) for a proof.
Thus, the dual Bellman equation for the
Bayesian optimal policy is

(31)  $\tilde{V}(s_t) = \max_{\gamma_t} \min_{(z_t, i_t)} E_t\left\{ \tilde{L}(\tilde{X}_t, z_t, i_t, \gamma_t, j_t, \varepsilon_t) + \delta \tilde{V}\left[g(s_t, z_t, i_t, \gamma_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1})\right] \right\}$
$\equiv \max_{\gamma_t} \min_{(z_t, i_t)} \sum_j p_{jt|t} \int \left\{ \tilde{L}(\tilde{X}_t, z_t, i_t, \gamma_t, j, \varepsilon_t) + \delta \sum_k P_{jk} \tilde{V}\left[g(s_t, z_t, i_t, \gamma_t, j, \varepsilon_t, k, \varepsilon_{t+1})\right] \right\} \varphi(\varepsilon_t)\varphi(\varepsilon_{t+1})\, d\varepsilon_t\, d\varepsilon_{t+1},$

where the transition equation is given by (30).
The solution to the optimization problem
can be written

(32)  $\tilde{\imath}_t \equiv \begin{pmatrix} z_t \\ i_t \\ \gamma_t \end{pmatrix} = \tilde{\imath}(s_t) \equiv \begin{pmatrix} z(s_t) \\ i(s_t) \\ \gamma(s_t) \end{pmatrix} = F(\tilde{X}_t, p_{t|t}) \equiv \begin{pmatrix} F_z(\tilde{X}_t, p_{t|t}) \\ F_i(\tilde{X}_t, p_{t|t}) \\ F_\gamma(\tilde{X}_t, p_{t|t}) \end{pmatrix},$

(33)  $x_t = x(s_t, j_t, \varepsilon_t) \equiv \tilde{x}(X_t, z(s_t), i(s_t), j_t, \varepsilon_t) \equiv F_x(\tilde{X}_t, p_{t|t}, j_t, \varepsilon_t).$

Because of the nonlinearity of (27) and (30), the solution is no longer linear in X̃t for given pt|t. The dual value function, Ṽ(st), is no longer quadratic in X̃t for given pt|t. The value function of the primal problem, V(st), is given by, equivalently, (19), (29) (with the equilibrium transition equation (28) with the solution (32)), or

(34)  $V(s_t) = \sum_j p_{jt|t}\int \left\{ L\left[X_t, x(s_t, j, \varepsilon_t), i(s_t), j\right] + \delta \sum_k P_{jk} V\left[\breve{g}(s_t, j, \varepsilon_t, k, \varepsilon_{t+1})\right] \right\} \varphi(\varepsilon_t)\varphi(\varepsilon_{t+1})\, d\varepsilon_t\, d\varepsilon_{t+1}.$

It is also no longer quadratic in X̃t for given pt|t .
Thus, more complex and detailed numerical
methods are necessary in this case to find the
optimal policy and the value function. Therefore,
little can be said in general about the solution of

the problem. Nonetheless, in numerical analysis
it is very useful to have a good starting guess at a
solution, which in our case comes from the AOP
case. In our examples below we explain in more
detail how the BOP and AOP cases differ and
what drives the differences.

Observable Modes
In this paper we largely focus on the cases
where the policymakers do not observe the current mode, which is certainly the more relevant
case when analyzing model uncertainty. However,
some situations may arguably be better modeled
by observable shifts in modes, as in most of the
econometric literature on regime-switching
models. Moreover, one way to gauge the effects of
uncertainty in a model is to move from a constant-coefficient specification to one in which the
parameters are observable but may vary. (That is,
the current values of parameters are known, but
future values are uncertain.) For this reason, we
use the observable mode case to analyze the implications of uncertainty for policy. In Svensson and
Williams (2007a), we develop simple algorithms
for observable changes in modes, which play off
the fact that conditional on the mode the evolution of the economy is linear and preferences are
quadratic. Thus, the optimal policy consists of a
mode-dependent collection of linear policy rules
and can be written
(35)  $i_t = F_{i j_t} \tilde{X}_t$

for jt ∈ Nj.
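As an illustration of equation (35) (our sketch, with made-up feedback matrices), the observable-modes policy is simply a lookup of a mode-specific matrix applied to the extended predetermined state:

```python
import numpy as np

# Hypothetical feedback matrices F_{i,j} for two observable modes, applied to
# the extended state X_tilde = (X_t', Xi_{t-1}')'.  Values are placeholders.
F = {1: np.array([[-1.2, -0.3]]),
     2: np.array([[-2.0, -0.5]])}

def policy_observable(j, X_tilde):
    """Equation (35): i_t = F_{i,j_t} X_tilde_t for the observed mode j_t."""
    return F[j] @ X_tilde

i_t = policy_observable(2, np.array([3.0, 0.0]))   # mode 2, pi_{t-1}=3, Xi=0
```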

LEARNING AND EXPERIMENTATION IN A SIMPLE NEW KEYNESIAN MODEL
The Model
For our policy exercises, we consider a benchmark hybrid New Keynesian Phillips curve (see
Woodford, 2003, for an exposition):

(36)  $\pi_t = (1 - \omega_{j_t})\pi_{t-1} + \omega_{j_t} E_t \pi_{t+1} + \gamma_{j_t} y_t + c\varepsilon_t.$

F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Here πt is the inflation rate, yt is the output gap,
ωjt is a parameter reflecting the degree of forward-looking behavior in price setting, and γjt is a
composite parameter reflecting the elasticity of
demand and frequency of price adjustment. For
simplicity, we assume that policymakers can
directly control the output gap, yt . In another
paper, Svensson and Williams (2008), we consider
optimal policy in the standard two-equation
New Keynesian model that also includes a log-linearized consumption Euler equation. Many of
the same issues that we focus on here arise there
as well, but the simpler setting in the present
paper allows us to focus more directly on the
effects of uncertainty on policy.
We focus on two key sources of uncertainty
in the New Keynesian Phillips curve. Our first
example considers the degree of forward-looking
behavior in inflation. In the model, this translates
to uncertainty about ωj . If this parameter is large,
inflation is largely determined by current shocks
and expectations of the future, whereas if ωj is
small, then there is a substantial exogenous inertia in the inflation process. Our second example
analyzes uncertainty about the slope of the Phillips
curve, as reflected in the parameter γj . This could
reflect changes in the degree of monopolistic
competition (which also lead to varying markups)
and/or changes in the degree of price stickiness.
In each example, we look first at the effect of
uncertainty, going from a constant-coefficient
model to a model with random coefficients. Then,
we analyze the effects of learning and experimentation on policy and losses.
In both examples, we use the following loss
function:
(37)  $L_t = \pi_t^2 + \lambda y_t^2.$

We set the loss-function parameters as δ = 0.98,
λ = 0.1, and set the shock standard deviation to
c = 0.5. Even though different structural parameters vary in the two examples, both examples use
two modes and set the transition matrix to
 0.98 0.02
P=
.
 0.02 0.98

Figure 1
Policies and Losses from Observable and Constant Modes
[Figure: two panels. Left, “Policy: Observable and Constant Modes,” plotting yt against πt−1 for Obs 1, Obs 2, E(Obs), and the constant-coefficient case. Right, “Loss: Observable and Constant Modes,” plotting the loss against πt−1 for the same cases.]

NOTE: Obs 1 (2) is observable mode 1 (2); E(Obs) is the unconditional average policy.

In both examples, we examine the value functions and optimal policies for this simple New
Keynesian model under NL, AOP, and BOP. We
have one forward-looking variable (xt ≡ πt) and consequently one Lagrange multiplier (Ξt–1 ≡ Ξπ,t–1). We have one predetermined variable (Xt ≡ πt–1) and the estimated mode probabilities (pt|t ≡ (p1t|t, p2t|t)′, of which we only need keep track of one, p1t|t). Thus, the value and policy functions, V(st) and i(st), are all three dimensional (st = (πt–1, Ξπ,t–1, p1|t)′). For computational reasons,
we are forced to restrict attention to relatively
sparse grids with few points. The following plots
show two-dimensional slices of the value and
policy functions, focusing on the dependence on
πt–1 and p1t|t (which we for simplicity denote by
p1|t in the figures). In particular, all of the plots
are for Ξπ,t–1 = 0.

Example 1: How Forward-Looking Is Inflation?
This example analyzes one of the main sources
of uncertainty in the New Keynesian framework—
the degree to which inflation is a forward-looking
variable responding to expectations of future
developments. Specifications that suggest that
286

J U LY / A U G U S T

2008

inflation has substantial exogenous persistence
have tended to fit better empirically, while perhaps
being less rigorous in their micro-foundations.
In this example, we see how uncertainty about
the degree of forward-looking behavior, as indexed
by ωj, affects policy. Thus, we assume that there
are two modes, one more forward looking, with
ω1 = 0.8, and the other more backward looking,
with ω2 = 0.2. Note that, with the transition matrix
P as specified above, this means E(ωj) = 0.5. For
this example, we fix the slope parameter at γ = 0.1.
In Figure 1, we illustrate the effects of uncertainty on policy and losses. In the left panel, we
plot the two mode-dependent optimal policy
functions for the MJLQ model with observable
modes, labeled “Obs 1” for mode 1 and “Obs 2”
for mode 2. Here, we see that the optimal policy
is more aggressive in the more backward-looking
mode 2, because in response to a higher inflation
the optimal policy involves larger negative output gaps. The unconditional average policy is
labeled “E(Obs)” and shown with a gray line. For
comparison, the constant-coefficient case, where
we set ω1 = ω2 = E(ωj) = 0.5, is plotted with a black dashed line. Here, we see that optimal policy under uncertainty is more aggressive in respond-


Figure 2
Losses and Differences in Losses from NL, AOP, and BOP
[Figure: four panels. “Loss: NL” and “Loss: BOP” plot losses against p1t for πt−1 = 0, −5, and 3.33; “Loss Differences: AOP−NL” and “Loss Differences: BOP−AOP” plot the corresponding loss differences against p1t.]

ing to inflation movements than optimal policy
in the absence of uncertainty.
A common starting point for thinking about
the effects of uncertainty on policy is Brainard’s
(1967) classic analysis, which suggested that
uncertainty should make policy more cautious.
However, Brainard worked in a static framework
and the source of uncertainty he analyzed was a
slope coefficient on how policy affects the economy. Our second example below is closer to
Brainard’s and comes to similar conclusions. But,
in this example, our results suggest, at least for
this parameterization, that uncertainty about the
dynamics of inflation leads to more-aggressive
policy. This is similar to what Söderström (2002)
found in a backward-looking model.
The right panel of Figure 1 plots the losses
associated with the optimal policies in the different cases. When inflation is more forward looking,
it is easier to control and so overall losses are

0.2

0.4

0.6

0.8

p1t

lower even with less-aggressive policies. However,
uncertainty about the dynamics of inflation can
have significant effects on losses for moderate to
high inflation levels. This is evident by comparing
the constant-coefficient and average observable
curves, where we see that the loss nearly doubles
at the edges of the plot.
Now we keep the same specification, but make
the more realistic assumption that the current
mode is not observed. Thus, we analyze the effects
of learning and experimentation on policy and
losses. The top-two panels of Figure 2 show losses
under NL and BOP as functions of p1t. The bottom-two panels of the figure show the differences
between losses under NL, AOP, and BOP. Figure 3
shows the corresponding policy functions and
their differences. The top-two panels plot the
policy functions under AOP and BOP as a function of inflation. The AOP policy is linear in πt ,
and clearly the BOP policy is nearly so. The bot-

2008

287

Svensson and Williams

Figure 3
Optimal Policies and Their Differences Under AOP and BOP
[Figure: four panels. “Policy: AOP” and “Policy: BOP” plot yt against πt−1 for p1t = 0.89, 0.5, and 0.11; a third panel plots the BOP policy against p1t for πt−1 = 0, −5, and 3.33; “Policy Differences: BOP−AOP” plots the difference between the two policies.]

tom-left panel plots the BOP policy as a function
of p1t , showing that policy is less aggressive (that
is, has a smaller magnitude of response) the greater
is the probability of being in the more forward-looking mode 1. The bottom-right panel shows
that the policy differences between AOP and BOP,
the experimentation component of policy, are
incredibly small.
In Svensson and Williams (2007b), we show
that learning implies a mean-preserving spread
of the random variable pt +1|t +1 (which under
learning is a random variable from the vantage
point of period t). Hence, concavity of the value
function under NL in p1t implies that learning is
beneficial, because then a mean-preserving spread
reduces the expected future loss. However, we
see in Figure 2 that the value function is actually
slightly convex in p1t , so learning is not beneficial
here. Consequently, we see in Figure 2 that AOP
gives higher losses than NL. In contrast, for a
backward-looking example in Svensson and
288

J U LY / A U G U S T

2008

−3
−5

0

5

πt−1

Williams (2007b), the value function is concave
and learning is beneficial. Experimentation is
beneficial here, as BOP does give lower losses
than AOP, but the difference is minuscule. So,
for this example, learning has sizable effects on
losses and is detrimental, whereas experimentation is beneficial but has negligible effects.
Why would learning not be beneficial with
forward-looking variables? It may at least partially
be a remnant of our assumption of symmetric
beliefs and information between the private sector and the policymaker. With backward-looking
models, we have generally found that learning is
beneficial. However, under our assumption of
symmetric information and beliefs between the
private sector and the policymaker, both the private sector and the policymaker learn. The difference between backward- and forward-looking
models then comes from the way that private sector beliefs also respond to learning. Having more
reactive private sector beliefs may add volatility

Svensson and Williams

Figure 4
Policies and Losses from Observable and Constant Modes
[Figure: two panels. Left, “Policy: Observable and Constant Modes,” plotting yt against πt−1 for Obs 1, Obs 2, E(Obs), and the constant-coefficient case. Right, “Loss: Observable and Constant Modes,” plotting the loss against πt−1 for the same cases.]

NOTE: Obs 1 (2) is observable mode 1 (2); E(Obs) is the unconditional average policy.

and make it more difficult for the policymaker to
stabilize the economy.

Example 2: What Is the Slope of the
Phillips Curve?
This example analyzes the other main source
of uncertainty in the New Keynesian Phillips
curve—the extent to which inflation responds to
fluctuations in the output gap. Once again, we
assume that there are two modes: Now one has a
Phillips curve that is flatter, with γ1 = 0.05, and
the other has a steeper curve, with γ2 = 0.25. Note
that with the transition matrix P as specified
above, this means E共γj 兲 = 0.15. For this example,
we fix the forward-looking expectations parameter at ω = 0.5. Because policymakers once again
directly control the output gap, this example is
a forward-looking counterpart to the classic
Brainard (1967) analysis of uncertainty about the
effectiveness of the control.
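The mechanism behind Brainard-style caution can be seen in a deliberately simplified one-period sketch in Python. It is a stand-in for the dynamic MJLQ problem, not the model solved in the paper: inflation is pushed around through an uncertain slope multiplying the output gap, and the loss-minimizing gap response to a given inflation disturbance is attenuated relative to the certainty-equivalent response.

    # Minimal static illustration of Brainard-style attenuation (not the paper's MJLQ model).
    # An inflation disturbance e is offset through pi = gamma * y + e, with gamma uncertain across modes.
    p1, p2 = 0.5, 0.5
    g1, g2 = 0.05, 0.25                      # flat and steep Phillips-curve slopes (modes 1 and 2)
    g_mean = p1 * g1 + p2 * g2               # 0.15, matching E(gamma_j) in the text
    g_second_moment = p1 * g1**2 + p2 * g2**2
    e = 1.0                                   # inflation disturbance to be offset

    y_certainty = -e / g_mean                 # certainty-equivalent (constant-coefficient) response
    y_brainard = -g_mean * e / g_second_moment  # minimizes E[(gamma*y + e)^2]
    print(y_certainty, y_brainard)            # about -6.67 vs about -4.62: uncertainty attenuates policy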
In Figure 4 we illustrate the effects of uncertainty on policy and losses. As in the previous
example, the left panel plots the two mode-dependent optimal policy functions for the MJLQ
model with observable modes. Here, we see that
the MJLQ optimal policies in both modes are less
aggressive than the constant-coefficient case. Thus,
our results here are in accord with Brainard’s—
uncertainty about the slope of the Phillips curve
leads to more cautious policy.
The right panel of Figure 4 plots the losses
associated with the optimal policies in the different cases. When the Phillips curve is steeper,
inflation responds more to the output gap, making
inflation easier to control. Thus, overall losses
are lower in mode 2, even with less-aggressive
policies. However, once again uncertainty about
this key parameter can have significant effects on
losses for high inflation levels. This is evident from
comparing the constant-coefficient and average
observable curves, where we see that the loss
nearly doubles at the edges of the plot.
Now we again keep the same specification,
but make the more realistic assumption that the
current mode is not observed. The top two panels
of Figure 5 show losses under NL and BOP as
functions of p1t. The bottom two panels of the
figure show the differences between losses under
NL, AOP, and BOP. We see in Figure 5 that the
value function is once again slightly convex in p1t,
so learning is not beneficial here. Consequently,
we see in the bottom-left panel of Figure 5 that
AOP gives higher losses than NL. Thus, once
[Figure 5: Losses and Differences in Losses from NL, AOP, and BOP. Top panels: Loss: NL and Loss: BOP, plotting loss against p1t for πt−1 = 0, −2, 3. Bottom panels: Loss Differences: AOP−NL and Loss Differences: BOP−AOP, plotted against p1t.]

again, the additional volatility outweighs the
improved inference and makes learning detrimental in this example. Experimentation is once
again beneficial, as BOP gives lower losses than
AOP. And, while the effects of experimentation
are an order of magnitude smaller than the effects
of learning, the gains from recognizing the endogeneity of information are nonnegligible here.
Thus, for uncertainty about the slope of the
Phillips curve, policymakers may have an incentive to experiment—that is, to take actions to
mitigate future uncertainty.
Figure 6 shows the corresponding policy
functions and their differences. The top two
panels plot the policy functions under AOP and
BOP as a function of inflation. The AOP policy
is linear in πt−1, and clearly the BOP policy is
nearly so, although some differences are evident
at the edge of the plot. The bottom-left panel plots
the BOP policy as a function of p1t , showing that
the policy function is relatively flat in this dimension. The bottom-right panel plots the difference
between the AOP and BOP policy functions,
which shows that here the experimentation motive
leads toward less-aggressive policy. This is counter
to an example in Svensson and Williams (2007b),
where we show that in a backward-looking model
experimentation may lead to more-aggressive
policy. There, policy makes outcomes more dispersed in order to sharpen inference over the
modes. However, here, because learning is detrimental, the experimentation component of policy
seeks to slow the effects of learning by making
outcomes less dispersed. This serves to illustrate
that the experimentation component of policy
need not be associated with wild or aggressive
policy action, but rather it optimally takes into
account how information influences the targets
of policy.
[Figure 6: Optimal Policies and Their Differences Under AOP and BOP. Top panels: Policy: AOP and Policy: BOP, plotting yt against πt−1 for p1t = 0.92, 0.5, 0.08. Bottom-left panel: Policy: BOP, plotting yt against p1t for πt−1 = 0, −2, 3. Bottom-right panel: Policy Differences: BOP−AOP.]

CONCLUSION
In this paper, we have presented a relatively
general framework for analyzing model uncertainty and the interactions between learning and
optimization. Although this is a classic issue,
very little to date has been done for systems with
forward-looking variables, which are essential
elements of modern models for policy analysis.
Our specification is general enough to cover many
practical cases of interest, yet remains relatively
tractable in implementation. This is definitely
true for cases when decisionmakers do not learn
from the data they observe (our case of no learning,
NL) or when they do learn but do not account for
learning in optimization (our case of adaptive
optimal policy, AOP). In both of these cases, we
have developed efficient algorithms for solving
for the optimal policy, which can handle relatively large models with multiple modes and
many state variables. However, in the case of the
Bayesian optimal policy (BOP), where the experimentation motive is taken into account, we
must solve more-complex numerical dynamic
programming problems. Thus, to fully examine
optimal experimentation, we are haunted by the
curse of dimensionality, forcing us to study relatively small and simple models.
Thus, an issue of much practical importance
is the size of the experimentation component of
policy and the losses entailed by abstracting from
it. Although our results in this paper are far from
comprehensive, they suggest that in practical
settings the experimentation motive may not be
a concern. The above and similar examples that
we have considered indicate that the benefits of
learning (moving from NL to AOP) may be substantial, whereas the benefits from experimentation (moving from AOP to BOP) are modest or
even insignificant. If this preliminary finding
stands up to scrutiny, experimentation in economic policy in general and monetary policy in
particular may not be very beneficial, in which
case there is little need to face the difficult ethical
and other issues involved in conscious experimentation in economic policy. Furthermore, the
AOP is much easier to compute and implement
than the BOP. For this to be a truly robust implication, however, more simulations and cases need to be
examined.

REFERENCES
Beck, Günter W. and Wieland, Volker. “Learning and
Control in a Changing Economic Environment.”
Journal of Economic Dynamics and Control, August
2002, 25(9-10), pp. 1359-77.
Blake, Andrew P. and Zampolli, Fabrizio. “Time
Consistent Policy in Markov Switching Models with
Rational Expectations.” Working Paper No. 298,
Bank of England, 2006.
Brainard, William C. “Uncertainty and the
Effectiveness of Policy.” American Economic
Review, May 1967, 57(2), pp. 411-25.
Cogley, Timothy; Colacito, Riccardo and Sargent,
Thomas J. “The Benefits from U.S. Monetary
Policy Experimentation in the Days of Samuelson
and Solow and Lucas.” Journal of Money, Credit,
and Banking, February 2007, 39(2), pp. 67-99.
Costa, Oswaldo L.V.; Fragoso, Marcelo D. and
Marques, Ricardo P. Discrete-Time Markov Jump
Linear Systems. London: Springer, 2005.
do Val, João B.R. and Başar, Tamer. “Receding
Horizon Control of Jump Linear Systems and a
Macroeconomic Policy Problem.” Journal of
Economic Dynamics and Control, August 1999,
23(8), pp. 1099-131.
Ellison, Martin. “The Learning Cost of Interest Rate
Reversals.” Journal of Monetary Economics,
November 2006, 53(8), pp. 1895-907.
Ellison, Martin and Valla, Natacha. “Learning,
Uncertainty and Central Bank Activism in an
Economy with Strategic Interactions.” Journal of
Monetary Economics, August 2001, 48(1),
pp. 153-71.

Evans, George and Honkapohja, Seppo. Learning and
Expectations in Macroeconomics. Princeton, NJ:
Princeton University Press, 2001.
Kiefer, Nicholas M. “A Value Function Arising in the
Economics of Information.” Journal of Economic
Dynamics and Control, April 1989, 13(2), pp. 201-23.
Lindé, Jesper. “Estimating New-Keynesian Phillips
Curves: A Full Information Maximum Likelihood
Approach.” Journal of Monetary Economics,
September 2005, 52(6), pp. 1135-49.
Marcet, Albert and Marimon, Ramon. “Recursive
Contracts.” Working paper, Universitat Pompeu
Fabra, Department of Economics and Business,
1998; www.econ.upf.edu.
Poole, William. “A Policymaker Confronts
Uncertainty.” Federal Reserve Bank of St. Louis
Review, September 1998, 80(5), pp. 3-8.
Rudebusch, Glenn D. and Svensson, Lars E.O.
“Policy Rules for Inflation Targeting,” in John B.
Taylor, ed., Monetary Policy Rules. Chicago:
University of Chicago Press, 1999.
Söderström, Ulf. “Monetary Policy with Uncertain
Parameters.” Scandinavian Journal of Economics,
2002, 104(1), pp. 125-45.
Svensson, Lars E.O. “Optimization under Commitment
and Discretion, the Recursive Saddlepoint Method,
and Targeting Rules and Instrument Rules.” Lecture
notes, Princeton University, 2007;
www.princeton.edu/svensson.
Svensson, Lars E.O. and Williams, Noah. “Monetary
Policy with Model Uncertainty: Distribution Forecast
Targeting.” Working paper, Princeton University,
May 2007a; www.princeton.edu/svensson/.
Svensson, Lars E.O. and Williams, Noah. “Bayesian
and Adaptive Optimal Policy Under Model
Uncertainty.” NBER Working Paper No. 13414,
National Bureau of Economic Research, 2007b.
Svensson, Lars E.O. and Williams, Noah. “Optimal
Monetary Policy in DSGE Models: A Markov Jump-Linear-Quadratic Approach.” NBER Working Paper
No. W13892, National Bureau of Economic Research,
2008.
Tesfaselassie, Mewael F.; Schaling, Eric and Eijffinger,
Sylvester C.W. “Learning about the Term Structure
and Optimal Rules for Inflation Targeting.” CEPR
Discussion Paper No. 5896, Centre for Economic
Policy Research, 2006.
Wieland, Volker. “Learning by Doing and the Value
of Optimal Experimentation.” Journal of Economic
Dynamics and Control, March 2000, 24(4),
pp. 501-34.
Wieland, Volker. “Monetary Policy and Uncertainty
about the Natural Unemployment Rate: Brainard-Style Conservatism versus Experimental Activism.”
Advances in Macroeconomics, March 2006, 6(1),
pp. 1-34.
Woodford, Michael. Interest and Prices: Foundations
of a Theory of Monetary Policy. Princeton, NJ:
Princeton University Press, 2003.
Zampolli, Fabrizio. “Optimal Monetary Policy in a
Regime-Switching Economy: The Response to
Abrupt Shifts in Exchange-Rate Dynamics.”
Working Paper No. 297, Bank of England, 2006.

Commentary
Timothy W. Cogley

William Poole has made a number
of fundamental contributions to
the theory and practice of monetary policy during his long and
productive career. Among other things, Poole has
long emphasized the importance of uncertainty
in shaping monetary policy. Uncertainty takes
many forms. The central bank must act in anticipation of future conditions, which are currently
unknown. Because economists have not formed
a consensus about the best way to model the
monetary transmission mechanism, policymakers
must also contemplate alternative theories with
distinctive operating characteristics. Finally, even
economists who agree on a modeling strategy
sometimes disagree about the values of key
parameters. Thus, central bankers must also
confront parameter uncertainty within macroeconomic models.
Addressing all these sources of uncertainty
is a tall order, but economists have made considerable progress. Lars Svensson and Noah Williams
are in the vanguard. In a series of important
papers, they adapt and extend Markov jump-linear-quadratic (MJLQ) control algorithms so
that they are suitable for application to monetary
policy.1 Among other things, they extend MJLQ
algorithms to handle forward-looking models
and show how to design optimal policies under
commitment. Their contribution to this volume
(Svensson and Williams, 2008) provides a concise
technical introduction to their work and also
describes a pair of thoughtful and well-designed
examples that illustrate how uncertainty about
the monetary transmission mechanism influences
optimal policy. One lesson that emerges from their
examples is that the benefits of learning are often
substantial but that the gains from deliberate
experimentation are slight. In their parlance, an
“adaptive optimal policy” is almost as good as
the fully optimal Bayesian policy.

1
See Svensson and Williams (2007a,b and 2008).

ATTITUDES ABOUT POLICY
EXPERIMENTATION
My comment focuses on the role of experimentation. A natural way to address parameter
and/or model uncertainty is to cast an optimal
policy problem as a Bayesian decision problem.
The decisionmaker’s posterior distribution over
unknown parameters and/or model probabilities
becomes part of the state vector, and Bayes’s law
becomes part of the transition equation. Because
Bayes’s law is nonlinear, this breaks certainty
equivalence,2 making the decision rule nonlinear.
A Bellman equation instructs the decisionmaker
to vary the policy instrument in order to generate
information about unknown parameters and
model probabilities. Hence, policymakers have an
2

Certainty equivalence would hold if the central bank’s objective
function were quadratic and the transition equation were linear.
The presence of Bayes’s law as a component of the transition equation makes it nonlinear and hence breaks certainty equivalence.

Timothy W. Cogley is a professor of economics at the University of California, Davis.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 295-300.
incentive to experiment to tighten that posterior
in the future. Although experimentation causes
near-term outcomes to deteriorate, it speeds learning and improves outcomes in the longer run.
Whether the decisionmaker should experiment
a little or a lot is unclear, but it is clear that a
Bayesian policy should include some deliberate
experimentation.
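A minimal sketch of how Bayes's law enters the transition equation, assuming two competing models with Gaussian one-step-ahead forecast errors (the forecasts, standard deviation, and numbers below are placeholders, not the models used later in this comment): the update is nonlinear in the current model probability, which is why appending it to the state vector breaks certainty equivalence.

    from math import exp, pi, sqrt

    def normal_pdf(x, mu, sigma):
        return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

    def bayes_update(alpha, y, forecast_1, forecast_2, sigma=1.0):
        """Posterior probability on model 1 after observing y.
        The mapping is nonlinear in alpha, so adding alpha to the state breaks certainty equivalence."""
        l1 = normal_pdf(y, forecast_1, sigma)   # likelihood of y under model 1
        l2 = normal_pdf(y, forecast_2, sigma)   # likelihood of y under model 2
        return alpha * l1 / (alpha * l1 + (1.0 - alpha) * l2)

    # Placeholder numbers: the two models predict different means for the same observable.
    print(bayes_update(alpha=0.5, y=0.8, forecast_1=1.0, forecast_2=0.0))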
Yet there is much aversion to deliberate
experimentation among macroeconomists and
policymakers. For instance, Robert Lucas (1981,
p. 288) writes,
Social experiments on the grand scale may be
instructive and admirable, but they are best
admired at a distance. The idea...must be to
gain some confidence that the component parts
of the program are in some sense reliable prior
to running it at the expense of our neighbors.

Alan Blinder (1998, p. 11) concurs, asserting
that
while there are some fairly sophisticated techniques for dealing with parameter uncertainty
in optimal control models with learning, those
methods have not attracted the attention of...
policymakers. There is a good reason for this
inattention, I think: You don’t conduct policy
experiments on a real economy solely to
sharpen your econometric estimates.

One way to make sense of these conflicting
attitudes is to invoke Milton Friedman’s precept
that the best should not be an enemy of the good.
According to Svensson and Williams, a good baseline policy involves learning but not deliberate
experimentation. In principle, optimal experiments can improve on this baseline policy, but
optimal experiments are hard to design because
the policymaker’s Bellman equation is difficult
to solve, the chief obstacle being the curse of
dimensionality. Because Bellman equations for
policy-relevant models are hard to solve, actual
policy experiments are unlikely to be optimal.
And although optimal experiments are guaranteed
to be no worse than the “learn but don’t experiment” benchmark, suboptimal experiments are
not. Indeed, they might be much worse. Perhaps
this is what Lucas had in mind when he deprecated “grand” policy experiments.3
Svensson and Williams have made substantial progress improving algorithms for solving
Bayesian optimal policy problems. Without disparaging this contribution, my sense is that the
curse of dimensionality will continue to be a significant barrier in practice. In view of this, their
finding that the maximum benefit of experimentation is slight takes on greater importance, for it
strengthens the case in favor of adaptive optimal
policies. Their findings are example specific, but
they are consistent with other examples in the
literature. More examples would help clinch the
argument.

ANOTHER EXAMPLE
Cogley, Colacito, and Sargent (2007; CCS)
examine a central bank’s incentive to experiment
in the context of two models of the Phillips curve.
One model follows Samuelson and Solow (1960)
and assumes an exploitable inflation-unemployment tradeoff. The other is inspired by Lucas
(1972 and 1973) and Sargent (1973) and represents a rational expectations version of the natural rate hypothesis. Based on data through the
mid-1960s, CCS estimate the following two
specifications:
Samuelson and Solow:

Ut = .0023 + .7971Ut−1 − .2761πt + .0054η1,t
πt = vt−1 + .0055η3,t

Lucas and Sargent:

Ut = .0007 + .8468Ut−1 − .2489(πt − vt−1) + .0055η2,t
πt = vt−1 + .0055η4,t

The variable Ut represents the unemployment gap
(i.e., the difference between actual unemployment
and the natural rate), πt is inflation, vt−1 is programmed or expected inflation for period t conditioned on t−1 information, and ηi,t, i = 1,…,4, are standard normal innovations.

3
One of the initial objectives of Cogley, Colacito, and Sargent (2007)
was to assess whether the Great Inflation could be interpreted as
an optimal experiment. We found that it could not. At least in our
model, optimal experiments did not generate a decade-long surge
in inflation. On the contrary, they generated small, cyclically
opportunistic perturbations of inflation relative to an adaptive,
non-experimental policy. Whether the Great Inflation was initiated
by a suboptimal policy experiment remains an open question.

[Figure 1: Two Decision Rules. Six panels, one for each value of the prior α (α ≈ 0, 0.2, 0.4, 0.6, 0.8, α ≈ 1), each plotting programmed inflation against lagged unemployment.]
CCS assume that one of these specifications
is true but that the central bank does not know
which one. As in Svensson and Williams (2008),
the central bank formulates policy by solving
Bayesian and adaptive optimal control problems.
The key unknown parameter is the posterior
probability, α, on the Samuelson and Solow
model. This probability is updated every period
in accordance with Bayes’s law. The central bank
minimizes a discounted quadratic loss function
subject to the “natural” transition equations for
F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

–0.01

0

0.01

0.02

0.03

Lagged Unemployment

the two models and also a transition equation for
α. The state vector consists of lagged unemployment and the posterior model probability, α, and
the control variable is programmed inflation.
For the adaptive policy, the central bank
updates α every period, but then treats the current
estimate as if it would remain constant forever.
Thus, for the adaptive control problem, the α transition equation is

αt+j = αt ∀ j ≥ 0.

Because the other transition equations are also linear and the loss function is quadratic, it follows that certainty equivalence holds and that the policy rule is linear.

[Figure 2: Two Hard-to-Distinguish Value Functions. The value function is plotted against unemployment (U) and the prior on the Samuelson and Solow model (α).]

The thin gray lines in Figure 1
illustrate how programmed inflation is set as a
function of α and lagged unemployment. Each
panel refers to a different value for α, model uncertainty being most pronounced for α ≈ 0.4. Lagged
unemployment is shown on the x-axis in each
panel, and programmed inflation is on the y-axis.
Except when α is close to zero (the central bank
puts high probability on the Lucas and Sargent
model), programmed inflation is countercyclical.
For the Bayesian optimal policy, the central
bank recognizes that actions taken today influence
future beliefs about the two models. Hence the
α -transition equation is governed by Bayes’s law,

αt = B (αt −1 , st ),

where st represents the “natural” state variables
for the two models. The thick blue lines in
Figure 1 depict the Bayesian decision rule. For
the most part, they differ only slightly from the
adaptive optimal policy. The chief difference is
that the Bayesian policy is cyclically opportunistic when there is a lot of model uncertainty. When
α ≈ 0.4, the Bayesian policy calls for higher
(lower) programmed inflation relative to the
adaptive optimal policy when unemployment is
high (low). In other words, a recession is the best
time to experiment with Keynesian stimulus
and a boom is the best time to experiment with
disinflation.
Because the two policy functions are so alike,
it is not surprising that the benefits of deliberate
experimentation are small. Figure 2 portrays the
value functions associated with the adaptive and
Bayesian policy rules. Because the adaptive policy
is not optimal, it follows that VB(s,α) ≥ VA(s,α),
with the discrepancy measuring the benefits of
deliberate experimentation. However, the differences are so slight that they cannot be detected
in the figure. Thus, the results of CCS agree with
those of Svensson and Williams.4
4

Other aspects of monetary policy experimentation are examined
by Wieland (2000a,b) and Beck and Wieland (2002).

WHY THE BENEFITS OF
EXPERIMENTATION ARE SLIGHT
Deliberate experiments are substitutes for
natural experiments. Hence, the incidence of
natural experiments arising from exogenous
shocks influences the value of intentional experiments. In the CCS example, one reason why the
adaptive policy well approximates the Bayesian
policy is that enough natural experimentation
occurs for the central bank eventually to learn the
true model under the adaptive policy.5 Deliberate
experimentation would speed learning, but not
alter the limit point. In other models, such as Kasa
(1999), there isn’t enough natural experimentation
to learn the truth in the absence of intentional
experimentation. In those environments, deliberate experimentation would alter not only the
transition path but also the limit point of the learning process. Presumably that would enhance the
value of deliberate experimentation, for in that
case the central bank would collect dividends on
experimentation forever.
Another reason why the benefits of experimentation are small is that Bayesian updating makes
posterior model probabilities a martingale (Doob,
1948), implying Et(αt+j) = αt. Thus, the adaptive
transition equation well approximates the center
of the Bayesian predictive density for αt . The
adaptive model poorly approximates its tails,
however, because it disregards uncertainty about
future model probabilities. Nevertheless, when
precautionary motives are weak, decisions depend
mostly on the mean, and errors in approximating
the tails don’t matter much. In these examples,
the central bank’s loss function is quadratic, so
precautionary motives do not enter through preferences. Precautionary behavior comes in only
because of nonlinearity in the transition equation.
Accordingly, motives for experimentation might
be strengthened by altering the central bank’s
objective function.
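The martingale property is easy to verify in a small discrete example (illustrative numbers only): averaging the updated posterior over the policymaker's own predictive distribution for the next observation returns exactly the prior.

    # Check E_t[alpha_{t+1}] = alpha_t (Doob): illustrative discrete example.
    alpha = 0.37                      # prior probability on model 1
    q1 = [0.7, 0.2, 0.1]              # model 1's predictive distribution over three outcomes
    q2 = [0.2, 0.3, 0.5]              # model 2's predictive distribution

    predictive = [alpha * a + (1 - alpha) * b for a, b in zip(q1, q2)]
    posterior = [alpha * a / m for a, m in zip(q1, predictive)]   # Bayes update after each outcome

    expected_posterior = sum(m * p for m, p in zip(predictive, posterior))
    print(expected_posterior)   # equals the prior 0.37 (up to floating-point error)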
5
El-Gamal and Sundaram (1993) highlight the importance of natural experiments.

In principle, one way to reinforce precautionary motives is by introducing a concern for robustness. Building on work by Hansen and Sargent (2007), Cogley et al. (2008) replace the expectations operators that appear in a Bayesian value
function with a pair of risk-sensitivity operators.
One risk-sensitivity operator guards against misspecification of each of the submodels, and the
other guards against misspecification of the central
bank’s prior. The two risk-sensitivity operators
can be interpreted as ways of seeking robustness
with respect to forward- and backward-looking
features of the model, respectively. Applying
these operators to the Phillips curve models examined in CCS, Cogley et al. find that the forwardlooking risk-sensitivity operator strengthens
experimental motives, whereas the backwardlooking operator mutes them. The combined effect
is ambiguous and depends on the relative weight
placed on the two operators.
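For reference, the standard Hansen-Sargent risk-sensitivity operator applied to a continuation value V has the generic form below (a textbook statement of the operator, not necessarily the exact specification used in Cogley et al., 2008); a smaller θ signals a stronger concern for misspecification, and θ → ∞ recovers the ordinary conditional expectation:

    T(V)(s) = −θ log E[ exp( −V(s′)/θ ) | s ].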

CONCLUSION
Designing an optimal policy is substantially
more complex when experimental motives are
active. That easy-to-compute, nonexperimental
policies well approximate hard-to-compute, fully optimal policies is an important result. If this conclusion holds up to further scrutiny, the analysis
of monetary policy under model uncertainty will
be greatly simplified. In this instance, it seems
that “the good” is an excellent substitute for “the
best.”

REFERENCES
Beck, Günter and Wieland, Volker. “Learning and
Control in a Changing Economic Environment.” Journal of
Economic Dynamics and Control, November 2002,
26(9/10), pp. 1359-77.
Blinder, Alan S. Central Banking in Theory and
Practice. Cambridge, MA: MIT Press, 1998.
Cogley, Timothy; Colacito, Riccardo and Sargent,
Thomas J. “Benefits from U.S. Monetary Policy
Experimentation in the Days of Samuelson and
Solow and Lucas.” Journal of Money, Credit, and
Banking, February 2007(Suppl.), 39, pp. 67-99.

J U LY / A U G U S T

2008

299

Cogley

Cogley, Timothy; Colacito, Riccardo; Hansen, Lars P.
and Sargent, Thomas J. “Robustness and U.S.
Monetary Policy Experimentation.” Journal of
Money, Credit, and Banking, 2008 (forthcoming).
Doob, Joseph L. “Application of the Theory of
Martingales.” Colloques Internationaux du Centre
National de la Recherché Scientifique, 1948, 36,
pp. 23-27.
El-Gamal, Mahmoud A. and Sundaram, Rangarajan K.
“Bayesian Economists...Bayesian Agents: An
Alternative Approach to Optimal Learning.”
Journal of Economic Dynamics and Control, May
1993, 17(3), pp. 355-83.
Hansen, Lars P. and Sargent, Thomas J. “Robust
Estimation and Control without Commitment.”
Journal of Economic Theory, September 2007,
136(1), pp. 1-27.
Kasa, Kenneth. “Will the Fed Ever Learn?” Journal of
Macroeconomics, Spring 1999, 21(2), pp. 279-92.
Lucas, Robert E. Jr. “Expectations and the Neutrality
of Money.” Journal of Economic Theory, April 1972,
4(2), pp. 103-24.
Lucas, Robert E. Jr. “Some International Evidence on
Output-Inflation Trade-Offs.” American Economic
Review, June 1973, 63(3), pp. 326-34.
Lucas, Robert E. Jr. “Methods and Problems in
Business Cycle Theory,” in Robert E. Lucas Jr., ed.,
Studies in Business-Cycle Theory. Cambridge, MA:
MIT Press, 1981.

Samuelson, Paul A. and Solow, Robert M. “Analytical
Aspects of Anti-Inflation Policy.” American
Economic Review, May 1960, 50(2), pp. 177-84.
Sargent, Thomas J. “Rational Expectations, the Real
Rate of Interest, and the Natural Rate of
Unemployment.” Brookings Papers on Economic
Activity, 1973, Issue 2, pp. 429-72.
Svensson, Lars E.O. and Williams, Noah. “Monetary
Policy with Model Uncertainty: Distribution
Forecast Targeting.” Working paper, Princeton
University, 2007a; www.princeton.edu/~svensson.
Svensson, Lars E.O. and Williams, Noah. “Bayesian
and Adaptive Optimal Policy Under Model
Uncertainty.” Working paper, Princeton University,
2007b; www.princeton.edu/~svensson.
Svensson, Lars E.O. and Williams, Noah. “Optimal
Monetary Policy Under Uncertainty: A Markov Jump-Linear-Quadratic Approach.” Federal Reserve Bank
of St. Louis Review, July/August 2008, 90(4),
pp. 275-93.
Wieland, Volker. “Monetary Policy, Parameter
Uncertainty, and Optimal Learning.” Journal of
Monetary Economics, August 2000a, 46(1),
pp. 199-228.
Wieland, Volker. “Learning by Doing and the Value
of Optimal Experimentation.” Journal of Economic
Dynamics and Control, April 2000b, 24(4),
pp. 501-34.

Commentary
Andrew T. Levin

Over the past decade or so,
researchers at academic institutions
and central banks have been active
in specifying and estimating
dynamic stochastic general equilibrium (DSGE)
models that can be used to analyze monetary
policy.1 Although the first-generation models
were relatively small and stylized, more recent
models typically embed a much more elaborate
dynamic structure aimed at capturing key aspects
of the aggregate data.2 Indeed, several central
banks now use DSGE models in the forecasting
process and in formulating and communicating
policy strategies.3 In following that approach,
however, it is crucial to investigate the sensitivity
of the optimal policy prescriptions of a given
model—that is, comparing the policy implications of alternative specifications of the behavioral mechanisms or exogenous shocks—and to
identify policy strategies that provide robust performance under model uncertainty.
The authors’ paper (Svensson and Williams,
2008) makes an important contribution in analyzing Bayesian optimal monetary policy in an envi-
1

Pioneering early studies include King and Wolman (1996, 1999),
Goodfriend and King (1997), Rotemberg and Woodford (1997, 1999),
Clarida, Galí, and Gertler (1999), and McCallum and Nelson (1999).

2

See Christiano, Eichenbaum, and Evans (2005), Smets and Wouters
(2003), Levin et al. (2006), and Schmitt-Gröhé and Uribe (2006).

3

Examples include the Bank of Canada, the Bank of England, the
European Central Bank, and the Sveriges Riksbank. Recent DSGE
model development at the Federal Reserve Board is described in
Erceg, Guerrieri, and Gust (2006) and Edge, Kiley, and Laforte
(2007).

ronment in which the central bank faces a set of
competing models and uses incoming information
to update its probability assessments regarding
which model is the best representation of the
actual economy. Moreover, because private sector
expectations play a key role in determining economic outcomes, the optimal policy not only
characterizes the central bank’s current actions
but also involves a complete set of commitments
regarding which future actions will be taken under
every possible contingency. Given this approach,
the analysis is made tractable—and very elegant—
by the use of Markov jump-linear-quadratic
methods.
In this environment, the Bayesian optimal
policy is influenced by an “experimentation”
motive, because the central bank recognizes that
its current policy actions can influence the flow
of incoming information and thereby affect the
degree of model uncertainty in subsequent periods. In effect, experimentation is a form of public
investment that incurs a short-run cost (in terms
of greater macro volatility) in exchange for the
medium-run benefit of a more precise estimate
of the structure of the economy that will thereby
facilitate better stabilization policies. Thus, the
paper also makes a valuable contribution by comparing the Bayesian optimal policy with an
“adaptive optimal control” strategy (in which
the central bank updates its probability assessments of the competing models but does not
engage in experimentation) and with the case of

Andrew Levin is a deputy associate director in the Division of Monetary Affairs, Board of Governors of the Federal Reserve System.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 301-305.
“no learning” (in which the central bank never
changes its probability assessments).
Interestingly, this analysis reaches conclusions
regarding the role of experimentation that are
broadly similar to those obtained in earlier studies
such as Wieland (2000, 2006). In particular, the
experimentation motive has relatively modest
effects on the characteristics of the Bayesian
optimal policy, and welfare comparisons indicate
fairly minimal costs of using adaptive optimal
control. Indeed, as John Taylor described in a
recent interview (Leeson, 2007), he arrived at
essentially the same conclusions several decades
ago when he applied Bayesian optimal control
to a small structural macro model: “My Ph.D.
thesis...problem was to find a good policy rule
in a model where one does not know the parameters and therefore had to estimate them and control the dynamic system simultaneously. My main
conclusion...was that in many models, simply
following a rule without special experimentation
features was a good approximation [to the optimal
policy].”
In the remainder of this commentary, I discuss
a few conceptual issues regarding the formulation
of model uncertainty, the characterization of
optimal policy under commitment, and the specification of how the private sector’s information
set differs from that of the central bank.

CHARACTERIZING MODEL
UNCERTAINTY
In analyzing the monetary policy implications
of model uncertainty, it seems reasonable to
assume that there will never be any single “true”
model, because every macro model is merely a
stylized approximation of reality. Moreover, ongoing progress in economic theory and empirical
analysis not only shifts policymakers’ probability
assessments about which existing model is the
best approximation, but it also inevitably generates a winnowing process whereby new modeling mechanisms are developed while obsolete
models are completely discarded. Over the past
few decades, for example, many central banks
have undergone a sequence of transitions from
traditional Phillips curve models (which implied
a positive long-run relationship between output
and inflation) to structural macro models embedding rational expectations—most recently to
DSGE models with formal microeconomic foundations. Furthermore, it seems reasonable to anticipate that this process of model development and
refinement will continue at a similar pace in the
years ahead.
From this perspective, a stationary Markov
process does not seem to be the ideal approach
to represent the sort of model uncertainty that is
relevant for monetary policymaking. In the present analysis, each competing model corresponds
to a specific node or “state” of the Markov process;
hence, model uncertainty is represented by the
policymaker’s assessments of the probability
that each of these nodes is the correct model of
the economy, and the learning process is represented by how these probability assessments are
updated in response to incoming information.
Thus, if the economy switches from one node to
another, this implies that the “true” model of the
economy has suddenly shifted. Such shifts may
well occur, but it seems doubtful that the process
is stationary: that is, the true economy does not
shift back and forth among the members of the
set of competing models.
For example, a recent study of an empirical
DSGE model of the U.S. economy found that two
alternative specifications of the structure of nominal wage contracts—namely, Calvo-style contracts
with random duration versus Taylor-style contracts with fixed duration—have markedly different implications for optimal monetary policy
and welfare (Levin, Onatski, J. Williams, and
N. Williams, 2006). The analytical framework of
this paper can easily be used to characterize the
Bayesian optimal policy for this specification
uncertainty: One node would correspond to the
Calvo-style contract structure, and the other node
would correspond to the Taylor-style contract
structure. But it does not seem plausible to specify
this uncertainty as a stationary Markov process—
after all, that would imply that the economy
occasionally shifts back and forth between Calvo-style contracts and Taylor-style contracts!
Of course, a stationary Markov regime–
switching specification may well be useful for
representing occasional shifts in the state of the
economy, such as stochastic transitions between
low growth and high growth. But in the case of
model uncertainty, it seems reasonable to specify
a diagonal structure for the Markov transition
matrix: that is, the true economy never shifts
between competing models. In that case, the
policymaker has prior beliefs that assign some
positive weight to each of these models; these
priors are then updated in response to incoming
information. Alternatively, one might consider a
triangular Markov transition matrix with very
small off-diagonal elements, representing the
notion that the true structure of the economy
might experience very rare shifts but would never
revert to its original structure.
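To make this concrete, with two competing models the two specifications just described can be written as the transition matrices below (an illustrative parameterization, with ε a very small probability, rows and columns ordered as model 1, model 2, and row i giving the probabilities of next period's model conditional on model i being the true one today):

    P_diagonal = [ 1    0 ;  0  1 ]
    P_triangular = [ 1−ε  ε ;  0  1 ],   0 < ε very small.

Under the diagonal specification the true model never changes, so only the policymaker's beliefs move; under the triangular one the true structure may shift at most once, with very small probability, and never reverts.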

CHARACTERIZING OPTIMAL
POLICY UNDER COMMITMENT
The “timeless perspective” is an appealing
approach to characterizing optimal policy under
commitment in a stationary environment
(Woodford, 2003). This approach is equivalent
to assuming that the government agency established a complete set of state-contingent policy
commitments at some point in the distant past
(that is, time t = –⬁), and that the economy has
converged to its stationary steady state under that
regime by now (t = 0). Moreover, in the general
case in which this steady state is not Pareto optimal, the quadratic approximation of household
welfare depends on the steady-state values of the
Lagrange multipliers of the original policymaking
problem (Benigno and Woodford, 2005).
In contrast, for the reasons described here
previously, an environment of model uncertainty
may be viewed as implying that the economy
has not yet reached any stationary steady state,
and hence that policy should not be characterized
from a timeless perspective. Indeed, in this context it might be more natural to characterize optimal policy from the Ramsey perspective, that is,
assuming that the policymaker is prepared to
establish a complete set of state-contingent com-
mitments starting in the present period (that is,
as of time t = 0), where these commitments would
reflect the anticipation that incoming information in future periods will gradually enable the
policymaker to learn which model correctly represents the economy. Of course, that specification
would raise further computational issues: Under
the Ramsey policy (as opposed to the timeless
perspective), the Lagrange multipliers corresponding to the implementation constraints cannot
be substituted out of the problem but remain as
essential state variables of the linear-quadratic
approximation.

CHARACTERIZING THE PRIVATE
SECTOR’S INFORMATION
Finally, it is worthwhile to consider the
assumptions used in this analysis regarding the
information available to private agents:
1. In the benchmark case of Bayesian optimal
control, the analysis of this paper proceeds
under the assumption that neither private
agents nor the policymaker know which
model is true. Unfortunately, this assumption is somewhat problematic in the context
of DSGE models with explicit microeconomic foundations, because those models
are formulated under the assumption that
each household is aware of its own preferences and that each firm is aware of its own
production technology and the characteristics of consumer demand.
For example, in New Keynesian DSGE
models with monopolistic competition and
staggered price contracts, it is assumed
that each firm sets the price of its product
with full knowledge of its own production
function and the elasticity of demand for
its product. Nevertheless, econometricians
may be unable to make precise distinctions
regarding the extent to which aggregate
price-setting behavior is influenced by
factors such as firm-specific inputs and
quasi-kinked demand; hence, there may
be a strong motive for designing a monetary
policy strategy that is robust to this source
of model uncertainty (Levin, Lopez-Salido,
and Yun, 2007).
Similarly, DSGE models typically
involve a consumption Euler equation
that is derived from a particular specification of household preferences for consumption and leisure—and of course, each
individual household is assumed to have
full knowledge of its own preferences in
making decisions about spending, labor
supply, etc. Nevertheless, the available
data may be insufficient to enable econometricians to resolve uncertainty regarding several competing specifications of
household preferences. Therefore, the
central bank may wish to follow a policy
strategy that is robust to this source of
model uncertainty (Levin et al., 2008).
2. In the case of adaptive optimal control, the
analysis proceeds under the more restrictive assumption that neither private
agents nor the policymaker observe the
current vector of shocks—an assumption
that precludes consideration of most (if
not all) existing DSGE models. In many
such models, for example, shocks to total
factor productivity play a key role as a
source of aggregate volatility in output
and employment. But it is by no means
clear how an individual firm could determine its own production if the firm did
not have contemporaneous knowledge of
its own productivity.
3. The case of “no learning” assumes that
neither private agents nor the policymaker
can recall any of the data that were observed
in previous periods. In many DSGE models,
however, these data do enter explicitly
into agents’ decision rules. For example,
in specifications with habit persistence in
consumption, the household’s current
spending decision partly reflects its spending in previous periods. Similarly, when
investment in physical capital is subject to
adjustment costs, each individual firm’s
decision regarding its current level of
investment depends explicitly on its prior
investment decisions.
Evidently, in analyzing optimal policy under
model uncertainty in the context of DSGE models
with explicit micro foundations, further progress
is needed to distinguish between the information
available to the central bank and the information
that is available to individual households and
firms.

REFERENCES
Benigno, Pierpaolo and Woodford, Michael.
“Inflation Stabilization and Welfare: The Case of a
Distorted Steady State.” Journal of the European
Economics Association, 2005, 3, pp. 1185-236.
Christiano, Lawrence; Eichenbaum, Martin and
Evans, Charles. “Nominal Rigidities and the
Dynamic Effects of a Shock to Monetary Policy.”
Journal of Political Economy, 2005, 113, pp. 1-45.
Clarida, Richard; Galí, Jordi and Gertler, Mark. “The
Science of Monetary Policy: A New Keynesian
Perspective.” Journal of Economic Literature, 1999,
37, pp. 1661-707.
Edge, Rochelle; Kiley, Michael and Laforte, Jean-Phillipe. “Natural Rate Measures in an Estimated
DSGE Model of the U.S. Economy.” Finance and
Economics Discussion Series, No. 2007-08, Board
of Governors of the Federal Reserve System, 2007.
Erceg, Christopher; Guerrieri, Luca and Gust,
Christopher. “SIGMA: A New Open Economy
Model for Policy Analysis.” International Journal
of Central Banking, 2006, 2, pp. 1-50.
Goodfriend, Marvin and King, Robert G. “The New
Neoclassical Synthesis and the Role of Monetary
Policy.” NBER Macroeconomics Annual 1997.
Cambridge, MA: MIT Press, 1997.
King, Robert G. and Wolman, Alexander L. “Inflation
Targeting in a St. Louis Model of the 21st Century,”
Federal Reserve Bank of St. Louis Review,
May/June 1996, 78(3), pp. 83-107.
King, Robert G. and Wolman, Alexander L. “What
Should the Monetary Authority Do When Prices
Are Sticky?” in John B. Taylor, ed., Monetary

Policy Rules. Chicago: University of Chicago Press,
1999, pp. 349-98.
Leeson, Robert. “An Interview with John B. Taylor.”
Unpublished manuscript, Murdoch University,
2007.
Levin, Andrew; Onatski, Alexei; Williams, John C.
and Williams, Noah. “Monetary Policy under
Uncertainty in Micro-Founded Macroeconometric
Models,” in Mark Gertler and Kenneth Rogoff,
eds., NBER Macroeconomics Annual 2005.
Cambridge, MA: MIT Press, 2006.
Levin, Andrew; Lopez-Salido, J. David and Yun,
Tack. “Strategic Complementarities and Optimal
Monetary Policy.” Discussion Paper No. 6423,
Centre for Economic Policy Research, 2007.
Levin, Andrew; Lopez-Salido, J. David; Nelson,
Edward and Yun, Tack. “Macroeconometric
Equivalence, Microeconomic Dissonance, and the
Design of Monetary Policy.” Journal of Monetary
Economics, 2008 (forthcoming).
McCallum, Bennett T. and Nelson, Edward.
“Performance of Operational Policy Rules in an
Estimated Semi-Classical Structural Model,” in
John B. Taylor, ed., Monetary Policy Rules. Chicago:
University of Chicago Press, 1999, pp. 15-45.
Rotemberg, Julio J. and Woodford, Michael. “An
Optimization-Based Econometric Framework for
the Evaluation of Monetary Policy.” NBER
Macroeconomics Annual 1997. Cambridge, MA:
MIT Press, 1997, pp. 297-346.

Schmitt-Gröhé, Stephanie and Uribe, Martin.
“Optimal Fiscal and Monetary Policy in a
Medium-Scale Macroeconomic Model,” in Mark
Gertler and Kenneth Rogoff, eds., NBER
Macroeconomics Annual 2005. Cambridge, MA:
MIT Press, 2006.
Smets, Frank and Wouters, Raf. “An Estimated
Dynamic Stochastic General Equilibrium Model of
the Euro Area.” Journal of the European Economic
Association, 2003, 1, pp. 1123-75.
Svensson, Lars E.O. and Williams, Noah. “Optimal
Monetary Policy Under Uncertainty: A Markov
Jump-Linear-Quadratic Approach.” Federal
Reserve Bank of St. Louis Review, July/August
2008, 90(4), pp. 275-93.
Wieland, Volker. “Monetary Policy, Parameter
Uncertainty, and Optimal Learning.” Journal of
Monetary Economics, 2000, 46, pp. 199-228.
Wieland, Volker. “Monetary Policy and Uncertainty
about the Natural Unemployment Rate: Brainard-Style Conservatism versus Experimental
Activism.” Berkeley Electronic Journal of
Macroeconomics: Advances in Macroeconomics,
2006, 6(1), Article 1.
Woodford, Michael. Interest and Prices: Foundations
of a Theory of Monetary Policy. Princeton, NJ:
Princeton University Press, 2003.

Rotemberg, Julio J. and Woodford, Michael. “Interest
Rate Rules in an Estimated Sticky-Price Model,”
in John B. Taylor, ed., Monetary Policy Rules.
Chicago: University of Chicago Press, 1999.

Economic Projections and Rules of Thumb
for Monetary Policy
Athanasios Orphanides and Volker Wieland
Monetary policy analysts often rely on rules of thumb, such as the Taylor rule, to describe historical
monetary policy decisions and to compare current policy with historical norms. Analysis along these
lines also permits evaluation of episodes where policy may have deviated from a simple rule and
examination of the reasons behind such deviations. One interesting question is whether such rules
of thumb should draw on policymakers’ forecasts of key variables, such as inflation and unemployment, or on observed outcomes. Importantly, deviations of the policy from the prescriptions of a
Taylor rule that relies on outcomes may be the result of systematic responses to information captured
in policymakers’ own projections. This paper investigates this proposition in the context of Federal
Open Market Committee (FOMC) policy decisions over the past 20 years, using publicly available
FOMC projections from the semiannual monetary policy reports to Congress (Humphrey-Hawkins
reports). The results indicate that FOMC decisions can indeed be predominantly explained in terms
of the FOMC’s own projections rather than observed outcomes. Thus, a forecast-based rule of thumb
better characterizes FOMC decisionmaking. This paper also confirms that many of the apparent
deviations of the federal funds rate from an outcome-based Taylor-style rule may be considered
systematic responses to information contained in FOMC projections. (JEL E52)
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 307-24.

William Poole has been a long-time
proponent of rules of thumb for
monetary policy. Nearly four
decades ago, as staff economist at
the Board of Governors of the Federal Reserve
System (BOG), Poole presented a reactive rule
of thumb that he argued could serve as a robust
guide to policy decisions (Poole, 1971). More
recently, as president of the Federal Reserve
Bank of St. Louis and a member of the Federal
Open Market Committee (FOMC), he has highlighted how a simple Taylor rule that systematically responds to economic activity and inflation

can serve as a useful tool for understanding historical monetary policy decisions (Poole, 2007).
In both his recent and earlier work, Poole highlighted the usefulness of rules of thumb in the
context of the complexity of the macroeconomy
and our limited knowledge regarding it. In this
light, a policy adviser cannot offer precise guidance about how the monetary authority should
respond to every conceivable contingency to best
achieve its goals. What a policy adviser can do is
identify useful rules of thumb that can serve as
appropriate guides to policy under most circumstances. To the extent policymakers rely on

Athanasios Orphanides is the Governor of the Central Bank of Cyprus, and Volker Wieland is a professor at the Goethe University Frankfurt,
director at the Center for Financial Studies, and fellow at the Centre for Economic Policy Research. Volker Wieland thanks the Stanford Center
for International Development, where he was a visiting professor while writing this paper. The authors are grateful for excellent research
assistance by Sebastian Schmidt from Goethe University Frankfurt. Helpful comments were provided by Greg Hess, Jim Hamilton, participants
at the St. Louis conference, and the paper’s discussants, Charles Plosser and Patrick Minford.

a simple rule of thumb as an approximate policy
guide, it should be possible to identify this rule
and use it to understand historical policy decisions and to improve future policy.
One of the difficulties in identifying a simple
rule that can serve as a useful description of policy
is that the policy prescriptions relevant for policy
advice at any point in time reflect the information
available to policymakers at that time. To the
extent policy is based on observable macroeconomic variables, a simple rule could be estimated
using real-time historical data. However, to the
extent policymakers view projections of key
macroeconomic variables as more useful summary
descriptions of the current state of the economy,
estimation of a simple rule based on those same
policymaker projections would provide a more
promising avenue. Poole (2007) examines FOMC
policy decisions over the past 20 years using the
simple outcome-based rule proposed by Taylor
(1993). This rule uses the current inflation rate
and output gap as inputs for federal funds rate
decisions. Poole identifies some deviations of
policy from the systematic prescriptions suggested
by the rule that could, however, reflect a systematic response of the FOMC to its own projections.
Our objective in this paper is to investigate
this proposition. To this end we compare estimated policy rules that are based on recent economic outcomes with policy rules based on the
economic projections of the FOMC. We investigate whether the federal funds rate target set by
the FOMC when these projections are made
responds systematically to these projections as
opposed to recent economic data.
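As a rough indication of the exercise, the sketch below estimates outcome-based and forecast-based reaction functions by ordinary least squares; the file name, variable names, and specification are placeholders for illustration only, not the data or regressions reported in this paper.

    # Illustrative only: placeholder data file and variable names, not the paper's dataset.
    import pandas as pd
    import statsmodels.api as sm

    data = pd.read_csv("fomc_projections.csv")   # hypothetical file with one row per projection date

    # Outcome-based rule: funds rate target regressed on recently observed inflation and unemployment.
    X_out = sm.add_constant(data[["inflation_recent", "unemployment_recent"]])
    rule_outcomes = sm.OLS(data["funds_rate_target"], X_out).fit()

    # Forecast-based rule: funds rate target regressed on the FOMC's own projections.
    X_fcast = sm.add_constant(data[["inflation_projection", "unemployment_projection"]])
    rule_forecasts = sm.OLS(data["funds_rate_target"], X_fcast).fit()

    print(rule_outcomes.params, rule_forecasts.params)   # compare the estimated response coefficients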
Our results, which are based on real-time data
and projections over the past 20 years, indicate
that interest rates respond predominantly to
FOMC projections and thus that a forecast-based
rule better characterizes FOMC decisionmaking
during this period. Furthermore, we check to what
extent deviations from an outcome-based Taylor
rule may be better explained by the information
incorporated in FOMC forecasts. Our analysis
suggests that by distinguishing between forecasts
and outcomes one can explain a number of deviations of policy from the simple underlying rule,
though it can also identify episodes where devi-

ations remain. This includes episodes where one
would expect systematic policy to deviate from a
simple rule of thumb, such as the response to
financial turbulence experienced in 1998.
Overall, our analysis suggests that FOMC
projections used in the context of a rule of thumb
are quite informative for understanding historical
monetary policy, whereas similar analysis based
on economic outcomes can often be of much lower
value.

ON RULES OF THUMB FOR
MONETARY POLICY
Simple estimated rules can be useful devices
for understanding historical monetary policy if
central banks conduct policy sufficiently systematically to be captured by such rules. Poole (1971)
suggested that it is reasonable for individual
policymakers to behave in a systematic manner:
Individual policy-makers inevitably use informal rules of thumb in making decisions. Like
everyone else, policy-makers develop certain
standard ways of reacting to standard situations. These standard reactions are not, of
course, unchanging over time, but are adjusted
and developed according to experience and
new theoretical ideas. (p. 151)

Though it did not attract much attention at
the time, the particular rule of thumb proposed
by Poole in 1971 is of interest in that it incorporated both a reaction of the interest rate to real
economic activity (specifically the deviation of
the unemployment rate from the Federal Reserve’s
estimate of the full employment rate at the time),
as well as a nominal variable in a way that would
ensure price stability over the long run. The latter
was not based on the response of the interest rate
to inflation, as is commonly specified today.
Rather, Poole’s rule specified that the money supply should always be contained within bounds
as a robust means of controlling inflation and
suggested adjusting the interest rate to respond
to deviations of unemployment from full employment only when doing so would respect these
bounds. In essence, Poole’s rule of thumb uses
money growth to ensure the maintenance of price
stability and, subject to that, provides countercyclical policy prescriptions. He provided the
following summary description:
The proposed rule assumes that full employment exists when the unemployment rate is
in the 4.0 to 4.4 per cent range. The rule also
assumes that at full employment, a growth rate
of the money stock of 3 to 5 per cent per annum
is consistent with price stability. Therefore,
when unemployment is in the full employment
range, the rule calls for monetary growth at the
3 to 5 per cent rate.
The rule calls for higher monetary growth
when unemployment is higher, and lower
monetary growth when unemployment is
lower. Furthermore, when unemployment is
relatively high the rule calls for a policy of
pushing the Treasury bill rate down provided
monetary growth is maintained in the specified
range; similarly, when unemployment is relatively low the rule calls for a policy of pushing
the Treasury bill rate up provided monetary
growth is in the specified range. Finally, the
rule provides for adjusting the rate of growth
of money according to movements in the
Treasury bill rate in the recent past. (p. 183)

Poole also explicitly recognized a scope for deviations from his suggested rule of thumb, even if policymakers had decided to adopt it in principle. What was more important in Poole's view was transparency in explaining the rationale for such deviations:

It is not proposed that this rule of thumb or guideline be followed if there is good reason for departure. But departures should be justified by evidence and not be based on vague intuitive feelings of what is needed since the rule was carefully designed from the theoretical and empirical analysis...and from a careful review of post-accord monetary policy. (p. 183)

As to whether rules could usefully rely on economic projections, Poole (1971) argued that an important factor would be the accuracy of the forecasts:

Given the accuracy of forecasts at the current state of knowledge, it seems likely that for some time to come forecasts will be used primarily to supplement a policy-decisionmaking process that consists largely of reactions to current developments. Only gradually will policy-makers place greater reliance on formal forecasting models. (pp. 152-53)

In 2007, Poole used a version of the classic Taylor (1993) rule to describe Federal Reserve behavior over the past 20 years.1 As is well known, this rule posits that the systematic component of monetary policy may be described as a notional target for the federal funds rate, f̂:

(1)    f̂ = r* + π + 0.5(π − π*) + 0.5y ,

where π and y reflect contemporaneous readings of inflation and a measure of the output gap, respectively. Following Taylor, Poole assumed a constant inflation target, π*, and a constant equilibrium real interest rate, r*. Poole's rendition of the Taylor rule is reproduced in Figure 1.

As in his work 36 years earlier, Poole (2007) explained potential sources of deviation from the rule and also the potential use of forecasts:

The FOMC, and certainly John Taylor himself, view the Taylor rule as a general guideline. Departures from the rule make good sense when information beyond that incorporated in the rule is available. For example, policy is forward looking, which means that from time to time the economic outlook changes sufficiently that it makes sense for the FOMC to set a funds rate either above or below the level called for in the Taylor rule, which relies on observed recent data rather than on economic forecasts of future data. Other circumstances—an obvious example is September 11, 2001—call for a policy response. These responses can be and generally are understood by the market. Thus, such responses can be every bit as systematic as the responses specified in the Taylor rule. (p. 6)

This last remark suggests that a better rule of thumb for understanding the behavior of the Federal Reserve over the past 20 years could be a version of the Taylor rule that is explicitly based on the FOMC's own projections. This is the subject of the investigation that follows.

1. Taylor (1993) showed that the rule could describe Federal Reserve behavior from 1987 to 1992 quite well. Interest rate rules had also acquired a normative dimension at that time because of their success in a large-scale model comparison project reported in Bryant, Hooper, and Mann (1993) (see also Henderson and McKibbin, 1993).


Figure 1
Poole's (2007) Version of the Taylor Rule
[Figure omitted: the federal funds rate and the Taylor rule prescription (percent, 0 to 12) plotted against FOMC meeting dates, 1986-2006, with series "BOG Output Gap: CPI, 1987:09–2000:10," "Federal Funds Rate," and "CBO Output Gap: CPI, 2000:11–2006:06."]
NOTE: The solid blue line shows the Taylor rule constructed using the BOG real-time output-gap estimate. The blue dashed line extends the rule using the output-gap estimate of the CBO for those years for which the BOG estimate is not yet public information.

FOMC ECONOMIC PROJECTIONS AND REAL-TIME OUTCOMES

We begin by describing how to construct constant-horizon forecasts that can be used in estimating a policy rule from publicly available projections. The semiannual monetary policy reports to Congress (the Humphrey-Hawkins reports) have presented information on the range and central tendency of annual forecasts of FOMC members since 1979.2

Following Poole's (2007) analysis, we create a dataset of FOMC projections and corresponding real-time data on observed outcomes that focuses our attention on the past 20 years.3

Regarding projections, we take the midpoints of the central tendencies reported in each of the reports, starting with the February 1988 report and ending with the July 2007 report, and use these as proxies for the modal forecasts of FOMC expectations. Our objective using these data is to examine whether deviations from an outcome-based Taylor rule may be explained by the additional information contained in policymakers' forecasts. These include inflation, the rate of unemployment, and output growth. Because we could not make even approximate inferences of the FOMC forecasts of the output gap from these variables, although we do have the FOMC's unemployment projections, we focus on a version of the Taylor rule that substitutes the unemployment rate for the output gap. Consequently, in our dataset we focus on data and forecasts regarding inflation and unemployment.

2. A month after this paper was first presented, on November 14, 2007, the Federal Reserve announced that going forward the FOMC would compile and release these economic projections four times a year instead of just two times a year, which was the practice until then.

3. In earlier work, Lindsey, Orphanides, and Wieland (1997), we examined the implications of FOMC projections for understanding policy in the sample prior to 1988 and presented some comparisons with the 1988-96 period.

Figure 2
The Timing of Forecasts in Humphrey-Hawkins Reports: Unemployment Rates
[Figure omitted: a quarterly time line (Q4 through Q1 nine quarters later) marking the horizon of the February report forecast u^{HH}_{t+3|t} and of the July report forecasts u^{HH}_{t+1|t}, u^{HH}_{t+3|t}, and u^{HH}_{t+5|t}.]

Some of the particular measures have been
redefined over the years. For inflation, the implicit
deflator of the gross national product was used
through July 1988, thereafter replaced by the consumer price index (CPI). In February 2000, the
CPI was replaced by the personal consumption
expenditures (PCE) deflator measure of inflation,
and from July 2004 onward the FOMC decided to
focus on the core PCE deflator that excludes food
and energy prices because of their volatility. These
changes are of particular interest because the
alternative measures do not always provide similar summary readings of inflationary pressures.
They may differ both in their level and in their
variability over time, especially in small samples,
which poses some interpretation challenges.
Tables 1 and 2 provide two recent examples
useful for understanding what information on
projections is released with the monetary policy
reports. Forecasts for 2007 were first reported in
July 2006 (not shown). In February 2007, revised
forecasts for 2007 and first forecasts for 2008 were
reported (Table 1). The final updated forecasts for
2007 were then published in July together with
updated forecasts for 2008 (Table 2).
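As a concrete illustration of the midpoint convention used for the projections, the short Python sketch below converts two of the Table 2 central tendencies into point forecasts. This is our own illustration, not part of the paper's dataset: the function name is hypothetical and the fractional string inputs simply encode the ranges "2 to 2 1/4" and "4 1/2 to 4 3/4."

from fractions import Fraction

def central_tendency_midpoint(low, high):
    # Midpoint of a reported central-tendency range, used as a proxy for the
    # modal FOMC forecast (e.g., "2 to 2 1/4" percent -> 2.125 percent).
    return float((Fraction(low) + Fraction(high)) / 2)

# Central tendencies for 2007 from the July 2007 report (Table 2):
print(central_tendency_midpoint("2", "9/4"))      # core PCE inflation: 2.125
print(central_tendency_midpoint("9/2", "19/4"))   # unemployment rate: 4.625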
Although we have only two observations per
year, it is convenient to describe our dataset in
terms of a quarterly frequency because the FOMC
projections report either quarterly data or growth
rates over four quarters. Denoting time (measured
in quarters) with t, we associate the February
Humphrey-Hawkins report with the first quarter
of the year and the July Humphrey-Hawkins report
with the third quarter. We construct a dataset containing two sets of forecasts for each year, covering four-quarter intervals that always end three
quarters in the future. For any variable x, let x_{t+i|t} denote the estimated outcome (for i ≤ 0) or forecast (for i > 0) of the value of the variable x at t+i as of time t.4 Then, letting u denote the unemployment rate, u_{t+3|t} represents the three-quarter-ahead forecast of the unemployment rate formed during quarter t, and u_{t−1|t} the estimate as of quarter t of what the outcome for the unemployment rate was in the previous quarter.
As shown on the time chart in Figure 2,
using the unemployment rate as an example, the
forecasts reported to Congress in the February
4. Importantly, because of the lags with which information about the past becomes available, we need to keep track not only of revisions of forecasts but also of revisions regarding outcomes when trying to understand the environment in which FOMC decisions were taken. We later describe the data we use for outcomes.


Table 1
FOMC Forecasts for 2007 and 2008 from the February 2007 Humphrey-Hawkins Report

Indicator                                      Memo: 2006 actual   2007 Range     2007 Central tendency   2008 Range     2008 Central tendency
Change, fourth quarter to fourth quarter*
  Nominal GDP                                  5.9                 4 3/4–5 1/2    5–5 1/2                 4 3/4–5 1/2    4 3/4–5 1/4
  Real GDP                                     3.4                 2 1/2–3 1/4    2 1/2–3                 2 1/2–3 1/4    2 3/4–3
  PCE price index excluding food and energy    2.3                 2–2 1/4        2–2 1/4                 1 1/2–2 1/4    1 3/4–2
Average level, fourth quarter
  Civilian unemployment rate                   4.5                 4 1/2–4 3/4    4 1/2–4 3/4             4 1/2–5        4 1/2–4 3/4

NOTE: *Change from average for fourth quarter of previous year to average for fourth quarter of year indicated.
SOURCE: "Economic Projections of Federal Reserve Governors and Reserve Bank Presidents" from the February 2007 Humphrey-Hawkins report.

Table 2
FOMC Forecasts for 2007 and 2008 from the July 2007 Humphrey-Hawkins Report

Indicator                                      2007 Range     2007 Central tendency   2008 Range     2008 Central tendency
Change, fourth quarter to fourth quarter*
  Nominal GDP                                  4 1/2–5 1/2    4 1/2–5                 4 1/2–5 1/2    4 3/4–5
  Real GDP                                     2–2 3/4        2 1/4–2 1/2             2 1/2–3        2 1/2–2 3/4
  PCE price index excluding food and energy    2–2 1/4        2–2 1/4                 1 3/4–2        1 3/4–2
Average level, fourth quarter
  Civilian unemployment rate                   4 1/2–4 3/4    4 1/2–4 3/4             4 1/2–5        About 4 3/4

NOTE: *Change from average for fourth quarter of previous year to average for fourth quarter of year indicated.
SOURCE: "Economic Projections of Federal Reserve Governors and Reserve Bank Presidents" from the July 2007 Humphrey-Hawkins report.


Humphrey-Hawkins report have exactly the
desired timing. That is, when t is the first quarter,
the three-quarter-ahead forecast of unemployment,
u_{t+3|t}, corresponds to the February Humphrey-Hawkins forecast of the unemployment rate in the fourth quarter of the same year. That is, when t represents the first quarter of a year, we have

(2)    u_{t+3|t} ≡ u^{HH}_{t+3|t} ,

where we employ the superscript HH to denote
the Humphrey-Hawkins forecasts.
Note that in Figure 2 under the heading
"February Report" the solid arrow points to the
quarter on the time line for which the unemployment rate is predicted (t +3) and the dotted line
points to the quarter in which the forecast is made
(t). Similarly, for inflation, when t represents the
first quarter of a year, the three-quarter-ahead forecast corresponds to the rate of growth of prices
from the fourth quarter of the previous year to the
fourth quarter of the current year, exactly matching the horizon of the February HumphreyHawkins forecast. Letting π represent the rate of
inflation over four quarters, when t is the first
quarter of a year, we have
(3)    π_{t+3|t} ≡ π^{HH}_{t+3|t} .

For the July Humphrey-Hawkins reports,
some additional work is required to obtain three-quarter-ahead projections; that is, we combine
available information to estimate the forecast of
the unemployment rate for the second quarter of
next year and the corresponding forecast of the
four-quarter growth rate of prices that ends in
the same quarter. The timing of the two July
Humphrey-Hawkins forecasts and the constructed
three-quarter-ahead unemployment forecast is also
shown with respect to the time line in Figure 2.
In this case, the dashed arrow refers to the three-quarter-ahead observation for which an unemployment forecast is needed. To approximate the
unemployment forecast for the second quarter of
the following year, we simply take from the July
Humphrey-Hawkins report the forecasted unemployment rates for the current year’s fourth quarter
and next year’s fourth quarter and average them.
That is, when t represents the third quarter of
the year, we set
(4)    u_{t+3|t} = (1/2)(u^{HH}_{t+1|t} + u^{HH}_{t+5|t}) .

Other than in the rare case in which a shock is known to have only transitory effects, it is doubtful that FOMC members would have strong views about the likelihood of different changes in the unemployment rate over the two halves of a four-quarter interval that starts two quarters later.
halves of that period. Implicitly, we assume that
the changes forecasted in July for the unemployment rate in each half of next year are about the
same.
The desired second-quarter-to-second-quarter
forecast of the growth rate of prices is obtained
by constructing two forecasted half-year annualized growth rates and then averaging them. In
other words, when t represents the third quarter
of the year, we set
(5)    π_{t+3|t} = (1/2)(π^{S}_{t+1|t} + π^{S}_{t+3|t}) ,

where S stands for semiannual, so that π^{S}_{t+1|t} is the inflation forecast for the second half of the current year and π^{S}_{t+3|t} is the forecast for the first half of the following year.
The inflation forecasted for the second half of the current year, π^{S}_{t+1|t}, can be inferred from the forecast reported for all of the year from a base of last year's fourth quarter, π^{HH}_{t+1|t}, and the estimated inflation over the first half of the current year from a base of last year's fourth quarter, π^{S}_{t−1|t}. That is, expressing all terms as annualized growth rates, when t represents the third quarter of the year,

(6)    π^{S}_{t+1|t} = 2π^{HH}_{t+1|t} − π^{S}_{t−1|t} .

For π^{S}_{t+3|t}, inflation over the first half of the next year, we simply set it equal to the July Humphrey-Hawkins forecast for all of next year. That is, we set

(7)    π^{S}_{t+3|t} = π^{HH}_{t+5|t} .

The July Humphrey-Hawkins report does not provide an estimate of inflation for the first half of the current year, that is, for π^{S}_{t−1|t}. Thus, instead we make use of alternative real-time data sources, which are discussed below.
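To make the construction of the July three-quarter-ahead series concrete, the following Python sketch implements equations (4) through (7) under the assumptions just described. It is our illustration only: the function names are ours, and the numerical inputs are hypothetical stand-ins for the report values u^{HH} and π^{HH} and for the real-time estimate of first-half inflation.

def july_unemployment_forecast(u_hh_q4_current, u_hh_q4_next):
    # Equation (4): approximate the unemployment forecast for the second
    # quarter of next year as the average of the July report's fourth-quarter
    # forecasts for the current and the next year, assuming roughly equal
    # changes over the two halves of next year.
    return 0.5 * (u_hh_q4_current + u_hh_q4_next)

def july_inflation_forecast(pi_hh_current, pi_hh_next, pi_first_half_realtime):
    # Equations (5)-(7): build the three-quarter-ahead (Q2-to-Q2) inflation
    # forecast from the July report's annual forecasts and a real-time
    # estimate of annualized inflation over the first half of the current year.
    pi_second_half = 2.0 * pi_hh_current - pi_first_half_realtime   # equation (6)
    pi_first_half_next = pi_hh_next                                 # equation (7)
    return 0.5 * (pi_second_half + pi_first_half_next)              # equation (5)

# Hypothetical illustrative inputs (percent):
print(july_unemployment_forecast(4.625, 4.875))    # -> 4.75
print(july_inflation_forecast(2.125, 1.875, 2.3))  # -> 1.9125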

To allow for a direct comparison of rules
based on the forecasts described above with rules
based on outcomes of these variables, we construct
parallel variables reflecting the latest historical
information available to the FOMC at the time of
their meetings preceding the two Humphrey-Hawkins reports each year.
Thus, for the unemployment rate, we create
the variable ut–1|t, which for the February observation reflects the average level in the fourth quarter
of the prior year and for the July observation
reflects the average level in the second quarter of
the current year. Similarly, for inflation, we create
the variable πt–1|t, which reflects the four-quarter
growth rate of prices ending in the fourth quarter
of the prior year for the February observation and
ending in the second quarter of the current year
for the July observation.
An important aspect of our analysis is to
ensure that our definition of outcomes reflects
only information available to the FOMC in real
time. To that end, we rely only on data that would
have been available to the FOMC by early February
or early July. This implies that the data we use
correspond either to preliminary estimates, first-reported quarterly data, or estimates based on
partial data for the quarter.
To match the timing of this information as
closely as possible, for the years 1988 through
2001 inclusive, we use BOG staff estimates of
outcomes ending in the prior quarter, which are
contained in the Greenbook that is distributed to
the FOMC prior to the early-February and early-July FOMC meetings. Even so, because Greenbook
data remain confidential for five years, we cannot
rely on that source for the last few years of our
sample. Instead, for 2002-07 we use real-time
vintage data from the Federal Reserve Bank of
St. Louis ALFRED database.5 For these dates we
use the data vintage from ALFRED that was available one week after the respective February and
July Humphrey-Hawkins meetings. We choose
this timing because FOMC members have the
5. As a robustness check, we have investigated how much the ALFRED-based information differs from Greenbook information in the years until 2001, when both are available. Although the data source does influence the data values somewhat, the differences were small.


opportunity to revise their projections during a
window of a few days following the meetings.

ESTIMATED POLICY RULES:
FOMC PROJECTIONS VERSUS
RECENT OUTCOMES
Specification
The interest rate rules we estimate all share
the following underlying structure with Taylor’s
(1993) rule. They posit that the systematic component of monetary policy can be described as a
notional target for the federal funds rate, fˆ, which
increases with inflation, π, and real activity.
As already mentioned with regard to projections of real activity, we do not have information
about the FOMC’s assessment of the output gap.
Thus, we cannot directly estimate an exact counterpart of the rule proposed by Taylor. Instead, an
indirect comparison is feasible using the unemployment rate, u, as a measure of the level of
economic activity.6
Following Taylor, we restrict attention to a
linear specification of the rule and posit that7
(8)    f̂ = a_0 + a_π π + a_u u .

Note that we do not have direct information on
the policymakers’ views regarding the equilibrium
interest rate, r*, the inflation target, π *, or the
natural rate of unemployment, u*. If these concepts are roughly constant over the sample
period, then they would be subsumed in the
estimated intercept,
a_0 = r* − (a_π − 1)π* − a_u u* .

In estimating our specification, we need to
take an explicit stand regarding the explanatory
6. The difference between the unemployment rate and a constant natural rate (NAIRU) can then be translated into an estimate of the output gap by means of Okun's law.

7. The linearity assumption is purely for simplicity in the spirit of the Taylor rule. Nonlinear reaction functions, such as those characterizing "opportunistic disinflation" examined by Orphanides and Wilcox (2002) and Aksoy et al. (2006) and those incorporating asymmetric easing near the zero bound for nominal interest rates as derived by Orphanides and Wieland (2000), would likely be more-realistic but more-complicated depictions of policy.


Figure 3
The Timing of the Explanatory Variables in Humphrey-Hawkins Reports: Outcomes and Forecasts of Unemployment
[Figure omitted: a quarterly time line marking, for both the February and July reports, the quarter to which the outcome u_{t−1|t} and the forecast u_{t+3|t} apply and the quarter in which they are formed.]

variable as well as the timing of the information
about inflation and real activity that the FOMC
takes into account in their policy decision.
Regarding the FOMC’s policy instrument, that is,
the interest rate on the left-hand side of the rule,
we use the FOMC’s intended level of the federal
funds rate as of the close of financial markets
on the day after the February and July FOMC
meetings.
Regarding the information on the current or
projected state of the economy, we set
(9)    f̂ = a_0 + a_π π_{τ|t} + a_u u_{τ|t} ,

where τ captures the particular timing. The
explanatory variables, πτ|t and uτ|t, are meant to
encompass the information variables to which
the FOMC may be reacting. In this specification,
τ = t –1 if the rule of thumb is outcome based,
whereas τ = t+3 if it is forecast based, that is, based
on the three-quarter-ahead projections.
Figure 3 again employs a time line to put the
timing of the explanatory variables into perspective, using the unemployment outcomes and forecasts as an example. Again, the arrows point to
the quarters to which the forecast or outcome
applies, and the dotted lines indicate the dates
on which the forecast or the estimate of the outcome are made.
In our estimation, we also allow for the possibility that the FOMC has a preference for policy
inertia and perhaps only partially adjusts the
intended federal funds rate, f, toward its notional
target, fˆ. We introduce such inertial behavior by
allowing the FOMC decision prior to a Humphrey-Hawkins report to be influenced by the level of the intended federal funds rate decided at the FOMC meeting before the previous Humphrey-Hawkins report. With our timing convention,
this can be written as
(10)    f_t = (1 − ρ)f̂_t + ρ f_{t−2} ,

where ρ provides a measure of the degree of partial
adjustment. Thus, the restriction, ρ = 0, would
reflect an immediate adjustment of the intended
federal funds rate to its notional target.
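The estimation itself is straightforward to sketch. The following Python fragment is a minimal illustration, not the authors' code: it fits the partial-adjustment rule in equations (9) and (10) by nonlinear least squares to synthetic data generated for the purpose, using scipy's curve_fit. The outcome-based and forecast-based variants differ only in which inflation and unemployment series are passed, and the restricted ρ = 0 case reduces to an ordinary linear regression.

import numpy as np
from scipy.optimize import curve_fit

def policy_rule(X, a0, a_pi, a_u, rho):
    # Equations (9)-(10): f_t = rho*f_{t-2} + (1 - rho)*(a0 + a_pi*pi + a_u*u).
    pi, u, f_lag = X
    return rho * f_lag + (1.0 - rho) * (a0 + a_pi * pi + a_u * u)

# Synthetic stand-ins for the semiannual Humphrey-Hawkins observations
# (the paper's actual inputs are the FOMC projections and the intended funds rate).
rng = np.random.default_rng(0)
n = 40
pi = 2.5 + rng.normal(0.0, 1.0, n)       # inflation forecasts or outcomes (percent)
u = 5.5 + rng.normal(0.0, 0.8, n)        # unemployment forecasts or outcomes (percent)
f_lag = 4.0 + rng.normal(0.0, 1.5, n)    # intended funds rate at the previous report
f = policy_rule((pi, u, f_lag), 8.0, 2.4, -1.6, 0.4) + rng.normal(0.0, 0.3, n)

params, cov = curve_fit(policy_rule, (pi, u, f_lag), f, p0=[5.0, 1.5, -1.0, 0.2])
for name, est, se in zip(["a0", "a_pi", "a_u", "rho"], params, np.sqrt(np.diag(cov))):
    print(f"{name}: {est:.2f} (s.e. {se:.2f})")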

Regression Estimates: 1988-2007
The results from our regression analysis using
our sample of Humphrey-Hawkins report data
from 1988 to 2007 are summarized in Table 3.
The estimates shown are obtained by non-linear

Table 3
Policy Reaction to Inflation and Unemployment Rates: Outcomes versus FOMC Forecasts, 1988-2007:Q2

Regressions based on      Outcomes                Forecasts
                        (1)        (2)          (3)        (4)
a0                      8.29      10.50         6.97       8.25
                       (1.08)     (3.07)       (0.69)     (0.85)
aπ                      1.54       1.29         2.34       2.48
                       (0.16)     (0.43)       (0.12)     (0.14)
au                     –1.40      –1.70        –1.53      –1.84
                       (0.21)     (0.55)       (0.14)     (0.17)
ρ                       0          0.69         0          0.39
                        –         (0.14)        –         (0.06)
R2                      0.74       0.84         0.91       0.96
SEE                     1.10       0.85         0.64       0.44
SW                      1.00       1.03         1.74       1.94

NOTE: The regressions shown are least-squares estimates of f_t = ρ f_{t−2} + (1 − ρ)(a_0 + a_π π_{τ|t} + a_u u_{τ|t}). Here, f denotes the intended federal funds rate, π the inflation rate over four quarters, and u the unemployment rate. The horizon τ either refers to three-quarter-ahead forecasts, τ = t+3, or outcomes observed in the preceding quarter, τ = t−1. Standard errors are shown in parentheses below the point estimates; ρ = 0 is imposed in columns 1 and 3.

least-squares regressions applied to the equation

(11)    f_t = ρ f_{t−2} + (1 − ρ)(a_0 + a_π π_{τ|t} + a_u u_{τ|t}) .
Columns 1 and 2 of Table 3 show the results
for the outcome-based regressions with τ = t –1;
columns 3 and 4 show the results for the forecast-based regressions with τ = t+3. Standard errors
are shown under the parameter estimates. In
columns 1 and 3 the restriction, ρ = 0, is imposed,
whereas in columns 2 and 4 the unrestricted
partial-adjustment specification is shown.
In all regressions shown in the table, we find
that the estimated rules of thumb suggest a systematic response to inflation and unemployment. The
response to inflation is positive and noticeably
greater than 1, suggesting that all of these rules
satisfy the Taylor principle. And the response to
unemployment is negative and also quite large,
suggesting a strong countercyclical stabilization
response. These findings are quite robust and
hold regardless of whether we employ FOMC
projections or recent economic outcomes and

regardless of whether we allow for some degree
of interest rate smoothing or not.
However, not all specifications describe policy
decisions equally successfully. A comparison of
the regressions based on recent outcomes, columns
1 and 2, with those based on FOMC projections,
3 and 4, reveals that the forecast-based rules
describe policy decisions quite a bit better than
the corresponding outcome-based rules. We also
estimate a richer but more complicated specification that nests the regressions with forecasts and
outcomes as limiting cases.8 Estimates of this
specification with an estimated weight on forecasts near unity (not shown) confirm the above
result. Furthermore, our results suggest a substantial degree of inertia in setting policy.
We conclude that a rule of thumb that is based
8. In this case, the measure of inflation conditions in the regression is defined as π_{τ|t} ≡ (1 − φ)π_{t−1|t} + φπ_{t+3|t}. Similarly, the measure of unemployment conditions depends on the weight φ.
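Written out in full under the same convention for unemployment, the nested specification referred to in footnote 8 combines equation (11) with the weighted information measures. The following rendering is ours, implied by the definitions above rather than reproduced from the paper:

f_t = ρ f_{t−2} + (1 − ρ)[a_0 + a_π((1 − φ)π_{t−1|t} + φπ_{t+3|t}) + a_u((1 − φ)u_{t−1|t} + φu_{t+3|t})] ,

so that φ = 0 recovers the outcome-based rule and φ = 1 the forecast-based rule; the estimated weight near unity reported in the text therefore favors the forecast-based limiting case.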


on the FOMC’s own projections of inflation and
unemployment and allows for inertial behavior
can serve as a very good guide for understanding
the systematic nature of FOMC decisions over
the past 20 years.
The improved fit of the forecast-based rule
relative to the outcome-based rule also suggests
that at least some of the apparent deviations of
actual interest rates from an outcome-based Taylor
rule, such as described in Poole (2007), may be
easily explained once FOMC forecasts are examined. To explore this question further, Figure 4
plots the fitted values of the forecast-based and
outcome-based rules estimated in Table 3. The
upper panel of the figure contains the rules without interest rate smoothing, which correspond to
columns 1 and 3 in Table 3. The black line denoted
"Fed Funds" indicates the actual federal funds
rate target decided at each of the February and
July FOMC meetings from 1988 to 2007. The solid
blue line indicates the outcome-based rule and
the blue dashed line the forecast-based rule.
The figure confirms visually that the forecast-based rule explains the path of the federal funds
rate target better than the outcome-based rule. Of
course, the fit is further improved once we allow
for interest rate smoothing, in other words, partial
adjustment of the funds rate depending on last
period’s realization. This can be seen in the lower
panel in the figure, where the paths implied by the
fitted outcome- and forecast-based rules, respectively, are smoother because they take into account
the estimated degree of partial adjustment.
Based on the figure, we can identify five
periods where the outcome- and forecast-based
rules diverge from each other in an interesting
manner and that can improve our understanding
of the role of projections for FOMC policy decisions. Two of these episodes, around 1988 and
1994, correspond to periods of rising policy rates.
In both of these periods, the FOMC was raising
rates preemptively because of concerns regarding
the outlook for inflation. Correspondingly, the
forecast-based rules track policy decisions better,
while the outcome-based rules only manage to
describe policy with a noticeable lag.
Two other episodes, in 1990-91 and in 2001,
correspond to periods of falling policy rates. In
both of these periods, the FOMC was easing policy
out of concern of a faltering economy, clearly
influenced by its projections of relatively weak
economic activity. Again, the forecast-based rules
track policy decisions better, while the outcome-based rules exhibit a noticeable lag.
The last episode is 2002-03, when the forecast-based rule correctly tracked the further policy
easing at the early stages of the recovery from the
recession, while the outcome-based rule suggested
that policy should have been considerably tighter.
Of interest are also two additional episodes
when the forecast-based rule did not track the
actual policy setting as well but where the resulting deviations can be explained by other factors
that are not part of the rule. The first of these is
the 1998 policy easing. On this occasion, the
FOMC was responding to the underlying financial
turbulence that intensified that fall, a factor not
well reflected in the rule of thumb, even considering its forward-looking nature.
The second and arguably more controversial
episode is the "miss" reflected in the forecast-based rule during 2004. This is more controversial
because of recent criticisms that policy was much
easier during this episode than would have been
suggested by simple Taylor rules. This is evident,
for example, in Poole’s rendition of the classic
Taylor rule, reproduced in Figure 1. It has been
argued that this policy stance may have contributed to the subsequent housing boom and
associated price adjustments and liquidity difficulties experienced in financial markets (Taylor,
2007). Indeed, as is well-known, around 2003-04,
the FOMC was particularly concerned with the
risks of deflation and perceived an important
asymmetry in the costs associated with a possible
policy misjudgment. In particular, the costs of
policy proving too tight were perceived as considerably exceeding the costs of policy proving
too easy.9 Under these circumstances, it should
be expected that even a rule of thumb that might
track policy nearly perfectly under normal circumstances

9. The suggested rationale was the uncertainty arising with operating policy near the zero bound. See Orphanides and Wieland (2000) for a model demonstrating the optimality of unusually accommodative policy in light of the asymmetric risks associated with the zero bound on nominal interest rates.


Figure 4
Outcome-Based versus Forecast-Based Rules, 1988-2007
[Figure omitted: two panels, "No Interest Rate Smoothing" and "With Interest Rate Smoothing," plotting the federal funds rate target and the fitted values of the outcome-based and forecast-based rules (percent, 0 to 10) at each February and July FOMC meeting, 1988-2007.]
NOTE: "Fed Funds" refers to the federal funds rate target. "Outcomes" refers to fitted values of the outcome-based rule without and with interest rate smoothing, that is, columns 1 and 2 in Table 3, respectively. "Forecasts" refers to the fitted values of the forecast-based rule without and with interest rate smoothing, that is, columns 3 and 4 in Table 3, respectively.


would not accurately characterize policy
and that policy would be easier than suggested by
the rule. Even so, we find that the forecast-based
rule, which is based on FOMC projections, tracks
the federal funds rate target quite well through
the first half of 2004 and that the only noticeable
deviation is that it would have already called for
much more aggressive tightening starting in the
second half of 2004 than actually took place.

Time-Variation in Natural Rates
One might have suspected that the FOMC
projections-based rule of thumb, presented in
Table 3, could have proved too simple to capture
the contours of FOMC decisions during the past
20 years. In that light, the explanatory power of
the rule shown in Figure 4 may be considered
surprisingly good.
One reason to suspect that a rule based on the
notional target,
(12)    f̂_t = a_0 + a_π π_{t+3|t} + a_u u_{t+3|t} ,

might be too simple is the constant intercept. As
already mentioned, this would not be of concern
if FOMC beliefs regarding its inflation objective
and natural rates of interest and unemployment
were roughly constant over the estimation sample.
If any of the above exhibited time variation, however, a better description of FOMC behavior would
be in terms of the following similar, but not identical, rule:

(13)    f̂_t = r*_t + π*_t + a_π(π_{t+3|t} − π*_t) + a_u(u_{t+3|t} − u*_t) ,

which suggests a time-varying intercept,

a_{0,t} = r*_t − (a_π − 1)π*_t − a_u u*_t .

Unfortunately, absent the necessary information
required to proxy the FOMC’s real-time assessments of π *, u*, and r* in our sample, it is difficult
to examine if a version of the rule allowing for
such variation could explain the data even better
than the rule of thumb based on equation (12).
As a simple check in that direction, however,
we reestimated the rule using a possible proxy of
the FOMC’s likely perceptions of the natural rate
of unemployment, u*. Absent the FOMC’s own
assessment, we relied on the real-time estimates
published by the Congressional Budget Office
(CBO) over the past 20 years. This is the same
source of real-time estimates used by Poole (2007)
as a proxy for Federal Reserve staff estimates.
The results (not shown) were broadly similar
to those presented in Table 3 and Figure 4. As with
the baseline specification, the data suggest that
the FOMC projection-based rule can describe
policy decisions quite well. However, the overall
fit of our preferred forecast-based regression does
not improve with the inclusion of the real-time
CBO estimate of the natural rate of unemployment.
Rather, the fit deteriorates slightly. Two possible
explanations for this are as follows. First, the CBO
estimate may not capture the updating patterns
of the FOMC’s own real-time estimates of the
natural rate. Second, even in the presence of time
variation in the natural rate of unemployment,
countervailing time variation in the natural rate
of interest might keep the intercept in the rule of
thumb, a0,t , roughly constant. If so, correcting for
the time variation in u* without a parallel correction for the time variation in r* should result in a
deterioration in the fit of the rule.
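For clarity, the check just described amounts to estimating equation (13) with a time-varying u*_t but with r* and π* held constant; in our restatement,

f̂_t = [r* − (a_π − 1)π*] + a_π π_{t+3|t} + a_u(u_{t+3|t} − u*_t) ,

with u*_t proxied by the real-time CBO estimate and the bracketed term absorbed into the constant. If r*_t in fact moved in a way that offset movements in u*_t, imposing this restriction would worsen the fit, which is the second explanation offered above.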

Interpreting Changes in the FOMC’s
Preferred Inflation Concept
Another reason one might be concerned that
the rule of thumb based on equation (12), as estimated in Table 3, might be too simple relates to
the FOMC's choice of inflation concept. The decisions of the FOMC to change its inflation projections, for example, from CPI to PCE in 2000 and
from PCE to core PCE in 2004, may be due to
changes in preference as to the most appropriate
concept for the measurement of inflation for policy
purposes. To the extent that the typical dynamic
behavior of each new measure differs from the one
used previously, FOMC members would probably have made adjustments in their systematic
response to movements in the inflation measure.
To gain some insight into the possible implications of the FOMC turning from the overall CPI
measure of inflation, to overall PCE, and then the
core PCE measure excluding food and energy

prices, we compare the three series in Figure 5.
The top panel shows the three series (percentage
change in the price index relative to four quarters
earlier) for the full 1988-2007 sample. The lower
panel provides a detailed view of the most recent
10 years, 1997-2007.
As the top panel shows, from 1990 to 1998 the
three alternative inflation series steadily declined
more or less in lockstep with each other, with the
CPI series starting from a higher level than the
other two measures. The core PCE seems to best
capture the downward trend over this period. The
comparison suggests that, ex post, a policy rule
could have delivered fairly similar policy implications regardless of which of these inflation
measures was used over this period.10
From 1999 onward, the three series exhibit
some important differences. For instance, although
all three inflation rates indicate rising inflation
in 1999, the inflationary surge seemed much
stronger in the overall CPI and PCE measures than
in the core PCE. In fact, core PCE inflation stayed
largely within the Federal Reserve’s so-called
“comfort zone” of 1 to 2 percent all the way
through 2007. CPI and PCE inflation, however,
surged up two more times, in 2002 and in 2004,
with CPI inflation reaching 4 percent in 2006.
The overall PCE measure more or less follows
the movements of the CPI, albeit staying somewhat lower than the CPI throughout. Clearly, the
greater increases in PCE and CPI relative to core
PCE must have been related to the movements of
food and energy prices.
These differences pose a challenge in that the
different statistical properties of the alternative
measures could in principle influence, perhaps
in subtle ways, the specification of a rule of thumb.
One potential result of the switch from CPI to PCE,
for instance, could have been a change in the
operational definition of price stability embedded in the rule, that is, π*. Stated in PCE terms, π*
could be 50 or so basis points lower than the corresponding object stated in CPI terms, reflecting
recent estimates of the 50-basis-point average
difference in the two series. On the other hand,
10. Note, however, that these series are compared from the July 2007 vintage perspective and not the real-time policymaker perspective.


given the uncertainty associated with price measurement and the quantitative definition of price
stability most appropriate for monetary policy, it
is not entirely clear that such a change in the π *
embedded in a rule of thumb should be incorporated in the analysis when the FOMC changes its
preferred inflation measure.
In light of these uncertainties and the differential movements of core PCE, PCE, and CPI
inflation—especially from 2000 onward—we
decided to perform two experiments to help
examine how changes in the inflation concept
potentially influence policy.
One way to examine whether the policy rule
changed when the FOMC switched inflation
measures is to allow for changes in the intercept
and/or slope coefficients at those points in time.
We did so by introducing the appropriate additive and multiplicative dummy variables in our
regression equations and reestimating over the
full 1988-2007 sample. We consider possible
shifts in 2000:Q1 (for the switch to PCE) as well
as in 2004:Q3 (for the switch to core PCE). The
results (not shown) did not indicate any significant shifts, suggesting the use of a new inflation
measure may not have resulted in a corresponding
change in the rule of thumb the FOMC used to
make decisions or that, because of the limited
sample, the change may have been too small to
identify.
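Mechanically, the break test described above can be sketched as follows. This is a hedged illustration on synthetic data (generated here without any break, so the dummy coefficients should come out near zero); the paper's actual regressions use the FOMC projection dataset and, where relevant, the partial-adjustment specification.

import numpy as np

rng = np.random.default_rng(1)
dates = np.arange(1988.0, 2007.5, 0.5)      # one entry per Humphrey-Hawkins report
n = dates.size
pi = 2.5 + rng.normal(0.0, 0.8, n)          # hypothetical inflation forecasts
u = 5.5 + rng.normal(0.0, 0.7, n)           # hypothetical unemployment forecasts
f = 8.0 + 2.4 * pi - 1.6 * u + rng.normal(0.0, 0.4, n)

d_pce = (dates >= 2000.0).astype(float)     # switch from CPI to PCE (2000:Q1)
d_core = (dates >= 2004.5).astype(float)    # switch from PCE to core PCE (2004:Q3)

# Intercept and slopes plus additive and multiplicative break dummies.
X = np.column_stack([np.ones(n), pi, u,
                     d_pce, d_pce * pi, d_core, d_core * pi])
beta, *_ = np.linalg.lstsq(X, f, rcond=None)
labels = ["a0", "a_pi", "a_u", "d_pce", "d_pce*pi", "d_core", "d_core*pi"]
print(dict(zip(labels, np.round(beta, 2))))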
Another way to examine possible differences
since 1999 is to reestimate the regressions presented in Table 3 using only the subsample
1988-99 to see if excluding the period following
the switch to PCE and later to core PCE would
materially influence the results. The regression
estimates, based on equation (11), are reported in
Table 4 in identical fashion as those in Table 3.
A comparison of Tables 3 and 4 shows that the
coefficients of the outcome-based rule change
quite a bit. This instability reinforces the prior
evidence that the outcome-based rule is misspecified as a description of FOMC policy because it
does not account properly for forecasts.
The key result in Table 4 is that the estimates
corresponding to the forecast-based rule for the
subsample ending in 1999 do not materially differ
from those corresponding to the full sample. This

Figure 5
CPI, PCE, and Core PCE Inflation (vintage July 2007)
[Figure omitted: two panels plotting CPI, PCE, and core PCE inflation (four-quarter percent change). The top panel covers the full 1988-2007 sample; the lower panel provides a detailed view of 1997-2007.]


Table 4
Policy Reaction to Inflation and Unemployment Rates: FOMC Forecasts of CPI Inflation, 1988-99

Regressions based on      Outcomes                Forecasts
                        (1)        (2)          (3)        (4)
a0                      9.78      12.73         6.31       7.34
                       (1.38)     (4.57)       (0.99)     (1.16)
aπ                      1.11       0.72         2.32       2.54
                       (0.19)     (0.62)       (0.20)     (0.23)
au                     –1.35      –1.68        –1.41      –1.72
                       (0.25)     (0.71)       (0.17)     (0.22)
ρ                       0          0.69         0          0.41
                        –         (0.20)        –         (0.08)
R2                      0.68       0.78         0.87       0.94
SEE                     1.03       0.84         0.64       0.43
SW                      0.98       1.18         1.65       1.96

NOTE: The regressions shown are least-squares estimates of f_t = ρ f_{t−2} + (1 − ρ)(a_0 + a_π π_{τ|t} + a_u u_{τ|t}), where f denotes the intended federal funds rate, π the inflation rate over four quarters, and u the unemployment rate. The horizon τ either refers to three-quarter-ahead forecasts, τ = t+3, or outcomes observed in the preceding quarter, τ = t−1. Standard errors are shown in parentheses below the point estimates; ρ = 0 is imposed in columns 1 and 3.

suggests that the change in inflation concepts may
not have resulted in a corresponding change in
the rule of thumb describing FOMC decisions or
that this corresponding change may have been
rather small. Indeed, this is confirmed in the top
panel of Figure 6, which shows the estimated
forecast-based rule (dashed line) over the subsample ending in 1999 and a simulation that uses
the parameter estimates from this rule together
with the FOMC projections through 2007. This
simulation confirms that interest rate setting in
the 2000-06 period seemed in line with a systematic interest rate response to FOMC projections
with the same coefficients, despite the change in
inflation concepts. Note that the results for the
policy rules do not include interest rate smoothing.
This finding is somewhat puzzling, especially
in light of the average difference expected in measured inflation in terms of CPI as opposed to PCE
or core PCE (approximately 50 basis points). One
might have expected that the switch to PCE would
be accompanied by a countervailing adjustment
in the parameters of the rule. Instead, use of the
identical rule with the PCE instead of the CPI,

assuming that PCE inflation forecasts are lower
on average than corresponding CPI forecasts,
would result in lower interest rate prescriptions
on average.
To get a sense of the magnitude of this effect,
we simulated the rule with parameters estimated
over the subsample ending in 1999, using the
Blue Chip consensus forecasts of CPI inflation
from 1988 to 2007. The results, indicated by the
dashed line in the lower panel of Figure 6, show
that from 1988 to the first half of 2002 the interest
rate prescriptions based on the Blue Chip CPI
forecasts are broadly in line with those based on
the FOMC projections. From the second half of
2002 to 2006, the rule simulated with Blue Chip
CPI forecasts implies a higher federal funds rate
target. In other words, if the FOMC had continued
to forecast CPI inflation and if its forecasts had
been similar to those of the Blue Chip consensus
from 2002 onward, the FOMC projections-based
rule of thumb would have suggested systematically tighter policy than the policy setting suggested with the PCE and core PCE projections.
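The mechanics of the comparison can be illustrated with a few lines of Python. The coefficients are those of the forecast-based, no-smoothing regression for the 1988-99 subsample (Table 4, column 3); the forecast arrays are purely hypothetical stand-ins for the FOMC's PCE/core PCE projections and the Blue Chip CPI consensus, chosen only to show how a roughly 50-basis-point difference in the inflation forecast maps into the rate prescription.

import numpy as np

a0, a_pi, a_u = 6.31, 2.32, -1.41   # Table 4, column 3 (forecast based, rho = 0)

pi_fomc = np.array([1.9, 2.0, 2.1, 2.0])   # hypothetical PCE/core PCE forecasts
pi_bluechip = pi_fomc + 0.5                # hypothetical Blue Chip CPI forecasts
u_fomc = np.array([5.5, 5.4, 5.3, 5.2])    # hypothetical unemployment forecasts

rule_fomc = a0 + a_pi * pi_fomc + a_u * u_fomc
rule_bluechip = a0 + a_pi * pi_bluechip + a_u * u_fomc
print(rule_bluechip - rule_fomc)   # a_pi * 0.5, roughly 1.2 percentage points higher

With aπ well above 2, even a half-point difference in the inflation concept translates into more than a full percentage point in the prescribed funds rate, which is why the simulated Blue Chip path lies noticeably above the PCE-based path after 2002.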

Figure 6
Rules Estimated for 1988-99 and Extrapolated to 2007
[Figure omitted: two panels plotting the federal funds rate target against rule prescriptions (percent, 0 to 10), 1988-2007. The top panel, "Simulation Using PCE and Core PCE Inflation," shows the outcome-based and forecast-based rules; the lower panel, "Simulation Using CPI Outcomes and Blue Chip CPI Forecasts," shows the same rules simulated with CPI data.]
NOTE: "Fed Funds" refers to the federal funds rate target. "Outcomes" refers to fitted values of the outcome-based rule without interest rate smoothing, that is, column 1 in Table 3. "Forecasts" refers to the fitted values of the forecast-based rule without interest rate smoothing, that is, column 3 in Table 3. In the lower panel, these two rules are simulated with CPI inflation outcomes and Blue Chip CPI forecasts, respectively, from 1988-2007.


CONCLUSION
Many analysts often rely on rules of thumb,
such as Taylor rules, to describe historical monetary policy decisions and to compare current
policy to historical norms. William Poole’s (1971)
study, written explicitly to offer advice to the
FOMC, serves as an early example of such work.
Analyses along these lines also permit evaluation
of episodes where policy may have deviated from
a simple policy rule and examination of the reasons behind such deviations. But there is disagreement as to whether the canonical rules of thumb
for such work should draw on forecasts or recent
outcomes of key variables such as inflation and
unemployment. Poole (2007) points out that deviations of the actual funds rate from the prescriptions of a Taylor rule that relies on current readings
of inflation and the output gap may be the result
of systematic responses of the FOMC to information not contained in these variables. He notes,
however, that much of this additional information
may be captured in economic projections. We
investigate this proposition in the context of
FOMC policy decisions over the past 20 years,
using publicly available FOMC projections from
the Humphrey-Hawkins reports that are published
twice a year. Our results indicate that FOMC decisions can be predominantly explained in terms
of the FOMC’s own projections rather than recent
economic outcomes. Thus, a forecast-based rule
better characterizes FOMC decisionmaking. We
also identify a difficulty associated with the FOMC
switching the inflation concept it has used to
communicate its inflation projections. Finally,
we confirm that many of the apparent deviations
of the federal funds rate from an outcome-based
Taylor-style rule may be viewed as systematic
responses to information contained in FOMC
projections.

REFERENCES

Aksoy, Yunus; Orphanides, Athanasios; Small, David; Wieland, Volker and Wilcox, David. "A Quantitative Exploration of the Opportunistic Approach to Disinflation." Journal of Monetary Economics, November 2006, 53(8), pp. 1877-93.

Bryant, Ralph; Hooper, Peter and Mann, Catherine, eds.
Evaluating Monetary Policy Regimes: New
Research in Empirical Macroeconomics.
Washington, DC: Brookings Institution, 1993.
Henderson, Dale and McKibbin, Warwick. “A
Comparison of Some Basic Monetary Policy Regimes
for Open Economies: Implications of Different
Degrees of Instrument Adjustment and Wage
Persistence.” Carnegie-Rochester Conference Series
on Public Policy, December 1993, 39, pp. 221-318.
Lindsey, David; Orphanides, Athanasios and
Wieland, Volker. “Monetary Policy Under Federal
Reserve Chairmen Volcker and Greenspan: An
Exercise in Description.” Unpublished manuscript,
Board of Governors of the Federal Reserve System,
1997.
Orphanides, Athanasios and Wieland, Volker.
“Efficient Monetary Policy Design Near Price
Stability.” Journal of the Japanese and International
Economies, December 2000, 14, pp. 327-65.
Orphanides, Athanasios and Wilcox, David. “The
Opportunistic Approach to Disinflation.”
International Finance, 2002, 5(1), pp. 47-71.
Poole, William. “Rules-of-Thumb for Guiding
Monetary Policy,” in Open Market Policies and
Operating Procedures—Staff Studies. Washington,
DC: Board of Governors of the Federal Reserve
System, 1971.
Poole, William. “Understanding the Fed.” Federal
Reserve Bank of St. Louis Review, January/February
2007, 89(1), pp. 3-13.
Taylor, John B. “Discretion versus Policy Rules in
Practice.” Carnegie-Rochester Conference Series on
Public Policy, December 1993, 39, pp. 195-214.
Taylor, John B. “Housing and Monetary Policy.”
NBER Working Paper No. 13682, National Bureau
of Economic Research, December 2007.



Commentary
Charles I. Plosser

It is indeed a pleasure to have the opportunity to be here today at the Thirty-Second Annual Economic Policy Conference of
Annual Economic Policy Conference of
the Federal Reserve Bank of St. Louis. It
is a conference I have attended a dozen times
or more in various roles. In each and every case,
it has proven to be a timely and thoughtful
interchange among academic economists and
policymakers.
I am particularly pleased to be participating
this year, since this is the last of these conferences
that will be held during Bill Poole’s tenure as
president of the Federal Reserve Bank of St. Louis.
I have had the privilege of knowing Bill for at
least 25 years, perhaps more. I am sure that many
of you in the room can also make similar claims.
Over the years I have learned a great deal from
Bill. His seminal contributions in the area of
monetary theory and policy are widespread and
span four decades. Whether it be his contributions
on monetary policy under uncertainty, his early
investigations of simple rules for setting the federal funds rate, or his analysis of rational expectations models of the term structure for monetary
policy, his theoretical contributions provided
fundamental insights and played an important
role in developing what we now view as the core
of modern monetary theory. He has continued
his contributions to monetary policy as a member
of the Federal Open Market Committee (FOMC),
bringing the same sound, thoughtful, and consistent economic analysis to policy deliberations.

One of the themes of Bill’s work was the
importance of uncertainty for monetary policy.
One dimension of uncertainty involves our uncertainty regarding the nature of the macroeconomic
model that governs the economy. In Poole (1971),
Bill investigated the performance of simple “rules
of thumb” for setting the federal funds rate. He
argued that these simple rules appeared to be
“robust” across various model specifications.
This line of research has become increasingly
active and has some important things to say about
the conduct of monetary policy. I find the analysis
of simple rules intriguing for a couple of reasons.
First and foremost, they are rules. Second, in a
framework where policy is decided by committee,
simple rules that are robust across models provide
a valuable focal point for discussion among people
with different world views.
Bill’s primary concern dealt with uncertainty
in a given model, but he also discussed how optimal policy would vary when model parameters
changed. Thus, his concern reflected a desire to
analyze optimal policy under model uncertainty.
As I indicated, this area of research has seen a
resurgence in recent years, with various methodologies ranging from robust control to Bayesian
model averaging being employed to analyze optimal policy under uncertainty. Perhaps even more
interesting is the research that considers the
robustness of simple rules across models that are
non-nested and thus potentially very different.1
1. McCallum (1988) was one of the first to investigate the robustness properties of simple rules.

Charles I. Plosser is president of the Federal Reserve Bank of Philadelphia.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 325-29.


Before I discuss the contributions of the
Orphanides and Wieland (2008) paper, which I
found very interesting, I would like to backtrack
and talk about why I believe explorations of rules
of thumb are important and what we know about
their performance.
As I stated at the outset, one of the greatest
attractions of simple rules is that they are in fact
rules. Since the pioneering work of Kydland and
Prescott (1977), we have come to understand the
theoretical foundations for the importance of
commitment by policymakers.
One way that commitment manifests itself is
that, in model economies, the optimal monetary
policy typically takes the form of a rule. In these
models the researcher looks for policies that
deliver efficient allocations, that is, the allocations
that would be selected by a Ramsey planner. In
this context, optimal policy need not be simple,
but it does need to be a rule.
However, the Fed does not pick allocations
like the Ramsey planner—it picks an instrument
and moves that instrument to influence economic
outcomes. Thus, there are important issues regarding the implementation of policy that must be
considered, and this is where I believe that simple
policy rules have a role to play.
The question then is, why might we choose
to adopt simple rules? If everyone had the same
model of the economy, there would be no reason
to do so. So the underlying attraction of investigating simple rules is twofold. First, everyone may
not agree on a common model. Thus, optimal
policy for one policymaker may not be optimal
for another. Second, even if there is an agreed-upon model, the economy is likely to be more
complicated than the model, so the optimal policy
for that model may not be the optimal policy for
the true underlying economy.
So it seems natural to ask if there are simple
rules that capture the essence of optimal rules
and that give good results in a variety of theoretical environments. In other words, how different
are the allocations under simple rules from those
obtained under optimal policy? How costly is a
simple rule?
A number of researchers have analyzed the
performance of simple rules. One interesting

approach is that by Schmitt-Grohé and Uribe
(2006). The underlying model is quite rich, incorporating price stickiness, investment adjustment
costs, habit persistence, variable capacity utilization, and monopolistic competition. The model
considers three types of shocks: policy shocks,
total factor productivity shocks, and investment-specific technology shocks.
They find that a simple Taylor-like rule that responds aggressively to inflation and wage growth, and very little to deviations of output growth from target, comes very close to achieving the optimal allocations. In fact, a rule that responds solely to price inflation yields good results in this model. The basic message here is that, in a model with large non-neutralities, but primarily forward-looking agents, a simple Taylor-like rule comes fairly close to implementing Ramsey allocations. Perhaps surprisingly, the properties of the rule place significant weight on an aggressive response to deviations of inflation from target.
Although the model of Schmitt-Grohé and
Uribe does a fairly good job of matching the data,
it may not be the only model to do so. Other
policymakers or researchers may wish to stress
different model features. For example, I may not
be as keen on models with the degree of price
stickiness or as comfortable with the adjustment
costs and habit formation built into the model, but
I do place a lot of stock in models with forward-looking rational agents. So it is of some interest
and importance to question whether these intriguing results are robust to perturbations in the
model not considered by the authors. Basically,
we want to know how robust these findings are
to models that accommodate very different views
of behavior, models that perhaps fit the data as
well as the one Schmitt-Grohé and Uribe consider.
The question of robustness has been addressed
in a number of ways. The most common strategy
is to look at the performance of simple rules in a
host of different, sometimes non-nested models.
In these exercises, the most interesting questions
in my mind are these: How similar are the optimal
rules from different models? And are there simple
rules that work well across models?
Levin, Wieland, and Williams (1999 and
2003) have explored the performance of various simple rules in a number of model contexts. They
characterize the optimal simple rule for each
model and find that there are broad similarities
that describe the best simple rule. In some cases
they find the best simple rule that minimizes the
average loss across models. In their 2003 article,
this rule responds to smoothed inflation forecasts
at most one-year ahead, current inflation, lagged
interest rates, and, unlike the Schmitt-Grohé and
Uribe analysis, in a significant way to the output
gap. In part, this distinction is driven by differences in the loss functions. One feature of the
robust rule is that it exhibits inertia in that the
coefficient on the lagged interest rate is close to 1.
This is in contrast to the results found in Schmitt-Grohé and Uribe, where inertia was not important.
Besides uncertainty resulting from stochastic
factors or from not knowing the true model,
policymakers and the public may be unaware of
the true processes governing the stochastic elements of the model.
Orphanides and Williams (2002 and 2007),
for example, examine the usefulness of simple
rules when the processes for the natural rate of
interest and employment are unobserved and not
known. Further, agents form forecasts of relevant
macroeconomic variables using a learning methodology. The learning mechanism along with persistent errors in estimating natural rates yields highly
nonlinear behavior and implies significant departures from the rational expectations equilibrium.
An important feature of this model is that there
is the possibility that expectations of inflation can
become unanchored. The main lesson I take away
from their analysis is that, even in this environment, simple rules can work quite well, but those
rules should be based on rates of change in output or employment. This avoids the notoriously
difficult problems associated with estimating
natural rates of output or unemployment.
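To see why a growth-rate rule sidesteps this problem, note that it can be written entirely in terms of observed changes; the sketch below uses illustrative coefficients of my own choosing, not those studied by Orphanides and Williams.

    # Illustrative first-difference rule: the funds rate is adjusted from its own lagged
    # value using only inflation and observed output growth, so no estimate of the natural
    # rate of output or unemployment is required. Coefficients are placeholders.
    def difference_rule(i_prev, inflation, output_growth,
                        pi_star=2.0, g_star=2.5, theta_pi=0.5, theta_g=0.5):
        return i_prev + theta_pi * (inflation - pi_star) + theta_g * (output_growth - g_star)

    # Inflation 1 point above target and growth at trend: raise the rate by 50 basis points.
    print(difference_rule(i_prev=4.0, inflation=3.0, output_growth=2.5))  # 4.5

The implicit unit coefficient on the lagged rate also builds in the kind of inertia that the Levin, Wieland, and Williams results favor.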
These analyses indicate that in the presence
of very different types of uncertainty—stochastic
disturbances, uncertainty about the correct model,
and uncertainty about the nature of true driving
processes—simple rules for monetary policy are
able to deliver good economic outcomes.
To many observers it comes as somewhat of a
surprise that simple rules should do as well as
they do. The reason they do is not obvious. It is
hard to know whether some form of Occam’s
razor is at work or something else. This is somewhat uncomfortable from a theoretical perspective, yet I find the analyses to date convincing
and useful from the perspective and experience
of a policymaker and economist.
Given that simple rules appear to have desirable properties in terms of delivering good outcomes in a variety of theoretical settings, I think
they have other desirable characteristics that
suggest they are of value to policymakers.
First, policy guided by simple rules is easy
to monitor and to communicate. Transparency is
an important attribute of good monetary policy.
In a world where expectations of the future play
a critical role in economic outcomes, the transparency and predictability/credibility of monetary policy can reduce expectational errors and
contribute to a more stable economic environment. Using simple rules, which can be easily
communicated to the public, as benchmarks or
guidelines can enhance both transparency and
credibility of policy.
I believe that there is another benefit from
using simple rules as a guide for policy. Specifically, I believe that simple rules serve as a useful
focal point in policy deliberations. The underlying
models employed by various FOMC members can
be quite different and, in some cases, may not
even share the same set of state variables. Thus,
deliberating the implications of various policy
options or the workings of the economy can be
quite complicated. Indeed, as I previously discussed, the optimal rule, even from a well-articulated model, can be quite complex and
quite different across models. If the underlying
model is not well articulated or completely specified, then we may not even know what the optimal
or best rule might be. Thus, trying to reach a consensus on appropriate policy can be difficult.
By focusing on simple rules, deliberations
can focus on a few key variables and our assumptions or forecasts that shape them. It also leads to
a more focused discussion of the shape and parameters of the loss functions that may be applicable.
I believe this would greatly improve policy deliberations by directing attention to the key factors
that matter for the policy choices. Of course, these
benefits arise only if the simple rules have some
good robustness properties associated with them.
As I have indicated, my reading of the literature
to date makes me optimistic that this is indeed
the case.
Finally, let me turn my attention to the
Orphanides and Wieland (2008) paper. How does
their paper fit into this broader literature? First,
the investigation is a positive analysis rather than
a normative one. That is, they seek to study and
uncover the characteristics of FOMC decisions
over the past 20 years in the context of a simple
rule.
The rules investigated are simple and, importantly, are based on the real-time information
that policymakers actually possess at the time
decisions are made. The rule that seems to explain
Fed behavior the best is a forward-looking rule
that responds aggressively to deviations of inflation from target. In this regard, Fed behavior
seems in accord with the guidance of robust rules.
If anything, the response seems more aggressive
than is sometimes indicated by the more theoretical investigations.
The Fed also appears to respond to forecasts
of unemployment and its deviations from some
natural rate. The authors do not directly estimate
a natural rate of unemployment; they allow it to
be subsumed in the constant term. This strategy
may or may not be a good one. Indeed, some of
Orphanides’s own work suggests that looking at
growth rate rules might be a better practice, and
it would have been interesting to see how they
would have stacked up in this comparison. Of
course, this might not make too much difference
over this period if there was not much movement
in the natural rate.
Another interesting feature of the results is
that the degree of inertia is markedly less when
the rules are assumed to be forward looking, that
is, based on forecasts, than when they are based
on outcomes. The robust rules prescribed in
Levin, Wieland, and Williams (1999 and 2003)
or Orphanides and Williams (2007) suggest that
policy should be more inertial. What might be
the reasons for this finding? One possibility is
that the robustness results may be relying too
heavily on learning and the world may be more
rational and forward looking than the models
presume.
Probably one of the more interesting findings
in the paper concerns the shifting focus of the
Fed’s preferred inflation measure. The puzzle is
that, as the Fed shifted its emphasis from the consumer price index to the core personal consumption expenditures, it apparently did not change
the parameters of the estimated rule. This is a
puzzle because the personal consumption expenditures measure generally is about 50 basis points
below the consumer price index, on average. Thus,
changing the inflation measure allowed the Fed
to maintain a lower funds rate for a given rate of
inflation. I will not speculate on why this is the
case. But it does suggest that policy was not as
committed to a policy rule as the regression estimates seem to suggest.
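To get a rough sense of the magnitude involved, suppose, purely for illustration, that the rule sets it = r* + πt + 1.5(πt − π*). An inflation measure that runs about 50 basis points lower then lowers the prescribed funds rate by roughly 2.5 × 0.5 ≈ 1.25 percentage points at any given underlying rate of inflation, which is the sense in which the change of measure permitted an easier policy without any change in the estimated rule parameters.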
Overall, I found the paper interesting and
useful in furthering our understanding of simple
rules. It points to some unresolved issues, particularly those that pertain to the ability to describe
actual policy as rule based. That does not mean
that simple rules are any less useful or valuable.
Indeed, it may suggest that policy can be improved
by being more transparent and committed to systematic behavior.

REFERENCES
Kydland, Finn and Prescott, Edward. “Rules Rather
Than Discretion: The Inconsistency of Optimal
Plans.” Journal of Political Economy, June 1977,
85(3), pp. 473-91.
Levin, Andrew; Wieland, Volker and Williams, John
C. “Robustness of Simple Monetary Policy Rules
under Model Uncertainty,” in John Taylor, ed.,
Monetary Policy Rules. Chicago: University of
Chicago Press, 1999.
Levin, Andrew; Wieland, Volker and Williams, John
C. “The Performance of Forecast-Based Monetary
Policy Rules under Model Uncertainty.” American
Economic Review, June 2003, 93(3), pp. 622-41.
McCallum, Bennett. “Robustness Properties of a Rule for Monetary Policy.” Carnegie-Rochester Conference Series on Public Policy, Autumn 1988, 29, pp. 173-203.
Orphanides, Athanasios and Williams, John C.
“Robust Monetary Policy Rules with Unknown
Natural Rates.” Brookings Papers on Economic
Activity, 2002, 2, pp. 63-118.
Orphanides, Athanasios and Williams, John C.
“Robust Monetary Policy with Imperfect
Knowledge.” Unpublished manuscript, 2007.
Orphanides, Athanasios and Williams, John C.
“Economic Projections and Rules of Thumb for
Monetary Policy.” Federal Reserve Bank of St. Louis
Review, July/August 2008, 90(4), pp. 307-24.
Poole, William. “Rules of Thumb for Guiding
Monetary Policy,” in Open Market Policies and
Operating Procedure—Staff Studies. Washington, DC:
Board of Governors of the Federal Reserve System,
1971, pp. 135-89.
Schmitt-Grohé, Stephanie and Uribe, Martin.
“Optimal Simple and Implementable Monetary
and Fiscal Rules.” Unpublished manuscript, 2006.


Commentary
Patrick Minford

The Taylor rule is widely seen as a good
summary of what the Federal Reserve
does. Though the rule cannot easily be
fitted to actual data as subsequently
revised, at least for a full postwar sample, it can
be fitted to real-time data (i.e., data as seen at the
time), as shown by earlier work by Orphanides
(2003). But in practice the Fed’s Federal Open
Market Committee (FOMC), if it is using a Taylor
rule, will look at its own forecasts or projections.
Orphanides and Wieland (2008) examine whether
a Taylor rule can be fitted to the FOMC’s own
projections since 1988. They find that it can with
appropriate parameters that satisfy the Taylor
principle—that is, that give a unique stable solution under rational expectations. Furthermore,
they find that the rule works better with these
projections and resolves various puzzles regarding the data on outcomes.
This is without question an interesting finding; the paper is clear, cogent, and persuasive.
Many will be totally persuaded by it; however, I
do have a few doubts. Let me begin with some
issues of specification and estimation and then
proceed with two wider issues.

SPECIFICATION AND ESTIMATION
The Specification of the Taylor
Projections Rule for Changes in
Targets and Definitions
As the authors note, there remains a puzzle:
In spite of the change in the inflation definitions,

particularly that from the consumer price index
(CPI) to the personal consumption expenditures
(PCE) deflator, their estimates find no shift in the
Fed rule. Their experiments with a rule estimated
for the CPI in the 1990s show that the rule should
have shifted up on the move to the PCE in the
2000s. The rule might have also shifted with the
natural rate of employment; however, when they
included this rate in the equation along with the
inflation definition, the rule did not shift in line
with either or both together. Had the equilibrium
rate of interest been known, there may have been
no puzzle. However, the authors argue that they
had no estimate of this to include as a test; but
surely index-linked government bond yields provide some idea of shifting real rate equilibria?
This puzzle is particularly odd when viewed
side by side with the explicit 0.5 percent shift in
target inflation that occurred when the United
Kingdom made an essentially similar change—
from the retail price index to CPI. The U.S. CPI,
too, systematically grows 0.5 percent or so faster
than the PCE. The absence of a noticeable shift
in the rule makes one wonder exactly what the
FOMC projections are—a topic I return to below.
The logic of the Taylor projections rule
absolutely requires that the rule shifts when the
inflation definition changes; this shift should
have been imposed on the equation, together
with some estimate of changing real interest rate
equilibria based on index-linked bond-yield
trends.

Patrick Minford is professor of applied economics at Cardiff University and a research fellow of the Centre for Economic Policy Research.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 331-38.


Estimation
I am concerned with the authors’ estimation.
They use ordinary least squares, a single-equation
estimator, which is open to bias because of the
correlation of the error term (the FOMC’s monetary judgment, or interest rate “shock”) with both
endogenous variables. These are defined as lagged
variables; but truly they are the FOMC’s current
view of the forecast environment at the time
interest rates are set and use contemporaneous
data and reports on both output and inflation.
Given signal extraction and the semiannual frequency of the data, it is clear that current data will
influence projections and so the current interest
rate judgment; at the same time, the interest rate
shock will affect output and inflation in the semiannual time frame. Furthermore, the error is autocorrelated, except in the projections version when
a lagged interest rate is included for “adjustment”
reasons. However, in principle, even if the FOMC
revises its judgment each semiannual period, each
new judgment is unlikely to be independent of
the last one, given that it represents views on such
things as asset price movements, exchange rate
behavior, and special factors like the 9/11 attacks.
The FOMC’s judgment should show some persistence, and indeed that is what most dynamic
stochastic general equilibrium (DSGE) modelers
assume about a monetary shock.
Given these issues, I regard the estimation
methods of this paper as rather casual. For a start,
we need more information on the error process;
does “adjustment” really eliminate the autocorrelation in the error? Second, we need some effort to estimate the equation in a bias-free manner; full-information methods are ruled out by the absence
of the rest of the model, but on this front it would
be helpful to see some instrumental variable or
two-stage least-squares results.
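As a sketch of the kind of exercise I have in mind, one could estimate the projections rule by two-stage least squares with lagged, predetermined variables as instruments; everything in the snippet below (the variable names, the instrument list, the use of plain numpy) is illustrative rather than a recipe for these particular data.

    import numpy as np

    def two_stage_least_squares(y, X, Z):
        """2SLS: project the regressors X on the instruments Z, then regress y on the fitted
        values. Here y would be the policy rate, X a constant plus the (endogenous) inflation
        and activity projections, and Z a constant plus lagged predetermined variables."""
        X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage: fitted regressors
        beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage: OLS on fitted values
        return beta

    # Hypothetical usage (the series names are placeholders, not the FOMC data set):
    # X = np.column_stack([np.ones(T), infl_projection, unemp_projection])
    # Z = np.column_stack([np.ones(T), infl_lagged, unemp_lagged, rate_lagged_twice])
    # print(two_stage_least_squares(funds_rate, X, Z))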
Third, however, there are difficulties with
any single-equation estimator, as pointed out by
Cochrane (2007a). To illustrate his point, consider
a standard New Keynesian model with a strict
inflation-targeting rule, Rt = ψπt + it (it, the shock,
will in general be autocorrelated and also correlated with πt). If we solve the model by imposing
a stable solution, inflation is an autoregression,
say, πt = ρπt−1 + ut (where the error is also autocorrelated, say, with root κ), and it follows that the Fisher identity gives interest rates as Rt = rt + Etπt+1, which thus equals ρπt + [κut + rt], where the term in square brackets is an autocorrelated error, correlated with πt. How can this regression
be distinguished from the inflation-targeting
regression? A systems estimator imposing all
over-identifying restrictions on the model is the
only way.
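A small simulation makes the point concrete: in the data generated below there is no policy rule anywhere, yet a single-equation regression of the interest rate on inflation delivers a well-determined positive "response" coefficient. The parameter values are mine, chosen only to illustrate the problem.

    import numpy as np

    rng = np.random.default_rng(0)
    T, rho, kappa = 5000, 0.8, 0.5

    # Inflation follows an autoregression, as it would along the model's stable solution;
    # its innovation u is itself autocorrelated with root kappa, and r is an exogenous real rate.
    u, pi, r = np.zeros(T), np.zeros(T), np.zeros(T)
    for t in range(1, T):
        u[t] = kappa * u[t - 1] + rng.normal(scale=0.5)
        pi[t] = rho * pi[t - 1] + u[t]
        r[t] = 0.9 * r[t - 1] + rng.normal(scale=0.2)

    # Fisher identity with E_t pi_{t+1} = rho*pi_t + kappa*u_t: no Taylor rule generates R.
    R = r + rho * pi + kappa * u

    # Single-equation OLS of the interest rate on a constant and inflation.
    X = np.column_stack([np.ones(T), pi])
    beta = np.linalg.lstsq(X, R, rcond=None)[0]
    print(beta[1])   # a sizable positive coefficient, even though no rule is being followed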

Modeling the FOMC Projections Rule
To use this FOMC projections rule in a model
requires some transfer function relating the Fed’s
projections to the actual state of the economy.
Thus, if the version here is to be taken seriously
as a representation of policy, we need to know
its properties in a full model, but of course those
properties depend on how the FOMC projections
are related to the actual economy.
It matters a lot whether they are, for example,
biased and/or subject to learning or rational
expectations. A key reason for knowing these
details is that they would make it possible to estimate the rule appropriately by full-information
methods, as already argued.

SOME WIDER ISSUES
This paper raises some wider issues that I find interesting. The first is what exactly
the Taylor rule is and how it fits into economic
thought on policy rules. The second is whether
this paper and associated work clinches the debate
on which monetary rule was actually being pursued by the FOMC; I will argue that this turns on
a difficult issue of identification.

What Exactly Is the Taylor Rule?
Origins and Application
John Taylor wrote his paper (Taylor, 1993)
proposing the rule in the early 1990s. It seems
to have been heavily influenced by a 1989/90
Brookings conference event, which discussed
the performance of different monetary rules
(money supply, exchange rate targeting, or pegging, mainly) within large models of the world
economy, one of which was John Taylor’s “Taylor
World Model” (another was my “Liverpool World
Model”). As a new departure, Dale Henderson
and Warwick McKibbin asked the modelers—
around a dozen teams—to evaluate a new suggestion that money be bypassed by setting interest
rates directly in response to macro data. Various
formulations were tried.
The modeling teams drew a blank initially in
solving their models under these rules; it seems
that they were tripping over indeterminacy and
had not discovered the Taylor principle, but it
may also have been that the algorithms being
used at that time (mostly variants of the Fair and
Taylor, 1983, method) simply had difficulty homing in on the solution.
These proposed rules, we may well now have
forgotten, were a quite unfamiliar way of thinking
about monetary policy. It is true that rules for
setting interest rates had had a long history (as
pointed out by Stanley Black at this Federal
Reserve Bank of St. Louis conference); indeed such
rules were dominant in the postwar Keynesian
era up to the 1970s. However, there was a strong
reaction against such ideas in the late 1970s and
1980s as the rational expectations revolution took
effect; interest rate rules were felt to give a poor
nominal anchor (and would give rise to indeterminacy unless tied to a nominal target) and instead
the setting of the money supply was emphasized.
This accounts for the fact that the primary rules
investigated in the Brookings conference were
either money supply rules or exchange rate rules.
When the teams had succeeded in solving
their models for these new rules, they were found
to perform surprisingly well and the results were
written up by Henderson and McKibbin (1993a,b)
at great length (1993a is in Bryant, Hooper, and
Mann, 1993, Chap. 2; 1993b was a version of this
chapter given at the same Carnegie-Rochester
conference where Taylor presented his own paper,
Taylor, 1993). It seems that the success of these
rules in a wide variety of models indicated a surprising robustness, and it was this robustness that
Taylor later emphasized as a major attraction of
his own rule. He elaborated on this in further tests
on other models. After the Brookings conference,
in any case, John Taylor formulated his rule,
which could reasonably be termed the Henderson-McKibbin-Taylor rule.
Nevertheless, there seems to be a difference
between these authors’ views. Whereas Henderson
and McKibbin were solely discussing what would
be a good rule and never, as far as I am aware,
argued that it was actually pursued, John Taylor
went further and argued not only that it worked
well but also that monetary policy could be
thought of as being done this way. A paraphrase
of his distinctive message could be “Look, here
is a rough approximation of what a good central
bank actually does and has done in the United
States in recent years.”
Thus, the attraction of the Taylor rule was
that it was descriptive as well as normative; this
was the new ingredient that Taylor added.1
Orphanides has in his earlier (2003) work argued
that it can indeed describe FOMC behavior for the
whole postwar period if real-time data are used.
Yet, as I shall argue below, it is this implicit claim
that the rule is descriptive that is problematic.
We can pursue this history further with a
review of how New Keynesian authors use the
Taylor rule to account for inflation in the postwar period. Here, I follow the points made by
Cochrane (2007b). He notes that these authors (e.g.,
Clarida, Galí, and Gertler, 2000) have argued that
up to around 1980 the Taylor rule being pursued
by the Fed violated the Taylor principle and thus
produced or permitted high inflation; after 1980,
the Fed raised the coefficient on inflation above
unity and inflation was brought down. Yet, if the
Fed before 1980 had such a Taylor rule, then inflation would have been indeterminate. So, in what
sense does this account for any inflation path at
all?2 (This is resolved by Orphanides, who says that, throughout, the Fed had a good rule but just had bad estimates of the output gap in the 1970s; to account for inflation, then, a full model including private sector information and learning is needed, which then makes this a branch of the learning literature and not a rational expectations model like the New Keynesian one.) For the post-1980 period, Cochrane (2007b) argues that the way the rule works to discipline inflation is in any case incredible: In effect, the Fed threatens to raise inflation and interest rates without limit should inflation deviate from the stable path. Because people believe this threat, inflation goes to this unique path. Yet, what stops them from choosing one of these deviant paths, so that the Fed has to go along with them? Deviant paths in models with money supply targets can be suppressed by Fed action on the money supply; here it is not clear what the Fed will do to rule out deviant paths.

Thus, there is a doctrinal puzzle in the Taylor rule approach. The Taylor rule emerged from a money-supply-rule world because models were found to behave rather well when the rule was imposed together with some unspecified device to rule out unstable paths. However, it was forgotten that in previous models that device had involved action on the money supply. I think what this shows is that the Taylor rule is an essentially incomplete statement about monetary policy. One has to assume that the authorities have some additional tool in their locker to rule out unstable paths. Cochrane (2007b) argues this can be a non-Ricardian fiscal policy. It could also be a money supply policy of the central bank.

1 Yet, there is ambivalence even here. For example, McCallum stated in answer to my question at the Carnegie-Rochester 2002 conference on the Taylor rule that the rule was essentially normative, not descriptive. Ireland (2003), at the same conference, however, took the view that it was both a normative rule (enabling monetary economists to coalesce around inflation targeting after years of wrangling about other rules) and positive, in that central banks actually thought of policy in terms of Taylor rules.

2 The Taylor principle and this stable-sunspot corollary can be illustrated for a simple model in which real interest rates are an exogenous AR(1) process (more complex models can produce slightly different Taylor conditions): $r_t = \rho r_{t-1} + \varepsilon_t$. Now add a Taylor rule for inflation only, $R_t = \alpha \pi_t$, and the Fisher identity, $R_t = r_t + E_t \pi_{t+1}$. The general solution of the model is $\pi_t = k r_t + \xi_t$, where $k = 1/(\alpha - \rho)$ and the sunspot $\xi_t = \alpha \xi_{t-1} + \eta_t$, with $\eta_t$ chosen randomly (the solution can be verified by substituting it into $r_t = -E_t \pi_{t+1} + \alpha \pi_t$). If $\alpha \geq 1$, then the sunspot is ruled out by the condition that the solution must be stable. But if $\alpha < 1$, then inflation is a stable process with a sunspot and hence indeterminate in that each period the path can jump anywhere.

Does This Work Compel Us To Believe the Fed Really Was Pursuing a Taylor Rule?
Identification Across Possible Models. The problem with the claim that the Fed projections
rule is descriptive is in a general sense one of
identification across possible models. DSGE
models give rise to the same correlations between
interest rates and inflation, even if the Fed is
doing something quite different, such as targeting the money supply. For example, Minford,
Perugini, and Srinivasan (2002 and 2003) show
this in a DSGE model with Fischer wage contracts. Gillman, Le, and Minford (2007) use a
real business cycle growth model with cash and
credit in advance to derive a steady-state, or
cointegrating, relation between interest rates
and inflation and the growth rate when money
supply growth is fixed—a “speed limit” version
of the rule. The route they use to obtain an apparent Taylor rule is the Fisher equation, which links
nominal interest rates with expected future
inflation and real interest rates; they then use
the relation elsewhere in the model, equating
growth with the real interest rates to obtain a
“Taylor rule.”
This identification would still be a problem
under the projections rule because of how the
transfer function relates the actual data to the
projections; that is, any relationship between
interest rates and FOMC projections could be
translated by this function into one between interest rates and actual data. I will return below to
what this transfer function might look like. For
now, let us just compare the normal Taylor rule
using actual data with the other rule’s implied
Taylor-type equation.
To illustrate the point in detail, consider a
popular DSGE model but with a money supply
rule instead of a Taylor rule:
(IS)  $y_t = \gamma E_{t-1} y_{t+1} - \phi r_t + v_t$

(Phillips)  $\pi_t = \zeta (y_t - y^*) + \nu E_{t-1}\pi_{t+1} + (1 - \nu)\pi_{t-1} + u_t$

(Money supply target)  $\Delta m_t = \bar{m} + \mu_t$

(Money demand)  $m_t - p_t = \psi_1 E_{t-1} y_{t+1} - \psi_2 R_t + \varepsilon_t$

(Fisher identity)  $R_t = r_t + E_{t-1}\pi_{t+1}$
This model implies a Taylor-type relationship
that looks like
$R_t = r^* + \pi^* + \gamma\chi^{-1}(\pi_t - \pi^*) + \psi_1\chi^{-1}(y_t - y^*) + w_t,$

where χ = ψ2γ – ψ1φ and the error term, wt , is
both correlated with inflation and output and
autocorrelated; it contains the current money
supply/demand and aggregate demand shocks
and also various lagged values (the change in
lagged expected future inflation, interest rates,
the output gap, the money demand shock, and
the aggregate demand shock). This particular
Taylor-type relation was created with a combination of equations—the solution of the money
demand and supply curves for interest rates, the
Fisher identity, and the IS curve for expected
future output.3 But other Taylor-type relationships
could be created with combinations of other equations, including the solution equations, generated
by the model. They will all exhibit autocorrelation
and contemporaneous correlation with output
and inflation, clearly of different sorts depending
on the combinations used.
3 From the money demand and money supply equations,
$\psi_2 \Delta R_t = \pi_t - \bar{m} + \psi_1 \Delta E_{t-1} y_{t+1} + \Delta \varepsilon_t - \mu_t.$
Substitute $E_{t-1} y_{t+1}$ from the IS curve and then, inside that, for real interest rates from the Fisher identity, giving
$\psi_2 \Delta R_t = \pi_t - \bar{m} + \frac{\psi_1}{\gamma}\left\{ \phi\left(\Delta R_t - \Delta E_{t-1}\pi_{t+1}\right) + \Delta y_t - \Delta v_t \right\} + \Delta \varepsilon_t - \mu_t;$
then, rearrange this as
$\left(\psi_2 - \frac{\psi_1 \phi}{\gamma}\right) \Delta\left(R_t - R^*\right) = \left(\pi_t - \bar{m}\right) - \frac{\psi_1 \phi}{\gamma}\Delta E_{t-1}\pi_{t+1} + \frac{\psi_1}{\gamma}\Delta\left(y_t - y^*\right) - \frac{\psi_1}{\gamma}\Delta v_t + \Delta \varepsilon_t - \mu_t,$
where the constants $R^*$ and $y^*$ have been subtracted from $R_t$ and $y_t$, respectively, exploiting the fact that when differenced they disappear. Finally, obtain
$R_t = r^* + \pi^* + \gamma \chi^{-1}\left(\pi_t - \pi^*\right) + \psi_1 \chi^{-1}\left(y_t - y^*\right) + \left[\left(R_{t-1} - R^*\right) - \psi_1 \phi \chi^{-1}\Delta E_{t-1}\pi_{t+1} - \psi_1 \chi^{-1}\left(y_{t-1} - y^*\right) - \psi_1 \chi^{-1}\Delta v_t + \gamma \chi^{-1}\Delta \varepsilon_t - \gamma \chi^{-1}\mu_t\right],$
where we have used the steady-state property that $R^* = r^* + \pi^*$ and $\bar{m} = \pi^*$.

Identification is of course a quite separate matter from estimation; the usual assumption is that we have infinite amounts of data to carry out completely accurate estimation. In fact, OLS estimation would be inappropriate, as we have seen, because it forces the error term to be orthogonal
to the regressors, yet because this cannot be the
case, it induces bias. Instead, estimation is done
by a full-information estimator, which allows for
the model’s simultaneity, including of the error
term in this equation. With infinite data, we
retrieve the parameters exactly and also the error
terms. The error term in the Taylor rule proper is,
as we have seen, the “monetary shock” created by
FOMC special judgments on current events. This
is, therefore, like the errors in the Taylor-type
relationships, correlated with current events,
including the output gap and inflation, both
because these influence FOMC judgments (even
if they do not observe the correct values, they
know enough to extract signals from current
reports, snapshot statistics, etc.) and because these
shocks may affect current output and inflation.
Distinguishing between the two equations is
likely to be difficult in general. The error terms
of both the Taylor rule and Taylor-type relations
are autocorrelated and correlated with output
and inflation. The coefficients on output and
inflation in both are positive and that on inflation
in the Taylor rule will be higher than the one in
the Taylor-type relation if ψ2γ – ψ1φ is less than γ .
The constant in both is the steady-state value of
inflation plus the real rate of interest.
Identification by “Narrative Evidence” and
by Projections? Could we nevertheless be confident that there is a Taylor rule because of what
we definitely know about policymakers’ behavior
(what we might call narrative evidence)? In his
replies to my comments, Athanasios Orphanides
stated that FOMC minutes during this sample
period (from 1988) supported the interpretation
that the projections determined interest rate setting. However, the problem is that we cannot
see directly in this way what FOMC policymakers
were doing. They vote and there are minutes, but
we do not know what they are really trying to
do. We know from psychology that people
may describe their actions in one way when in
truth they are being compelled to act (in a
“deterministic” way) by other forces; also there
may be reasons of prudence or politics that lead
people to disguise the motives for their actions.
Even when there is a legal objective, as in the
United Kingdom, policymakers pursue all sorts
of private agendas. Thus, in the United Kingdom
recently we have had different members of the
Bank of England Monetary Policy Committee
being particularly concerned with measures like
house prices, other asset prices, the state of the
labor market, and latterly “moral hazard.” All
these have jostled in the voting for a place in
interest rate setting.
Furthermore, there have been many phases
in U.S. policy, as in U.K. policy. Under Bretton
Woods, the dollar’s fixed rate against the Deutsche
mark put some brakes on U.S. policy. After the
end of Bretton Woods, leading up to the Louvre and Plaza accords, there were still flurries of concern
with exchange rates; intermittently right up to
present times there has been policy concern
with the current account deficit and the need for
exchange-rate movement. In 1979-81 there was a
big debate about money supply targets and an
episode of reserve targeting. Congress mandated
that the Fed give an account of its efforts to hit
various money supply targets in the 1970s and
1980s. Electoral pressures seem to have played a
part at times. Further, we know that for much of
the earlier postwar period some policymakers
believed that inflation could be contained by
wage/price controls and interest rates could be
used to bring down unemployment. Even in
recent times, influential policymakers have been
opposed to an inflation target—including some
policymakers inside the Fed itself—on the
grounds that there needs to be “flexibility” to
deal with unemployment.
Finally, I note that the Fed, more or less now
alone among central banks within the Organisation
for Economic Co-operation and Development,
does not have a formal inflation target set by law.
This certainly makes it harder, even in this recent
sample period from 1988, to use narrative evidence to identify the FOMC’s rule.
Can We Be Confident Because We See Such
a Close Correlation Between Projections and
Interest Rates? It may be argued that such a high
correlation (an R² of over 90 percent) proves
beyond doubt that Fed governors were using
their projections to produce their view on interest
rates. This too is problematic; indeed such a high
R² arouses suspicion.4 We do not know how
these projections are produced, only that each
governor sends them to the meeting having produced them with the help of his or her staff.
They are then cropped and averaged to give the
published values for the Humphrey-Hawkins
legislation’s requirements. A reasonable suspicion
would be that the fit is so close because the
governors want to present a plausible public
case for their views on interest rates; hence,
governors that wish to raise rates will generate
forecasts of higher inflation and/or higher output gap (overheating). Their reasons for raising
rates may be quite different from these. Thus,
their projections are molded by their views, not, as is assumed here, the other way round, with views shaped by projections.
On this skeptical view of such a close fit, we
have no evidence of what was driving the governors’ views. It could be that they are closet monetarists. It could be that they worry about asset
prices or their latest regional data—any number
of things. In the end, it still comes out looking
like they follow a Taylor projections rule.
This way of thinking about FOMC decisions
could account for the lack of shift in the inflation
forecasts after the change from CPI to PCE: If the
governors are just rationalizing their interest rate
decisions by producing projections, they will
choose numbers not in the spirit of a good forecast but more in order to signal clearly the need
they perceive to raise or lower rates. The actual
number would be of little significance; the direction would be solely what mattered.
Consider now what the transfer function
might look like. It translates the governors’ average inflation and unemployment projections into
the state variables producing them. Hence, these
variables would be a mixture that could include
domestic asset prices, the exchange rate, the
money supply, unemployment and its dispersion—any variables that governors believe would
trigger their desired interest rate change.
4 I owe this point to Clemens Kool. In his conference comment, Steven Cecchetti (2008) also questioned the meaning of these forecasts.


CONCLUSIONS
This interesting paper shows that, if one
thinks the Taylor rule definitely describes the
FOMC’s behavior over the past two decades, then
a rather convincing relationship can be found,
though there are concerns about estimation, how
the transfer function relates projections to the
actual data, and the puzzling lack of shift in the
projections in response to well-known shifts in
the environment. Yet the Taylor rule, as its intellectual history suggests, is an incomplete description of monetary policy, at least within a New
Keynesian model; it cannot account for determinate inflation before 1980, and after 1980 it lacks
a clear mechanism for ruling out unstable paths.
If one is not a priori convinced it describes
the FOMC’s behavior in the past two decades,
then there is a nontrivial issue of identification:
Taylor-type relationships can emerge from a
DSGE model where no Taylor rule is guiding
monetary policy. To test the Taylor rule descriptive hypothesis convincingly, one really needs to
compare results for a full model with alternative
formulations of monetary policy. That way we
can see whether the data reject one or the other policy formulation when embedded in a full-model
structure.

REFERENCES
Bryant, Ralph; Hooper, Peter and Mann, Catherine,
eds. Evaluating Monetary Policy Regimes: New
Research in Empirical Macroeconomics.
Washington, DC: Brookings Institution Press, 1993.
Clarida, Richard; Galí, Jordi and Gertler, Mark.
“Monetary Policy Rules and Macroeconomic
Stability: Evidence and Some Theory.” Quarterly
Journal of Economics, February 2000, 115(1),
pp. 147-180.
Cochrane, John H. “Identification with Taylor Rules:
A Critical Review.” NBER Working Paper No. 13410,
National Bureau of Economic Research, September
2007a.
Cochrane, John H. “Inflation Determination with
Taylor Rules: A Critical Review.” NBER Working

F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Paper No. 13409, National Bureau of Economic
Research, September 2007b.
Fair, Ray C. and Taylor, John B. “Solution and
Maximum Likelihood Estimation of Nonlinear
Rational Expectations Models.” Econometrica, July
1983, 51(4), pp. 1169-86.
Gillman, Max; Le, Vo Phuong Mai and Minford,
Patrick. “An Endogenous Taylor Condition in an
Endogenous Growth Monetary Policy Model.”
Economics Working Paper E2007/29, Cardiff
University, 2007.
Henderson, Dale W. and McKibbin, Warwick J. “An
Assessment of Some Basic Monetary Policy Regime
Pairs: Analytical and Simulation Results from
Simple Multi-Region Macroeconomic Models,” in
Ralph Bryant, Peter Hooper, and Catherine Mann,
eds., Evaluating Monetary Policy Regimes: New
Research in Empirical Macroeconomics. Chap. 2.
Washington, DC: Brookings Institution Press, 1993a,
pp. 45-218.
Henderson, Dale W. and McKibbin, Warwick J. “A
Comparison of Some Basic Monetary Policy Regimes
for Open Economies: Implications of Different
Degrees of Instrument Adjustment and Wage
Persistence.” Carnegie-Rochester Conference Series
on Public Policy, December 1993b, 39, pp. 221-317.
Ireland, Peter N. “Robust Monetary Policy with
Competing Reference Models: Comment.” Journal of
Monetary Economics, July 2003, 50(5), pp. 977-82.
Minford, Patrick; Perugini, Francesco and Srinivasan,
Naveen. “Are Interest Rate Regressions Evidence
for a Taylor Rule?” Economics Letters, 2002, 76(1),
pp. 145-50.
Minford, Patrick; Perugini, Francesco and Srinivasan,
Naveen. “How Different Are Money Supply Rules
from Taylor Rules?” Indian Economic Review, July-December 2003, 38(2), pp. 157-166; published version of “The Observational Equivalence of the
Taylor Rule and Taylor-Type Rules.” CEPR Working
Paper 2959, Centre for Economic Policy Research,
2001.

Orphanides, Athanasios. “Historical Monetary Policy
Analysis and the Taylor Rule.” Journal of Monetary
Economics, July 2003, 50(5), pp. 983-1022.
Orphanides, Athanasios and Wieland, Volker.
“Economic Projections and Rules of Thumb for
Monetary Policy.” Federal Reserve Bank of St. Louis
Review, July/August 2008, 90(4), pp. 307-24.
Taylor, John B. “Discretion versus Policy Rules in
Practice.” Carnegie-Rochester Conference Series on
Public Policy, December 1993, 39, pp. 195-214.


House Prices and the Stance of Monetary Policy
Marek Jarociński and Frank R. Smets
This paper estimates a Bayesian vector autoregression for the U.S. economy that includes a housing
sector and addresses the following questions: Can developments in the housing sector be explained
on the basis of developments in real and nominal gross domestic product and interest rates? What
are the effects of housing demand shocks on the economy? How does monetary policy affect the
housing market? What are the implications of house price developments for the stance of monetary
policy? Regarding the latter question, we implement a Céspedes et al. (2006) version of a monetary
conditions index. (JEL E3 E4)
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 339-65.

Marek Jarociński is an economist and Frank R. Smets is Deputy Director General of Research at the European Central Bank. The authors thank their discussants, Bob King and Steve Cecchetti, for their insightful comments.

The current financial turmoil, triggered by increasing defaults in the subprime mortgage market in the United States, has reignited the debate about the effect of the housing market on the economy at large and about how monetary policy should respond to booming house prices.1

1 See the papers presented at the August 30–September 1, 2007, Federal Reserve Bank of Kansas City economic symposium Housing, Housing Finance, and Monetary Policy in Jackson Hole, Wyoming; http://www.kc.frb.org/home/subwebnav.cfm?level=3&theID=10548&SubWeb=5. A literature survey is presented in Mishkin (2007).

Reviewing the role of housing investment in post-WWII business cycles in the United States, Leamer (2007, p. 53) concludes that “problems in housing investment have contributed 26% of the weakness in the economy in the year before the eight recessions” and suggests that, in the most recent boom and bust period, highly stimulative monetary policy by the Fed first contributed to a booming housing market and subsequently led to an abrupt contraction as the yield curve inverted. Similarly, using counterfactual simulations, Taylor (2007) shows that the period of exceptionally low short-term interest rates in 2003 and 2004 (compared with a Taylor rule) may have substantially contributed to the boom in housing starts and may have led to an upward spiral of higher house prices, falling delinquency and foreclosure rates, more favorable credit ratings and financing conditions, and higher demand for housing. As the short-term interest rates returned to normal levels, housing demand fell rapidly, bringing down both construction and house price inflation. In contrast, Mishkin (2007) illustrates the limited ability of standard models to explain the most recent housing developments and emphasizes the uncertainty associated with housing-related monetary transmission channels. He also warns against leaning against rapidly increasing house prices over and above their effects on the outlook for economic activity and inflation and suggests instead a preemptive easing of policy when a house price bubble bursts, to avoid a large loss in economic activity. Even more recently, Kohn (2007, p. 3) says
I suspect that, when studies are done with
cooler reflection, the causes of the swing in
house prices will be seen as less a consequence
of monetary policy and more a result of the
emotions of excessive optimism followed by
fear experienced every so often in the marketplace through the ages…Low policy interest
rates early in this decade helped feed the initial rise in house prices. However, the worst
excesses in the market probably occurred
when short-term interest rates were already
well on their way to more normal levels, but
longer-term rates were held down by a variety
of forces.

In this paper, we review the role of the housing market and monetary policy in U.S. business
cycles since the second half of the 1980s using an
identified Bayesian vector autoregressive (BVAR)
model. We focus on the past two decades for a
number of reasons. First, following the “Great
Inflation” of the 1970s, inflation measured by the
gross domestic product (GDP) deflator has been
relatively stable between 0 and 4 percent since
the mid-1980s. As discussed by Clarida, Galí, and
Gertler (1999) and many others, this is likely partly
the result of a more systematic monetary policy
approach geared at maintaining price stability.
Second, there is significant evidence that the
volatility of real GDP growth has fallen since 1984
(e.g., McConnell and Pérez-Quirós, 2000). An
important component of this fall in volatility has
been a fall in the volatility of housing investment.
Moreover, Mojon (2007) has shown that a major
contribution to the “Great Moderation” has been
a fall in the correlation between interest rate–
sensitive consumer investment, such as housing
investment, and the other components of GDP.
This suggests that the role of housing investment
in the business cycle may have changed since the
deregulation of the mortgage market in the early
1980s. Indeed, Dynan, Elmendorf, and Sichel
(2005) find that the interest rate sensitivity of
housing investment has fallen over this period.
We use the BVAR to perform three exercises.
First, we analyze the housing boom and bust in
the new millennium using conditional forecasts
by asking this question: Conditional on the estimated model, can we forecast the housing boom
and bust based on observed real GDP, prices, and
short- and long-term interest rate developments?
This is a first attempt at understanding the sources
of the swing in residential construction and house
prices in the new millennium. In the benchmark
VAR, our finding is that housing market developments can only partially be explained by nominal
and real GDP developments. In particular, the
strong rise in house prices in 2000 and the peak
of house prices in 2006 cannot be explained.
Adding the federal funds rate to the information
set helps forecast the housing boom. Interestingly,
most of the variations in the term spread can also
be explained on the basis of the short-term interest
rate, but there is some evidence of a long-term
interest rate conundrum in 2005 and 2006. As a
result, observing the long-term interest rate also
provides some additional information to explain
the boom in house prices.
Second, using a mixture of zero and sign
restrictions, we identify the effects of housing
demand, monetary policy, and term spread
shocks on the economy. We find that the effects
of housing demand and monetary policy shocks
are broadly in line with the existing empirical
literature. We also analyze whether these shocks
help explain the housing boom and its effect on
the wider economy. We find that both housing
market and monetary policy shocks explain a
significant fraction of the construction and house
price boom, but their effects on overall GDP
growth and inflation are relatively contained.
Finally, in the light of the above findings and
following a methodology proposed by Céspedes
et al. (2006), we explore the use of a monetary
conditions index (MCI), which includes the federal funds rate, the long-term interest rate spread,
and real house prices, to measure the stance of
monetary policy. The idea of measuring monetary
conditions by taking an appropriate weight of
financial asset prices was pioneered by the Bank
of Canada and the Reserve Bank of New Zealand
in the 1990s. As both countries are small open
economies, these central banks worried about how
changes in the value of the exchange rate may
affect the monetary policy stance.2 The idea was
to construct a weighted index of the short-term
interest rate and the exchange rate, where the
weights reflected the relative effect of those monetary conditions on an intermediate or final target
variable, such as the output gap, output growth,
or inflation. A number of authors have extended
the idea of an MCI to other asset prices, arguing
that those asset prices may be equally or more
important than the exchange rate. A prominent
example is Goodhart and Hofmann (2007), who
argue that real house prices should receive a significant weight because of their large effect on the
economy and inflation in particular. In contrast
to this literature, the crucial feature of the MCI
methodology proposed by Céspedes et al. (2006)
is that it takes into account that interest rates and
house prices are endogenous variables that systematically respond to the state of the economy.
As a result, their MCI can more naturally be interpreted as a measure of the monetary policy stance.
Using the identified BVAR, we apply the methodology to question whether the rise in house prices
and the fall in long-term interest rates led to an
implicit easing of monetary policy in the United
States.
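For readers unfamiliar with the construction, an MCI is simply a weighted combination of deviations of the relevant financial variables from reference levels; the toy index below uses invented weights and reference values purely for illustration and, unlike the Céspedes et al. (2006) measure implemented later in the paper, makes no attempt to deal with the endogeneity of these variables.

    # Toy monetary conditions index: a weighted sum of gaps between the funds rate, the term
    # spread, and real house price growth and illustrative reference values. The weights and
    # reference levels are placeholders, not the estimates used in this paper.
    def toy_mci(funds_rate, spread, house_price_growth,
                ref=(5.0, 1.4, 1.5), weights=(1.0, -0.5, -0.2)):
        gaps = (funds_rate - ref[0], spread - ref[1], house_price_growth - ref[2])
        return sum(w * g for w, g in zip(weights, gaps))

    # A funds rate 1 point below reference with house prices growing 5 points above reference
    # reads as easy monetary conditions (a negative index).
    print(toy_mci(funds_rate=4.0, spread=1.4, house_price_growth=6.5))   # -2.0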
In the next section, we present two estimated
BVAR specifications. We then use both BVARs
to calculate conditional forecasts of the housing
market boom and bust in the new millennium.
In the third section, we identify housing demand,
monetary policy, and term spread shocks and
investigate their effect on the U.S. economy.
Finally, in the fourth section we develop MCIs and
show using a simple analytical example how the
methodology works and why it is important to
take into account the endogeneity of short- and
long-term interest rates and house prices with
respect to the state of the economy. We then use
the estimated BVARs to address whether long-term
interest rates and house prices play a significant
role in measuring the stance of monetary policy.
A final section contains some conclusions and
discusses some of the shortcomings and areas for
future research.
2 See, for example, Freedman (1994 and 1995a,b) and Duguay (1994).
A BVAR WITH HOUSING FOR
THE U.S. ECONOMY
In this section, we present the results from
estimating a nine-variable BVAR of order five for
the U.S. economy. In addition to standard variables, such as real GDP, the GDP deflator, commodity prices, the federal funds rate, and M2, we
include real consumption, real housing investment, real house prices, and the long-term interest
rate spread. To measure house price inflation, we
use the nationwide Case-Shiller house price index,
which limits our sample to 1987:Q1-2007:Q2.
The two estimated BVAR specifications are as
follows: One is a traditional VAR in levels (LVAR)
that uses a standard Minnesota prior. The other
is a differences VAR (DVAR) that is specified in
growth rates and uses priors about the steady
state (see Villani, 2008).
More specifically, in the LVAR, the vector of
endogenous variables is given by
(1)  $[\, y_t,\; c_t,\; p_t,\; HI_t/Y_t,\; hp_t - p_t,\; cp_t,\; i_t,\; s_t,\; m_t \,],$

where all variables are in logs, with the exception
of the federal funds rate (it), the long-term interest
rate spread (st ), and the housing investment share
of GDP (HIt /Yt); yt is real GDP; ct is real consumption; pt is the GDP deflator; hpt is house prices;
cpt is commodity prices; and mt is the money
stock.3
In the DVAR, the vector of endogenous variables is instead given by
(2)  $[\, \Delta y_t,\; \Delta c_t,\; \Delta p_t,\; HI_t/Y_t,\; \Delta hp_t - \Delta p_t,\; \Delta cp_t,\; i_t,\; s_t,\; \Delta m_t \,],$

where ∆ is the difference operator and the BVAR
is parameterized in terms of deviations from
steady state.
The main difference between the two specifications is related to the assumptions one makes
about the steady state of the endogenous variables. The advantage of the DVAR with a prior on the joint steady state is that it guarantees that the growth rates are reasonable and mutually consistent in the long run, in spite of the short sample used in the estimation. The cost is that it discards important sample information contained in the LVAR variables. As we discuss below, this may be the main reason behind the larger error bands around the DVAR impulse responses and conditional projections. Although the forecasts of the LVAR match the data better at shorter horizons, the longer-run unconditional forecasts it produces make less sense from an economic point of view. Because these considerations may matter for assessing the monetary policy stance, we report the findings using both specifications.

3 See the data appendix for the sources of the time series.

Table 1
Prior and Posterior Means and Standard Deviations of the Steady States in the DVAR

Variable                      Prior mean   Prior SD   Posterior mean   Posterior SD
Real GDP growth                   2.50        0.50         2.96            0.22
Real consumption growth           2.50        0.71         3.23            0.22
GDP deflator inflation            2.00        0.20         2.21            0.15
Housing investment/GDP            4.50        1.00         4.51            0.07
House price growth                0.00        2.00         1.52            1.08
Commodity price growth            2.00        2.00         2.00            1.54
Federal funds rate                4.50        0.62         5.05            0.34
Term spread                       1.00        1.00         1.42            0.24
Money growth                      4.50        1.00         4.35            0.51
In both cases the estimation is Bayesian. In the
case of the DVAR, it involves specifying a prior
on the steady state of the VAR and a Minnesota
prior on dynamic coefficients, as introduced in
Villani (2008). The Minnesota prior uses standard
settings, which are the same as the settings used
for the LVAR. In the DVAR, the informative prior
on the steady state serves two roles: First, it regularizes the inference on the steady states of variables. Without it, the posterior distribution of
the steady states is ill-specified because of the
singularity at the unit root. Second, and this is
our innovation with respect to the approach of
Villani (2008), through it we use economic theory
to specify prior correlations between steady states.
The steady-state nominal interest rate is, by the
Fisher equation, required to be the sum of the
steady-state inflation rate and the equilibrium
real interest rate. The steady-state real interest
rate is, in turn, required to be equal to the steady-state output growth rate plus a small error reflecting time preference and a risk premium. The
steady-state output and consumption growth
342

J U LY / A U G U S T

2008

rates are also correlated a priori, as we think of
them as having a common steady state.
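One way to see how such restrictions induce correlations in the prior is to write the steady states as functions of a few primitive quantities and simulate; the parameterization and numbers below are purely illustrative and are not the prior settings used in our estimation.

    import numpy as np

    rng = np.random.default_rng(1)
    draws = 10_000

    # Primitive steady-state quantities (illustrative prior means and standard deviations).
    g = rng.normal(2.5, 0.5, draws)      # common steady-state growth of output and consumption
    pi = rng.normal(2.0, 0.2, draws)     # steady-state inflation
    wedge = rng.normal(0.5, 0.3, draws)  # real-rate wedge (time preference, risk premium)

    r_real = g + wedge                   # steady-state real rate tied to trend growth
    i_nominal = r_real + pi              # Fisher equation: nominal rate = real rate + inflation

    # Implied prior correlation between the steady-state nominal rate and inflation.
    print(np.corrcoef(i_nominal, pi)[0, 1])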
The prior and posterior means and standard
deviations of the steady states in the DVAR are
given in Table 1.
Figure 1 plots the data we use, as well as their
estimated steady-state values from the DVAR. The
steady-state growth rate of real GDP is estimated
to be close to 3 percent over the sample period.
Average GDP deflator inflation is somewhat above
2 percent. The steady-state housing investment–
to-GDP ratio is about 4.5 percent. During the new
millennium construction boom, the ratio rose by
1 percentage point, peaking at 5.5 percent in 2005
before dropping below its long-term average in
the second quarter of 2007. Developments in real
house prices mirror the developments in the construction sector. The estimated steady-state real
growth rate of house prices is 1.5 percent over the
sample period. However, changes in real house
prices were negative during the early-1990s recession. The growth rate of house prices rose above
average in the late 1990s and accelerated significantly above its estimated steady state, reaching
a maximum annual growth rate of more than 10
percent in 2005 before falling abruptly to negative growth rates in 2006 and 2007. Turning to
interest rate developments, the estimated steady-state nominal interest rate is around 5 percent.
The estimated steady-state term spread, that is,
the difference between the 10-year bond yield
rate and the federal funds rate, is 1.4 percent. In
the analysis below, we will focus mostly on the
boom and bust period in the housing market
starting in 2000.
Figure 1
Data Used and Their Estimated Steady-State Values from the DVAR

[Figure: nine panels—Output, Consumption, Prices, Housing Investment, House Prices, Commodity Prices, Interest Rate, Spread, and Money—each plotting the data over the sample period together with the posterior mean and 16th/84th percentiles of the estimated steady state.]
Using both BVAR specifications, we then ask
the following question: Can we explain developments in the housing market based on observed
developments in real and nominal GDP and the
short- and long-term interest rates? To answer this
question we make use of the conditional forecasting methodology developed by Doan, Litterman,
and Sims (1984) and Waggoner and Zha (1999).
Figures 2A and 2B report the results for the
DVAR and the LVAR, respectively, focusing on
the post-2000 period. Each figure shows the actual
developments of the housing investment–to-GDP
ratio (first column) and the annual real growth
rate of house prices (second column). Dotted black
lines denote unconditional forecasts, and blue
lines denote conditional forecasts, conditioning
on observed real and nominal GDP (first row),
observed real and nominal GDP and the federal
funds rate (second row), and observed real and
nominal GDP, the federal funds rate, and the term
spread (third row). Note that this is an in-sample
analysis in that the respective VARs are estimated
over the full sample period. The idea behind
increasing the information set is to see to what
extent short- and long-term interest rates provide
information about developments in the housing
Figure 2A
Housing Investment–to-GDP Ratio and Annual House Price Growth Rate, 1995-2007: Actual Data and Unconditional and Conditional Forecasts from the DVAR

[Figure: six panels—housing investment (left column) and house prices (right column), conditional on y,p (first row), y,p,i (second row), and y,p,i,s (third row)—each showing the actual data, the unconditional forecast mean, and the conditional forecast mean with 16th/84th percentile bands.]

market, in addition to the information already
contained in real and nominal GDP.
A number of interesting observations can be
made. First, as discussed above, the unconditional
forecasts of housing investment and real house
price growth are quite different in both VARs. The
DVAR projects the housing investment–to-GDP
ratio to fluctuate mildly around its steady state,
while the growth rate of house prices is projected
to return quite quickly to its steady state of 1.5
percent from the relatively high level of growth
of more than 5 percent at the end of 1999. The
LVAR instead captures some of the persistent in-sample fluctuations and projects a further rise in
housing investment and the growth rate of house
prices before it returns close to the sample mean
in 2007.
Second, based on the DVAR in Figure 2A,
neither GDP developments nor short- or long-term
interest rates can explain why real house prices
continued to grow at rates above 5 percent following the slowdown of the economy in 2000 and
2001. Real and nominal GDP developments can
explain an important fraction of the housing boom
in 2002 and 2003, but they cannot account for the
10 percent acceleration of house prices in 2004
Figure 2B
Housing Investment–to-GDP Ratio and Annual House Price Growth Rate, 1995-2007: Actual Data and Unconditional and Conditional Forecasts from the LVAR

[Figure: same layout as Figure 2A—housing investment (left column) and house prices (right column), conditional on y,p; y,p,i; and y,p,i,s—showing the actual data, the unconditional forecast mean, and the conditional forecast mean with 16th/84th percentile bands.]

and 2005. The low level of short- and long-term
interest rates in 2004 and 2005 helps explain the
boom in those years. In particular, toward the end
of 2004 and in 2005, the unusually low level of
long-term interest rates helps account for the acceleration in house prices. According to this model,
there is some evidence of a conundrum: In this
period, long-term interest rates are lower than
would be expected on the basis of observed short-term interest rates. The ability to better forecast
the boom period comes, however, at the expense
of a larger unexplained undershooting of house
prices and housing investment toward the end of
the sample. Overall, these results suggest that
the unusually low level of short- and long-term
interest rates may have contributed to the boom
in U.S. housing markets in the new millennium.
Third, the LVAR results in Figure 2B are, however, less clear. The part of the housing boom that
cannot be explained by developments in real
and nominal GDP is smaller. Moreover, adding
short- and long-term interest rates to the data set
does not change the picture very significantly.
These findings suggest that the results of this analysis partly depend on the assumed steady-state
behavior of the housing market and interest rates.
IDENTIFYING HOUSING
DEMAND, MONETARY POLICY,
AND TERM SPREAD SHOCKS
To add a bit more structure to the analysis, in
this section we identify housing demand, monetary policy, and term spread shocks and analyze
their effect on the economy. We use a mixture of
a recursive identification scheme and sign restrictions. As usual, monetary policy shocks are identified by zero restrictions. They are assumed to
affect economic activity and prices with a one-quarter lag, but they may have an immediate
effect on the term spread and the money stock.
The housing demand shock is a shock that affects
housing investment and house prices contemporaneously and in the same direction. Moreover,
its immediate effect on output is roughly equal
to the increase in housing investment (i.e., this
shock has no contemporaneous effect on the other
components of output taken together).
We use sign restrictions to impose this identification scheme (for a discussion of VAR identification with sign restrictions, see, for example, Uhlig, 2005). For simplicity, we also assume
that the housing demand shock affects the GDP
deflator only with a lag. The shock that affects
housing investment and house prices in opposite
directions can be interpreted as a housing supply
shock. However, it turns out that this shock
explains only a small fraction of developments
in the housing market, so we will not explicitly
discuss this shock. Figure 3 shows for the DVAR
(shaded areas) and the LVAR (dotted lines) the
68 percent posterior probability regions of the
estimated impulses.
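As an illustration of the sign-restriction step, the sketch below (not the authors' code; the covariance matrix, variable ordering, and acceptance rule are placeholders, and the zero restrictions on output and prices are not implemented here) draws candidate impact matrices and keeps those in which the housing demand shock moves housing investment and house prices in the same direction:

```python
import numpy as np

def draw_impact_matrix(sigma_u, rng):
    """One candidate impact matrix: the Cholesky factor of the reduced-form
    covariance rotated by a random orthogonal matrix (QR of a Gaussian draw)."""
    n = sigma_u.shape[0]
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q *= np.sign(np.diag(r))              # make the rotation draw uniform (Haar)
    return np.linalg.cholesky(sigma_u) @ q

def is_housing_demand_shock(impact, i_hinv, i_hp, j_shock):
    """Sign restriction: housing investment and house prices rise together on
    impact (the shock is normalized so that house prices rise)."""
    col = impact[:, j_shock]
    if col[i_hp] < 0:
        col = -col
    return col[i_hinv] > 0 and col[i_hp] > 0

rng = np.random.default_rng(1)
sigma_u = np.array([[1.0, 0.3, 0.2],      # placeholder reduced-form covariance;
                    [0.3, 1.0, 0.4],      # ordering: housing investment,
                    [0.2, 0.4, 1.0]])     # house prices, federal funds rate
accepted = []
while len(accepted) < 100:
    B0 = draw_impact_matrix(sigma_u, rng)
    if is_housing_demand_shock(B0, i_hinv=0, i_hp=1, j_shock=0):
        accepted.append(B0)               # impulse responses follow from each accepted B0
```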
A number of observations are worth making.
Overall, both VAR specifications give similar estimated impulse response functions. One difference
worth noting is that, relative to the LVAR specification, the DVAR incorporates larger and more
persistent effects on house prices and the GDP
deflator. In what follows, we focus on the more
precisely estimated LVAR specification. According
to Figure 3, a one-standard-deviation housing
demand shock leads to a persistent rise in real
house prices of about 0.75 percent and an increase
in the housing investment share of about 0.05 percentage points. The effect on the overall economy
is for real GDP to rise by about 0.10 percent after
four quarters, whereas the effect on the GDP
deflator takes longer (about three years) to peak
at 0.08 percent above baseline. Note that, in the
DVAR specification, the peak effect on goods
prices is quite a bit larger. The monetary policy
response as captured by the federal funds rate is
initially limited, but eventually the federal funds
rate increases by about 20 basis points after two
years. The initial effect on the term spread is positive, reflecting that long-term interest rates rise
in anticipation of inflation and a rise in short-term rates.
To assess how reasonable these quantitative
effects are, it is useful to compare them with other
empirical results. One relevant literature is the
empirical literature on the size of wealth/collateral
effects of housing on consumption. As discussed
in Muellbauer (2007) and Mishkin (2007), the
empirical results are somewhat diverse, but some
of the more robust findings suggest that the wealth
effects from housing are approximately twice as
large as those from stock prices. For example,
Carroll, Otsuka, and Slacalek (2006) estimate that
the long-run marginal propensity to consume out
of a dollar increase in housing is 9 cents, compared
with 4 cents for non-housing wealth. Similarly,
using cross-country time series, Slacalek (2006)
finds that it is 7 cents out of a dollar. Overall, the
long-run marginal propensities to consume out
of housing wealth range from 5 to 17 percent, but
a reasonable median estimate is probably around
7 to 8 percent compared with a 5 percent elasticity
for stock market wealth. How does this compare
with the elasticities embedded in our estimated
impulse response to a housing price shock? A 1
percent persistent increase in real house prices
leads to a 0.075 percent increase in real consumption after four quarters. Taking into account that
the housing wealth–to-consumption ratio is
around 3 in the United States, this suggests a
marginal propensity to consume about one-third
of the long-run median estimate reported above.
This lower effect on consumption may partly be
explained by the fact that the increase in house
prices is temporary. The mean elasticities embedded in the DVAR are somewhat lower.
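The mapping from this consumption elasticity to a marginal propensity to consume is simple arithmetic; the numbers below are those quoted in the text:

```python
# Back-of-the-envelope MPC implied by the estimated impulse responses.
elasticity_c   = 0.075 / 100   # consumption response to a 1 percent house price increase
dhp            = 1.0 / 100     # the house price increase itself
wealth_to_cons = 3.0           # housing wealth-to-consumption ratio (approx., United States)

# dC / d(housing wealth) = (elasticity_c * C) / (dhp * wealth_to_cons * C)
mpc = elasticity_c / (dhp * wealth_to_cons)
print(f"implied MPC out of housing wealth: {mpc:.3f}")               # about 0.025 (2.5 cents)
print(f"fraction of the 7-8 cent median estimate: {mpc/0.075:.2f}")  # roughly one-third
```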
Figure 3
Impulse Responses to a Housing Demand Shock, DVAR and LVAR

[Figure: impulse responses over 20 quarters of Output, Consumption, Prices, Housing Investment, House Prices, Commodity Prices, Interest Rate, Spread, and Money; each panel shows the DVAR 68 percent band, the LVAR mean, and the LVAR 68 percent band.]
We can also compare our estimated impulse
responses with simulations in Mishkin (2007)
that use the Federal Reserve Bank U.S. (FRB/US)
model. Mishkin (2007, Figure 5) reports that a 20
percent decline in real house prices under the
estimated Taylor rule leads to a 1.5 percent deviation of real GDP from baseline in a version of the
FRB/US with magnified channels, and to only a
bit more than 0.5 percent in the benchmark version (which excludes an effect on real housing
investment). Translating our results to a 20 percent real house price shock suggests a multiplier
of 2.5 percent. This multiplier is quite a bit higher
than that suggested by the FRB/US simulations,
but in our case this may be partly the result of
the strong immediate response of housing
investment.
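The multiplier quoted above follows from linear scaling of the estimated impulse responses; a quick check using the peak responses reported earlier (house prices up 0.75 percent and real GDP up 0.10 percent after four quarters) gives a number of the same order:

```python
# Linear rescaling of the housing demand shock responses to a 20 percent house price move.
hp_resp, gdp_resp = 0.75, 0.10          # percent responses quoted in the text
print(f"implied GDP response: {20.0 / hp_resp * gdp_resp:.1f} percent")  # about 2.7 percent
```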
Finally, we can also compare the estimated
impulse responses of Figure 3 with the impulse
responses to a positive housing preference shock
in the estimated structural DSGE model of the
U.S. economy in Iacoviello and Neri (2007).
They find that a 1 percent persistent increase in
real house prices is associated with a 0.07 percent increase in consumption and a 3.6 percent
increase in real housing investment. Whereas our
estimated elasticity of real consumption is very
similar, the elasticity of real housing investment
Figure 4
Impulse Responses to a Monetary Policy Shock, DVAR and LVAR

[Figure: impulse responses over 20 quarters of Output, Consumption, Prices, Housing Investment, House Prices, Commodity Prices, Interest Rate, Spread, and Money; each panel shows the DVAR 68 percent band, the LVAR mean, and the LVAR 68 percent band.]
is quite a bit lower at approximately 1.5 percent.
It falls at the lower bound of the findings of Topel
and Rosen (1988), who estimate that, for every 1
percent increase in house prices lasting for two
years, new construction increases on impact
between 1.5 and 3.15 percent, depending on the
specifications.
Turning to a monetary policy shock, the LVAR
results in Figure 4 show that a persistent 25-basis-point tightening of the federal funds rate has the
usual delayed negative effects on real GDP and the
GDP deflator. The size of the real GDP response
is quite small, with a maximum mean negative
effect of about 0.1 percent deviation from baseline after three years. This effect is even smaller
and less significant in the DVAR specification.
For the LVAR specification, the effect on housing
investment is larger and quicker, with a maximum negative effect of 0.03 percentage points of
GDP (which would correspond to approximately
0.75 percent change) after about two years. Real
house prices also immediately start falling and
bottom out at 0.5 percent below baseline after
two and a half years. The housing market effects
are somewhat stronger in the DVAR specification. The higher sensitivity of housing investment to a monetary policy shock is consistent
with the findings in the literature. For example,
Figure 5
Impulse Responses to a Term Spread Shock, DVAR and LVAR

[Figure: impulse responses over 20 quarters of Output, Consumption, Prices, Housing Investment, House Prices, Commodity Prices, Interest Rate, Spread, and Money; each panel shows the DVAR 68 percent band, the LVAR mean, and the LVAR 68 percent band.]
using identified VARs, Erceg and Levin (2002)
find that housing investment is about 10 times as
responsive as consumption to a monetary policy
shock. Our results are also comparable with those
reported in Mishkin (2007) using the FRB/US
model. In those simulations, a 100-basis-point
increase in the federal funds rate leads to a fall
in real GDP of about 0.3 to 0.4 percent, although
the lags (6 to 8 quarters) are somewhat smaller
than those in our estimated BVARs. Further, the
effect on real housing investment is faster (within
a year) and larger, but the estimated magnitude
of these effects (between 1 and 1.25 percent) is
quite a bit larger in our case (around 2.5 percent).
Dynan, Elmendorf, and Sichel (2005) argue that
the interest rate sensitivity of real housing investment has fallen since the second half of the 1980s
(partly the result of deregulation of the mortgage
market in the early 1980s). Our results suggest
elasticities that are more in line with Erceg and
Levin (2002) than with the FRB/US simulations.
Our results can also be compared with the
impulse responses to an adverse interest rate
shock in Iacoviello and Neri (2007). They find
that a 50-basis-point temporary increase in the
federal funds rate leads to a fall in real house
prices of about 0.75 percent from baseline, compared with a delayed 1 percent fall in real house
Table 2A
Shares of Housing Demand, Monetary Policy, and Term Spread Shocks in Variance Decompositions, DVAR

                                            Horizon
Variable              Shock                 0        3        11       23
Output                Housing               0.016    0.034    0.052    0.062
                      Monetary policy       0.000    0.004    0.021    0.039
                      Term premium          0.000    0.003    0.015    0.028
Consumption           Housing               0.005    0.018    0.033    0.055
                      Monetary policy       0.000    0.003    0.015    0.029
                      Term premium          0.000    0.005    0.034    0.063
Prices                Housing               0.002    0.013    0.120    0.166
                      Monetary policy       0.000    0.003    0.014    0.037
                      Term premium          0.000    0.006    0.034    0.046
Housing investment    Housing               0.521    0.579    0.382    0.291
                      Monetary policy       0.000    0.015    0.175    0.136
                      Term premium          0.000    0.005    0.023    0.062
House prices          Housing               0.535    0.554    0.410    0.242
                      Monetary policy       0.000    0.010    0.068    0.083
                      Term premium          0.000    0.002    0.021    0.060
Commodity prices      Housing               0.027    0.028    0.041    0.085
                      Monetary policy       0.000    0.012    0.167    0.222
                      Term premium          0.000    0.004    0.018    0.055
Interest rate         Housing               0.037    0.061    0.165    0.178
                      Monetary policy       0.752    0.496    0.192    0.166
                      Term premium          0.000    0.023    0.076    0.088
Spread                Housing               0.090    0.050    0.177    0.186
                      Monetary policy       0.223    0.303    0.214    0.206
                      Term premium          0.336    0.245    0.146    0.134
Money                 Housing               0.060    0.044    0.062    0.099
                      Monetary policy       0.204    0.141    0.044    0.045
                      Term premium          0.013    0.042    0.129    0.135

NOTE: The reported shares are averages over the posterior distribution and relate to the (log) level variables.

Table 2B
Shares of Housing Demand, Monetary Policy, and Term Spread Shocks in Variance Decompositions, LVAR

                                            Horizon
Variable              Shock                 0        3        11       23
Output                Housing               0.019    0.049    0.073    0.106
                      Monetary policy       0.000    0.005    0.036    0.052
                      Term premium          0.000    0.005    0.026    0.026
Consumption           Housing               0.005    0.021    0.051    0.093
                      Monetary policy       0.000    0.008    0.040    0.051
                      Term premium          0.000    0.005    0.021    0.024
Prices                Housing               0.002    0.017    0.127    0.153
                      Monetary policy       0.000    0.005    0.038    0.114
                      Term premium          0.000    0.005    0.012    0.016
Housing investment    Housing               0.582    0.554    0.357    0.351
                      Monetary policy       0.000    0.027    0.124    0.125
                      Term premium          0.000    0.015    0.021    0.019
House prices          Housing               0.586    0.610    0.360    0.229
                      Monetary policy       0.000    0.011    0.087    0.066
                      Term premium          0.000    0.003    0.010    0.014
Commodity prices      Housing               0.030    0.044    0.154    0.149
                      Monetary policy       0.000    0.008    0.072    0.100
                      Term premium          0.000    0.005    0.012    0.015
Interest rate         Housing               0.032    0.055    0.217    0.211
                      Monetary policy       0.709    0.453    0.206    0.177
                      Term premium          0.000    0.007    0.018    0.018
Spread                Housing               0.072    0.048    0.129    0.150
                      Monetary policy       0.230    0.281    0.163    0.152
                      Term premium          0.355    0.215    0.114    0.085
Money                 Housing               0.040    0.036    0.053    0.066
                      Monetary policy       0.257    0.237    0.089    0.060
                      Term premium          0.015    0.020    0.021    0.025

NOTE: The reported shares are averages over the posterior distribution and relate to the (log) level variables.

Figure 6A, Panel 1
Counterfactuals, Shutting Down Each of the Identified Shocks, DVAR

[Figure: counterfactual paths of the housing investment–to-GDP ratio (left column) and annual real house price growth (right column) when the housing demand shock (first row), the monetary policy shock (second row), and the term spread shock (third row) are shut down in turn; each panel shows the counterfactual, the unconditional forecast, and the actual data.]

prices in our case (the delay is partly the result
of our recursive identification assumption).
According to the estimates of Iacoviello and Neri
(2007), real investment responds six times more
strongly than real consumption and two times
more strongly than real fixed investment. Overall,
this is consistent with our results. However, the
effects in Iacoviello and Neri (2007) are immediate,
whereas they are delayed in our case. (See also
Del Negro and Otrok, 2007.)
In conclusion, the overall quantitative estimates of the effects of a monetary policy shock
are in line with those found in the empirical literature. Similarly to our results, Goodhart and
Hofmann (2007) find that a one-standard-deviation
shock to the real short-term interest rate has about
the same quantitative effect on the output gap as
a one-standard-deviation shock to the real house
price gap.
Finally, in the light of the discussion of the
effects of developments in long-term interest rates
on the house price boom and bust in the United
States and many other countries, it is also interesting to look at the effects of a term spread shock
Figure 6A, Panel 2
Counterfactuals, Shutting Down Each of the Identified Shocks, DVAR

[Figure: counterfactual paths of GDP (first column), prices (second column), and the interest rate (third column) when the housing demand shock (first row), the monetary policy shock (second row), and the term spread shock (third row) are shut down in turn; each panel shows the counterfactual, the unconditional forecast, and the actual data.]

on the housing market. Figure 5 shows that a 20-basis-point increase in long-term interest rates
over the federal funds rate has a quite significant
effect on housing investment, which drops by
more than 0.014 percentage points of GDP (which
corresponds to a 0.3 percent change) after about
a year. Also, real GDP falls with a bit more of a
delay, by about 0.075 percent after six quarters.
Both the GDP deflator and real house prices fall,
but only gradually. Overall, the size of the impulse
responses is, however, small.
Tables 2A and 2B report the contribution of
the three shocks to the forecast-error variance at
different horizons in both specifications. Overall,
the housing demand, monetary policy, and term
spread shocks account for only a small fraction
of the total variance in real GDP and in the GDP
deflator. Monetary policy and housing demand
shocks do, however, account for a significant
fraction of the variance in the housing market.
This can be verified by looking at the contribution of the three shocks to the historical boom
and bust episode since 2000, as depicted in
Figure 6B, Panel 1
Counterfactuals, Shutting Down Each of the Identified Shocks, LVAR

[Figure: counterfactual paths of the housing investment–to-GDP ratio (left column) and annual real house price growth (right column) when the housing demand shock (first row), the monetary policy shock (second row), and the term spread shock (third row) are shut down in turn; each panel shows the counterfactual, the unconditional forecast, and the actual data.]

Figure 6A for the DVAR and 6B for the LVAR.
Panel 1 of each figure shows the developments
of the real housing investment–to-GDP ratio (first
column) and the annual change in real house
prices (second column). Panel 2 of each figure
shows output (first column), prices (second column), and interest rates (third column). Each
graph includes the actual data (black lines), unconditional forecasts as of 2000 (black dotted lines),
and the counterfactual evolution (blue dashed
lines) when each of the following three identified
shocks is put to zero: a housing demand shock
(first row), monetary policy shock (second row),
and term spread shock (third row).
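A counterfactual of this kind can be computed by re-simulating the identified VAR over the episode while feeding in the historical structural shocks except the one being shut down. The sketch below is schematic (not the authors' code): array names, dimensions, and the example matrices are placeholders, and deterministic terms are omitted.

```python
import numpy as np

def counterfactual_path(init_lags, A, B0, eps, shut_down):
    """Re-simulate a VAR(p), y_t = sum_i A[i] y_{t-i} + B0 eps_t, with the
    structural shocks listed in `shut_down` set to zero.
    init_lags: (p, n) initial lags, most recent last; A: list of p (n, n) lag
    matrices; B0: (n, n) impact matrix; eps: (T, n) identified shocks."""
    eps = eps.copy()
    for j in shut_down:
        eps[:, j] = 0.0                                     # shut down this shock
    lags = [init_lags[-i] for i in range(1, len(A) + 1)]    # y_{t-1}, y_{t-2}, ...
    path = []
    for t in range(eps.shape[0]):
        y_t = sum(A_i @ lag for A_i, lag in zip(A, lags)) + B0 @ eps[t]
        path.append(y_t)
        lags = [y_t] + lags[:-1]
    return np.array(path)

# Tiny bivariate VAR(1) example with a "policy" shock in position 1:
A   = [np.array([[0.8, 0.1], [0.0, 0.7]])]
B0  = np.array([[1.0, 0.2], [0.0, 1.0]])
eps = np.random.default_rng(2).standard_normal((8, 2))
baseline  = counterfactual_path(np.zeros((1, 2)), A, B0, eps, shut_down=())
no_policy = counterfactual_path(np.zeros((1, 2)), A, B0, eps, shut_down=(1,))
```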
For the DVAR (Figure 6A), the term spread
shock does not have a visible effect on the housing
market or the economy as a whole. The housing
demand shock has a large positive effect on the
housing market in 2001 and 2002 and again in
2004 and 2005. A negative demand shock also
explains a large fraction of the fall in construction
and house price growth from 2006 onward. These
shocks have only negligible effects on overall GDP
growth, but do seem to have pushed up inflation
Figure 6B, Panel 2
Counterfactuals, Shutting Down Each of the Identified Shocks, LVAR

[Figure: counterfactual paths of GDP (first column), prices (second column), and the interest rate (third column) when the housing demand shock (first row), the monetary policy shock (second row), and the term spread shock (third row) are shut down in turn; each panel shows the counterfactual, the unconditional forecast, and the actual data.]

by 10 to 20 basis points over most of the post-2000
period. Loose monetary policy also seems to have
contributed to the housing boom in 2004 and 2005.
Without the relatively easy policy of late 2003 and
early 2004, the boom in house price growth would
have stayed well below the 10 percent growth rate
in 2005. Easy monetary policy also has a noticeable,
though small, effect on GDP growth and inflation.
The LVAR results depicted in Figure 6B give
similar indications, although they generally attribute an even larger role to the housing demand
shocks.
HOUSE PRICES AND THE
MONETARY POLICY STANCE
IN THE UNITED STATES
The idea of measuring monetary conditions
by taking an appropriate weight of interest rates
and asset prices was pioneered by the Bank of
Canada and the Reserve Bank of New Zealand in
the 1990s. Because both countries are small open
economies, these central banks worried about
how changes in the value of the exchange rate
may affect the monetary policy stance.5 The idea
was to construct a weighted index of the short-term interest rate and the exchange rate, where
the weights reflected the relative effect of the
exchange rate on an intermediate or final target
variable, such as the output gap, output growth,
or inflation. A number of authors have extended
the idea of the MCI to other asset prices, arguing
that those asset prices may be equally or more
important than the exchange rate. One prominent
example is Goodhart and Hofmann (2007), who
argue that real house prices should receive a significant weight in an MCI because of their significant effect on the economy. For the United States,
they argue that the relative weight of the short-term
interest rate versus house prices should be of the
order of 0.6 to 1.8.
In the small literature that developed following
the introduction of the MCI concept, a number
of shortcomings have been highlighted.6 One
difficulty is that the lag structure of the effects of
changes in the interest rate and real house prices
on the economy may be different. As noted above,
according to our estimates, the effect of an interest
rate shock on economic activity appears to take
somewhat longer than the effect of a house price
shock. In response, Batini and Turnbull (2002; BT)
proposed a dynamic MCI that takes into account
the different lag structures by weighting all current
and past interest rates and asset prices with their
estimated impulse responses. Another shortcoming of the standard MCI is that it is very difficult
to interpret the MCI as an indicator of the monetary policy stance, because it does not take into
account that changes in monetary conditions will
typically be endogenous to the state of the economy. The implicit assumption of the standard MCI
is that the monetary conditions are driven by
exogenous shocks. This is clearly at odds with
the identified VAR literature that suggests that
most of the movements in monetary conditions
are in response to the state of the economy. For
example, changes in the federal funds rate will
typically be in response to changing economic conditions and a changing outlook for price stability. An alternative way of expressing this drawback is that the implicit benchmark against which the MCI is measured does not depend on the likely source of the shocks in the economy. As a result, the benchmark in the standard MCI does not depend on the state of the economy, although clearly for given objectives the optimal MCI will vary with the shocks to the economy. A third shortcoming is that the construction of an MCI often does not take into account that the estimated weights of its various components are subject to uncertainty and estimation error. This uncertainty needs to be taken into account when interpreting the significance of apparent changes in monetary conditions. The methodology developed by Céspedes et al. (CLMM; 2006) addresses each of these shortcomings.

5 See, for example, Freedman (1994 and 1995a,b) and Duguay (1994).

6 See, for example, Gerlach and Smets (2000).
In this section, we apply a version of the
MCI proposed by CLMM to derive a measure of
the monetary policy stance that takes into account
movements in the short- and long-term interest
rates and in real house prices. Using this index,
we try to answer this question: Did the rise in
house prices and the fall in long-term interest
rates since 2000 lead to an implicit easing of
monetary policy in the United States? We use
the BVARs estimated in the previous section to
implement the methodology. In the next subsection, we define the MCI and use a simple
analytical example to illustrate its logic. Next,
we apply it to the U.S. economy using the estimated BVARs.

An MCI in a VAR: Methodology and
Intuition
For the sake of example, let the economy be
described by a stationary VAR of order one:
(3)   \begin{bmatrix} X_t \\ P_t \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} X_{t-1} \\ P_{t-1} \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} \varepsilon_t ,

where X_t is the vector of nonpolicy variables, such as output and inflation, and P_t is the vector
of monetary policy and financial variables, which
in our case are the short-term interest rate, the
long-term interest rate spread, and the real house
price index. As in BT, a standard dynamic MCI
with respect to a target variable j can then be
defined as

(4)   MCI^{j}_{BT,t} = S_j \sum_{s=1}^{H} A_{11}^{s-1} A_{12} \left( P_{t-s} - P^{*}_{t-s} \right),

where Sj is a selection vector that selects the target
variable j from the list of non-policy variables.
Typically, the target variable in the construction
of an MCI is either output growth or the output
gap. This is based on the notion that financial and
monetary conditions affect inflation primarily
through their effect on spending and output.
However, inflation can be used as a target variable also. In this paper, we will present results for
both output growth and inflation as target variables. The parameter H is the time period over
which lags of the monetary conditions are considered. P^{*}_{t-s} is typically given by the steady state
of the monetary conditions. In our case, this
would be the equilibrium nominal interest rate,
the steady-state term spread, and steady-state real
house price growth rate. Alternatively, it could
also be given by the monetary conditions that
would have been expected as of period t –H, if
there had been no shocks from period t –H to t.
Equation (2) illustrates that the standard MCI is
a weighted average of the deviations of current
and past policy variables from their steady-state
values, where the weights are determined by the
partial elasticity of output with respect to a change
in the policy variable.
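As a minimal numerical sketch of the weight calculation in equation (4) (placeholder matrices; in the estimated BVARs the relevant A_{11} block comes from the companion form of a higher-order VAR), the 8-quarter weight sums of the kind reported later in Table 3 can be computed as follows:

```python
import numpy as np

def mci_bt_weights(A11, A12, S_j, H):
    """Weights of the standard dynamic MCI, equation (4): the weight on the
    policy gap at lag s is S_j A11^(s-1) A12, for s = 1, ..., H."""
    weights, power = [], np.eye(A11.shape[0])
    for _ in range(H):
        weights.append(S_j @ power @ A12)
        power = power @ A11
    return np.array(weights)          # shape (H, number of policy variables)

A11 = np.array([[0.7, 0.1], [0.0, 0.6]])        # placeholder nonpolicy block
A12 = np.array([[-0.2, 0.05], [0.1, 0.02]])     # placeholder policy block
S_j = np.array([1.0, 0.0])                      # select the first nonpolicy variable
print(mci_bt_weights(A11, A12, S_j, H=8).sum(axis=0))   # 8-quarter sum of weights
```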
As discussed above, a problem with this
notion of the MCI is that the policy variables
are treated as exogenous and independent from
the underlying economic conditions, or, alternatively, they are assumed to be driven by exogenous shocks. As a result, it is very problematic to
interpret this index as a measure of the monetary
policy stance. For example, it may easily be the
case that the policy rate rises above its steady-state value because of positive demand shocks.
In this case, monetary policy may either be too
loose, neutral, or too tight, depending on whether
the higher interest rate is able to offset the effect
of the demand shocks only partially, fully, or
more than fully. Instead, the standard MCI will
always indicate that monetary conditions have
tightened.
In contrast to the standard MCI, the alternative
MCI proposed by CLMM does take into account
the endogeneity of the policy instruments. In this
case the MCI is defined as

(5)   MCI^{j}_{CLMM,t} = S_j \sum_{s=1}^{H} A_{11}^{s-1} A_{12} \left( P_{t-s} - P^{*}_{t-s} \right) + S_j \sum_{s=1}^{H} A_{11}^{s-1} B_1 \left( E\left[ \varepsilon_{t-s} \mid P \right] - E\left[ \varepsilon_{t-s} \mid P^{*} \right] \right).

The first part is the same as in the standard
case (equation (4)), but the second part adds the
effect of shocks that are most consistent with the
observed path of monetary conditions. More
specifically, the shocks are drawn from their distribution, subject to the restriction that they generate the observed path of monetary conditions.
Doan et al. (1984) and Waggoner and Zha (1999)
show that the mean of this constrained distribution is given by
(6)   \varepsilon^{P}_{\mathrm{stacked}} = R' \left( R R' \right)^{-1} \left( P - E[P] \right)_{\mathrm{stacked}},

where \varepsilon^{P}_{\mathrm{stacked}}
is a vector of stacked shocks over
period H, R is a stacked matrix of impulse
response coefficients of the monetary conditions
with respect to the shocks, and P – E[P] is the
vector of correspondingly stacked forecast errors
associated with the observed or assumed monetary
conditions over the same period H.
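Equation (6) is a least-squares projection and is straightforward to compute; the following sketch uses placeholder matrices rather than the paper's data:

```python
import numpy as np

def constrained_shock_mean(R, forecast_errors):
    """Mean of the structural shocks subject to reproducing the observed path of
    the conditioning variables, equation (6): eps = R'(RR')^{-1}(P - E[P])."""
    return R.T @ np.linalg.solve(R @ R.T, forecast_errors)

# Two conditioning variables, three stacked shocks (placeholder impulse responses).
R   = np.array([[0.5, 0.1, 0.0],
                [0.2, 0.4, 0.3]])
gap = np.array([0.8, -0.2])             # observed minus unconditional forecast, P - E[P]
eps_bar = constrained_shock_mean(R, gap)
print(eps_bar)                           # mean shocks behind the observed path
print(R @ eps_bar)                       # reproduces the conditioning gap (0.8, -0.2)
```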
To understand the intuition for why the MCI
by CLMM is a potentially much better indicator
of the stance of monetary policy, it is useful to go
through a simple static analytical example.
Assume the economy is given by the following
set of equations:

(7)   y_t = \alpha_1 s_t + \alpha_2 h_t + \varepsilon^{y}_t

(8)   s_t = \beta_1 \varepsilon^{y}_t + \beta_2 \varepsilon^{h}_t + \beta_3 \varepsilon^{s}_t

(9)   h_t = \delta s_t + \varepsilon^{h}_t ,

where y_t is the target variable, say, output growth, s_t is the short-term policy rate, h_t is real house
prices, and there are three shocks: an output shock,
a policy shock, and a housing shock. Equation (7)
reflects the dependence of output on the monetary
conditions and an output shock. For convenience,
we have in this case assumed that there are no
lags in the transmission process. Equation (8) is
a monetary policy reaction function, and equation
(9) shows how house prices depend on the short
rate and a shock.
In this case, the standard MCI (as in BT) is
given by
(10)   MCI_{BT,t} = \alpha_1 s_t + \alpha_2 h_t

and is independent of the monetary policy reaction function. If α1 is negative and α2 is positive,
a rise in house prices will lead to an easing of
monetary conditions unless the short-term interest
rate rises to exactly offset the effect of house prices
on the target variable.
In contrast, the MCI of CLMM is given by
(11)   MCI_{CLMM,t} = \alpha_1 s_t + \alpha_2 h_t + E\left[ \varepsilon^{y}_t \mid s_t, h_t \right],
where we have assumed that all variables are
measured as deviations from the steady state. As
in equation (6), the mean output shock needs to be
consistent with the observed short-term interest
rate and real house prices.
Next, we derive the expression for the last term in equation (11) as a function of the interest rate and house prices. From equations (8) and (9), it is clear that the relation between the observed interest rate and house prices and the shocks is given by

(12)   \begin{bmatrix} s_t \\ h_t \end{bmatrix} = \begin{bmatrix} \beta_1 & \beta_2 & \beta_3 \\ \delta\beta_1 & 1 + \delta\beta_2 & \delta\beta_3 \end{bmatrix} \begin{bmatrix} \varepsilon^{y}_t \\ \varepsilon^{h}_t \\ \varepsilon^{s}_t \end{bmatrix} = R\, \varepsilon_t .

As discussed above, given a joint standard
normal distribution of the shocks, the mean of
the shocks conditional on the observed interest rate and house prices is given by
(13)   E\left[ \varepsilon_t \mid s_t, h_t \right] = R' \left( R R' \right)^{-1} \begin{bmatrix} s_t \\ h_t \end{bmatrix},

where R is given in equation (12).
To simplify even further, assume that β3 = 0,
that is, there is no policy shock. In this case, there
is a one-to-one relationship between the shocks
and the observed interest rate and house prices,
given by
(14)   \begin{bmatrix} \varepsilon^{y}_t \\ \varepsilon^{h}_t \end{bmatrix} = \begin{bmatrix} (1 + \delta\beta_2)/\beta_1 & -\beta_2/\beta_1 \\ -\delta & 1 \end{bmatrix} \begin{bmatrix} s_t \\ h_t \end{bmatrix}.

As a result, the MCI of CLMM is given by

(15)   MCI_{CLMM,t} = \left( \alpha_1 + (1 + \delta\beta_2)/\beta_1 \right) s_t + \left( \alpha_2 - \beta_2/\beta_1 \right) h_t .

Comparing expressions (15) and (10), it is
obvious that the MCIs of BT and CLMM have
different weights on the short-term interest rate
and house prices. The weights in the MCI of
CLMM depend not only on the partial elasticities
of output with respect to the short-term interest
rate and house prices, but also on the coefficients
in the policy reaction function and the elasticity
of house prices with respect to the short-term
interest rate.
To see why the MCI of CLMM is a better indicator of the monetary policy stance, it is useful to
investigate how the weights in (15) will depend
on systematic policy behavior. From equations
(7) and (9), one can easily show that, if the central
bank targets output growth, the optimal interest
rate reaction function is given by
(16)   s_t = -\frac{1}{\alpha_1 + \delta\alpha_2}\, \varepsilon^{y}_t - \frac{\alpha_2}{\alpha_1 + \delta\alpha_2}\, \varepsilon^{h}_t .

If the interest rate elasticity of output is negative (α1 < 0) and the elasticity with respect to house prices is positive (α2 > 0), then a central bank trying to stabilize output will lean against positive
output and house price shocks, where the size
of the reaction coefficient will depend on the
strength and the channels of the transmission
mechanism.
Substituting the coefficients β1 and β2 in (15)
with the coefficients in expression (16), it can be
verified that the MCI of CLMM will be equal to
zero. In other words, a policy that stabilizes output will be seen as a neutral policy according to
this index. In contrast, it is obvious that such a
change in the policy reaction function will not
affect the standard MCI.
Instead, assume that the central bank reacts
optimally to the output shock, as in equation (16),
but does not respond to the shock to house prices
(β2 = 0). In this case, it can be shown that the MCI
of CLMM is given by
(17)   MCI_{CLMM,t} = \alpha_2 \left( h_t - \delta s_t \right) = \alpha_2 \varepsilon^{h}_t .

This result is very intuitive: When the central
bank does not respond to house price shocks
and a rise in house prices has a stimulative effect
on output, the MCI of CLMM will indicate easy
monetary conditions whenever there is a positive
shock to house prices.
This simple example makes it clear that, in
order to have a meaningful indicator of the monetary policy stance, it is important to realize that
the monetary conditions endogenously reflect
all shocks that hit the economy.
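The algebra behind equations (15)-(17) is easy to verify symbolically; a small check (assuming β3 = 0 throughout, and β2 = 0 for the last step):

```python
import sympy as sp

a1, a2, b1, b2, d, s, h = sp.symbols('alpha1 alpha2 beta1 beta2 delta s_t h_t')

# Equation (15): the CLMM MCI with beta3 = 0.
mci_clmm = (a1 + (1 + d*b2)/b1)*s + (a2 - b2/b1)*h

# Optimal reaction coefficients from equation (16).
b1_opt = -1/(a1 + d*a2)
b2_opt = -a2/(a1 + d*a2)

# A policy that stabilizes output is neutral: the MCI collapses to zero.
print(sp.simplify(mci_clmm.subs({b1: b1_opt, b2: b2_opt})))        # -> 0

# No response to house price shocks (beta2 = 0): equation (17).
mci_17 = sp.simplify(mci_clmm.subs({b1: b1_opt, b2: 0}))
print(sp.simplify(mci_17 - a2*(h - d*s)))                          # -> 0
```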

An Application to House Prices and
the Policy Stance in the United States
Obviously, the static example is too simple
to bring to the data. In reality, monetary conditions
will have lagged effects on output and inflation
and the lag patterns may differ across the various
components, as shown earlier. In this section,
we use the two specifications of the BVAR—the
LVAR and the DVAR—to calculate MCIs for the
U.S. economy. Consistent with the MCI literature,
we use real GDP growth and inflation as the target
variables. Moreover, to take into account the lags
in the transmission process of monetary policy
that we documented in the third section, we
assume that real GDP growth is expected annual
GDP growth one year ahead, whereas inflation
is expected annual inflation two years ahead.
Figures 7A and 7B show the results of this exercise. To illustrate the effect of taking endogeneity
of the indicators of stance into account, we also
compare the MCI of CLMM (which incorporates
the full set of shocks) with the MCI of BT. In the
latter case, we assume that the observed interest
rates and house prices are driven by only the
three exogenous shocks identified in the third
section.
Figure 7A shows for the DVAR and 7B for
the LVAR the estimated 68 percent probability
regions for the MCI of CLMM (blue dotted lines)
and the MCI of BT (gray shaded areas) based on
one-year-ahead annual output growth (left column) and two-year-ahead annual inflation (right
column) using the following indicators of monetary conditions: the federal funds rate (first row);
the federal funds rate and the term spread (second row); and the federal funds rate, the term
spread, and real house prices (third row). The
MCIs shown are basically the difference between
the conditional forecast of the target variable
based on the actual path of the chosen indicators
of stance and the unconditional forecast of the
target variable.
A few observations are worth making on the
basis of Figure 7A. First, overall, the MCI with
expected output growth as a target variable and
the MCI with inflation as the target variable give
similar indications about stance. Financial conditions were relatively tight in 2000-01, then
gradually became relatively loose in 2002-05
before turning tight again during 2006. Second,
the uncertainty surrounding the MCIs is very high.
Based on standard significance levels, the monetary conditions were not significantly different
from neutral during the whole period. Third,
taking house prices into account (third row of
Figure 7A) does seem to matter for measuring
the monetary policy stance. More specifically,
buoyant growth in house prices in 2004 and 2005
suggests that monetary policy was relatively loose
in this period, whereas it turned tight in 2007.
During the housing boom, easy monetary conditions implied two-year-ahead annual inflation
that was more than 0.5 percentage points above
its steady state. Most recently, tight conditions
imply expected inflation almost 0.5 percentage
points below the target. These results differ marginally when the LVAR specification is used (compare Figure 7B with 7A).
In Figure 7B, a comparison of the 68 percent
posterior probability regions for the MCI of CLMM
(blue dotted lines) with those for the MCI of BT
(shaded areas) reveals that, although the broad
messages of the estimated MCIs are similar, conditioning on the three identified exogenous shocks
Figure 7A
MCIs of CLMM and BT, DVAR

[Figure: MCIs for GDP (left column) and prices (right column), conditional on i (first row), i,s (second row), and i,s,hp–p (third row), 2000-07; each panel shows the BT 68 percent band, the CLMM mean, and the CLMM 68 percent band.]

only (the MCI of BT) gives less-precise estimates.
This is partly because these exogenous shocks
contribute only to a limited degree to the forecast variance of output and inflation. As a result,
the effects are also less precisely estimated. The
point estimates are similar, which suggests that
the developments in 2002-05 were strongly
influenced by the policy and housing demand
shocks and not much by the responses to other
shocks.
As explained earlier, the MCIs are a weighted
average of current and past levels of the short-term interest rate, the term spread (or the long-term interest rate), and real house price growth.
To show the relative importance of the three components, Table 3 gives the sum of the weights on
current and past (up-to-8-quarter) lagged values
of each. As in Figure 7A, using annual GDP and
inflation as target variable, the MCIs of CLMM
and BT are, respectively, calculated based on the
short-term interest rate (the first panel); the short- and long-term interest rates (the second panel);
and the short- and long-term interest rates, and
real house price growth (the third panel). A few
Figure 7B
MCIs of CLMM and BT, LVAR

[Figure: same layout as Figure 7A—MCIs for GDP (left column) and prices (right column), conditional on i, i,s, and i,s,hp–p, 2000-07—with BT 68 percent bands, CLMM means, and CLMM 68 percent bands.]

Table 3
8-Quarter Sum of MCI Weights, DVAR

                   MCI_i            MCI_i,s                       MCI_i,s,hp–p
                   Short rate (i)   Short rate    Long rate (s)   Short rate   Long rate   House prices (hp–p)
CLMM-GDP           -0.162           -0.201        0.090           -0.198       0.102       0.000
BT-GDP             -0.198           -0.190        0.074           -0.194       0.102       0.003
CLMM-Inflation     -0.046           -0.142        0.162           -0.148       0.250       0.056
BT-Inflation       -0.154           -0.182        0.168           -0.087       0.180       0.083
observations are noteworthy. First, taking only
the short-term interest rate as an indicator of
the policy stance, it is clear that on average an
observed increase in the interest rate above its
steady-state value indicates a restrictive policy
stance with respect to both GDP growth and inflation. This is, in particular, the case when the
short-term interest rate is assumed to be driven
by the three identified exogenous shocks (as in
the MCI of BT). However, if the full endogenous
nature of the nominal interest rate is taken into
account (as in the MCI of CLMM), this is less the
case and more so for inflation than for growth.
The reason is that, because of the central bank’s
reaction function, the short-term interest rate is
likely to increase in response to shocks that drive
up future GDP growth and inflation. In this case,
a rise in interest rates may even suggest an easing
of the policy stance if interest rates do not rise
enough to offset the pickup in growth and inflation. To the extent that changes in the nominal
interest rate reflect higher inflation and inflation
expectations, this argument is particularly strong
when expected inflation is the target.
In the second panel of Table 3, adding the
long-term interest rate slightly changes the picture.
Keeping the long-term interest rate constant,
observing a 1-percentage-point increase in the
short-term interest rate for 8 quarters signals a
fall in GDP growth of about 20 basis points over
the next year and a fall in inflation of somewhat
less over the next two years. In contrast, keeping
the short-term rate constant, a rise in the long-term interest rate by 1 percentage point signals
lax monetary policy, as it predicts a rise in both
GDP growth (up to 9 basis points) and inflation
(up to 16 basis points) above steady state.
Finally, the far-right panel of Table 3 shows
the weights when real house prices are included
in the MCIs also. Their addition has little effect
on the weights on interest rates. The upper rows
show that the weight on real house price growth
is close to zero when the target variable is GDP
growth. This is indeed similar to the results in
Figure 7A, which show that the actual MCIs do
not change very much. However, when annual
inflation over two years is the target variable,
there is a significant weight on house prices: A
5-percentage-point rise in the growth rate of real
house prices signals a 30- to 40-basis-point rise
in annual inflation. According to the weights, such
a rise in house prices would call for a substantially higher short-term rate (of about 2 percentage points) in order to have neutral monetary
conditions.
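The last two figures in this paragraph follow directly from the Table 3 weights; a quick check using the CLMM-Inflation weights (the BT weight on house prices, 0.083, gives the upper end of the 30- to 40-basis-point range):

```python
# Implied effects from the 8-quarter weight sums in Table 3 (CLMM, inflation target).
w_short, w_house = -0.148, 0.056   # weights on the short rate and real house price growth
dhp = 5.0                          # 5-percentage-point rise in real house price growth
d_inflation = w_house * dhp
print(f"implied rise in two-year-ahead inflation: {d_inflation:.2f} pp")   # about 0.3 pp
print(f"offsetting short-rate increase: {-d_inflation / w_short:.1f} pp")  # about 2 pp
```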

CONCLUSIONS
In this paper, we examine the role of housing
investment and house prices in U.S. business
cycles since the second half of the 1980s using
an identified Bayesian VAR. We find that housing
demand shocks have significant effects on housing investment and house prices, but overall these
shocks have had only a limited effect on the performance of the U.S. economy in terms of aggregate
growth and inflation, in line with the empirical
literature. There is also evidence that monetary
policy has significant effects on housing investment and house prices and that easy monetary
policy designed to stave off perceived risks of
deflation in 2002-04 has contributed to the boom
in the housing market in 2004 and 2005. However, again, the effect on the overall economy was
limited. A counterfactual simulation suggests
that without those policy shocks inflation would
have been about 25 basis points lower at the end
of 2006.
In order to examine the effect of house prices
on monetary conditions, we implement a methodology proposed by Céspedes et al. (2006). This
methodology consists of calculating the forecast
of a target variable (expected GDP growth or
expected inflation) conditional on the observed
path of monetary conditions, including the shortterm interest rates, the term spread, and house
prices. We show that, in spite of the endogeneity
of house prices to both the state of the economy
and the level of interest rates, taking house prices
into account may sharpen the inference about the
stance of monetary policy. Given the uncertainty
about the sources of business cycle fluctuations
and the effect of the various shocks (including
housing demand shocks) on the economy, uncertainty regarding the stance of monetary policy
F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Jarociński and Smets

remains high. Nevertheless, taking the development of house prices into account, there is some
indication that monetary conditions may have
been too loose in 2004 and were relatively tight
in the summer of 2007.
Various caveats regarding the methodology
we use in this paper are worth mentioning. First,
all the analysis presented in this paper is in-sample and ex post. Although this is helpful in
trying to understand past developments, this does
not prove the methodology is sufficient for real-time analysis. For this we need to extend the
analysis to a real-time context. Second, the statistical model we use to interpret the U.S. housing
market and business cycle is basically a linear
one. It has been argued that costly asset price
booms and busts are fundamentally of an asymmetric nature. Our linear methodology is not able
to handle such nonlinearities. Third, the robustness of the analysis to different identification
schemes for the structural shocks needs to be
further examined. We hope to shed light on some
of these issues in further analysis.

REFERENCES

Batini, Nicoletta and Turnbull, Kenny. "A Dynamic Monetary Conditions Index for the UK." Journal of Policy Modeling, June 2002, 24(3), pp. 257-81.

Carroll, Christopher D.; Otsuka, Misuzu and Slacalek, Jirka. "How Large Is the Housing Wealth Effect? A New Approach." NBER Working Paper No. 12746, National Bureau of Economic Research, 2006.

Céspedes, Brisne; Lima, Elcyon; Maka, Alexis and Mendonça, Mario J.C. "Conditional Forecasts and the Measurement of Monetary Policy Stance in Brazil." Unpublished manuscript, 2006.

Clarida, Richard; Galí, Jordi and Gertler, Mark. "The Science of Monetary Policy: A New Keynesian Perspective." Journal of Economic Literature, December 1999, 37(4), pp. 1661-707.

Del Negro, Marco and Otrok, Christopher. "99 Luftballons: Monetary Policy and the House Price Boom Across U.S. States." Journal of Monetary Economics, October 2007, 54(7), pp. 1962-85.

Doan, Thomas; Litterman, Robert B. and Sims, Christopher. "Forecasting and Conditional Projection Using Realistic Prior Distributions." Econometric Reviews, 1984, 3(1), pp. 1-100.

Duguay, Pierre. "Empirical Evidence on the Strength of the Monetary Transmission Mechanism in Canada: An Aggregate Approach." Journal of Monetary Economics, February 1994, 33(1), pp. 39-61.

Dynan, Karen E.; Elmendorf, Douglas and Sichel, Daniel E. "Can Financial Innovation Help to Explain the Reduced Volatility of Economic Activity?" Journal of Monetary Economics, January 2006, 53(1), pp. 123-50.

Erceg, Christopher J. and Levin, Andrew T. "Optimal Monetary Policy with Durable and Non-durable Goods." International Finance Discussion Paper No. 748, Board of Governors of the Federal Reserve System, 2002.

Freedman, Charles. "The Use of Indicators and the Monetary Conditions Index in Canada," in T.J.T. Baliño and C. Cottarelli, eds., Frameworks for Monetary Stability: Policy Issues and Country Experiences. Washington, DC: International Monetary Fund, 1994, pp. 458-76.

Freedman, Charles. "The Canadian Experience with Targets for Reducing and Controlling Inflation," in L. Leiderman and L. Svensson, eds., Inflation Targets. London: Centre for Economic Policy Research, 1995a.

Freedman, Charles. "The Role of Monetary Conditions and the Monetary Conditions Index in the Conduct of Policy." Bank of Canada Review, Autumn 1995b, pp. 53-59.

Gerlach, Stefan and Smets, Frank. "MCIs and Monetary Policy." European Economic Review, October 2000, 44(9), pp. 1677-1700.

Goodhart, Charles and Hofmann, Boris. "Financial Conditions Indices," in Charles Goodhart, ed., House Prices and the Macroeconomy: Implications for Banking and Price Stability. Chap. 3. Oxford: Oxford University Press, 2007.

Iacoviello, Matteo and Neri, Stefano. "The Role of Housing Collateral in an Estimated Two-Sector Model of the U.S. Economy." Working Papers in Economics No. 659, Boston College Department of Economics, 2007.

Kohn, Donald L. "Success and Failure of Monetary Policy since the 1950s." Presented at the Deutsche Bundesbank conference Monetary Policy over Fifty Years, a conference to mark the fiftieth anniversary of the Deutsche Bundesbank, Frankfurt, Germany, September 21, 2007.

Leamer, Edward E. "Housing and the Business Cycle." Presented at the Federal Reserve Bank of Kansas City symposium Housing, Housing Finance, and Monetary Policy, Jackson Hole, WY, August 30-September 1, 2007.

McConnell, Margaret M. and Pérez-Quirós, Gabriel. "Output Fluctuations in the United States: What Has Changed Since the Early 1980s?" American Economic Review, December 2000, 90(5), pp. 1464-76.

Mishkin, Frederic S. "Housing and the Monetary Transmission Mechanism." Presented at the Federal Reserve Bank of Kansas City symposium Housing, Housing Finance, and Monetary Policy, Jackson Hole, WY, August 30-September 1, 2007.

Mojon, Benoit. "Monetary Policy, Output Composition, and the Great Moderation." Working Paper WP 2007-07, Federal Reserve Bank of Chicago, 2007.

Muellbauer, John. "Housing and Consumer Behaviour." Presented at the Federal Reserve Bank of Kansas City symposium Housing, Housing Finance, and Monetary Policy, Jackson Hole, WY, August 30-September 1, 2007.

Slacalek, Jirka. "What Drives Personal Consumption? The Role of Housing and Financial Wealth." Unpublished manuscript, November 2006.

Taylor, John B. "Housing and Monetary Policy." Panel discussion at the Federal Reserve Bank of Kansas City symposium Housing, Housing Finance, and Monetary Policy, Jackson Hole, WY, August 30-September 1, 2007.

Topel, Robert H. and Rosen, Sherwin. "Housing Investment in the United States." Journal of Political Economy, August 1988, 96(4), pp. 718-40.

Uhlig, Harald. "What Are the Effects of Monetary Policy on Output? Results from an Agnostic Identification Procedure." Journal of Monetary Economics, March 2005, 52(2), pp. 381-419.

Villani, Mattias. "Steady-State Priors for Vector Autoregressions." Journal of Applied Econometrics, 2008 (forthcoming).

Waggoner, Daniel F. and Zha, Tao. "Conditional Forecasts in Dynamic Multivariate Models." Review of Economics and Statistics, November 1999, 81(4), pp. 639-51.

APPENDIX: DATA AND SOURCES
Real GDP: real GDP, 3 decimal (GDPC96), seasonally adjusted annual rate, quarterly, billions of chained
2000 dollars.
SOURCE: U.S. Department of Commerce: Bureau of Economic Analysis (BEA) data from Federal
Reserve Economic Data (FRED; http://research.stlouisfed.org/fred2/).
Real consumption: real personal consumption expenditures (PCECC96), seasonally adjusted annual rate,
quarterly, billions of chained 2000 dollars.
SOURCE: BEA data from FRED.
GDP deflator: GDP: implicit price deflator (GDPDEF), seasonally adjusted, quarterly, index 2000 = 100.
SOURCE: BEA data from FRED.
Federal funds rate: effective federal funds rate (FEDFUNDS), monthly, percent, averages of daily figures.
SOURCE: Board of Governors of the Federal Reserve System data from FRED (averaged over 3 months
of the quarter).
Long-term interest rate: 10-year Treasury constant maturity rate (GS10), monthly, percent, averages of
business days.
SOURCE: Board of Governors of the Federal Reserve System data from FRED (averaged over 3 months
of the quarter).
S&P/Case-Shiller U.S. National Home Price Index: quarterly, based on repeated sales.
SOURCE: http://www.standardandpoors.com, available since 1987.
M2: M2 money stock (M2NS), not seasonally adjusted, monthly, billions of dollars.
SOURCE: Board of Governors of the Federal Reserve System data from FRED (averaged over 3 months
of the quarter).
Real private residential fixed investment: 3 decimal (PRFIC96), seasonally adjusted annual rate,
quarterly, billions of chained 2000 dollars.
SOURCE: BEA data from FRED.
Commodity price index: Dow Jones spot average, quarterly.
SOURCE: Global Financial Data; www.globalfinancialdata.com.
In the VAR, we use the interest rate spread, computed as the difference between the long-term interest rate and the federal funds rate, house prices deflated by the GDP deflator, and the ratio of real private residential fixed investment to real GDP. All the variables, except for the short-term interest rate, the spread, and housing investment, enter either in log levels or log differences (annualized), depending on the VAR specification indicated.
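As a rough illustration of how the transformed series described above might be assembled, the sketch below is my own and rests on the assumption that the FRED series listed in this appendix have already been downloaded (and the monthly ones averaged to quarters) into a single CSV; the file name and the house-price column name are hypothetical.

```python
# Sketch of the variable construction described above, assuming the quarterly
# FRED series (GDPC96, PCECC96, GDPDEF, FEDFUNDS, GS10, M2NS, PRFIC96) are in
# a DataFrame, plus a column 'HOUSE_PRICE' for the S&P/Case-Shiller index.
import numpy as np
import pandas as pd

df = pd.read_csv("js_data_quarterly.csv", index_col=0, parse_dates=True)

vars_ = pd.DataFrame(index=df.index)
vars_["spread"] = df["GS10"] - df["FEDFUNDS"]                  # term spread, percent
vars_["real_house_prices"] = df["HOUSE_PRICE"] / df["GDPDEF"]  # deflated by GDP deflator
vars_["housing_inv_share"] = df["PRFIC96"] / df["GDPC96"]      # residential investment / GDP

# remaining variables enter in log levels or annualized log differences,
# depending on the VAR specification
for col in ["GDPC96", "PCECC96", "GDPDEF", "M2NS"]:
    vars_[f"log_{col}"] = 100 * np.log(df[col])
    vars_[f"dlog_{col}"] = 400 * np.log(df[col]).diff()        # annualized quarterly growth

print(vars_.tail())
```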

Commentary
Robert G. King

Robert G. King is a professor of economics at Boston University.

When monetary historians look
back at this decade, they will
undoubtedly highlight the major
increases in house prices over
the early part of the decade and the sharp declines
of recent years as posing a major challenge for
monetary policy and central banking.
“House Prices and the Stance of Monetary
Policy” by Marek Jarociński and Frank Smets (JS)
is a valuable early contribution to the understanding of this episode. It is extremely clear in spelling
out and accomplishing its two major objectives:
a retrospective econometric analysis of the role
of housing markets in recent developments and
a consideration of the potential role of house
prices in a monetary conditions index.
For my purposes in this discussion, there are
three important pieces of evidence provided by
JS. Using conditional forecasting methods, the
second section of their paper shows that there
may be an important component of house price
variation that cannot be accounted for by shifts
in output and interest rates, or there may not: The
qualification is necessary because the results of
difference and level VAR specifications differ
importantly. Their third section uses an identified
VAR to suggest that loose monetary policy may
have contributed to the continuing increase in
house prices in 2004 and 2005. Their fourth section investigates the effect of identified “housing
demand shocks,” with results that I will discuss
further below.

MONETARY POLICY
From the standpoint of monetary policy, there
are three key questions. First, was the behavior of
house prices and quantities normal or unusual
over the recent period? Second, did easy money
cause a major portion of the rise in house prices
and thus make house price declines a necessary
outcome when monetary policy tightened? Third,
could a regular response to housing—perhaps via
the type of monetary conditions index discussed
by JS—be desirable in smoothing out overall economic activity and housing markets themselves?

House Prices
It is important to stress that the second section
of the JS study, about the extent to which movements in house prices are unusual, can be read
in quite different ways.
JS show that movements in interest rates and
output largely explain variation in house prices
if one uses a level (Bayesian) VAR. In this case,
there are two implications for monetary policy.
First, it seems unnecessary to think about potentially including house prices in the state vector
to which monetary policy should respond, since
house prices appear to be well explained by
interest rates and output. Second, there is no
sense in which there is a puzzle in recent years:
House prices just moved with macroeconomic
conditions in a fairly standard manner. From the
standpoint of modern macroeconomic analysis

and modern central bank practice based on simple
rules, this is an attractive reading of the data.
However, JS also show that it is possible to
argue that developments in interest rates and output leave a great deal to be explained if one uses
a first difference VAR. In this sense, there may
be an unusual event in recent years, with house
prices departing from output and interest rate
fundamentals just during this period. Or house
prices may be not too closely related to these
fundamentals most of the time, so that may be a
case for thinking about a separate monetary policy
response to housing. That is, we need to know
whether other periods of house price increases
and decreases look similar to or different from
those of recent years.

Monetary Policy as a Cause of House
Prices
There has been a great deal of public discussion about “easy money” in the house price boom,
and there is some evidence for this view in JS.
By shutting down the identified monetary policy
shock in panels 1 of their Figures 6A (differences
VAR) and 6B (level VAR), they find that house
prices would have been lower without monetary
policy shocks during 2004 and early 2005.
I have three observations on this finding.
First, one would like to know the statistical confidence with which we can make this statement
(my own sense based on work with VARs is that
this might be low). Second, taking the result at
face value, it is important to stress that monetary policy accounts for only a temporary interval
of higher house price increases and little of the
ultimate decline in house prices. Third, the JS
accounting method does not automatically mean
that a shock yields a contribution during this
period, as may be seen by comparing this to the
contribution that JS suggest for a term structure
spread: There is nothing contributed to house
prices by the yield curve. So, the method is potentially informative in this and other contexts.
I think that we do not know the role that monetary policy played in these events, but there is
good reason to be skeptical of the manner in which
“easy money” is used in many public discussions.
In the public eye (that of my neighbors and
my real estate agent in a Boston real estate market
that was a hot one starting in about 2000), there
were two distinct parts to the house price boom.
The first was based on income and wealth: As my
real estate agent said in 2000, people were buying
houses in the face of rapidly increasing house
prices with “real money” from successful economic ventures. The second was later: People
were buying houses or refinancing houses, taking
advantage of the increasingly favorable terms
offered by lenders. Using my agent’s terminology
at the later time, this was “easy money.” But lender
terms were sufficiently generous that it is hard
to draw a connection to the Fed: The public definition of easy money is a statement about lending
terms, not necessarily about monetary policy.

Monetary Policy Response to Housing
An unfortunate aspect of the JS paper is that
the dynamic response to an identified housing
demand shock—that object to which a monetary
policy authority would potentially want to
respond—just doesn’t look plausible to me. The
key features of this shock, as described at the start
of their third section, are that it raises housing
prices; it raises private consumption and national
product; and it has a positive effect on house
investment with a timing that is curious.
From the standpoint of designing a monetary
policy response to the housing sector, this puzzling
pattern of responses makes it problematic to
address my third question (above), which is the
critical one from the standpoint of monetary
policy.

THINKING ABOUT DYNAMIC
RESPONSES IN HOUSING
The analysis of the housing demand shock
requires that we begin to think more carefully
about the nature of housing dynamics. While
macroeconomists use the “time to build” model
of Kydland and Prescott (1982) much less now
than some time ago, housing is surely a setting
in which this model is the benchmark.
To sketch how such a model works and the
potential conflict that I see with the impulse
responses for the identified demand shock of JS,
let’s think about a setting in which there is an
unexpected increase in housing demand at a fixed
stock of housing. We would see an increase in
house prices, which in turn would stimulate
housing starts and an interval of higher housing
expenditure. If the housing starts were undertaken
“on spec” by construction companies, then one
would expect increased starts only if the future
house prices were expected to be high enough to
justify construction costs.
JS cite the empirical estimates of Topel and
Rosen (1988) and the simulations of a recent quantitative macro model developed by Iacoviello and
Neri (2007) as guidance in terms of the effects of
house prices on residential investment. The
estimates of Topel and Rosen (1988), in particular,
suggest an elasticity in the range of 1.5 to 3.15
for the response of investment two years later to
a permanent change in house prices. And JS argue
that their model captures this level of overall
response, thus supporting the identification of
the housing demand shock. However, in terms
of deciding whether this measure of a housing
demand shock is plausible, I think that we need
more detailed dynamic information.
Suppose that it takes three quarters of a year
to complete a housing construction project and
that the distribution of expenditure is uniform
over the construction project. Then, housing investment (i) is an equally weighted moving average of starts (s),

$i_t = \frac{\alpha}{3}\left(s_t + s_{t-1} + s_{t-2}\right),$

where α is a parameter describing the size of
investment projects. More generally, the time-to-build model may suggest that the time path of
investment depends on the distribution of investment costs over the life of the construction process
and the interaction of optimal “housing starts”
with the anticipated path of house prices.
Suppose further that starts increase permanently at date t = τ. Then, investment builds up to
a new higher level, with one-third of the increase
taking place in each period. Now, the factors
generating starts are not permanent, but if there
is a sustained increase in house prices, then this
calculation should capture the early part of the
impulse response.
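To make the arithmetic concrete, the small sketch below (my own illustration, not part of the commentary) evaluates the moving-average relation between starts and investment and shows the one-third-per-quarter buildup after a permanent increase in starts.

```python
# Investment as an equally weighted 3-quarter moving average of starts,
# i_t = (alpha/3)(s_t + s_{t-1} + s_{t-2}), after a permanent rise in starts.
import numpy as np

alpha = 1.0
starts = np.concatenate([np.full(4, 100.0), np.full(8, 120.0)])  # permanent +20 at t = 4

investment = np.array([
    (alpha / 3.0) * (starts[t] + starts[t - 1] + starts[t - 2])
    for t in range(2, len(starts))
])
print(np.round(investment, 1))
# the increase shows up in thirds: 100, 100, 106.7, 113.3, 120, 120, ...
```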
From the standpoint of this type of model,
then, the dynamics in Figure 3 seem curious.
That is, a housing demand shock raises prices at
a point in time by about 1 percent, by the same
amount by a year later, by perhaps 1/2 percent after
two years, and by nothing after three years. The
investment dynamics are a response of about
0.05 for the first two quarters, then perhaps half
that by year’s end, and zero by six quarters.
A conventional view of the construction
process is that at least a year is a reasonable
horizon overall, with the first quarter devoted to
planning and permits and the last three quarters
involving the bulk of the expenditure. There is
no question that construction is faster now than
it was a couple of decades ago. But before accepting the identification of the housing demand
shock, one would like to see that the dynamics are
consistent with estimates of the distribution of
quarterly construction costs.

Housing Permits, Starts, and Investment
Housing permits have long been used as a
leading indicator (included in the Conference
Board’s series of leading economic indicators),
as have housing starts. Both of these series have
been historically treated as noisy ones, but also
containing useful information about future economic activity. Figure 1 of this commentary shows
why, starting in 1987 as do JS. The reader’s eye is
drawn naturally to the most recent part of the
period, where housing permits and starts (monthly
data) move prior to investment (quarterly data).
If there is a persistent decline in housing starts,
caused by a negative housing demand shock, then
there will be a persistent decline in investment
in any time-to-build model, but it will take time
for the full effect to build up. From this standpoint, the near-term forecasts for housing investment are not too rosy.
Figure 1
Housing Starts and Permits and Residential Fixed Investment
NOTE: HOUST is housing starts: total: new privately owned housing units started; PERMIT is new private housing units authorized by building permit; and PRFIC1 is real private residential fixed investment, 1 decimal. Shaded areas indicate U.S. recessions as determined by the National Bureau of Economic Research.
SOURCE: Federal Reserve Bank of St. Louis; research.stlouisfed.org.

The identification of housing demand shocks would benefit from using indicators of permits and starts. Such empirical work, expanding on
the study of JS, could lead to dynamic responses
for investment flows in response to identified
housing demand shocks that are more in line with
the structural characteristics of housing market
investment. In turn, this would provide a more
secure basis for analysis of the monetary policy
response to housing.

CONCLUSION
The events of the last few years will certainly
stimulate much additional research on the nature
of housing and mortgage markets, as well as their
implications for monetary policy. The analysis
of Jarociński and Smets highlights a series of
important questions about these linkages, as well
as providing some interesting early empirical
evidence.

REFERENCES
Iacoviello, Matteo and Neri, Stefano. “The Role of
Housing Collateral in an Estimated Two-Sector
Model of the U.S. Economy.” Working Papers in
Economics No. 659, Boston College Department of
Economics, 2007.
Jarociński, Marek and Smets, Frank. “House Prices
and the Stance of Monetary Policy.” Federal
Reserve Bank of St. Louis Review, July/August
2008, 90(4), pp. 339-65.
Kydland, Finn and Prescott, Edward C. “Time To
Build and Aggregate Fluctuations.” Econometrica,
November 1982, 50(6), pp. 1345-70.
Topel, Robert H. and Rosen, Sherwin. “Housing
Investment in the United States.” Journal of
Political Economy, August 1988, 96(4), pp. 718-40.

Commentary
Stephen G. Cecchetti

Is housing the business cycle, as Leamer says? In their fascinating paper, Jarociński and Smets (2008) offer a careful analysis suggesting that the answer is no. Very briefly, they
show that the recent U.S. housing boom is
explained by a combination of increases in housing demand and loose monetary policy. However,
once they adequately account for the myriad of
dynamic interactions, they find that housing
demand shocks have a very limited impact on
the overall volatility of real growth and inflation.
And, finally, Jarociński and Smets use their estimates to suggest that, since 2000, monetary conditions have been close to neutral.
The paper is divided neatly into two parts.
The first presents the results of a careful model
estimation exercise—a Bayesian vector autoregression (BVAR) that includes real gross domestic
product (GDP), the GDP deflator, real consumption, real residential investment, real house prices,
real commodity prices, the money stock, the federal funds rate, and the long-term interest rate
spread. The second part of the paper uses estimates from the first to estimate a monetary conditions index (MCI). Following this organization,
I divide my comments into two parts. First, I discuss the role of housing in the business cycle;
and second, I will make a number of comments
about the use of MCIs.

PART I: UNDERSTANDING THE
ROLE OF HOUSING
In the first part of the paper, Jarociński and
Smets present a careful analysis of the dynamic
properties of housing, monetary policy, and
growth. They focus on the impact of shocks to
housing demand, monetary policy, and the term
spread, concluding that they account for a small
fraction of the variation in real GDP and the GDP deflator but
a large fraction of the variation in house prices and
residential construction. (I am referring to the
variance decomposition results in their Table 2A.
For reasons that will become clear later, I prefer
the differences version of their VAR.) Importantly,
the Jarociński and Smets estimates show that a
combination of a positive housing demand shock
and low interest rates accounts for the bulk of
the rise in house prices and the increase in residential construction activity. (See the historical
decompositions in their Figure 6A.)
I have three separate points to make about
this conclusion. First, the results are neatly consistent with my strongly held view that over the period that Jarociński and Smets study, 1987-2006, monetary policymakers stopped being the destabilizing force that they probably were in the 1970s and may even have been successfully neutralizing a variety of demand shocks.1
1 See Cecchetti, Flores-Lagunes, and Krause (2006).

Stephen G. Cecchetti is a professor of global finance at Brandeis International Business School and a research associate at the National Bureau of Economic Research. The author thanks Marek Jarociński and Frank Smets for both correcting errors in the initial version of these comments and for providing their data.

Figure 1
Rolling Regression of a 10-Year Bond on Federal Funds, 48-Month Window
NOTE: The figure plots coefficients of a regression of the fixed-maturity 10-year U.S. Treasury bond rate on the federal funds rate using a 48-month rolling window, with +/-2-standard-error bands. Estimates are plotted on the last date of the sample.
SOURCE: Board of Governors of the Federal Reserve System.

That
is, the authors use their VAR to allocate the
volatility of growth and inflation to its various
sources and find no role for monetary policy disturbances. I take this as evidence of the success
of central bank stabilization policy.
Second, there is the always-vexing question
of whether the sample record used in estimating
the model is representative of the experience
during the more recent period for which we would
like to use the model. A number of concerns arise
here. First, there is the problem of trying to separate changes in the federal funds rate from changes
in the term spread. To see the possible problem,
I have run a very simple regression of the 10-year
bond rate on the federal funds rate, using a 48-month moving window, and plotted the results
in Figure 1. I simply note that the late-1990s
look very different from the period either before
or after and suspect that the identification that
allows Jarociński and Smets to estimate the
impact of the spread is coming from this part of
the sample.
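A rolling regression of this kind is easy to reproduce. The sketch below is my own, under the assumption that the two monthly FRED series (GS10 and FEDFUNDS) have already been saved to a CSV with a date index; it estimates the coefficient on the funds rate over a 48-month window, plotted at the window's end date.

```python
# Rolling 48-month regression of the 10-year Treasury yield (GS10) on the
# effective federal funds rate (FEDFUNDS).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("rates_monthly.csv", index_col=0, parse_dates=True)  # columns: GS10, FEDFUNDS

window = 48
coefs = {}
for end in range(window, len(df) + 1):
    chunk = df.iloc[end - window:end]
    X = sm.add_constant(chunk["FEDFUNDS"])
    fit = sm.OLS(chunk["GS10"], X).fit()
    coefs[chunk.index[-1]] = fit.params["FEDFUNDS"]   # slope at the window's last date

rolling_beta = pd.Series(coefs)
print(rolling_beta.loc["1997":"2001"])  # the late-1990s window stands out
```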
Continuing with the issue of the sample
period, there is the question of how we should
interpret house price data since 2000. Figure 2
plots the ratio of the value of the U.S. housing
stock (from the Federal Reserve Flow of Funds
data) to the housing rental service flow (imputed
for the computation of the National Income and
Product Accounts). The results are striking. The
post-2000 data look dramatically different from
what came before.
Finally, like others before them, Jarociński
and Smets find significant housing wealth effects.
Their estimate is that a persistent 1-percentage-point increase in house prices leads to a 0.1 percent increase in real GDP after four quarters—an
elasticity of 0.1. Interestingly, because of the
richness of their model, Jarociński and Smets are
able to estimate that this effect is split roughly
equally between investment in residential construction and consumption. So, the elasticity of consumption with respect to housing wealth is only about 0.05, which is at the low end of the range found by previous researchers (and cited in the paper).

Figure 2
The Ratio of the Value of the U.S. Housing Stock to the Rental Service Flow
NOTE: The 2006 value is 18.4; the 1978-1999 average is 14.3.
SOURCE: Value of residential real estate: Federal Reserve Flow of Funds data, line 4 of Table B100 plus line 4 of Table B103. Rental service flow: the National Income and Product Accounts estimate of the total housing services in personal consumption expenditure, Table 2.3.5, line 14.
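The ratio plotted in Figure 2 is simple to construct once the two series are in hand. The sketch below is my own, with hypothetical file and column names standing in for the Flow of Funds and NIPA series listed in the figure source.

```python
# Ratio of the value of the U.S. housing stock (Flow of Funds real estate,
# Table B100 line 4 plus Table B103 line 4) to the NIPA imputed housing
# services flow (Table 2.3.5, line 14). Column names are placeholders.
import pandas as pd

df = pd.read_csv("housing_value_and_rent.csv", index_col=0, parse_dates=True)
housing_value = df["B100_line4"] + df["B103_line4"]   # value of residential real estate
rental_flow = df["NIPA_2_3_5_line14"]                 # housing services in PCE

ratio = housing_value / rental_flow
print(ratio["1978":"1999"].mean())   # long-run average (about 14 in the commentary)
print(ratio["2000":].tail())         # the post-2000 run-up
```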
To digress only slightly, I should note that it
is not obvious that changes in the value of housing should affect nonhousing consumption at
all. We all have to live somewhere. When home
prices rise, it does not signal any increase in the
quantity of economy-wide output. Although someone with a bigger house could sell it and move
into a smaller one, there must be someone else
on the other side of the trade. That is, for each
person trading down and taking wealth out of
their house, someone is trading up and putting
wealth in. And renters planning to purchase
should save more. All of this should cancel out
so that in the aggregate there is no change!
Put another way, people own their homes to
hedge the risk arising from potential changes in
the price of purchasing housing services. They
want to ensure that they can continue to live in
the same size home. A rise in property prices
means people are consuming more housing, not
that they are wealthier.
And yet, everyone finds that when the housing market booms, people raise their consumption. Is this increase justified? Well, it depends.
If the consumption and house price increases are
both a consequence of higher estimated long-run
growth, then the answer is yes. That is, if everyone
now expects higher future incomes, then they
will demand more housing along with more of
everything else, and there is no bubble. So, if the
house price boom is accompanied by an increase
in the rate of growth of potential output, then it
is not a bubble. An equity price boom would have
to accompany this as well. And, importantly, this
would likely imply an increase in the long-run
real interest rate, too. So, if housing, equity, and
bonds all boom at the same time, we probably
need not be concerned.
Regardless of my fairly minor concerns, I am
convinced by Jarociński and Smets’s conclusion:
Stabilizing real growth requires at least some focus
on residential construction and housing demand.
Housing may not be the business cycle, but it does
play a measurable role. But, as Jarociński and
Smets show, this depends primarily on long-term
interest rates and housing demand, both of which
seem to have a life of their own. Monetary policymakers are left wondering what tools they have
at their disposal to do anything about this.

PART II: MCIs
The second part of the Jarociński and Smets
paper presents a very clear discussion of MCIs.
They conclude that, since 2000, Federal Reserve
policy has been roughly neutral. Before working
through this paper, I had not understood what
MCIs are. Now I do, so I will make some attempt
to share this new-found insight.
As Jarociński and Smets describe, in the past,
several (but not many) central banks used MCIs
as guides to policy formulation. More recently,
business economists have been churning these
out, combining a variety of financial indicators
into something that is supposed to measure conditions in financial markets (the Goldman-Sachs
Financial Conditions Index, Deutsche Bank
Financial Conditions Index, Morgan Stanley
Financial Conditions Index, etc.).
The idea behind what I will call the “traditional MCI” is that it should provide a measure
of the relative ease or tightness of monetary conditions. For policymakers, this MCI is supposed
to answer the following question: Given the current state of the economy, how should policymakers set their operational instrument?
The traditional MCI employed by the Bank
of Canada, for example, was of the following
type:
(1)  $MCI = \alpha\left(r - r^{*}\right) + \beta\left(e - e^{*}\right),$
where r is the interest rate instrument, e is the
exchange rate, and the “*” signifies an equilibrium
level.
In practice, the problem is that (1) implies
the same reaction to any deviation of the exchange
rate from its equilibrium, regardless of the source.
This creates problems, because supply shocks
should (one assumes) require different responses
from demand shocks. It matters why the
exchange rate has moved.
As Jarociński and Smets describe in clear
detail, this led researchers to suggest the computation of a “conditional MCI”—that is, conditional
on some sort of information. A conditional MCI
is the forecast k periods ahead for the output gap
(actual output, y, less potential output, y*) or the
inflation gap (the deviation of inflation, π, from
its target, π*):
(2)  $E\left[\left(y_{t+k} - y^{*}_{t+k}\right) \mid I_t\right]$

and

(3)  $E\left[\pi_{t+k} \mid I_t\right].$

Importantly, these expectations are conditional on the policymaker’s implied monetary
policy reaction function. But, the information
set used to compute the expectations need not
have everything in it.
Looking at (2) and (3) leads me to ask the following question: If policymakers are doing their
job, why would the conditional MCI ever deviate
from zero?
Because the conditional MCI should be zero,
what might we get from computing it? As it turns
out, quite a bit. To see why, we can start with a generic
formulation of the policymaker’s problem. Assume
that monetary policy sets the interest rate, r, to
minimize the quadratic loss function,
(4)  $L = E\left[\alpha\pi_t^2 + y_t^2\right],$

subject to the constraints imposed by the dynamic
structure of the economy:
(5)  $\begin{bmatrix} y_t \\ \pi_t \end{bmatrix} = A(L)\begin{bmatrix} \varepsilon_t \\ r_t \end{bmatrix},$
where A(L) is a polynomial in the lag operator, L,
and ε is a vector of disturbances.
This problem yields a policy “rule” of the form
(6)  $r_t^{*} = \phi(L)\,\varepsilon_t.$

Substituting (6) into (5) yields a reduced form:
(7)  $\begin{bmatrix} y_t \\ \pi_t \end{bmatrix} = \tilde{A}(L)\,\varepsilon_t.$

The conditional MCI is related to the properties of (7). Jarociński and Smets note that when
α = 0 and A(L) in (5) has no lags, then

E ( yt + k − y *t + k ) I t  = 0

for all k. They interpret this as neutral policy.
Although this is fine as far as it goes, the
conditional MCI is actually capable of addressing two additional questions:
(i) Does the central bank need to change its
reaction function to meet its stated goal?
Is the reaction function (6) appropriate to
minimize the loss function, (4)?
(ii) What is the tradeoff or relative weight, α,
in the central bank’s loss function, (4)?
Looking at question (i), we see that this is
not a question of whether policy is loose, tight,
or neutral. The issue is whether it is properly
responding to the shocks that are hitting the
economy. Are policymakers moving their instrument to neutralize demand shocks completely?
Are they changing the short-term interest rate to
offset supply shocks appropriately? It is not about
action, it is about reaction.
To understand (ii), take a look at the following
static version of (5) written as an aggregate
demand–aggregate supply model:
(8)  $y = -\lambda r + \varepsilon_d$  (aggregate demand)

(9)  $\pi = \omega y + \varepsilon_s$  (aggregate supply).

The parameters λ and ω represent the slopes of
the aggregate demand and aggregate supply
curves, respectively.
This setup implies a simple policy rule of
the form
(10)  $r^{*} = a\varepsilon_d + b\varepsilon_s.$

Using this, we can now compute the implied conditional MCI for output and inflation (conditional on the optimal policy response, that is):

(11)  $E\left(y \mid r^{*}\right) = -\dfrac{\alpha\omega}{1 + \alpha\omega^2}\,\varepsilon_s$

and

(12)  $E\left(\pi \mid r^{*}\right) = \dfrac{1}{1 + \alpha\omega^2}\,\varepsilon_s.$

Now, take the ratio of (11) to (12) to obtain

(13)  $\dfrac{E\left(y \mid r^{*}\right)}{E\left(\pi \mid r^{*}\right)} = -\alpha\omega.$

So, once we know the slope of the aggregate
supply curve, the ratio of these two conditional
MCIs tells us the relative importance of inflation
variability in the policymaker’s objective function—their inflation volatility aversion, if you will.
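These closed forms follow from minimizing (4) subject to (8) and (9); the quick symbolic check below is my own (not part of the commentary) and simply confirms the algebra behind (11)-(13).

```python
# Symbolic check that minimizing the loss (4) subject to the static AD-AS
# system (8)-(9) delivers the conditional expectations (11)-(13).
import sympy as sp

alpha, omega, lam, eps_d, eps_s, r = sp.symbols("alpha omega lam eps_d eps_s r", real=True)

y = -lam * r + eps_d            # (8) aggregate demand
pi = omega * y + eps_s          # (9) aggregate supply
loss = alpha * pi**2 + y**2     # (4) quadratic loss

r_star = sp.solve(sp.diff(loss, r), r)[0]     # optimal rate, linear in the shocks as in (10)
y_star = sp.simplify(y.subs(r, r_star))       # -> -alpha*omega*eps_s/(1 + alpha*omega**2), eq. (11)
pi_star = sp.simplify(pi.subs(r, r_star))     # ->  eps_s/(1 + alpha*omega**2),             eq. (12)

print(y_star, pi_star, sp.simplify(y_star / pi_star))   # ratio -> -alpha*omega, eq. (13)
```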
To figure out a reasonable value for ω, take a
look at the impulse responses in their Figure 4.
The first row tells us that an interest rate shock
(which is basically an aggregate demand shock)
has roughly the same impact on inflation and output. This leads to the conclusion that ω ≈ 1. Next,
take a look at the first row of their Figure 7A—
the MCI conditional on monetary policy, but not
on other financial conditions. (Because my very
simple construction really models the unconditional, steady-state behavior, I have chosen to use
the differences VAR estimates.)
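Given the two conditional forecast series and a value for ω, backing out the implied α is one line of algebra. The sketch below is my own, and it assumes the series underlying their Figure 7A are available in a CSV with hypothetical column names.

```python
# Implied inflation-volatility aversion alpha from equation (13):
# E[y | r*] / E[pi | r*] = -alpha * omega, so alpha = -(ratio) / omega.
import pandas as pd

mci = pd.read_csv("figure7a_differences_var.csv", index_col=0, parse_dates=True)
# assumed columns: 'output_mci' (1-year-ahead output gap forecast) and
# 'inflation_mci' (2-year-ahead inflation forecast)

omega = 1.0  # slope of the aggregate supply curve, from the Figure 4 impulse responses
implied_alpha = -(mci["output_mci"] / mci["inflation_mci"]) / omega

print(implied_alpha.describe())   # mostly negative over 2000-07, as in Figure 3
```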
The implied time series for α is plotted in
Figure 3. These point estimates move around
quite a bit. But the primary problem is that they
are negative. That is, inflation and output seem
to be moving in the same direction at the horizons
over which Jarociński and Smets report their
conditional MCI computations.
Figure 3
Implied Inflation Volatility Aversion
SOURCE: Author's calculations using data from Jarociński and Smets (2008, Figure 7A).

There are several possible reasons for this. The first is that Jarociński and Smets's Figure 7A reports the conditional MCI over different horizons for output and inflation. For the former it is
one year, whereas for the latter it is two. So,
although there might be a contemporaneous
volatility tradeoff, it isn’t showing up here. A
second possibility is that monetary policymakers
were not in fact acting appropriately to neutralize
the housing demand shock. This interpretation
is consistent with Jarociński and Smets’s results
that the boom which began in fall 2001 was the
consequence of a combination of an increase in
housing demand and expansionary monetary
policy. My conclusion is that this means Federal
Reserve policy was not on the output-inflation
volatility frontier.
In conclusion, I found this a very rewarding
paper to read. Although I may not subscribe to
Jarociński and Smets’s interpretation of the conditional expectation of output or inflation as an
indicator of monetary conditions, I do agree with
their conclusion that housing is at the core of the
business cycle, so it should have a prominent
role in the formulation of monetary policy.

REFERENCES
Cecchetti, Stephen G.; Flores-Lagunes, Alfonso and
Krause, Stefan. “Has Monetary Policy Become More
Efficient? A Cross-Country Analysis.” Economic
Journal, April 2006, 116(4), pp. 408-33.
Jarociński, Marek and Smets, Frank R. “House Prices
and the Stance of Monetary Policy.” Federal Reserve
Bank of St. Louis Review, July/August 2008, 90(4),
pp. 339-65.

Assessing Monetary Policy Effects Using
Daily Federal Funds Futures Contracts
James D. Hamilton
This paper develops a generalization of the formulas proposed by Kuttner (2001) and others for
purposes of measuring the effects of a change in the federal funds target on Treasury yields of different maturities. The generalization avoids the need to condition on the date of the target change and
allows for deviations of the effective fed funds rate from the target as well as gradual learning by
market participants about the target. The paper shows that parameters estimated solely on the basis
of the behavior of the fed funds and fed funds futures can account for the broad calendar regularities in the relation between fed funds futures and Treasury yields of different maturities. Although
the methods are new, the conclusion is quite similar to that reported by earlier researchers—
changes in the fed funds target seem to be associated with quite large changes in Treasury yields,
even for maturities of up to 10 years. (JEL: E52, E43)
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 377-93.

James D. Hamilton is a professor of economics at the University of California, San Diego.

Economists continue to debate how
much of an effect monetary policy has
on the economy. But one of the more
robust empirical results is the observation that changes in the target that the Federal
Reserve sets for the overnight federal funds rate
have been associated historically with large
changes in other interest rates, even for the
longest maturities. This paper contributes to the
extensive literature that tries to measure the
magnitude of this effect.
One of the first efforts along these lines was by
Cook and Hahn (1989), who looked at how yields
on Treasury securities of different maturities
changed on the days when the Federal Reserve
changed its target for the fed funds rate. Let $i_{s,d}$ denote the interest rate (in basis points) on a Treasury bill or Treasury bond of constant maturity s months as quoted on some business day, d, and let $\xi_d$ denote the target for the fed funds rate as determined by the Federal Reserve for that day.
Using just those days between September 1974

and September 1979 on which there was a change
in the target, Cook and Hahn estimated the following regression by ordinary least squares (OLS):
(1)  $i_{s,d} - i_{s,d-1} = \alpha_s + \lambda_s\left(\xi_d - \xi_{d-1}\right) + u_{s,d}.$

Their estimates of λs for securities of several different maturities are reported in the first column of
Table 1. These estimates suggest that, when the
Fed raises the overnight rate by 100 basis points,
short-term Treasury yields go up by over 50 basis
points and there is a statistically significant effect
even on 10-year yields.
Subsequent researchers found that the magnitudes of the estimated coefficients for λs were
significantly smaller when later data sets were
used. For example, column 2 of Table 1 reports
Kuttner’s (2001) results when the Cook-Hahn
regression (1) was reestimated using data from
June 1989 to February 2000; see also Nilsen (1998).
Table 1
Alternative Estimates of the Response of Interest Rates to Changes in the Federal Funds Target

Study           Cook-Hahn    Kuttner      Kuttner      Poole-Rasche
Specification   (1)          (1)          (2)-(3)      (4)-(3)
Sample          1974-79      1989-2000    1989-2000    1988-2000
s = 3 months    0.55**       0.27**       0.79**       0.73**
s = 6 months    0.54**       0.22**       0.72**       —
s = 1 year      0.50**       0.20**       0.72**       0.78**
s = 5 years     0.21**       0.10*        0.48**       —
s = 10 years    0.13**       0.04*        0.32**       0.48**

NOTE: *indicates statistically significant with p-value < 0.05; **denotes p-value < 0.01.

However, Kuttner (2001) also identified some conceptual problems with regression (1). For one
thing, the market may have anticipated much of the change in the target $\xi_d$ that occurred on day d many days earlier, in which case those expectations would have already been incorporated into $i_{s,d-1}$. In the limiting case when the change was perfectly anticipated, one would not expect any change in $i_{s,d}$ to be observed on the day of the
target change. To isolate the unanticipated component of the target change, Kuttner used fd , the
interest rate implied by the spot-month fed funds
contract on day d. These contracts are settled on
the basis of what the average effective fed funds
rate turns out to be for the entire month containing
day d. Because much of the month may already
be over by day d, a target change on day d will
have only a fractional effect on the monthly
average. Kuttner proposed the following formula
to identify the unanticipated component of the
target change on day d:
(2)  $\tilde{\xi}_d^{\,u} = \left(\dfrac{N_d}{N_d - t_d + 1}\right)\left(f_d - f_{d-1}\right),$

where Nd is the number of calendar days associated with the month in which day d occurs and
td is the calendar day of the month associated
with day d. Kuttner then replaced (1) with the
regression

(3)  $i_{s,d} - i_{s,d-1} = \alpha_s + \gamma_s\left(\xi_d - \xi_{d-1} - \tilde{\xi}_d^{\,u}\right) + \lambda_s\tilde{\xi}_d^{\,u} + u_{s,d},$
with additional modifications if d were the first
day or one of the last three days of a month.
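As a concrete illustration, the sketch below (mine, not Kuttner's code) applies the scaling in equation (2) to a daily series of spot-month futures rates; the first-day and last-three-day modifications noted above are ignored, and the file and column names are hypothetical.

```python
# Kuttner's scaling (2): the target surprise on day d is the change in the
# spot-month futures rate scaled up by N_d / (N_d - t_d + 1), where N_d is the
# number of days in the month and t_d is the calendar day of the month.
import numpy as np
import pandas as pd

futures = pd.read_csv("spot_month_ff_futures.csv", index_col=0, parse_dates=True)["rate"]

def kuttner_surprise(f: pd.Series) -> pd.Series:
    n_days = np.asarray(f.index.days_in_month, dtype=float)   # N_d
    t_day = np.asarray(f.index.day, dtype=float)              # t_d
    scale = n_days / (n_days - t_day + 1.0)
    return pd.Series(scale, index=f.index) * f.diff()

surprises = kuttner_surprise(futures)
print(surprises.dropna().head())
```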
Kuttner found that the values for γs were essentially zero, meaning that if target changes were
anticipated in advance, then they had no effect
on other interest rates. Kuttner’s estimates of λs ,
the effects of unanticipated target changes, are
reported in column 3 of Table 1 and turn out to be
a bit larger than the original Cook-Hahn estimates.
Poole and Rasche (2000) proposed to sidestep the issues associated with a mid-month target
change by using not the spot-month contract on
day d but instead the one-month-ahead contract,
that is, the interest rate implied by a contract
purchased on day d for settlement based on the
average fed funds rate prevailing in the following
month, denoted $f_d^1$. They then replaced the expression in (2) with

(4)  $\tilde{\xi}_d^{\,u} = f_d^1 - f_{d-1}^1.$

Their estimates for λs using this formulation
turned out to be similar to Kuttner’s and are
reported in column 4 of Table 1.
However, mid-month target changes remain
an issue for the Poole-Rasche estimates because
there is always the possibility of a second (or even
a third) change in the target some time after day
d and before the end of the following month;
indeed, this turned out to be the case for about
half of the target changes observed between 1988
and 2006. Gürkaynak, Sack, and Swanson (2007)
developed an analog to Kuttner’s formula (2) based
on the date of the next target change that followed
after the one implemented on day d; see also
Gürkaynak (2005).
Another potential drawback to either (2) or
(4) was raised by Poole, Rasche, and Thornton
(2002). These authors noted that, particularly
prior to 1994, market participants may not have
been perfectly aware of the target change even at
the end of day d, in which case these formulas
would include a measurement error that would
bias the coefficients downward. Poole, Rasche,
and Thornton developed corrections for the estimates to allow for this measurement error.
A related issue is that the series for ξd, the
actual target change, is itself subject to measurement error, as indeed Kuttner (2001) and Poole,
Rasche, and Thornton (2002) used slightly different series. Learning about the target change presumably also began well before day d. For both
reasons, one would think that data both before
and after day d should typically be used. In this
paper I develop a generalization of the Kuttner
(2001) and Poole, Rasche, and Thornton (2002)
adjustments for purposes of estimating the parameter λs . The basic idea is to suppose that there
exists some day within the month at which the
target may have been changed, but to choose
deliberately not to condition on this day for purposes of forming an econometric estimate. The
paper also generalizes the earlier approaches by
explicitly modeling the difference between the
effective fed funds rate and the actual target.
The next section begins with an examination
of the relation between the target rate chosen by
the Fed and the actual effective fed funds rate.
The third section develops a simple statistical
description of how these deviations, along with
the process of learning by the market about what
the fed funds target is going to be for this month,
would determine the volatility of the spot-month
futures rate. The fourth section shows how the
parameters estimated from the behavior of the
effective fed funds rate and the spot-month futures
rate can be used to predict calendar regularities
in the estimated values for a generalization of the
coefficient λs . The final section finds such calendar regularities largely borne out in the observed
relation between Treasury rates and daily changes
in the spot-month futures rate and develops new
estimates of this parameter. Although the method
and data set are rather different from those of the earlier researchers, my estimates in fact turn out to be
quite similar to those originally found by Kuttner
(2001) and Poole and Rasche (2000).

THE EFFECTIVE AND TARGET
FEDERAL FUNDS RATES
In this paper, time is indexed in two different
ways, using calendar days t for developing theoretical formulas and business days d to apply
these ideas to actual data. The theoretical formulas will be developed for a typical month consisting of N calendar days indexed by t = 1,2,...,N,
whereas the data set will consist of those days
d = 1,2,...,D for which there are data on both
Treasury interest rates and fed funds futures rates;
d = 1 corresponds to October 3, 1988, and d = D =
4,552 corresponds to December 29, 2006. The
empirical sample for all estimates reported in
this paper also excludes the volatile data from
September 13 to September 30, 2001.
The effective fed funds rate for calendar day t,
denoted rt , is a volume-weighted average of all
overnight interbank loans of Federal Reserve
deposits for that day. All numbers in this paper
will be reported in basis points, so that, for example, a 5.25 percent interest rate would correspond
to a value of rt = 525. Since October 1988, the
Chicago Board of Trade has offered futures contracts whose settlement is based on the average
value for the effective fed funds rate over all the
calendar days of the month (with Friday rates, for
example, also imputed to Saturday and Sunday).
For a month that contains N calendar days, settlement of these futures contracts would be based
on the value of

(5)  $S = N^{-1}\sum_{t=1}^{N} r_t.$

The terms of a given fed funds futures contract
can be translated1 into an interest rate, ft, such
that, if S (which is not known at day t but will
1 Specifically, if $P_t$ is the price of the contract agreed to by the buyer and seller on day t, then $f_t = 100 \times (100 - P_t)$.

become known by the end of the month) turns out to be bigger than $f_t$, the buyer of the contract has to compensate the seller by a certain amount for every basis point by which S exceeds $f_t$. If the marginal market participant were risk neutral, it would be the case that

(6)  $f_t = E_t(S),$

where $E_t(\cdot)$ denotes an expectation formed on the basis of information available to the market as of day t. This paper will consider only spot-month contracts, that is, contracts for which by day t we already know some of the values for r (namely, $r_\tau$ for $\tau \le t$) that will end up determining S. My forthcoming paper (Hamilton, forthcoming) demonstrates that, for futures contracts at short horizons (the spot-month, 1-month-ahead, and 2-month-ahead contracts), expression (6) appears to be an excellent approximation to the data, though Piazzesi and Swanson (forthcoming) note potential problems with assuming that it holds for longer-horizon contracts.

Figure 1
Effective Fed Funds Rate, Target Fed Funds Rate, and Fed Funds Futures Rate, December 1990
NOTE: The figure plots the effective rate, the target rate, and the spot-month futures rate (in basis points) by calendar date over December 1990.
Suppose that the Fed changes the target for
the effective fed funds rate on calendar day n of
this month. Kuttner (2001) suggested that we
could use the change in the spot-month contract price on day n to infer how much of the change in the target interest rate caught the market by surprise according to the formula

(7)  $\left(\dfrac{N}{N - n + 1}\right)\left(f_n - f_{n-1}\right).$

I will provide a formal derivation of (7) as a special case of a more general statistical inference
problem explored below, but would first like to
comment on one potential drawback of (7), which
is that it implies a huge reweighting of observations that come near the end of the month (n near
N ). Kuttner (2001, p. 529) recognized that this is
a potential concern here because (7) abstracts
from the deviation between the Federal Reserve’s
target for the effective fed funds rate and the
actual effective rate, and as a result magnifies
the measurement error for observations near the
end of the month. Kuttner himself avoided using
(7) for the last three days of the month. Other
researchers like Gürkaynak (2005) avoid applying
it to data from the last week.
Figure 1 plots the relevant variables for
December 1990, which was a particularly wild
month as banks adjusted to lower reserve requirements (Anderson and Rasche, 1996). Although
the Fed had lowered the target to 725 basis points
on December 7, the effective fed funds rate was
trading well above this the week after Christmas,
and speculators seemed to be allowing for a possibility of a big end-of-year spike up, such as the
584-basis-point increase in the effective fed funds
rate that was seen in the last two days of 1985 or
the 975-basis-point spike between December 28
and December 30, 1986. In the event, however,
the effective funds rate plunged 200 basis points
on December 31, 1990.
Because the December 1990 futures contract
was based on the effective rate rather than the
target, speculators were watching these events
closely. The futures rate was trending well above
the new target of 725 basis points in the latter part
of December, partly because the month’s average
would include the first week’s 750-basis-point
target values, partly because the effective rate had
been averaging above the new target subsequently,
and partly in anticipation of an end-of-year spike
up. When it became clear on December 31 that the last day of the year generated a big move down rather than up, the December futures contract fell by 23 basis points on a single day. Formula (7) would call for us to multiply this number by 31, to deduce that the interest rate surprise on this day was some 713 basis points, plausible perhaps if the market was anticipating a spike up to 1,250 rather than the plunge down to 550 that actually transpired. Although this is an extreme example, it drives home the lesson that one really wants to downweight the end-of-month observations rather than magnify them in the manner suggested by the expression in (7).

Figure 2
Squared Residuals by Day of Month
NOTE: The figure plots the average squared residuals from a regression of the deviation of the fed funds rate from the target on its own lagged value, by day of the month (in basis points). 95 percent confidence intervals are indicated by the upper and lower box lines, and predicted values from regression (9) are indicated by the dashed line.
The next section proposes a more formal statement of this problem and its solution. A necessary
first step is to document some of the properties
of the deviation between the target that the Fed
has in place for business day d (denoted ξd) and
the actual fed funds rate. The effective fed funds
rate, rd, was taken from the FRED database of the
Federal Reserve Bank of St. Louis (which in turn
is based on Board of Governors release H.15), and
the target ξd prior to 1994 is from the FRED series
that comes from Thornton (2005) and since 1994
is from Federal Open Market Committee (FOMC)
transcripts. I first estimated the following regression (similar to the models in Taylor, 2001, and
Sarno, Thornton, and Valente, 2005) by OLS
(standard errors are in parentheses):
(8)  $r_d - \xi_d = \underset{(0.29)}{2.45} + \underset{(0.014)}{0.300}\,\left(r_{d-1} - \xi_{d-1}\right) + \hat{e}_d.$

This regression establishes that there is modest serial correlation in deviations from the target.
Of particular interest in the next section will be
the calendar variation in the variability of $\hat{e}_d$. Let $\omega_{jd} = 1$ if day d occurs on the jth calendar day of the month and zero otherwise. A regression of $\hat{e}_d^2$ on $\{\omega_{jd}\}_{j=1}^{31}$ then gives the average squared residual as a function of the calendar day of the month:

$\hat{e}_d^2 = \sum_{j=1}^{31} \hat{\beta}_j\,\omega_{jd} + \hat{\nu}_d.$

The estimated values $\hat{\beta}_j$ are plotted as a function of the calendar day j in Figure 2 along with the 95 percent confidence intervals for each coefficient. A big outlier on January 23, 1991 (when the
funds rate spiked up nearly 300 basis points on
a settlement Wednesday) is enough to skew the
results for day 23. Apart from this, the most noticeable feature is an increased volatility of the deviation of the funds rate from the target toward the
end of a month. One can represent this tendency
parametrically through the following restricted
regression:
(9)  $\hat{e}_d^2 = \underset{(40)}{283} + \underset{(232)}{1{,}746} \times 0.5^{\,(31 - t_d)} + \hat{\nu}_d,$

where td is the calendar day of the month associated with business day d. The predicted values
from (9) are also plotted in Figure 2. In the next
section, a simple theoretical formulation based
on (8) and (9) will be used to characterize the
modest predictability of deviations from the target
and their tendency to become more pronounced
at the end of the month.
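A rough sketch of how regressions (8) and (9) might be reproduced follows; it is my own code, not the paper's, and it assumes daily series for the effective rate and the target (in percent) have already been assembled into a CSV. The target series used in the paper splices Thornton's (2005) series with FOMC transcripts, so the column here is only a stand-in.

```python
# Reproducing the flavor of (8) and (9): an AR(1) for the deviation of the
# effective funds rate from the target, then a restricted day-of-month model
# for the squared residuals with an exponential end-of-month term.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("funds_rate_daily.csv", index_col=0, parse_dates=True)
dev = (100 * (df["effective"] - df["target"])).rename("dev")   # basis points

# (8): r_d - xi_d = const + phi * (r_{d-1} - xi_{d-1}) + e_d
X = sm.add_constant(dev.shift(1).dropna())
ar1 = sm.OLS(dev.loc[X.index], X).fit()
resid_sq = ar1.resid ** 2

# (9): restricted form  e_d^2 = g0 + g1 * 0.5**(31 - t_d) + v_d
t_day = np.asarray(resid_sq.index.day)
endmonth = pd.Series(0.5 ** (31 - t_day), index=resid_sq.index, name="endmonth")
restricted = sm.OLS(resid_sq, sm.add_constant(endmonth)).fit()

print(ar1.params)         # compare with 2.45 and 0.300 in (8)
print(restricted.params)  # compare with 283 and 1,746 in (9)
```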

ACCOUNTING FOR THE
VOLATILITY OF SPOT-MONTH
FUTURES PRICES
Suppose that market participants know that,
if the Fed is going to change the target within a
given month consisting of N calendar days, it
would do so on calendar day n, so that its target
is a step function:

$\xi_t = \xi_0$  for $t = 1, 2, \ldots, n-1$
$\xi_t = \xi_n$  for $t = n, n+1, \ldots, N.$

The effective fed funds rate for each day is the sum of the target for that day plus the deviation from the target, denoted $u_t$:

$r_t = \xi_t + u_t.$

It follows from (5) and (6) that
(10)  $f_t = E_t\left[N^{-1}\sum_{\tau=1}^{N}\left(\xi_\tau + u_\tau\right)\right] = \left(\dfrac{n-1}{N}\right)\xi_0 + \left(\dfrac{N-n+1}{N}\right)E_t(\xi_n) + N^{-1}\sum_{\tau=1}^{t} u_\tau + N^{-1}\sum_{\tau=t+1}^{N} E_t(u_\tau).$

On the day before the target change, I presume that market participants had some expectation of what the target was going to be, denoted E_{n-1}(ξn). The actual target would deviate from this by some magnitude hn:

$$\xi_n = E_{n-1}(\xi_n) + h_n.$$

If the equilibrium fed funds price is determined
by risk-neutral rational speculators, the forecast
error hn would be a martingale difference sequence
that represents the content of the news about ξn
that arrived on the day of the target change itself.
Similarly,

$$\xi_n = E_{n-2}(\xi_n) + h_n + h_{n-1},$$

where h_{n-1} is the news about the Fed's day-n intentions that arrived on day n-1, and

$$\xi_n = h_n + h_{n-1} + h_{n-2} + \cdots + h_1 + E_0(\xi_n).$$

Under rational expectations, {ht} should be a sequence of zero-mean, serially uncorrelated variables, whose unconditional variance is denoted σh². Notice that h1 represents the information that the market receives on day 1 about the value for the target that the Fed will adopt on day n, h2 represents the new information received on day 2, and so on, with

(11)
$$E_t(\xi_n) = \begin{cases} E_0(\xi_n) + h_1 + h_2 + \cdots + h_t & \text{for } t \le n \\ \xi_n & \text{for } t > n. \end{cases}$$


Given (8) and (9), I assume that deviations follow
an AR(1) process with an innovation variance
that increases at the end of the month:

$$u_t = \phi u_{t-1} + \varepsilon_t$$
$$E\left(\varepsilon_t^2\right) = \gamma_0 + \gamma_1\delta^{(N-t)},$$

where the empirical results suggest values of φ =
0.30, γ0 = 283, γ1 = 1,746, and δ = 0.5. Then

(12)
$$\sum_{\tau=t+1}^{N}E_t(u_\tau) = \phi u_t + \phi^2 u_t + \cdots + \phi^{N-t}u_t = \frac{\phi\left(1-\phi^{N-t}\right)}{1-\phi}\,u_t.$$
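Equation (12) is just the geometric sum of the AR(1) forecasts of the remaining deviations. The following sketch (my own check, not part of the paper) compares the closed form with a Monte Carlo estimate, using the parameter values reported above; the particular choices of t, u_t, and the number of simulations are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, gamma0, gamma1, delta, N, t = 0.30, 283.0, 1746.0, 0.5, 31, 20
u_t = 10.0                                   # today's deviation, in basis points

# Closed form from equation (12)
analytic = phi * (1 - phi ** (N - t)) / (1 - phi) * u_t

# Monte Carlo: simulate u_{t+1}, ..., u_N forward and sum them
totals = np.zeros(50_000)
for i in range(totals.size):
    u = u_t
    for tau in range(t + 1, N + 1):
        sigma = np.sqrt(gamma0 + gamma1 * delta ** (N - tau))
        u = phi * u + sigma * rng.standard_normal()
        totals[i] += u

print(round(analytic, 3), round(totals.mean(), 3))   # the two should be close
```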

Substituting (11) and (12) into (10) gives

(13)
$$f_t = \left(\frac{n-1}{N}\right)\xi_0 + \left(\frac{N-n+1}{N}\right)\left[E_0(\xi_n) + h_1 + h_2 + \cdots + h_t\right] + N^{-1}\sum_{\tau=1}^{t}u_\tau + N^{-1}\frac{\phi\left(1-\phi^{N-t}\right)}{1-\phi}u_t \quad \text{for } t \le n$$

$$f_t = \left(\frac{n-1}{N}\right)\xi_0 + \left(\frac{N-n+1}{N}\right)\xi_n + N^{-1}\sum_{\tau=1}^{t}u_\tau + N^{-1}\frac{\phi\left(1-\phi^{N-t}\right)}{1-\phi}u_t \quad \text{for } t > n.$$

From (13) we can then calculate the change in the spot-month futures rate for t ≤ n to be

(14)
$$f_t - f_{t-1} = \left(\frac{N-n+1}{N}\right)h_t + N^{-1}\frac{\left(1-\phi^{N-t+1}\right)}{1-\phi}u_t - N^{-1}\frac{\phi\left(1-\phi^{N-t+1}\right)}{1-\phi}u_{t-1}$$
$$= \left(\frac{N-n+1}{N}\right)h_t + N^{-1}\frac{\left(1-\phi^{N-t+1}\right)}{1-\phi}\left(\phi u_{t-1}+\varepsilon_t\right) - N^{-1}\frac{\phi\left(1-\phi^{N-t+1}\right)}{1-\phi}u_{t-1}$$
$$= \left(\frac{N-n+1}{N}\right)h_t + N^{-1}\frac{\left(1-\phi^{N-t+1}\right)}{1-\phi}\varepsilon_t \quad \text{for } t \le n,$$

whereas for t > n, changes in futures prices are
driven solely by the deviation of the effective fed
funds rate from the target:

$$f_t - f_{t-1} = N^{-1}\frac{\left(1-\phi^{N-t+1}\right)}{1-\phi}\varepsilon_t \quad \text{for } t > n.$$

It follows that the variance of daily changes in the
spot-month futures rate would be given by
(15)
$$E\left[(f_t - f_{t-1})^2 \mid \text{target change on day } n\right] = \begin{cases}\left[(N-n+1)^2/N^2\right]\sigma_h^2 + \sigma_{\varepsilon,t}^2\left(1-\phi^{N-t+1}\right)^2/\left[N^2(1-\phi)^2\right] & \text{for } t \le n \\ \sigma_{\varepsilon,t}^2\left(1-\phi^{N-t+1}\right)^2/\left[N^2(1-\phi)^2\right] & \text{for } t > n\end{cases}$$
$$\sigma_{\varepsilon,t}^2 = \gamma_0 + \gamma_1\delta^{(N-t)}.$$

Prior to 1992, the day of a target change would
often (but not always) occur the day after an FOMC
meeting. Since 1994, it usually has occurred on
the day of an FOMC meeting, but there are exceptions: Three times in 2001 (January 3, April 18,
and September 17) the Fed changed the target
without a meeting, and in August and September
of 2007 there was active speculation that the Fed
was considering or possibly had even already
implemented an intermeeting rate cut. Rather
than treat day n as if always known to the econometrician, I have followed a different philosophy,
which is to ask, How would the data look if they
were generated by (15) but the econometrician
does not condition on knowledge of the particular
value of n? Suppose that the day of the target
change (which the formula assumed was known
to market participants as of the start of the month)
could have occurred with equal probability on any
one of the calendar days n = 1,2,...,N. If we let η
denote the unknown day of the target change, then
the unconditional data would exhibit a calendar
regularity in the variance that is described by


Figure 3
Squared Spot-Month Change by Day of Month

[Figure: average squared change (basis points) plotted against the day of the month, 1 through 31]

NOTE: The figure plots the average squared change in the spot-month futures rate, by day of the month (in basis points). 95 percent confidence intervals are indicated by upper and lower box lines, and predicted values from regression (19) are indicated by the dashed line.
(16)
$$E(f_t - f_{t-1})^2 = N^{-1}\sum_{n=1}^{N}E\left[(f_t - f_{t-1})^2 \mid \eta = n\right]$$
$$= \frac{\left(1-\phi^{N-t+1}\right)^2}{N^2(1-\phi)^2}\sigma_{\varepsilon,t}^2 + N^{-1}\sum_{n=t}^{N}\frac{(N-n+1)^2}{N^2}\sigma_h^2$$
$$= \frac{\left(1-\phi^{N-t+1}\right)^2}{N^2(1-\phi)^2}\left[\gamma_0 + \gamma_1\delta^{(N-t)}\right] + \sum_{\tau=1}^{N-t+1}\frac{\tau^2}{N^3}\sigma_h^2$$
$$= \kappa_1(t) + \kappa_2(t)\,\sigma_h^2,$$
where
$$\kappa_1(t) = \frac{\left(1-\phi^{N-t+1}\right)^2}{N^2(1-\phi)^2}\left[\gamma_0 + \gamma_1\delta^{(N-t)}\right]$$
$$\kappa_2(t) = \frac{(N-t+1)(N-t+2)(2N-2t+3)}{6N^3}.$$

Expression (16) describes the variance of changes in the spot-month rate as the sum of two terms. The first term (κ1(t)) represents solely the contribution of deviations of the effective funds rate from the target. For days near the beginning of the month (N − t large), this is essentially equal to γ0/(1 − φ)² (the unconditional variance of ut) divided by N² (because each ut contributes with weight 1/N to the monthly average). This declines gradually during the month (because there are fewer days remaining for which the serial correlation in ut contributes to the variance) but then rises quickly at the end of the month because of the large value of γ1, reflecting the increased volatility of the deviations from the target at month end. The second term (κ2(t)) represents the contribution of target changes to the volatility of the spot-month rate. This contribution declines monotonically as

the day of the month t increases. This is because,
as the month progresses, it becomes increasingly
likely that the target change for the month has
already occurred and there is no more uncertainty
about the value of ξn for that month. Added
together, expression (16) implies that the variance
of changes in the spot-month futures rate should
decline over most of the month but then increase
at the very end.
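A small sketch may help to see the two components numerically. It codes κ1(t) and κ2(t) exactly as defined above and evaluates the implied variance (17) at the reported point estimates (φ = 0.30, γ0 = 283, γ1 = 1,746, δ = 0.5, N = 31), together with γ2 = 27.9, the value estimated in equation (19) below; wrapping the formulas as Python functions is my own packaging, not the author's code.

```python
PHI, GAMMA0, GAMMA1, DELTA, N = 0.30, 283.0, 1746.0, 0.5, 31
GAMMA2 = 27.9                                  # rho * sigma_h^2 from eq. (19)

def kappa1(t: int) -> float:
    """Contribution of targeting errors to var(f_t - f_{t-1}), eq. (16)."""
    return ((1 - PHI ** (N - t + 1)) ** 2 / (N ** 2 * (1 - PHI) ** 2)
            * (GAMMA0 + GAMMA1 * DELTA ** (N - t)))

def kappa2(t: int) -> float:
    """Coefficient on sigma_h^2 (target-change uncertainty), eq. (16)."""
    return (N - t + 1) * (N - t + 2) * (2 * N - 2 * t + 3) / (6 * N ** 3)

for t in (1, 10, 20, 28, 31):
    variance = kappa1(t) + GAMMA2 * kappa2(t)  # eq. (17)
    print(t, round(kappa1(t), 2), round(kappa2(t), 4), round(variance, 2))
```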
Expression (16) was derived under the
assumption that at the beginning of every month
market participants are certain that there will be
a target change on day n of the month. If instead
there is a fraction ρ of months for which people
anticipate a change on some day n and a fraction
1 – ρ for which they are certain there will be no
change, the result would be that the last term in
(16) would be multiplied by ρ:
(17)
$$E(f_t - f_{t-1})^2 = \kappa_1(t) + \gamma_2\kappa_2(t), \quad \text{where } \gamma_2 = \rho\sigma_h^2.$$

This model can be tested using daily data on
fed funds futures contracts.2 Figure 3 plots regression coefficients along with 95 percent confidence
intervals from a regression of the squared change
in the spot-month futures rate on calendar day j :

(18)
$$(f_d - f_{d-1})^2 = \sum_{j=1}^{31}\hat\beta_j\omega_{jd} + \hat\nu_d,$$

where ωjd = 1 if business day d occurs on calendar day j and is zero otherwise. In other words, β̂j is
the average squared change for observations falling
on the jth day of a month. These indeed exhibit
a tendency to fall over most of the month but then
rise at the end.

2

Data for October 3, 1988, through June 30, 2006, were purchased
from the Chicago Board of Trade; data for July 3, 2006, through January 29,
2007, were downloaded from the now-defunct web site
spotmarketplace.com. For d corresponding to the first day of the
month (say the first day of February for illustration), fd – fd –1
was calculated as the change in the February contract between
February 1 and the last business day in January. For all other days
of the month, it was simply the change in the spot-month contract
between day d and the previous business day.

Let td denote the calendar day associated with
business day d (in other words, if ωjd = 1, then
td = j). I then tested whether the specific function
derived in (17) could account for this pattern by
estimating via OLS the following relation:
(19)
$$(f_d - f_{d-1})^2 = \kappa_1(t_d) + \underset{(3.3)}{27.9}\,\kappa_2(t_d) + \hat\nu_d.$$

Note that all the parameters appearing in the functions κ1(t) and κ2(t) are known as described above on the basis of the observed behavior of deviations of the effective fed funds rate from its target, so that only a single parameter—the coefficient on κ2(td) in equation (19)—was estimated
directly from the behavior of the futures data.
This parameter, γ2, has the interpretation of being
the variance of daily news the market receives in
a typical month about the upcoming Fed target
(recall equation (17)):

$$\hat\gamma_2 = \hat\rho\hat\sigma_h^2 = \underset{(3.3)}{27.9}.$$

Note also that (19) imposes 30 separate restrictions on the 31 parameters of the unrestricted
regression (18). The F(30, 4,521) = 0.59 test statistic leads to ready acceptance of the null hypothesis that this relation is indeed described by the
function given in (16) with a p-value of 0.96
(again, treating κj(t) as known functions). The
model thus successfully accounts for the tendency
of the volatility of the spot-month futures rate to
decline over most of the month but then increase
the last few days. The actual volatility seems to
increase more at the end of the month than the
model predicts, though it is possible to attribute
this entirely to sampling error.
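A sketch of how the restricted regression (19) and its F-test against the unrestricted day-of-month regression (18) might be implemented is given below. It assumes a DataFrame `fut` with a column `df` holding the daily change in the spot-month futures rate (basis points) and a column `day` with the calendar day; those names, and the simple RSS-based F statistic, are my own illustrative choices rather than the author's code.

```python
import pandas as pd
import statsmodels.api as sm

PHI, G0, G1, DELTA, N = 0.30, 283.0, 1746.0, 0.5, 31
k1 = lambda t: ((1 - PHI ** (N - t + 1)) ** 2 / (N ** 2 * (1 - PHI) ** 2)) * (G0 + G1 * DELTA ** (N - t))
k2 = lambda t: (N - t + 1) * (N - t + 2) * (2 * N - 2 * t + 3) / (6 * N ** 3)

def test_restriction(fut: pd.DataFrame):
    """Fit eq. (19) and eq. (18) and form the F statistic for the 30 restrictions."""
    sq, day = fut["df"] ** 2, fut["day"]

    # Restricted model (19): sq - kappa1(day) = gamma2 * kappa2(day) + error
    fit19 = sm.OLS(sq - day.map(k1), day.map(k2).to_frame("kappa2")).fit()

    # Unrestricted model (18): sq on a full set of day-of-month dummies
    fit18 = sm.OLS(sq, pd.get_dummies(day, prefix="d", dtype=float)).fit()

    F = ((fit19.ssr - fit18.ssr) / 30) / (fit18.ssr / fit18.df_resid)
    return fit19.params["kappa2"], F           # gamma2_hat and the F statistic
```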

INFERRING MARKET EXPECTATIONS OF TARGET CHANGES FROM THE SPOT-MONTH FUTURES RATE
We are now in a position to answer the primary question of this paper, which is, What does
an observed movement in the spot-month futures
rate signal about market expectations about the
target rate that is going to be set for this month?

Figure 4
Plot of κ4(t) as a Function of t

[Figure: the function κ4(t) plotted against the day of the month, t = 1,...,31]

Let Λt denote the information set available to market participants as of date t, and let Ωt = {ft , ft –1,…}
be the information set that is going to be used by
the econometrician to form an inference, where
it is assumed that Ωt is a subset of Λt , the previous
target ξ0 is an element of both Ωt and Λt, and the
day n target change is an element of Λt but not of
Ωt. Our task is to use the observed data Ωt to form
an inference about how the market changed its
assessment of ξn based on information it received
at t, that is, to form an assessment about

$$Y_t = E(\xi_n \mid \Lambda_t) - E(\xi_n \mid \Lambda_{t-1}) = \begin{cases} h_t & \text{for } t \le n \\ 0 & \text{for } t > n. \end{cases}$$

We can calculate the linear projection of Yt on Ωt as follows (e.g., Hamilton, 1994, equation [4.5.27]):

(20)
$$\hat E(Y_t \mid \Omega_t) = \frac{E\left[Y_t(f_t - f_{t-1})\right]}{E\left[(f_t - f_{t-1})^2\right]}\,(f_t - f_{t-1}).$$

Recalling (14), the numerator of (20) can be found from

(21)
$$E\left[Y_t(f_t - f_{t-1})\right] = N^{-1}\sum_{n=1}^{N}E\left[Y_t(f_t - f_{t-1}) \mid \eta = n\right] = N^{-1}\sum_{n=t}^{N}\rho\,E\!\left[\left(\frac{N-n+1}{N}\right)h_t^2\right] = N^{-1}\sum_{\tau=1}^{N-t+1}\frac{\tau}{N}\gamma_2 = \frac{(N-t+1)(N-t+2)}{2N^2}\gamma_2 = \gamma_2\kappa_3(t).$$

Substituting (21) and (17) into (20) establishes

(22)
$$\hat E(Y_t \mid \Omega_t) = \frac{\kappa_3(t)\gamma_2}{\kappa_1(t) + \kappa_2(t)\gamma_2}\,(f_t - f_{t-1}) = \kappa_4(t)(f_t - f_{t-1}).$$
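The weight κ4(t) can be reproduced directly from the formulas above. The sketch below (a restatement of (16), (21), and (22), not the author's code) evaluates it at the reported parameter values; the printed grid of days is arbitrary.

```python
PHI, G0, G1, DELTA, N, GAMMA2 = 0.30, 283.0, 1746.0, 0.5, 31, 27.9

def kappa1(t):   # targeting-error contribution to var(f_t - f_{t-1})
    return ((1 - PHI ** (N - t + 1)) ** 2 / (N ** 2 * (1 - PHI) ** 2)
            * (G0 + G1 * DELTA ** (N - t)))

def kappa2(t):   # coefficient on sigma_h^2 in the variance, eq. (16)
    return (N - t + 1) * (N - t + 2) * (2 * N - 2 * t + 3) / (6 * N ** 3)

def kappa3(t):   # coefficient on gamma2 in the covariance, eq. (21)
    return (N - t + 1) * (N - t + 2) / (2 * N ** 2)

def kappa4(t):   # optimal weight on f_t - f_{t-1}, eq. (22)
    return kappa3(t) * GAMMA2 / (kappa1(t) + kappa2(t) * GAMMA2)

print([round(kappa4(t), 2) for t in (1, 5, 10, 15, 20, 25, 31)])
```

With these values the weight starts below 1.5, peaks around the 20th day, and collapses at month end, which is the shape described in the text.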

The parameters determining κ4共t兲 have all been
estimated above from the properties of the deviations of the fed funds rate from the target and
squared changes in the spot-month futures rate.
Figure 4 plots the function κ4共t兲 for these
parameter values. To understand the intuition
for this function, consider first the case in which
the fed funds rate is always identically equal to
the target, so that σ ε2,t and κ1共t兲 are both zero. From
(14), the expected squared change in the spotmonth rate conditional on knowing that the target
change will occur on day n would be given by

(23)
$$E\left[(f_t - f_{t-1})^2 \mid \eta = n, \sigma_{\varepsilon,t}^2 = 0\right] = \begin{cases}\sigma_h^2\left[(N-n+1)/N\right]^2 & \text{for } t \le n \\ 0 & \text{for } t > n,\end{cases}$$

whereas the covariance of the spot-month futures
rate change with the expected target rate change
would for this case be

$$E\left[(f_t - f_{t-1})Y_t \mid \eta = n, \sigma_{\varepsilon,t}^2 = 0\right] = \begin{cases}\sigma_h^2\left[(N-n+1)/N\right] & \text{for } t \le n \\ 0 & \text{for } t > n.\end{cases}$$

Thus, if we knew both the day of the target change
and that there were no targeting errors, the inference would be

$$\hat E\left[h_t \mid \Omega_t, \eta = n, \sigma_{\varepsilon,t}^2 = 0\right] = \beta_n(t)\,(f_t - f_{t-1}),$$
386

J U LY / A U G U S T

2008

F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Hamilton

where

$$\beta_n(t) = \begin{cases} N/(N-n+1) & \text{for } t \le n \\ 0 & \text{for } t > n,\end{cases}$$

which reproduces Kuttner’s (2001) formula (7)
for the special case considered by Kuttner, namely,
t = n. If we don’t know the day of the target change,
but still impose no targeting error, we’d use the
unconditional moments:
$$E\left[(f_t - f_{t-1})^2 \mid \sigma_{\varepsilon,t}^2 = 0\right] = N^{-1}\sum_{n=t}^{N}\sigma_h^2\left[(N-n+1)/N\right]^2$$

$$E\left[(f_t - f_{t-1})Y_t \mid \sigma_{\varepsilon,t}^2 = 0\right] = N^{-1}\sum_{n=t}^{N}\sigma_h^2\left[(N-n+1)/N\right]$$

$$\hat E\left[h_t \mid \Omega_t, \sigma_{\varepsilon,t}^2 = 0\right] = \beta(t)\,(f_t - f_{t-1})$$

(24)
$$\beta(t) = \frac{N^{-1}\sum_{n=t}^{N}\left[(N-n+1)/N\right]}{N^{-1}\sum_{n=t}^{N}\left[(N-n+1)/N\right]^2}.$$

For N large and t = 1, the numerator of (24) would be approximately (1/2) and the denominator about (1/3), so that the coefficient β(1) would be close to 1.5. This is bigger than Kuttner's expression (7), which equals unity at n = 1, because a one-unit increase in h1 will increase the expected target on day n > 1 by one unit but increase the futures rate on day t = 1 by only [(N − n + 1)/N] < 1. Kuttner's formula assumes that, if we use the day t = 1 change in the futures, the target change occurs on day n = 1, whereas our formula assumes that in all probability the actual change is going to be implemented on some day n > 1.
Going from t to t + 1, we drop N⁻¹[(N − t + 1)/N] from the numerator and drop the smaller magnitude N⁻¹[(N − t + 1)/N]² from the denominator, so that the ratio (24) monotonically increases in t until it finally reaches the same value as (7) on the last day of the month:

$$\beta(N) = N.$$
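A two-line check of (24) makes these endpoints concrete; the function below is my own illustration, with N = 31 as the default.

```python
def beta(t: int, N: int = 31) -> float:
    """Inference weight from eq. (24), the no-targeting-error case."""
    num = sum((N - n + 1) / N for n in range(t, N + 1)) / N
    den = sum(((N - n + 1) / N) ** 2 for n in range(t, N + 1)) / N
    return num / den

print(round(beta(1), 2))     # about 1.48 for N = 31, close to 1.5
print(round(beta(31), 2))    # equals N = 31 on the last day of the month
```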

In the presence of targeting errors, expression (22) adds the term κ1(t)/γ2 to the denominator of (24), so, as noted by Poole, Rasche, and Thornton (2002), the optimal inference in the presence of targeting errors always puts a smaller weight on ft − ft−1 than does (24). This explains why the function κ4(t) in Figure 4 begins at a value below 1.5 for t = 1. The function κ4(t) then begins to increase monotonically in t for the same reason as in (24). However, as t increases, both the numerator and denominator in (24) become smaller, whereas κ1(t)/γ2 is approximately constant (at least for small t). This latter effect eventually overwhelms the tendency of (22) to increase in t, and it begins to fall after the 20th day of the month. This decline accelerates toward the very end of the month as κ1(t) starts to spike up from the end-of-month targeting errors.

RESPONSE OF INTEREST RATES
TO CHANGES IN FEDERAL FUNDS
FUTURES
We’re now ready to return to the original question of how interest rates for Treasuries of various
maturities seem to respond to the spot-month fed
funds futures rate. Deviations of the funds rate
from the target should have a quite negligible effect
on maturities greater than three months, because
the autocorrelation implied by (8) dies out within
a matter of days. We should therefore find that, if
we regress the change in Treasury yields on the
change in the spot-month futures rate, the value
of the regression coefficient should exhibit exactly
the same pattern over the month as the function
in Figure 4—the impact should rise gradually
through the first half of the month and fall off
quickly toward the end of the month.
As a first step in evaluating this conjecture,
divide the calendar days of a month into j = 1,2,
...,8 octiles and let ψjd = 1 if business day d is
associated with a calendar date in the jth octile
of the month. For example, ψ1d = 1 if day d falls
on one of the first four days of the month, whereas
ψ8d = 1 if it falls on the 29th, 30th, or 31st.

Figure 5
The Effect of Federal Funds Rate Changes on 1-Year Treasury Yields

[Figure: estimated octile coefficients and predicted values plotted against the day of the month]

NOTE: The figure plots the coefficients and 95 percent confidence intervals for the OLS regression of daily change in the 1-year Treasury yield on daily change in the spot-month futures rate, with different coefficients for each octile based on the calendar day of the month (denoted by the rectangles and vertical lines) and the predicted values for the coefficients for each day of the month as implied by (26) (denoted by the dashed line).

Let is,d denote the yield in basis points on day d for a Treasury bill or bond of constant maturity s months; for example, i12,d would be the 1-year rate. (Daily Treasury yields were taken from the St. Louis FRED database.) Consider OLS estimation of

is ,d − is ,d −1 = ∑ α jsψ jd ( fd − fd −1 ) + usd .
8

(25)

j =1
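For concreteness, here is one way the octile dummies ψjd and regression (25) might be constructed. It assumes a merged daily DataFrame `data` with columns `dy` (change in the Treasury yield of a given maturity, in basis points), `df` (change in the spot-month futures rate), and `day` (calendar day of the month); the column names and the helper function are hypothetical, not from the paper.

```python
import pandas as pd
import statsmodels.api as sm

def octile_of_day(day: int) -> int:
    """Map calendar days 1-31 into octiles 1-8 (1-4, 5-8, ..., 29-31)."""
    return min((day - 1) // 4 + 1, 8)

def fit_eq25(data: pd.DataFrame):
    """Equation (25): a separate futures-change coefficient for each octile."""
    octile = data["day"].map(octile_of_day)
    psi = pd.get_dummies(octile, prefix="psi", dtype=float)  # psi_1, ..., psi_8
    X = psi.mul(data["df"], axis=0)              # psi_jd * (f_d - f_{d-1})
    return sm.OLS(data["dy"], X).fit()
```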

In Figure 5, the OLS estimates, α̂j(t),s, along with
their 95 percent confidence intervals, are plotted
as a function of calendar day t = 1,2,...,31 for
s = 12, which corresponds to a 1-year Treasury
security. These indeed display very much the
predicted pattern—an increase in the fed funds
futures rate around the middle of the month has
a slightly bigger effect on the 1-year Treasury rate
than it would have at the beginning of the month,

and a much bigger effect than it would have
toward the end of the month. The same pattern
holds for shorter yields (Figure 6) and longer
yields (Figure 7).
According to the theory, we can capture the
exact effect predicted for each calendar day by
regressing the change in interest rates on the product between the change in fed funds futures and
the function in (22):
(26)
$$i_{s,d} - i_{s,d-1} = \lambda_s\kappa_4(t_d)(f_d - f_{d-1}) + u_{sd},$$

where td is the calendar day of the month associated with business day d, λs is the effect of a one-basis-point increase in the target rate on a Treasury
security of maturity s, and usd results from factors
influencing yields that are uncorrelated with
changes in the expected target rate. Note that all

Figure 6
The Effect of Federal Funds Rate Changes on 3-Month and 6-Month Treasury Yields

[Figure, two panels: Effect on 3-Month Treasury Yield; Effect on 6-Month Treasury Yield, each plotted against the day of the month]

NOTE: The figure plots the coefficients and 95 percent confidence intervals for the OLS regression of daily change in the 3-month and 6-month Treasury yields on daily change in the spot-month futures rate, with different coefficients for each octile based on the calendar day of the month (denoted by the rectangles and vertical lines) and the predicted values for the coefficients for each day of the month as implied by (26) (denoted by the dashed line).

the parameters governing κ4(t) have been inferred from the behavior of the fed funds rate and futures alone. Estimates of λs for different maturities, s, are reported in the first column of Table 2, and values of λ̂sκ4(t) for different maturities, s, are
plotted as a function of t in Figures 5 to 7.
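Estimating (26) amounts to a univariate OLS of the yield change on κ4(td) times the futures change. The sketch below assumes the same hypothetical DataFrame layout as in the earlier sketch (columns `dy`, `df`, and `day`) and simply rebuilds κ4 from the reported parameters; it is an illustration of the specification, not the author's code.

```python
import pandas as pd
import statsmodels.api as sm

PHI, G0, G1, DELTA, N, GAMMA2 = 0.30, 283.0, 1746.0, 0.5, 31, 27.9

def kappa4(t: int) -> float:
    """The inference weight of equation (22), built from kappa1-kappa3."""
    k1 = ((1 - PHI ** (N - t + 1)) ** 2 / (N ** 2 * (1 - PHI) ** 2)
          * (G0 + G1 * DELTA ** (N - t)))
    k2 = (N - t + 1) * (N - t + 2) * (2 * N - 2 * t + 3) / (6 * N ** 3)
    k3 = (N - t + 1) * (N - t + 2) / (2 * N ** 2)
    return k3 * GAMMA2 / (k1 + k2 * GAMMA2)

def fit_eq26(data: pd.DataFrame):
    """Equation (26): regress the yield change on kappa4(day) times the
    futures change; the single slope estimate is lambda_s."""
    regressor = (data["day"].map(kappa4) * data["df"]).to_frame("k4_df")
    return sm.OLS(data["dy"], regressor).fit()
```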
The adequacy of (26) was investigated in a
number of different ways. One obvious question
is how important the function κ4(td) is for the
regression. This can be explored by comparing
(26) with a specification in which changes in
futures prices have the same effect on interest
rates regardless of when within the month they
occur:
(27)
$$i_{s,d} - i_{s,d-1} = c_s(f_d - f_{d-1}) + u_{sd}.$$

The specifications (26) and (27) are non-nested,
but it is simple enough to generalize to a model
that includes them both as special cases:

(28)
$$i_{s,d} - i_{s,d-1} = c_s(f_d - f_{d-1}) + \lambda_s\kappa_4(t_d)(f_d - f_{d-1}) + u_{sd}.$$

If model (26) is correct, then we should be able
to accept the null hypothesis that cs = 0, whereas
if (27) is correct, we should accept the null hypothesis that λs = 0. If neither specification is correct,
then we should reject both null hypotheses. The
second and third columns of Table 2 report the
OLS coefficient estimates and standard errors for
(28). For maturities greater than two years, we
accept the null hypothesis that cs = 0 and strongly
reject the hypothesis that λs = 0. For maturities
less than two years, both hypotheses are rejected,
suggesting that there is more to the response of
short-term interest rates to fed funds futures than
is captured by (26) alone. Even in these cases,
however, the term involving κ4(t) makes by far the more important contribution statistically.

Figure 7
The Effect of Federal Funds Rate Changes on 2-Year, 3-Year, and 10-Year Treasury Yields

[Figure, three panels: Effect on 2-Year Treasury Yield; Effect on 3-Year Treasury Yield; Effect on 10-Year Treasury Yield, each plotted against the day of the month]

NOTE: The figure plots the coefficients and 95 percent confidence intervals for the OLS regression of daily change in 2-year, 3-year, and 10-year Treasury yields on daily change in the spot-month futures rate, with different coefficients for each octile based on the calendar day of the month (denoted by the rectangles and vertical lines) and the predicted values for the coefficients for each day of the month as implied by (26) (denoted by the dashed line).


Table 2
The Effect of Federal Funds Futures on Interest Rates

Maturity  | Restricted effect: λs | With constant effect added: cs | λs | With separate octile effects added (p-value for H0): α1s = … = α8s = 0 | λs = 0
3 Months  | 0.658** (0.022) | 0.256** (0.089) | 0.499** (0.060) | (0.00)** | (0.98)
6 Months  | 0.706** (0.021) | 0.286** (0.084) | 0.529** (0.056) | (0.00)** | (0.45)
1 Year    | 0.748** (0.023) | 0.226** (0.095) | 0.608** (0.063) | (0.00)** | (0.60)
2 Years   | 0.685** (0.029) | 0.159 (0.112)   | 0.586** (0.079) | (0.00)** | (0.74)
3 Years   | 0.641** (0.030) | 0.143 (0.122)   | 0.552** (0.081) | (0.01)** | (0.62)
10 Years  | 0.426** (0.028) | 0.082 (0.115)   | 0.375** (0.077) | (0.05)*  | (0.45)

NOTE: This table shows the regression coefficients relating change in the interest rate on securities with maturity s to change in the fed funds futures rate. *indicates statistically significant with p-value <0.05; **denotes p-value <0.01. OLS standard errors are in parentheses.

I conclude that the model successfully captures a clear tendency in the data for the impact to vary across the month, although it seems to leave something out in the description of the response of short-term interest rates.
In the same spirit, we can nest (26) and (25):

(29)
$$i_{s,d} - i_{s,d-1} = \sum_{j=1}^{8}\alpha_{js}\psi_{jd}(f_d - f_{d-1}) + \lambda_s\kappa_4(t_d)(f_d - f_{d-1}) + u_{sd}.$$
The results, shown in the last two columns of
Table 2, are not as encouraging. In every case, we
strongly reject the hypothesis that α1s = … = α8s = 0,
meaning that for each maturity, s, there are statistically significant deviations from the broad
monthly pattern that is predicted by (26), and in
every case readily accept the hypothesis that λs = 0,
meaning that the specific variation within octiles
that is predicted by (26) is not particularly found
in the data.
These last results are perhaps not too surprising given the many approximations embodied in
(26), which assumed among other things that all
months have N = 31 calendar days and ignored

both weekend effects and the fact that some business days
convey much more important economic news
than others (on this last point, see Poole and
Rasche, 2000, and Gürkaynak, Sack, and Swanson,
2005).
We can in fact carry that last point a step further and estimate a separate coefficient λjs for
every calendar day j = 1,…,31:

(30)
$$i_{s,d} - i_{s,d-1} = \sum_{j=1}^{31}\lambda_{js}\omega_{jd}(f_d - f_{d-1}) + u_{sd},$$
where ωjd = 1 if day d falls on the jth day of the
month. Figure 8 plots the OLS estimates of λjs as
a function of the calendar day j along with 95 percent confidence intervals and the predicted values
for the function λjs implied by (26) for 1-year
Treasuries. Again the broad pattern seems to fit
well, though again there are large deviations on
some days that are well beyond what could be
attributed to sampling error, and formal hypothesis tests comparing (30) with (26) (which the
former formally nests as a special case) lead to
overwhelming rejection, with a p-value less than
10^–10 for each s. In addition to the details noted

Figure 8
Effects of Federal Funds Rate Changes on 1-Year Treasury Yields

[Figure: estimated calendar-day coefficients and predicted values plotted against the day of the month]

NOTE: The figure plots the coefficients and 95 percent confidence intervals for the OLS regression of daily change in the 1-year Treasury yields on daily change in the spot-month futures rate, with different coefficients for each calendar day of the month (denoted by the rectangles and vertical lines) and the predicted values for the coefficients for each day of the month as implied by (26) (denoted by the dashed lines).

above, individual outliers are highly influential
for the daily regression (30), and one would want
to carefully model these non-Gaussian innovations, usd , and GARCH effects before trying to
build a more detailed model that could reproduce
more of the unrestricted pattern. This and related
tasks, such as trying to use information about the
actual date of the target change when it is unambiguously known, using one-month or two-month
futures contracts in place of the spot rate, and
exploring the consequences of a secular change
in σh² (e.g., Lang, Sack, and Whitesell, 2003, and
Swanson, 2006), we leave as topics for future
research.
Although there is much more to be done
before having a completely satisfactory understanding of these relations, I believe that the

approach developed here gives us a plausible
interpretation of the broad regularities found in
the data and a sound basis for generalizing the
Kuttner (2001) and Poole, Rasche, and Thornton
(2002) approaches. Although the methods involve
some new uses of the data, the conclusion I draw
is quite consistent with that of earlier researchers:
changes in the fed funds target seem to be associated with quite large changes in Treasury yields,
even for maturities of up to 10 years.

REFERENCES
Anderson, Richard G. and Rasche, Robert H.
“A Revised Measure of the St. Louis Adjusted
Monetary Base.” Federal Reserve Bank of St. Louis
Review, March/April 1996, 78(2), pp. 3-14.

F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Hamilton

Cook, Timothy and Hahn, Thomas. “The Effect of
Changes in the Federal Funds Rate Target on Market
Interest Rates in the 1970s.” Journal of Monetary
Economics, November 1989, 24(3), pp. 331-51.
Gürkaynak, Refet S. “Using Federal Funds Futures
Contracts for Monetary Policy Analysis.” Working
paper, Board of Governors of the Federal Reserve
System, 2005.
Gürkaynak, Refet S.; Sack, Brian P. and Swanson,
Eric T. “Do Actions Speak Louder Than Words?
The Response of Asset Prices to Monetary Policy
Actions and Statements.” International Journal of
Central Banking, June 2005, 1(1), pp. 55-93.
Gürkaynak, Refet S.; Sack, Brian P. and Swanson,
Eric T. “Market-Based Measures of Monetary Policy
Expectations.” Journal of Business and Economic
Statistics, April 2007, 25(2), pp. 201-12.
Hamilton, James D. Time Series Analysis. Princeton,
NJ: Princeton University Press, 1994.
Hamilton, James D. “Daily Changes in Fed Funds
Futures Prices.” Journal of Money, Credit, and
Banking (forthcoming).
Kuttner, Kenneth N. “Monetary Policy Surprises and
Interest Rates: Evidence from the Fed Funds Futures
Market.” Journal of Monetary Economics, June 2001,
47(3), pp. 523-44.
Lang, Joe; Sack, Brian P. and Whitesell, William.
“Anticipations of Monetary Policy in Financial
Markets.” Journal of Money, Credit, and Banking,
December 2003, 35(6, part 1), pp. 889-909.

Nilsen, Jeffrey H. “Borrowed Reserves, Fed Funds Rate Targets, and the Term Structure,” in Ignazio Angeloni and Riccardo Rovelli, eds., Monetary Policy and Interest Rates. London: Macmillan Press, 1998.

Piazzesi, Monika and Swanson, Eric T. “Futures Prices
as Risk-Adjusted Forecasts of Monetary Policy.”
Journal of Monetary Economics (forthcoming).
Poole, William and Rasche, Robert H. “Perfecting the
Market’s Knowledge of Monetary Policy.” Journal
of Financial Services Research, December 2000,
18(2/3), pp. 255-98.
Poole, William; Rasche, Robert H. and Thornton,
Daniel L. “Market Anticipations of Monetary Policy
Actions.” Federal Reserve Bank of St. Louis Review,
July/August 2002, 84(4), pp. 65-94.
Sarno, Lucio; Thornton, Daniel L. and Valente,
Giorgio. “Federal Funds Rate Prediction.” Journal
of Money, Credit, and Banking, June 2005, 37(3),
pp. 449-71.
Swanson, Eric T. “Have Increases in Federal Reserve
Transparency Improved Private Sector Interest Rate
Forecasts?” Journal of Money, Credit, and Banking,
April 2006, 38(3), pp. 791-819.
Taylor, John B. “Expectations, Open Market
Operations, and Changes in the Federal Funds
Rate.” Federal Reserve Bank of St. Louis Review,
July/August 2001, 83(4), pp. 33-47.
Thornton, Daniel L. “A New Federal Funds Rate
Target Series: September 27, 1982–December 31,
1993.” Working Paper 2005-032, Federal Reserve
Bank of St. Louis, 2005;
http://research.stlouisfed.org/wp/2005/2005-032.pdf.


Commentary
Alec Chrystal

I am very pleased to be asked to participate
in this conference that honors the career
of Bill Poole. As a student of monetary
economics I, like all my generation, was
substantially influenced by Poole (1970). I was
pleased to meet him many times at the annual
St. Louis economic policy conference, both before
and after he became president of the Federal
Reserve Bank of St. Louis. We were also very
grateful that he came to London in 2000 to give
the annual Henry Thornton Lecture at the Cass
Business School of City University.
I now turn to my comments on Professor
Hamilton’s (2008) paper. The paper uses data
from daily movements in federal funds futures
to test for links between futures prices, the policy
rate itself, and the behavior of market interest rates.
I first comment on the empirical work presented
and then suggest additional avenues of research
to further enlighten the topic. I then ask this:
Who might be interested in these results and
what might they learn from them?
Some of the difficulty in using the federal
funds futures price as an indicator of market
expectations arises from the fact that the contract
settles on an average daily price over a month. I
do not wish to get into the institutional detail here
or into the econometric problems this causes,
not least because I am dominated in institutional
knowledge by Ken Kuttner, the other discussant,
and in econometric expertise by Professor
Hamilton himself. However, as a naive outsider,
I cannot help but ask whether there is some

“cleaner” money market interest rate that contains
the same information but avoids the complexities
of moving-average valuation. Could we, for example, do roughly the same exercise with short-term Treasury bill discount rates, short-maturity
Treasury bond yields, or indeed interbank loan
rates? If we could, then it would surely be simpler
to use these rates and parsimony would lean in
their favor.
Assuming now that the federal funds futures
prices are the best proxy for market expectations,
what do the results tell us, and what else might
we like to know? The results reported in this
paper confirm two earlier findings: First, market
rates anticipate actual policy rate changes and,
second, other market rates (yields to maturity)
move with the federal funds rate, including those
of up to 10-year maturity. I will discuss each of
these in turn.
It is not a major surprise to find that markets
anticipate policymakers’ decisions. That this is
highly likely has been central to economics since
the rational expectations revolution of the 1970s.
However, it would be interesting to know if markets have become better at doing this over time
and whether this ability has been affected by
improved transparency about the target rate, the
stated biases in the policy stance, and what is
being targeted. Similar questions apply to the
unexpected component of policy changes: Has
the impact of policy changed over time, and are
the results for the full sample dependent on specific periods or specific sets of events?

Alec Chrystal is a professor of money and banking and associate dean and head of the faculty of finance at Sir John Cass Business School,
City University, London, England.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 395-97.


A slightly different point arises in the context
of testing the effects of federal funds futures on
other market rates. It makes sense that short rates
move most closely with the federal funds rate.
However, the theoretical link with long rates is
rather more ambiguous. In what way should long
rates react to changes (and expected changes) in
short-term policy rates? This could go either way,
and indeed there could be no link at all. Suppose
the Fed is tightening rates in order to bring down
inflation in the future. This will raise implied
forward rates up to some term, but it may lower
interest rate expectations further out, that is, tip
the yield curve. In such cases, forward rates further
out will change but the change could quite logically be in the opposite direction. Tighter policy
now could lower inflation expectations further
out and, hence, create expectations of lower interest rates in the future. It might therefore be interesting to test the reaction of long forward rates, in
addition to yields to maturity, as the former will
strip out the impact at the short end of the yield
curve. What actually happens in each case will
be dependent on the complexity of the environment, and the reaction might be asymmetric—
rises may have a different impact than falls.
Why and to whom is all this likely to be
interesting? Potentially there are three groups
who may be able to learn something from the
relationships that emerge from this and similar
studies: first, the monetary authorities themselves;
second, market participants who trade in these
and related markets; and third, those in the economics profession who want to understand how
monetary policy works, that is, those with an
interest in the transmission mechanism.
The monetary authorities may be interested
in all this for two possible reasons. First, by monitoring the federal funds futures they can see what
the markets expect policy to be and can factor
that into their decisions. Second, they could
understand what impact an unexpected rate
change has on the markets. (I will return later to
the issue of whether these results mean that only
unexpected rate changes matter.) I do not know
for sure, but my guess is that Federal Open Market
Committee members have reliable ways of backing out market expectations and of estimating

the impact of their policy rate changes without
having to rely on this evidence from the federal
funds futures market. Hence, I suspect that the
contribution of these results to policymakers’
decisionmaking is quite small.
Market participants have little to learn from
these results because the federal funds futures
prices reflect their behavior in the first place, so
they are not going to learn about their own expectations from a price that their behavior has created.
There may be something that these players could
learn from federal funds futures prices, but only
if the data were much more finely sampled.
Tick-by-tick data for this and other closely linked
money markets might help to identify exactly
where changes in sentiment first appear. Market
traders probably know this already, but it is also
possible that the news for some episodes appears
to some segments of the market first. However, it
is more likely that market participants get new
information more or less simultaneously and the
timing of market movements is purely a product
of how we measure the “market price.” That is,
all prices respond as quickly as is technically
possible to the same information.
So what can we as economists learn from all
this about the transmission mechanism of monetary policy? I suggest that this evidence does
nothing but confirm what we already knew: Markets anticipate what policymakers are going to do,
and markets move most when the policy change
is most unexpected. However, I should emphasize
that this evidence neither supports nor confounds
the old notion of the Lucas aggregate supply curve,
by which only unexpected policy changes have
real effects.
To see this, I hypothesize that monetary policy
works through a number of channels to influence
aggregate demand in the economy. Only one of
these channels is the direct effects on other market
interest rates. Other channels include asset prices
(and thus wealth effects), expectations and confidence, and international financial markets (and
thus the exchange rate). The fact that market rates
anticipate policy rate changes does not mean that
the changes have no effect; it just means that the
effects happen sooner. Market rate changes will
still affect saving and investment decisions and


thus also aggregate demand. They will also affect
asset valuations and thus create wealth effects.
Unexpected policy rate changes may well
have a bigger measurable impact on market rates
of all maturities, but this does not prove either
that only unexpected rate changes have real effects
or that unexpected rate changes have bigger real
effects. It remains possible that unexpected policy changes have a bigger impact on aggregate
demand, but the evidence adduced here does not
address this issue.
In short, this paper contains some outstanding
innovative econometric work that throws much
light on the links between federal funds futures
prices, the policy rate, and other market rates.
However, the results have no apparent implications that should cause us to revise our view of
how monetary policy works.

REFERENCES
Hamilton, James D. “Assessing Monetary Policy Effects Using Daily Federal Funds Futures Contracts.” Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 377-93.

Poole, William. “Optimal Choice of Monetary Policy Instruments in a Simple Stochastic Macro Model.” Quarterly Journal of Economics, May 1970, 84(2), pp. 197-216.


Commentary
Kenneth N. Kuttner

Recent efforts to understand the transmission of monetary policy have spawned a growing literature examining the response of financial markets to monetary policy.1 Most of these studies
assess the likely impact of unanticipated
changes in the target federal funds rate, typically
in a sample of well-defined policy “events” consisting of Federal Open Market Committee
(FOMC) meeting days, plus the days of
unscheduled funds rate changes. The problem
is that economists do not always know the days
on which the policy actions took place, especially in the early 1990s. Before the FOMC
began announcing its policy actions in February
1994, there was often some confusion in the
financial markets as to whether there had been a
change in the funds rate target. This ambiguity
has been largely dispelled by the FOMC’s
announcements, although, as Hamilton (2008)
notes, there has been occasional speculation
that the Fed has surreptitiously changed the target rate.2
Hamilton’s (2008) paper is primarily an effort
to address the issue of unknown event dates. It
departs from the usual assumption that the days
of policy actions (or possible actions) are known
1

The first paper in this literature was Cook and Hahn (1989).
Subsequent work includes Poole and Rasche (2000), Kuttner
(2001), Poole, Rasche, and Thornton (2002), Gürkaynak, Sack,
and Swanson (2005), and Bernanke and Kuttner (2005).

2

So far, none of this speculation has proved to be correct.

and uses instead a signal-extraction approach to
determine the market’s reaction without conditioning on this information. His elegant approach
allows the market’s reaction to be estimated using
the entire sample, not just event days. Moreover,
the approach allows for the measurement of financial markets’ response to evolving expectations
of future Fed actions, a feature that allows him
to extract information even when the Fed does
not surprise the markets.
The analysis focuses on the response of term
interest rates, as in Kuttner (2001), although there
is no reason the same approach could not also be
applied to stock prices or exchange rates. The
paper’s key empirical results largely confirm those
reported elsewhere, which is good news for those
of us who have used the much simpler event-study
approach. The response of term interest rates to
changes in the funds rate is uniformly less than
one for one, and the effect on longer-term interest
rates is generally less than it is for short-term rates.
It is interesting to note, however, that this latter
tendency is less pronounced than it is in Kuttner’s
(2001) results.
My discussion will focus on two issues. The
first point is somewhat technical, as it concerns
the details of how the “noise” in the federal funds
rate is modeled. The second is a more conceptual
discussion of how the interpretation of the shocks
identified by Hamilton’s procedure might differ
from those in conventional event-study analyses.

Kenneth N. Kuttner is a professor of economics at Williams College and a research associate at the National Bureau of Economic Research.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 399-403.


Figure 1
The Target and Effective Funds Rates, 1995

[Figure: the target and effective federal funds rates (percent), January through December 1995]

NOTE: Vertical lines denote settlement Wednesdays.

MODELING FUNDS RATE NOISE
Unlike the more common event-study analysis,
Hamilton’s signal-extraction method requires statistically modeling the noise process present in
the daily effective federal funds rate.3 Intuitively,
the reason for this is that calculating the likely
signal present in any given funds rate change
requires some estimate as to the amount of noise
likely to be present on any given day: The noisier
the effective funds rate, the less likely it is that
the observed change in the rate (and, by extension,
the expected rate implied by the current-month
futures contract) represents a policy change.
Observing that the magnitude of these deviations tends to increase over the course of a month,
Hamilton models the targeting error as an autoregressive process whose innovation variance is an increasing function of the day of the month (equations (8) and (9)). To get a sense of the magnitude of these targeting errors, his estimated parameters imply a 45-basis-point standard deviation on the 31st day of the month, 34 basis points on the 30th day, and 17 basis points on the 1st day.

3

This noise results from the fact that the New York Fed's control over the funds rate is not absolute: Its Trading Desk injects just enough reserves to hit the target funds rate, given its assessment of the factors affecting reserve demand and supply. However, because of unanticipated changes in demand or supply, the actual (“effective”) funds rate may differ from the target.
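These implied standard deviations follow directly from equation (9); the following check (mine, using the reported γ0 = 283 and γ1 = 1,746) reproduces the figures just cited.

```python
import math

gamma0, gamma1 = 283.0, 1746.0
for day in (31, 30, 1):
    sd = math.sqrt(gamma0 + gamma1 * 0.5 ** (31 - day))
    print(day, round(sd, 1))    # approximately 45, 34, and 17 basis points
```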
Although this is not an unreasonable first
pass, some refinements are possible. First, because
there is no reason to think that the end-of-month
volatility in 31-day months is greater than it is
for 30-day months, it would be desirable to relax
the assumption of 31-day months and replace
equation (9) with

$$\hat e_d^2 = a + b\times 0.5^{(N_i - t_d)} + \hat\nu_d,$$

where Ni is the number of days in month i.
A second important refinement would be to
account for the “settlement Wednesday” effect.
Especially in the early part of the sample, the
Wednesdays associated with the final day of the
reserve maintenance period were often associated

Figure 2
The Target and Effective Funds Rates, 2002

[Figure, two panels: the target and effective federal funds rates (percent), January through December 2002]

NOTE: The vertical lines in the top panel denote settlement Wednesdays; in the bottom panel they mark the last day of the month.

with extremely large funds rate spikes, as shown
in Figure 1. (The vertical lines denote settlement
Wednesdays.) To account for this pattern, a reasonable specification for the targeting error might
be something like

$$r_d - \xi_d = 0.3(r_{d-1} - \xi_{d-1}) + 14W_d + 19M_d + \hat e_d$$
$$\hat e_d^2 = 179 + 196W_{d+1} + 1{,}422W_d - 27M_{d+1} + 1{,}481M_d + \hat\nu_d,$$

where Wd is a dummy equal to 1 on settlement
Wednesdays and Md is a dummy equal to 1 on
the last day of the month. The other notation is
the same as Hamilton’s.4
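One way this refined specification might be estimated is sketched below. It assumes a business-day DataFrame `df` with a DatetimeIndex and columns `rate`, `target`, `day`, and a boolean `settle_wed` marking settlement Wednesdays; the column names, the construction of the month-end dummy from the last observed business day, and the two-step OLS layout are my own illustrative choices, not the discussant's code.

```python
import pandas as pd
import statsmodels.api as sm

def estimate_refined_noise(df: pd.DataFrame):
    """Targeting-error model with settlement-Wednesday (W) and month-end (M)
    dummies in both the mean equation and the squared-residual equation."""
    dev = df["rate"] - df["target"]
    W = df["settle_wed"].astype(float)
    last_day = df["day"].groupby(df.index.to_period("M")).transform("max")
    M = (df["day"] == last_day).astype(float)    # last business day of the month

    # Mean equation: dev_d on its lag and the two level dummies (no constant)
    mean_X = pd.DataFrame({"lag_dev": dev.shift(1), "W": W, "M": M}).dropna()
    mean_fit = sm.OLS(dev.loc[mean_X.index], mean_X).fit()

    # Variance equation: squared residuals on W and M for days d and d+1
    e2 = mean_fit.resid ** 2
    var_X = sm.add_constant(pd.DataFrame(
        {"W_next": W.shift(-1), "W": W, "M_next": M.shift(-1), "M": M}
    ).loc[e2.index]).dropna()
    var_fit = sm.OLS(e2.loc[var_X.index], var_X).fit()
    return mean_fit, var_fit
```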
4

The parameters are estimated by ordinary least squares on data from May 17, 1989, through October 12, 2007, excluding September 2001 and December 1999.

Three features of this alternative specification are particularly interesting. One is that there are
significant level effects associated with settlement
Wednesdays and with the last day of the month:
Errors on these days tend to be positive. The second is that the standard deviation of the targeting
error is 27 basis points higher on settlement
Wednesdays. Third, unlike in Hamilton’s specification, there is no evidence of a month-end effect,
except on the very last day of the month.
Other information about changes in the federal
funds market can also be brought to bear to further
refine the specification. One such change is the
shift to lagged reserve accounting as of July 30,
1998. Partly as a result of this change, the month-end and settlement-Wednesday volatility of the
funds rate, as well as the overall variance, has
fallen sharply in recent years. Post 1998, the
standard deviation of the last-day-of-month targeting error is only 22 basis points (compared with
Hamilton’s last-day estimate of 45 basis points),
and there is no longer any evidence of a settlement-Wednesday spike. That the Federal Reserve

Bank of New York’s Trading Desk has improved
control over the funds rate is readily apparent
in Figure 2. (The vertical lines in the top panel
denote settlement Wednesdays; in the bottom
panel they mark the last day of the month.)
Finally, in refining the estimates of the targeting error process, one would want to make
allowances for special circumstances affecting
the federal funds market. Hamilton already makes
one such allowance, omitting September 2001
from the sample used for estimating the model.
December 1999 should be dropped for similar
reasons: With the Y2K changeover approaching,
the Fed flooded the market with reserves in an
effort to assuage liquidity concerns. Consequently,
the funds rate traded as much as 150 basis
points below its target as the end of the month
approached. Including atypical episodes, such
as this one, could overestimate the amount of
noise normally present in the effective funds rate.
It is important to emphasize that none of these
observations undercuts in any way the soundness
of Hamilton’s basic approach. In particular, I can
think of no reason to suspect that any misspecification in equations (8) or (9) would necessarily
bias the parameter estimates reported in
Hamilton’s Table 2. Instead, it is more akin to the
problem of choosing inappropriate weights in a
weighted-least-squares procedure: In that case,
while the parameter estimates may not be biased,
the procedure is not making optimal use of the
information contained in the data.

ON INTERPRETING THE
“HAMILTON SHOCKS”
The second part of my remarks concerns the
interpretation of the funds rate shocks underlying
Hamilton’s procedure. By way of background, it
may be useful to distinguish between two different
regimes. In the first regime, changes in the funds
rate target are equally likely on any day—but
changes in the target are not announced by the
FOMC. This regime plausibly corresponds to the
pre-1994 world, in which policy actions were
generally not disclosed and a significant fraction
of rate changes took place between meetings. In

this regime, day-to-day changes in the futures-implied rate on any particular day would plausibly
represent the market’s inference as to whether the
Fed had changed its target on that day.
In the second regime, which is more relevant
post 1994, the days of the rate changes are
largely known; and even when policy actions are
taken between FOMC meetings, the changes are
announced, and not in response to any specific
news that might have arrived on that day. In this
case, the day-to-day change in the futures rate on
days other than “event” days (i.e., days of rate
changes or FOMC meetings) would reflect changes
in the market’s expectation of the target funds rate
on some future date.
Now consider the sources of news that could
affect policy expectations. One source is new
macroeconomic information: higher-than-expected
employment, for example, or lower-than-expected
inflation. The other source would be changes in
the Fed’s perceived preferences regarding inflation
vis-à-vis output—the presumed source of monetary policy “shocks,” as the term is commonly
used in the literature.
These distinctions bear on how we should
interpret the information contained in alternative
measures of monetary policy shocks or surprises.
In the second regime, policy surprises (i.e.,
changes in the futures rate) occurring on event
days are more likely to be driven by the second
category of news: changes in the Fed’s perceived
preferences.5 Changes occurring on days other
than event days would, for the most part, be
associated with the arrival of economic news. In
the first regime, however, day-to-day changes in
the futures rate could result from either source:
changes in policy preferences or macro news.
Thus, conditioning on known event days
allows the econometrician to distinguish between
the endogenous response of policy expectations
to new economic information and otherwise
inexplicable policy shocks. This distinction can
be critically important in assessing the financial
market response to monetary policy. As shown in
5

It is also possible that the change would be interpreted as the Fed
reacting to private information, although the evidence for this
view is weak; see Faust, Swanson, and Wright (2004).


Bernanke and Kuttner (2005), the stock market’s
reaction to unanticipated funds rate changes is
effectively zero when those changes occurred on
the same day as an employment report, a pattern
that was common in the early 1990s. Any analysis
that failed to make this distinction could provide
a misleading answer to the primary question of
interest to policymakers: how the market will
react to an unexpected change in the funds rate
target.
Within Hamilton’s framework, it would be
easy to make this distinction. In fact, it suggests
an interesting test of the null hypothesis that the
response to (calendar-adjusted) changes in the
futures rate is the same on event days as it is on
non-event days. The alternative, of course, would
be that the reaction differs in a systematic way.
Given the richness of the dataset, it would be
possible to go further and distinguish between
event days and the days of specific economic
news releases (e.g., inflation, employment, gross
domestic product). This assumes that the relevant
days are known, of course—but after 1994, this
is not such a bad assumption. Econometrically,
the only modification to Hamilton’s procedure
would be to allow the relevant dummy variables
to interact with the slope coefficient in equation
(26).

CONCLUSION
None of these points takes away from the
bottom line: The paper is a classic Hamilton time-series tour de force. It addresses an important
question using elegant econometrics, and it incorporates a detailed knowledge of the market for
federal funds. Using more of that knowledge to
refine the targeting-error specification would
enhance an already fine paper, as would further
efforts to understand what the shocks in the
model really represent.

REFERENCES
Bernanke, Ben S. and Kuttner, Kenneth N. “What
Explains the Stock Market’s Reaction to Federal
Reserve Policy?” Journal of Finance, June 2005,
60(3), pp. 1221-57.
Cook, Timothy and Hahn, Thomas. “The Effect of
Changes in the Federal Funds Rate Target on
Market Interest Rates in the 1970s.” Journal of
Monetary Economics, November 1989, 24(3),
pp. 331-51.
Faust, Jon; Swanson, Eric T. and Wright, Jonathan H.
“Do Federal Reserve Policy Surprises Reveal Private
Information About the Economy?” Contributions to
Macroeconomics, 2004, 4(1), pp. 1-29.
Gürkaynak, Refet S.; Sack, Brian P. and Swanson,
Eric T. “Do Actions Speak Louder Than Words?
The Response of Asset Prices to Monetary Policy
Actions and Statements.” International Journal of
Central Banking, June 2005, 1(1), pp. 55-93.
Hamilton, James D. “Assessing Monetary Policy
Effects Using Daily Federal Funds Futures
Contracts.” Federal Reserve Bank of St. Louis
Review, July/August 2008, 90(4), pp. 377-93.
Kuttner, Kenneth N. “Monetary Policy Surprises and
Interest Rates: Evidence from the Fed Funds
Futures Market.” Journal of Monetary Economics,
June 2001, 47(3), pp. 523-44.
Poole, William and Rasche, Robert H. “Perfecting the
Market’s Knowledge of Monetary Policy.” Journal
of Financial Services Research, December 2000,
18(2/3), pp. 255-98.
Poole, William; Rasche, Robert H. and Thornton,
Daniel L. “Market Anticipations of Monetary Policy
Actions.” Federal Reserve Bank of St. Louis Review,
July/August 2002, 84(4), pp. 65-93.


Panel Discussion

The Importance of Being
Predictable
John B. Taylor

It is a pleasure to participate in this conference and join in the recognition of Bill
Poole. My remarks build on two of Bill
Poole’s important contributions to monetary
theory: his 1970 Quarterly Journal of Economics
(QJE) paper on monetary policy under uncertainty and his more recent series of lucid short
papers on predictability, transparency, and policy
rules, many of which were adapted from speeches
and published in the Review of the Federal
Reserve Bank of St. Louis.
At the same time I want to express my appreciation for Bill’s extraordinary service in public
policy: starting in the 1960s as a member of the
staff of the Federal Reserve Board, where he wrote
his 1970 QJE paper and many others; then later as
a member of the President’s Council of Economic
Advisers during the difficult disinflation of the
early 1980s, where his role in explaining and
supporting the Fed’s price stability efforts was
essential; and most recently as president of the
Federal Reserve Bank of St. Louis, where his
emphasis on good communication and good
policy has contributed, and will continue to
contribute, to improvements in the conduct of
monetary policy. Regarding these contributions I
give two of my favorite examples of Bill Poole’s
many pithy phrases which I hope will ring in
monetary policymakers’ ears for many years to

come: “We ignore the behavior of the monetary
aggregates at our peril” (Poole, 1999); and “Clearly,
more talk does not necessarily mean more transparency” (Poole, 2005a).

THE BEGINNINGS OF RESEARCH
ON POLICY RULES IN
STOCHASTIC MODELS CIRCA 1970
Let me begin by reviewing Bill Poole’s
deservedly famous 1970 QJE article. In my view,
that paper conveyed two novel messages, one
about dealing with uncertainty and the other
about reducing uncertainty.

An Approach to Monetary Policy That
Could Deal with Existing Uncertainty
The first message was presented in the form
of a simple graphical IS-LM analysis, and soon
after textbook writers incorporated this analysis
in their macroeconomics and money and banking textbooks. At the time Poole wrote his paper,
the typical IS and LM curves were drawn without
a notion that they could move around stochastically. Bill Poole showed how adding exogenous
disturbances to the curves provided a simple
framework for monetary policy decisionmaking
under uncertainty.
While the framework was simple, the message
was extremely useful: When shocks to money
demand are very large, central banks should target
the interest rate because those shocks would otherwise cause harmful swings in interest rates. When

John B. Taylor is a professor of economics at Stanford University.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 405-10.

shocks to investment demand or consumption
demand are very large, central banks should target
the money supply because the interest rate will
move to mitigate these demand shocks. Hence, the
Poole analysis showed explicitly how policymakers could deal with exogenous uncertainty
in a formal mathematical way.
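
For readers who want the algebra behind the graphical message, a stylized static version of the output-variance comparison runs as follows (the notation is chosen here for brevity and is not Poole's own). Let independent disturbances u and v, with variances sigma_u^2 and sigma_v^2, shift the IS and LM curves:

\[
\text{IS: } y = -a\,r + u, \qquad \text{LM: } m = y - c\,r + v .
\]

Pegging the interest rate (holding r fixed) gives Var(y) = sigma_u^2, whereas pegging the money stock (holding m fixed) gives

\[
\operatorname{Var}(y) = \frac{c^{2}\sigma_u^{2} + a^{2}\sigma_v^{2}}{(a+c)^{2}},
\]

so the money-stock instrument delivers the smaller output variance when spending (IS) shocks dominate and the interest rate instrument does so when money-demand shocks dominate, which is the message just summarized.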

An Approach to Monetary Policy That
Could Reduce Uncertainty
The second message was more complex and
profound, and also more relevant for my purpose
here. Poole investigated what he called a “combination policy” involving both the interest rate
and the money supply, and he examined its properties in an economy-wide dynamic stochastic
model. The model, with the combination policy
inserted, could be written as a vector autoregression. Poole showed how to compute the steady-state stochastic distribution implied by the model.
He also showed how to find the optimal policy
to minimize the variance of real gross domestic
product (GDP) around the mean of this stochastic
steady-state distribution. The method involved
finding the homogeneous and particular parts of
the solution and then writing the endogenous
variable as an infinite weighted sum of lagged
shocks—what is now commonly called an
impulse response function.
The combination policy had key features of
active monetary policy rules in use today. The
policy involved the money supply (M), the interest
rate (r), and lagged values of real GDP (Y ). Poole
wrote it algebraically as

M = c1′ + c2′ r + lagged values of Y ,

where the coefficients c1′ and c2′ were determined
to minimize the variance of real GDP in the steady-state stochastic distribution. He showed that the
optimal policy yielded a smaller loss than the
fixed interest rate policy, the fixed money supply
policy, or a combination policy that ignored the
reactions to lagged real GDP.
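
To make the mechanics concrete, the following is a small numerical sketch in the same spirit, built on a stylized dynamic IS-LM model of my own construction; the parameter values, shock variances, and the rule M_t = c2*r_t + c3*y_{t-1} are illustrative assumptions, not the specification of the 1970 paper. For any candidate pair of coefficients the implied law of motion for output collapses to a first-order autoregression, so the steady-state variance can be computed directly and minimized over a grid.

import numpy as np

# Stylized model: y_t = -a*r_t + rho*y_{t-1} + u_t (IS) and money demand
# M_t = y_t - c*r_t + v_t, closed by the policy rule M_t = c2*r_t + c3*y_{t-1}.
a, c, rho = 1.0, 0.5, 0.6
sigma_u2, sigma_v2 = 1.0, 1.0      # variances of the IS and money-demand shocks

def steady_state_var(c2, c3):
    """Asymptotic Var(y) implied by the combination rule (c2, c3)."""
    k = a / (c2 + c)
    denom = 1.0 + k
    phi = (rho + k * c3) / denom    # output follows y_t = phi*y_{t-1} + eps_t
    if abs(phi) >= 1.0:
        return np.inf               # this rule leaves output unstable
    var_eps = (sigma_u2 + k**2 * sigma_v2) / denom**2
    return var_eps / (1.0 - phi**2)

c2_grid = np.linspace(0.0, 4.0, 41)
c3_grid = np.linspace(-1.0, 1.0, 41)

print("pure money peg (c2 = c3 = 0):    ", steady_state_var(0.0, 0.0))
print("near-pure interest rate peg:     ", steady_state_var(1e6, 0.0))
print("best rule ignoring lagged output:",
      min(steady_state_var(c2, 0.0) for c2 in c2_grid))
print("best combination rule:           ",
      min(steady_state_var(c2, c3) for c2 in c2_grid for c3 in c3_grid))

In this toy calibration the optimized combination rule yields a smaller output variance than either pure peg or the best rule that ignores lagged output, which is the ranking described above.
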
Note that, although the rule was active, there
was no discretion here. Once those parameters
were chosen, they would stay for all time. People
criticized Poole for this rule approach and argued
instead in favor of discretion. They said that
policymakers could see or forecast the shocks to
the LM curve and the IS curve and adjust the
policy instruments as they saw fit without having
to stick to any one policy rule. For example, I have
a vivid memory of discussing the Poole paper
with Franco Modigliani after I presented a paper
at MIT later in the decade. He insisted that there
was no reason to constrain policymakers the way
Poole did. There was still an enormous resistance
to policy rules, even the active sort, at this time.
However, although discretionary actions might
improve performance in a given situation, the
possibility of discretion, and especially its misuse,
could add to the uncertainty already in the markets. The advantage of Poole’s active policy rules
was that they were more predictable and could
therefore reduce uncertainty. The second lesson
from Poole’s 1970 paper was thus that policymaking based on rules would improve economic
performance by reducing uncertainty compared
with policymaking based on pure discretion.
This same basic stochastic dynamic modeling approach was applied again and again in the
1970s and 1980s, eventually to more complex
empirically estimated models with rational expectations and sticky prices. Optimal rules were
computed in these newer models. Over time the
resistance to active policy rules began to weaken.
Most surprising was that actual monetary policy
decisions became more predictable and could
even be described closely by policy rules. Most
rewarding was that the more predictable rule-like
behavior yielded improved policy performance.
And most interesting is that we can now look
back at this period of greater predictability and
learn from it.

RULES OF THUMB IN THE
PRIVATE SECTOR
An unanticipated advantage—at least from
the vantage point of 1970—of the more predictable
behavior by central banks has been the response
of the private sector. Recognizing that the central
bank’s interest rate settings are following more
regular rule-like responses to such variables as
inflation and real GDP, the private sector has taken
these responses into account in projecting future
variables and in developing their own rules of
thumb for making decisions. An important example is the formation of expectations of future
interest rates, which affect bond traders’ and
investors’ decisions and thereby influence long-term interest rates, as has been emphasized by
Poole in his more recent writings. I quote from a
paper he gave earlier this year (Poole, 2007, p. 6):
What our analysis missed a generation ago was
that the typical model with only one interest
rate could not possibly allow for stabilizing
market responses in long rates when the central
bank set the short rate. Of course, macro econometric models did have both short and long
rates, but the structure of the models did not
permit analysis of the sort I am discussing
because the typical term structure equation
made the long rate a distributed lag on the
short rate. The model’s short rate, in turn, was
determined by monetary policymakers setting
it directly or by the money market under a
policy determining money growth.
Once we allow expectations to uncouple
the current long rate from the current short
rate, the situation changes dramatically. The
market can respond to incoming information
in a stabilizing way without the central bank
having to respond. Long bond rates can change,
and change substantially, while the federal
funds rate target remains constant.

In this example, the private sector has adapted
to a particular policy rule in which the short-term
interest rate rises by a predictable amount when
inflation rises. Thus, if expectations of inflation
rise, the private sector will predict that the central
bank will raise short-term interest rates in the
future; traders will then bid down bond prices,
raising long-term interest rates, and thereby mitigating the inflationary impulse before the central
bank action is needed.
There are other examples where private sector
behavior has adapted to rule-like behavior of the
central bank. Consider foreign exchange markets.
Empirical studies show that when there is a surprise increase in inflation, the immediate reaction
in foreign exchange markets is an appreciation
of the currency. Yet conventional price theory
would predict the opposite, a negative correlation
between exchange rates and inflation, because
higher prices make goods at home relatively
expensive, requiring a depreciation of the currency to keep purchasing power from moving too
far away from parity. But the regular central bank
interest rate response to inflation explains the
empirical correlation. How? An increase in inflation implies that the central bank will raise the
interest rate, which makes the currency more
attractive, bidding up the exchange rate.
There are many other examples where individuals and institutions in the private sector adapt
to policy-induced correlations. In effect, they are
creating their own rule-like behavior, their own
rules of thumb, and we are probably unaware of
most of them. Indeed, the individuals who act on
them may not even know that they derive from
the rule-like behavior of policymakers. Of course,
it is not only the private sector in the United
States. Markets all over the world follow closely
what the Fed is likely to do.
And it is not only the private sector. Central
banks take account of the predictable behavior
of the other central banks and in particular the
behavior of the Federal Reserve, which matters
greatly for their own decisions. For example, the
recent June 2007 Monetary Policy Report of the
Norges Bank states that “It cannot be ruled out
that a wider interest rate differential will lead to
an appreciation of the krone. This may suggest a
gradualist approach in interest rate setting.” In
other words, actions by the Federal Reserve that
affect the interest rate differential will in turn
influence interest rates set by other central banks.
This effect can also occur automatically—
another rule of thumb—if model simulations
used to set interest rates at central banks assume,
as they usually do, that other central banks follow
such policy rules.
An implication of this development is that if
central banks depart from their regular responses,
then they run the risk of disrupting private sector
rules of thumb. Even if they explain the reason
for the irregular behavior as clearly as possible,
emphasizing that it is temporary, some individuals
or institutions may continue operating with the
old rules of thumb unaware that these rules have
anything to do with the monetary policy–induced
correlations.
For example, during the period from 2002 to
2005, the interest rate in the United States fell well
below levels that would have been predicted
from the behavior of the Federal Reserve during
most of the period during the Great Moderation.
Using modern time-series methods, Frank Smets
and Marek Jarociński (2008) showed in their
paper for this conference that there was such a
deviation, and they linked the deviation to the
boom and bust in housing prices and construction. In Taylor (2007), I argued that the resulting
acceleration of housing starts and housing prices,
as well as the low interest rates, may have upset
rules of thumb that mortgage originators were
using to assess the payment probabilities based
on various characteristics of the borrower. Their
programs are usually calibrated in a cross section
at a point in time. If housing prices start rising
rapidly, the cross section will show increased
payment probabilities, but the programs will miss
this time-series element. When housing prices
reverse, the models will break down. It would
have been very difficult to predict a breakdown
in the rules of thumb such as the mortgage underwriting programs, but if it had not been that rule
of thumb, it might have been another.
Another related example was the negligible
response of long-term interest rates when the
Federal Reserve raised short-term interest rates
in 2004 and 2005. This might be explained by
this same deviation. Investors may have felt that
the Fed had departed from the kind of rule that
formed the basis of the longer-term interest rate
responses of the kind discussed in the above
quote by Poole.
Two examples from international monetary
policy issues are also worth noting. Following
the Russian debt default and financial crisis of
1998 there was a global contagion that affected
emerging markets with little connection to Russia.
The contagion even reached the United States,
led to the Long Term Capital Management crisis,
and caused enough of a freeze-up in U.S. markets
that the Federal Reserve reduced the interest rate
by 75 basis points. In contrast, following a very
similar default and financial crisis in Argentina
in 2001, there was virtually no contagion. The
main difference between these two episodes in
my view is predictability. In the case of Russia,
the International Monetary Fund suddenly
removed financial support, only one month after
renewing it. This surprise disrupted the world’s
financial markets. In contrast, in the case of
Argentina, the International Monetary Fund
gradually reduced support and was as clear as it
possibly could be in its intentions. Hence, there
was little surprise. The default and currency crises
were discounted by the time they happened.
Another international example is the currency
intervention policy of the United States and the
other key currency countries. There has been no
intervention by the United States or Europe in
these markets since September 2000. And since
March 2004, Japan has not intervened. Moreover,
most policymakers in these countries have suggested a strong aversion to intervention in the
currency markets. In effect, compared with a
policy of frequent intervention, as in the 1980s
and 1990s, the currency policy has become much
more predictable. The assumption of zero intervention in most circumstances is a good one.
What has been the result? The behavior of the
major currencies has been less volatile and even
the volatility of volatility has come down.
It is difficult to prove causality in any of
these examples, and certainly more research is
needed. Our experience with different degrees of
predictability is increasing and strongly suggests
advantages of policy predictability and risks of
unpredictability.

Toward Greater Predictability
There have been great strides in improving
monetary policy predictability at the Federal
Reserve and other central banks in recent years, as
Bill Poole has documented and explained (Poole,
2003 and 2005a,b; Poole and Rasche, 2003). Can
we make monetary policy even more predictable?
One suggestion is to publish the Fed’s balance
sheet on a daily basis, or at least the Fed balances
that commercial banks hold at the Fed. This would
make it easier to interpret episodes where the
central bank decides to provide additional liquidity in the overnight money market, as on August
9 and 10 of this year. The available data on repos
do not provide the information that analysts
need to interpret these actions and to distinguish
them from monetary policy actions aimed at overall macroeconomic goals of price stability and
output stability.
Another suggestion would be to publish some
of the key assumptions used in formulating policy,
including potential GDP and/or the GDP gap, or
at least publish these with a shorter lag. This
would make it easier for the private sector to
assess the deviations from policy rules. In this
regard, it is interesting that Bill Poole’s (2006)
recent analysis of the Fed’s policy rule could not
go beyond 2001, because the data on the GDP gap
were not released beyond that date.
What about the Federal Reserve formally
announcing numerical inflation targets as other
central banks have done? I have suggested moving
slowly in this direction because a sudden change
could be misunderstood, and because policy has
worked well for two decades with a more informal
inflation target. A further lengthening of the inflation forecast horizon for the Monetary Policy
Report would be an example of a more gradual
change and would be a good step in my view.
I have been concerned that placing more
emphasis on a numerical inflation target could
take emphasis away from predictability in setting
the instruments. From the perspective of a policy
rule approach, publishing one part of the rule—
the inflation target—and not publishing other
parts—the reaction coefficients—would create an
asymmetry in a direction away from the regular
reactions of the instruments that I have stressed in
these remarks. Perhaps there is a way to prevent
creating such an asymmetry. For example, the
possibility of a joint announcement might be considered, perhaps both a target range for the inflation rate, from 1.5 to 2.5 percent, and a target
range for the reaction coefficient of the interest
rate to the inflation rate, from 1.5 to 2.5 percent,
but there are many other possibilities.

CONCLUSION
In these remarks I have tried to convince you
of the importance of being predictable in monetary
policy, building on Bill Poole’s paper written
nearly four decades ago and on more recent experience with different degrees of predictability in
practice. One of the key points, which needs
much more research, is how the private sector and
other public sector institutions develop rules of
thumb that are based, perhaps unknowingly, on
the systematic rule-like behavior of the monetary
authorities. These private sector rules of thumb
can improve the operation of the economy, but
they can be broken in unanticipated and disruptive ways if policy becomes less predictable even
for a short time and even if policymakers make
their very best efforts to explain why.

REFERENCES
Jarociński, Marek and Smets, Frank R. "House Prices
and the Stance of Monetary Policy." Federal Reserve Bank
of St. Louis Review, July/August 2008, 90(4),
pp. 339-65.
Poole, William. “Optimal Choice of Monetary Policy
Instruments in a Simple Stochastic Macro Model.”
Quarterly Journal of Economics, May 1970, 84(2),
pp. 197-216.
Poole, William. “Monetary Policy Rules?” Federal
Reserve Bank of St. Louis Review, March/April
1999, 81(2), pp. 3-12.
Poole, William. "Fed Transparency: How, Not Whether."
Federal Reserve Bank of St. Louis Review,
November/December 2003, 85(6), pp. 1-8.
Poole, William. “FOMC Transparency.” Federal
Reserve Bank of St. Louis Review, January/February
2005a, 87(1), pp. 1-9.
Poole, William. “How Predictable Is Fed Policy?”
Federal Reserve Bank of St. Louis Review,
November/December 2005b, 87(6), pp. 659-68.
Poole, William. “The Fed’s Monetary Policy Rule.”
Federal Reserve Bank of St. Louis Review, January/
February 2006, 88(1), pp. 1-12.
Poole, William. “Milton and Money Stock Control.”
Presented at the Milton Friedman luncheon, co-sponsored by the University of Missouri–Columbia
Department of Economics, the Economic and Policy
Analysis Research Center, and the Show-Me
Institute; Columbia, MO, July 31, 2007.

Poole, William and Rasche, Robert H. "The Impact of
Changes in FOMC Disclosure Practices on the
Transparency of Monetary Policy: Are Markets and
the FOMC Better 'Synched'?" Federal Reserve Bank
of St. Louis Review, January/February 2003, 85(1),
pp. 1-10.

Taylor, John B. "Housing and Monetary Policy."
Panel discussion at the Federal Reserve Bank of
Kansas City Symposium Housing, Housing
Finance, and Monetary Policy, Jackson Hole, WY,
September 1, 2007.

Monetary Policy Under
Uncertainty
Ben S. Bernanke

Bill Poole's career in the Federal
Reserve System spans two decades
separated by a quarter of a century.
From 1964 to 1974, Bill was an economist on the staff of the Board’s Division of
Research and Statistics. He then left to join the
economics faculty at Brown University, where
he stayed for nearly 25 years. Bill rejoined the
Fed in 1998 as president of the Federal Reserve
Bank of St. Louis, so he is now approaching the
completion of his second decade in the System.
As it happens, each of Bill’s two decades in
the System was a time of considerable research
and analysis on the issue of how economic uncertainty affects the making of monetary policy, a
topic on which Bill has written and spoken many
times. I would like to compare the state of knowledge on this topic during Bill’s first decade in
the System with what we have learned during
his most recent decade of service. The exercise
is interesting in its own right and has the added
benefit of giving me the opportunity to highlight
Bill’s seminal contributions in this line of research.

DEVELOPMENTS DURING THE
FIRST PERIOD: 1964-74
In 1964, when Bill began his first stint in the
Federal Reserve System, policymakers and
researchers were becoming increasingly confident
in the ability of monetary and fiscal policy to
smooth the business cycle. From the traditional
Keynesian perspective, which was the dominant
viewpoint of the time, monetary policy faced a
long-term tradeoff between inflation and unemployment that it could exploit to keep unemployment low over an indefinitely long period at an
acceptable cost in terms of inflation. Moreover,
improvements in econometric modeling and the
importation of optimal-control methods from
engineering were seen as having the potential to
tame the business cycle.
Of course, the prevailing optimism had its
dissenters, notably Milton Friedman. Friedman
believed that the inherent complexity of the
economy, the long and variable lags with which
monetary policy operates, and the political and
bureaucratic influences on central bank decisionmaking precluded policy from fine-tuning the
level of economic activity. Friedman advocated the
use of simple prescriptions for monetary policy—
such as the k percent money growth rule—which

Ben S. Bernanke is Chairman of the Board of Governors of the Federal Reserve System.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 410-15.

he felt would work reasonably well on average
while avoiding the pitfalls of attempting to fine-tune the economy in the face of pervasive uncertainty (Friedman, 1968).
Other economists were more optimistic than
Friedman about the potential benefits of activist
policies. Nevertheless, they recognized that the
fundamental economic uncertainties faced by
policymakers are a first-order problem and that
improving the conduct of policy would require
facing that problem head on. During this decade,
those researchers as well as sympathetic policymakers focused especially on three areas of economic uncertainty: the current state of the
economy, the structure of the economy (including
the transmission mechanism of monetary policy),
and the way in which private agents form expectations about future economic developments and
policy actions.
Uncertainty about the current state of the
economy is a chronic problem for policymakers.
At best, official data represent incomplete snapshots of various aspects of the economy, and even
then they may be released with a substantial lag
and be revised later. Apart from issues of measurement, policymakers face enormous challenges in
determining the sources of variation in the data.
For example, a given change in output could be
the result of a change in aggregate demand, in
aggregate supply, or in some combination of the
two.
As most of my listeners know, Bill Poole
tackled these issues in a landmark 1970 paper,
which examined how uncertainty about the state
of the economy affects the choice of the operating
instrument for monetary policy (Poole, 1970). In
the simplest version of his model, Bill assumed
that the central bank could choose to specify its
monetary policy actions in terms of a particular
level of a monetary aggregate or a particular value
of a short-term nominal interest rate. If the central bank has only partial information about disturbances to money demand and to aggregate
demand, Bill showed that the optimal choice of
policy instrument depends on the relative variances of the two types of shocks. In particular,
using the interest rate as the policy instrument is
the better choice when aggregate demand is relatively stable but money demand is unstable, with
money growth being the preferable policy instrument in the opposite case.
Bill was also a pioneer in formulating simple
feedback rules that established a middle ground
between the mechanical approach advocated by
Friedman and the highly complex prescriptions
of optimal-control methods. For example, Bill
wrote a Federal Reserve staff paper titled "Rules-of-Thumb for Guiding Monetary Policy" (Poole,
1971). Because his econometric analysis of the
available data indicated that money demand was
more stable than aggregate demand, Bill formulated a simple rule that adjusted the money growth
rate in response to the observed unemployment
rate. Bill was also practical in noting the pitfalls
of mechanical adherence to any particular policy
rule; in this study, for example, he emphasized
that the proposed rule was not intended “to be
followed to the last decimal place or as one that
is good for all time [but]…as a guide—or as a
benchmark—against which current policy may
be judged” (p. 152).
Uncertainty about the structure of the economy
also received attention during that decade. For
example, in his elegant 1967 paper, Bill Brainard
showed that uncertainty about the effect of policy
on the economy may imply that policy should
respond more cautiously to shocks than would
be the case if this uncertainty did not exist.
Brainard’s analysis has often been cited as providing a theoretical basis for the gradual adjustment
of policy rates of most central banks. Alan Blinder
has written that the Brainard result was “never
far from my mind when I occupied the Vice
Chairman’s office at the Federal Reserve. In my
view…a little stodginess at the central bank is
entirely appropriate” (Blinder, 1998, p. 12).
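
The flavor of Brainard's result can be stated compactly in stylized notation (mine, not Brainard's original). Suppose the policymaker sets an instrument u to move output y = b u + e toward a target y*, where the multiplier b is random with positive mean and variance sigma_b^2. Minimizing the expected squared deviation of y from y* gives

\[
u^{*} \;=\; \frac{E(b)\,y^{*}}{E(b)^{2} + \sigma_b^{2}},
\]

which is smaller in absolute value than the certainty-equivalent setting y*/E(b) whenever sigma_b^2 > 0: the larger the multiplier uncertainty, the more the optimal response is attenuated.
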
A key source of uncertainty became evident
in the late 1960s and 1970s as a result of highly
contentious debates about the formation of expectations by households and firms. Friedman (1968)
and Ned Phelps (1969) were the first to highlight
the central importance of expectations formation,
arguing that the private sector’s expectations
adjust in response to monetary policy and therefore preclude any long-run tradeoff between
unemployment and inflation. However, Friedman
and Phelps retained the view that monetary policy
could exert substantial effects on the real economy over the short to medium run. In contrast,
Robert Lucas and others reached more dramatic
conclusions, arguing that only unpredictable
movements in monetary policy can affect the
real economy and concluding that policy has no
capacity to smooth the business cycle (Lucas,
1972; Sargent and Wallace, 1975). Although these
studies highlighted the centrality of inflation
expectations for the analysis of monetary policy,
the profession did not succeed in reaching any
consensus about how those expectations evolve,
especially in an environment of ongoing structural
change.

DEVELOPMENTS DURING THE
SECOND PERIOD: 1998-2007
Research during the past 10 years has been
very fruitful in expanding the profession’s understanding of the implications of uncertainty for
the design and conduct of monetary policy.
On the issue of uncertainty about the state of
the economy, Bill’s work continues to provide
fundamental insights regarding the choice of
policy instrument. Money-demand relationships
were relatively stable through the 1950s and
1960s, but, in the wake of dramatic innovations
in banking and financial markets, short-term
money-demand relationships became less predictable, at least in the United States. As a result,
consistent with the policy implication of Bill’s
1970 model, the Federal Reserve (like most other
central banks) today uses the overnight interbank
rate as the principal operating target of monetary
policy. Bill’s research also raised the possibility
of specifying the operating target in other ways,
for example, as an index of monetary or financial
conditions; and it provided a framework for evaluating the usefulness of intermediate targets—
such as core inflation or the growth of broad
money—that are only indirectly controlled by
policy.
More generally, the task of assessing the current state of the economy remains a formidable
challenge. Indeed, our appreciation of that challenge has been enhanced by recent research using real-time data sets (a recent example is Faust and Wright, 2007). For example, Athanasios
Orphanides has shown that making such real-time
assessments of the sustainable levels of economic
activity and employment is considerably more
difficult than estimating those levels retrospectively. His 2002 study of U.S. monetary policy in
the 1970s shows how mismeasurement of the
sustainable level of economic activity can lead
to serious policy mistakes.
On a more positive note, economists have
made substantial progress over the past decade in
developing new econometric methods for summarizing the information about the current state
of the economy contained in a wide array of economic and financial market indicators (Svensson
and Woodford, 2003). Dynamic-factor models,
for example, provide a systematic approach to
extracting information from real-time data at very
high frequencies. These approaches have the
potential to usefully supplement more informal
observation and human judgment (Stock and
Watson, 2002; Bernanke and Boivin, 2003; and
Giannone, Reichlin, and Small, 2005).
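
As a bare-bones illustration of the factor-extraction idea (this is not the dynamic-factor machinery of the studies just cited, which also handles mixed frequencies, publication lags, and ragged-edge data), the sketch below simulates a panel of noisy indicators driven by one common factor and recovers that factor as the first principal component.

import numpy as np

rng = np.random.default_rng(2)
T, N = 240, 30                            # periods and number of indicators

# A persistent common factor and indicators that load on it with noise.
factor = np.zeros(T)
shocks = rng.normal(size=T)
for t in range(1, T):
    factor[t] = 0.8 * factor[t - 1] + shocks[t]
loadings = rng.normal(size=N)
panel = np.outer(factor, loadings) + rng.normal(size=(T, N))

# Estimate the factor as the first principal component of the standardized panel.
z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
estimate = z @ vt[0]

# A principal component's sign is arbitrary, so compare in absolute value.
print("correlation with the true factor:",
      round(abs(np.corrcoef(estimate, factor)[0, 1]), 3))
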
The past decade has also witnessed significant
progress in analyzing the policy implications of
uncertainty regarding the structure of the economy. New work addresses not only uncertainty
about the values of specific parameters in a given
model of the economy but also uncertainty about
which of several competing models provides the
best description of reality. Some research has
attacked those problems using Bayesian optimal-control methods (Brock, Durlauf, and West, 2003).
The approach requires the specification of an
explicit objective function as well as of the investigator’s prior probabilities over the set of plausible models and parameter values. The Bayesian
approach provides a useful benchmark for policy
in an environment of well-defined sources of
uncertainty about the structure of the economy,
and the resulting policy prescriptions give relatively greater weight to outcomes that have a
higher probability of being realized. In contrast,
other researchers, such as Lars Hansen and
Thomas Sargent (2007), have developed robust-control methods—adapted from the engineering literature—that are aimed at minimizing the consequences of worst-case scenarios, including those with only a low probability of being realized.
An important practical implication of all
this recent literature is that Brainard’s attenuation
principle may not always hold. For example, when
the degree of structural inertia in the inflation
process is uncertain, the optimal Bayesian policy
tends to involve a more pronounced response to
shocks than would be the case in the absence of
uncertainty (Söderström, 2002). The concern about
worst-case scenarios emphasized by the robust-control approach may likewise lead to amplification rather than attenuation in the response of
the optimal policy to shocks (Giannoni, 2002;
Onatski and Stock, 2002; and Tetlow and von zur
Muehlen, 2001). Indeed, intuition suggests that
stronger action by the central bank may be warranted to prevent particularly costly outcomes.
Although Bayesian and robust-control
methods provide insights into the nature of optimal policy, the corresponding policy recommendations can be complex and sensitive to the set of
economic models being considered. A promising
alternative approach—reminiscent of the work
that Bill Poole did in the 1960s—focuses on simple
policy rules, such as the one proposed by John
Taylor, and compares the performance of alternative rules across a range of possible models and
sets of parameter values (Levin, Wieland, and
Williams, 1999 and 2003). That approach is motivated by the notion that the perfect should not be
the enemy of the good; rather than trying to find
policies that are optimal in the context of specific
models, the central bank may be better served by
adopting simple and predictable policies that
produce reasonably good results in a variety of
circumstances.
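
A toy numerical version of that robustness exercise (entirely my own construction and far simpler than the cited studies) treats each candidate "model" as an AR(1) inflation process whose persistence and policy multiplier differ, scores a feedback coefficient g by the asymptotic inflation variance it delivers, and compares a rule tuned to one model with a simple rule chosen for its worst-case performance across both.

import numpy as np

# Each model: (rho, kappa) = inflation persistence and policy multiplier.
models = {"model A": (0.9, 0.5), "model B": (0.5, 1.5)}
sigma2 = 1.0                                   # variance of the inflation shock

def loss(g, rho, kappa):
    """Asymptotic Var(pi) when pi_t = (rho - kappa*g)*pi_{t-1} + e_t."""
    phi = rho - kappa * g
    return np.inf if abs(phi) >= 1.0 else sigma2 / (1.0 - phi**2)

def worst_case(g):
    return max(loss(g, rho, kappa) for rho, kappa in models.values())

grid = np.linspace(0.0, 2.0, 201)
g_tuned_to_A = 0.9 / 0.5                       # optimal if model A were the truth
g_robust = min(grid, key=worst_case)           # best worst-case loss on the grid

for name, (rho, kappa) in models.items():
    print(name,
          "| rule tuned to model A:", round(loss(g_tuned_to_A, rho, kappa), 2),
          "| robust simple rule:", round(loss(g_robust, rho, kappa), 2))

In this example the rule tuned to model A destabilizes inflation if model B turns out to be the truth, while the simple worst-case rule performs acceptably in both, which is the sense in which the perfect should not be made the enemy of the good.
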
Given the centrality of inflation expectations
for the design of monetary policy, a key development over the past decade has been the burgeoning
literature on the formation of these expectations
in the absence of full knowledge of the underlying
structure of the economy (see Bernanke, 2007, and the references therein). For example, considerations of how the public learns about the economy and the objectives of the central bank can
affect the form of the optimal monetary policy
(Gaspar, Smets, and Vestin, 2006; and Orphanides
and Williams, 2007). Furthermore, when the public is unsure about the central bank’s objectives,
even greater benefits may accompany achieving
a stable inflation rate, as doing so may help anchor
the public’s inflation expectations. These studies
also show why central bank communication is
a key component of monetary policy; in a world
of uncertainty, informing the public about the
central bank’s objectives, plans, and outlook can
affect behavior and macroeconomic outcomes
(Bernanke, 2004; and Orphanides and Williams,
2005).

CONCLUSION
Uncertainty—about the state of the economy,
the economy’s structure, and the inferences that
the public will draw from policy actions or economic developments—is a pervasive feature of
monetary policymaking. The contributions of
Bill Poole have helped refine our understanding
of how to conduct policy in an uncertain environment. Notably, we now appreciate that policy
decisions under uncertainty must take into
account a range of possible scenarios about the
state or structure of the economy, and those policy
decisions may look quite different from those that
would be optimal under certainty. For example,
policy actions may be attenuated or augmented
relative to the “no-uncertainty benchmark,”
depending on one’s judgments about the possible
outcomes and the costs associated with those outcomes. The fact that the public is uncertain about
and must learn about the economy and policy
provides a reason for the central bank to strive
for predictability and transparency, avoid overreacting to current economic information, and
recognize the challenges of making real-time
assessments of the sustainable level of real economic activity and employment. Most fundamentally, our discussion of the pervasive
uncertainty that we face as policymakers is a
powerful reminder of the need for humility
about our ability to forecast and manage the
future course of the economy.

REFERENCES
Bernanke, Ben S. “Fedspeak.” Presented at the
Meetings of the American Economic Association,
San Diego, January 3, 2004; www.federalreserve.gov/
boarddocs/speeches/2004/200401032/default.htm.
Bernanke, Ben S. “Inflation Expectations and Inflation
Forecasting.” Presented at the Monetary Economics
Workshop of the National Bureau of Economic
Research Summer Institute, Cambridge, MA, July 10,
2007; www.federalreserve.gov/newsevents/speech/
bernanke20070710a.htm.
Bernanke, Ben S. and Boivin, Jean. “Monetary Policy
in a Data-Rich Environment.” Journal of Monetary
Economics, April 2003, 50(3), pp. 525-46.
Blinder, Alan S. Central Banking in Theory and
Practice. Cambridge, MA: MIT Press, 1998.
Brainard, William C. “Uncertainty and the
Effectiveness of Policy.” American Economic
Review, May 1967, 57(2), pp. 411-25.
Brock, William A.; Durlauf, Steven N. and West,
Kenneth D. “Policy Analysis in Uncertain Economic
Environments.” Brookings Papers on Economic
Activity, 2003, 1, pp. 235-322.
Faust, Jon and Wright, Jonathan H. “Comparing
Greenbook and Reduced Form Forecasts Using a
Large Realtime Dataset.” Presented at the Federal
Reserve Bank of Philadelphia conference Real-Time Data Analysis and Methods in Economics,
April 19-20, 2007; www.phil.frb.org/econ/conf/
rtconference2007/papers/Paper-Wright.pdf.
Friedman, Milton. “The Role of Monetary Policy.”
American Economic Review, March 1968, 58(1),
pp. 1-17.
Gaspar, Vitor; Smets, Frank and Vestin, David.
“Adaptive Learning, Persistence, and Optimal
Monetary Policy.” Journal of the European Economic
Association, April-May 2006, 4(2/3), pp. 376-85.
Giannone, Domenico; Reichlin, Lucrezia and Small,
David. "Nowcasting GDP and Inflation: The Real-Time Informational Content of Macroeconomic Data
Releases." Finance and Economics Discussion Series
2005-42, Board of Governors of the Federal Reserve
System, October 2005; www.federalreserve.gov/
pubs/feds/2005.
Giannoni, Marc P. “Does Model Uncertainty Justify
Caution? Robust Optimal Monetary Policy in a
Forward-Looking Model.” Macroeconomic
Dynamics, February 2002, 6(1), pp. 111-44.
Hansen, Lars Peter and Sargent, Thomas J. Robustness.
Princeton, NJ: Princeton University Press, 2007.
Levin, Andrew; Wieland, Volker and Williams, John.
“Robustness of Simple Monetary Policy Rules Under
Model Uncertainty,” in John B. Taylor, ed., Monetary
Policy Rules. Chicago: University of Chicago Press,
1999, pp. 263-99.
Levin, Andrew; Wieland, Volker and Williams, John.
“The Performance of Forecast-Based Monetary
Policy Rules under Model Uncertainty.” American
Economic Review, June 2003, 93(3), pp. 622-45.
Lucas, Robert E. Jr. “Expectations and the Neutrality
of Money.” Journal of Economic Theory, April 1972,
4(2), pp. 103-24.
Onatski, Alexei and Stock, James H. “Robust Monetary
Policy under Model Uncertainty in a Small Model
of the U.S. Economy.” Macroeconomic Dynamics,
February 2002, 6(1), pp. 85-110.
Orphanides, Athanasios. “Monetary-Policy Rules and
the Great Inflation.” American Economic Review,
May 2002, 92(2), pp. 115-20.
Orphanides, Athanasios and Williams, John C.
“Inflation Scares and Forecast-Based Monetary
Policy.” Review of Economic Dynamics, April 2005,
8(2), pp. 498-527.
Orphanides, Athanasios and Williams, John C. “Robust
Monetary Policy with Imperfect Knowledge.”
Journal of Monetary Economics, July 2007, 54(5),
pp. 1406-35.
Phelps, Edmund S. “The New Microeconomics in
Inflation and Employment Theory.” American
Economic Review, May 1969, 59(2), pp. 147-60.


Poole, William. "Optimal Choice of Monetary Policy
Instruments in a Simple Stochastic Macro Model."
Quarterly Journal of Economics, May 1970, 84(2),
pp. 197-216.

Poole, William. "Rules-of-Thumb for Guiding
Monetary Policy," in Open Market Policies and
Operating Procedures: Staff Studies. Washington,
DC: Board of Governors of the Federal Reserve
System, 1971, pp. 135-89.

Sargent, Thomas J. and Wallace, Neil. "'Rational
Expectations,' the Optimal Monetary Instrument,
and the Optimal Money Supply Rule." Journal of
Political Economy, April 1975, 83(2), pp. 241-54.

Söderström, Ulf. "Monetary Policy with Uncertain
Parameters." Scandinavian Journal of Economics,
February 2002, 104(1), pp. 125-45.

Stock, James H. and Watson, Mark W. "Forecasting
Using Principal Components from a Large Number
of Predictors." Journal of the American Statistical
Association, December 2002, 97(460), pp. 1167-79.

Svensson, Lars E.O. and Woodford, Michael. "Indicator
Variables for Optimal Policy." Journal of Monetary
Economics, April 2003, 50(2), pp. 691-720.

Tetlow, Robert J. and von zur Muehlen, Peter. "Robust
Monetary Policy with Misspecified Models: Does
Model Uncertainty Always Call for Attenuated
Policy?" Journal of Economic Dynamics and Control,
June/July 2001, 25(6/7), pp. 911-49.

The Importance of Being
Predictable
William Poole

This has been an absolutely wonderful
occasion for me. I deeply appreciate all
those who have come: friends that I’ve
known from way, way back, newer
friends recently formed. And I am very gratified
that Ben Bernanke and John Taylor joined on
the panel. I especially want to thank, above all,
Bob Rasche and the Research Division here,
both for organizing and executing this event—
but even more than that for the support that I’ve
gotten and the intellectual excitement over my
almost 10 years here. We’ve really worked
together in a very collegial way. It’s going to be
hard to imagine being productive without having a staff like that behind me. They have been
coauthors, really—staff is really the wrong way
to put it—coauthors on the speeches, some of
which have been published in the Federal
Reserve Bank of St. Louis Review.
Well, nostalgia takes you only so far. And, so
I want to talk about business, if you will, going
back to some of the earlier literature. How we got
to where we are today does help to inform us
about some very important current issues. I was
fascinated—totally unexpected—that my obscure
1971 paper would become a centerpiece of some
of the discussion. It’s interesting to reflect on that
because the times were so different. When I was
working on that paper, the policy of the Federal
Reserve was sort of unspecified. It was calculated
meeting by meeting. And what struck me was that
there are (at least the way I looked at it with my
Chicago background) some powerful business

William Poole was the president of the Federal Reserve Bank of St. Louis at the time this conference was held.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 415-19.

cycle regularities—just the very crude sort of
thing that Friedman and Schwartz demonstrated,
the patterns of money growth and interest rates
over the business cycle. And so my idea was that
we’ve got to find a way to avoid making exactly
the same mistake over and over again. When you
go back and look at the business cycles in the
1920s and 50s and 60s, it just looked like the same
mistake over and over again. So there had to be
some way of formalizing something as a—call it
a rule of thumb—a baseline, and say we should
depart from some baseline behavior only if we
had some pretty good reason for doing so. Otherwise, we’re just going to be making the same mistake over and over again. And, in fact, we did
make the same mistake, in a business cycle sense,
several times more after that. So that was the
origin of that paper; it was nothing more complicated than that.
There was one other piece of it. In this era
there was tremendous disagreement between
those who viewed policy in the context of setting
a monetary aggregate (the Chicago background
that I had) and those who looked at policy entirely
through an interest rate filter. And, of course, the
origin of my 1970 paper was an attempt to make
sense of those different views. You could not
make sense of it in a deterministic model. It had
to have something to do (if there was anything
valid in this debate) with the uncertainty in the
model, the nature of the disturbances. And that
was the origin of my 1970 paper. And the origin
of this sort of combination money growth/interest
rate rule that was discussed earlier here at this
conference was really an effort to try to bridge
the gap between these two very different schools
of thought and how they approached monetary
policy. And obviously John Taylor did a much,
much better job with that later on.
Now, in the discussion of the Svensson and
Williams (2008) paper at this conference, which
I had not seen before, there was something that
sort of rubbed me the wrong way and I couldn’t
put my finger on it right away. I raised the issue
about the model’s assumption about central bank
behavior, the assumption of the state of knowledge in the private sector. And the answer was
the model assumes complete knowledge of the
central bank. As I reflect on that, that’s equivalent
to saying that the central bank has permanent
credibility—no one will ever doubt what the
central bank is going to do. Put another way, that
everyone knows exactly what the central bank is
going to do. And I just don’t believe that’s a valid
assumption. I think credibility has been very
costly to create among central banks around the
world, and I think it’s a terrible mistake to take it
for granted. Credibility is potentially very fragile;
indeed, one of the central things that we need to
pay attention to is how to maintain credibility.
And the way in which you maintain credibility
is a very important part of a rules-based monetary
policy. An important part of maintaining credibility is to say what you are going to do and then do
it. The central bank does what it said it would do
unless it has a very good explanation for why it
departs from what it said it was going to do.
Of course, we’ve had important institutional
developments here with central bank independence, and, really, I think we’ve strengthened
independence in the Federal Reserve although
the law hasn’t changed very much—strengthened
independence in a practical sense. And that’s
been very important. But we should always keep
in mind that the central bank is a political institution established by law or by treaty—by laws that
can be changed. But even more than that: John
Taylor’s served in the government, I’ve served at
the Council of Economic Advisers (CEA); any of
you who’ve been there—Murray Weidenbaum,
who recruited me to the CEA—anybody who’s
worked in the government knows that there are
all sorts of things that are done around the edges
of the law, behind the scenes, that are not exactly
in line with what the law might call for. And
there’s a natural view, which I think is correct, to
be suspicious because central banks in the past
have not always been immune from behavior that
is secret or around the edges of the law. So, to
maintain the confidence that people need to have
in the central bank, you need to do things with a
great deal of careful planning and you have to
maintain a very high level of integrity; you have
to have people there who can be trusted not to
be a part of the political process.

That problem is not going to go away. We live
in a vigorously democratic society where people
try to use government for various purposes that
are not always in the national interest. And central banks are inevitably going to remain part of
the political system, as they should. But we need
to maintain the highest possible integrity in order
to maintain confidence. And that’s what I think
bothers me about the Svensson and Williams
paper: It misses a critical component. To me, one
of the ways in which you could lose confidence is
if you started to run experiments. I can’t imagine
having to write a press statement or to give a
speech to explain why it was that we conducted
an experiment that had predictable (and I mean
predictable) consequences of a recession or mild
recession to the purpose of learning more about
some parameter. It’s hard to imagine anything
that we might do that would be more damaging
to long-run credibility.
I’ll describe one of the things that happened
to me when I came to St. Louis, talking a little bit
about my journey here, having been an academic
for most of my professional career. When the
St. Louis Fed’s board of directors recruited me,
John McDonnell was the chairman of the board;
he said, "Bill, you have to understand, you are not
going to be able to do any research in this job.”
Well, it hasn’t turned out that way. And, in fact, I
think that with my research productivity jointly
with the economists here (they’ve done all the
hard work), I’ve accomplished more in these 10
years than I had in the previous 20.
The research started as a consequence, really,
of dealing with issues that I needed to understand
as a part of doing my job. But I didn’t know where
I could turn in the journal literature with which
I was familiar to get any help with any of these
issues. One of the very first issues was this question: If I am going to give a speech, what am I going
to write, what am I going to talk about? And then
how to deal with press contacts and the Q&A
with press coverage and so forth.
So I started to think abstractly about the whole
process of central bank communication, and I
went back to what I would regard as the two first
principles that come out of the rational expectations literature. One is that the private sector needs
to know what the central bank is doing. And you
can’t have a good equilibrium if the private sector
doesn’t know what the central bank is doing. A
part of that requires that the central bank itself
knows what it is doing. That’s the place to start.
So, anyone who has been in a classroom, and
most of you here probably have, know that you
hone your own ideas and develop a great deal of
clarity when you are forced to actually stand up
and talk about them. And part of the effort to
understand on a more systematic basis what sort
of policy adjustments we should make comes
from pinning down the fundamental nature of
the policy rule. And John, particularly, of course,
has led the way on that.
So that’s part of the process, and the private
sector learns about what the central bank is doing
in good part just from observing what it is doing
and trying to put some system into that, which
you in principle can extract from what’s done
without any words on our part. But there are
certainly lots of cases where this signal extraction
might go a lot more smoothly and a lot more
quickly if the central bankers would actually talk
intelligently about what they are doing. Think
about the context of a simple learning model, for
example. Suppose that the central bank has been
operating on some value of a parameter and then
decides for whatever reason that it wants to have
a different value of that parameter. Well, it might
take someone with Jim Hamilton’s skills to generate an enormous number of observations to actually discover from central bank behavior that the
parameter has changed. And in the meantime,
that means that the private sector is operating
under a different understanding of that parameter
than the central bank is operating under. That
produces expectational problems in terms of the
equilibrium and efficiency. If we could explain
what we are doing, why this parameter has
changed, we ought to be able to move that equilibrium to the correct point much more quickly.
So that’s part of the task of central bank talk:
Explain what we're doing and why we're doing
it, to help promote a good equilibrium between
the central bank and the private sector—an equilibrium in which the meshing of knowledge is
really critical to the efficiency of the outcome.

But it seems to me that there is a second
principle that’s extremely important from the
rational expectations literature: The central bank
ought not to be purveyors of random disturbance.
We ought not to add random noise to the system
either in terms of the actions we take or in terms
of what we say.
So you’re standing up in front of an audience
and you’ve got these two things you’re struggling
with: first, trying to convey genuine information,
and second, trying not to say something that
causes a market disturbance that is decidedly
not helpful. Some of the press people might love
it, but it’s not what I ought to be doing. Now, when
I came, I had no professional guidance in any of
the economics literature about how to do this. I
knew what the basic principles were. But what
do you actually do when you are standing up in
front of an audience? I had no guidance whatsoever. Probably I didn’t read enough memoirs; I
don’t know. But I don’t think that people generally
talk about this kind of thing in their memoirs,
either.
So, I started to think a lot about the communications process, and I know that one approach
that some people take is that, they’re so worried
about the second problem, they give up on the
first and so they really don’t say much of anything. That didn’t seem to me to be satisfactory,
because I thought the first principle of trying to
produce a better understanding in the marketplace of what the central bank is doing was really
an important responsibility of my office.
Another issue is that behavior of the markets
is obviously driven by active and by-and-large
pretty well-informed market participants, not
primarily by Main Street. And a lot of what we
do out here in the Reserve Banks is to wander
around the Districts or, more broadly speaking,
to audiences of all sorts, with different backgrounds and degrees of expertise. One of the
communications challenges is to be able to give
a speech that says something to well-informed
people and at the same time doesn’t pass completely over the heads of people who are not so
well informed. Of course, that puts a lot of constraints on what you say, but also how you say it.
But why would we care about Main Street?
Well, one of the very important reasons is that

this is the business we’re in. The monetary policy
business that we’re in is designed to improve the
welfare, and maintain a high degree of welfare,
for all the citizens. For Main Street as well as Wall
Street. We need to talk to bankers and traders and
portfolio managers, but we also need to talk to
Main Street because these are our constituents.
At any moment in time, all the time, the interests
of various people in the markets are in conflict:
Some people are long, some people are short,
some people have short-term investments, some
long-term investments, some equity, some bonds,
and so forth. There are a lot of different interests,
and it is extremely important that we serve the
“general interest.” I think that what that means
is that we have broad macroeconomic objectives
that we can summarize quite well in talking about
the dual mandate: maintaining the stable purchasing power of the currency and reducing fluctuations in GDP and employment from equilibrium
paths. If we are successful with these explanations,
we will have done 99.9 percent of what we can
do and what we ought to do.
Another important reason for talking to Main
Street can be illustrated by a story from when I
was at the CEA. That was a difficult period, in the
early 1980s, and there was a lot of commentary
on the part of Congress and to some extent the
administration about Federal Reserve policy.
Knowing a lot about the Fed, I was trying to
explain to people that pushing the Fed was counterproductive from the point of view of the interests of the politicians themselves. I remember
that a senator, who often made comments about
the Fed, wanted lower interest rates. (By the
way, you may have seen the comment that Alan
Greenspan made, in one of his interviews, that
not once while he was in office did he ever get a
phone call or a letter from a politician recommending higher interest rates. Not once. There is
an asymmetry here.) So, a lot of the politicians
are not all that well informed about monetary
policy, and I remember going up to Capitol Hill
and talking to a very prominent senator and saying, “You have to understand that the Fed values
its independence and it is extremely important
that the Fed not appear to be responding to the
entreaties of politicians. And, therefore, if you


want interest rates lower, you will not get the
result that you want by blasting the Fed, because
the Fed can’t respond to that blast. It’s not in your
interest.” He said, “I understand that, but it plays
well on Main Street.” It was very simple. Very
simple. So, one of the reasons to talk to Main
Street is to help people understand why that position is wrong. I hope that people will develop a
tin ear to language like that that comes from all
sorts of different directions.
Another issue that I’ve been quite concerned
about during my time in St. Louis has been trade
issues. I have made a good number of speeches
where I’ve talked about trade and capital flows,
the importance of world markets, and trying to
resist—I don’t even like to use the word “protectionism,” because it is good to try to protect people—the kind of economic isolationism that many
of these policies encourage. So, we need to talk
to Main Street as well as the monetary experts,
and that makes the communications issues both
challenging and very, very interesting. Very
interesting.
I do want to say one other thing. John referred
to a lecture that I gave earlier this year. He did
not mention that this was an event on Milton
Friedman’s birthday. It was out at the University
of Missouri, and a point that I remember vividly
from those days in Chicago, and a point that I
think has tremendous importance today, is that
Milton always argued—and Brunner and Meltzer,
and others, but Milton sort of led this analysis—
that one of the great advantages of a monetary
aggregates rule is that it allows maximum scope
for the market to respond to disturbances and


move interest rates in a way that will be stabilizing. Built-in stability is very important. Of course,
there is a huge literature about the built-in stabilizers in the fiscal policy area. The current policy
stance has the advantages of high credibility, well-anchored inflation expectations, and the possibility of understanding, in a more formal way with several decoupled interest rates in the model, of allowing the market to do a great deal of the stabilization work. That has enormous advantages in
producing efficient results. It allows the Federal
Reserve at many critical times to sit back and
watch until the situation is clearer in terms of
the arriving evidence. And you can go through
lots of recent cases where there have been very
substantial fluctuations in long-term interest rates
that do an enormous amount of the stabilization
work for us. That provides great clarity in the
stance of policy and at the same time is a framework that produces a tremendous amount of
built-in stabilization.



Announcements and the Role of Policy Guidance
Carl E. Walsh
By providing guidance about future economic developments, central banks can affect private sector
expectations and decisions. This can improve welfare by reducing private sector forecast errors,
but it can also magnify the impact of noise in central bank forecasts. I employ a model of heterogeneous information to compare outcomes under opaque and transparent monetary policies. While
better central bank information is always welfare improving, more central bank information may
not be. (JEL E52, E58)
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 421-42.

Standard models used for monetary policy analysis typically assume that households and firms and the central bank share a common information set and economic model, yet actual policy decisions are
taken in an environment in which heterogeneous
information is the norm and many alternative
models coexist. The resulting heterogeneity in
views can play an important role in affecting
both policy choices and the monetary transmission process. Transparency in the conduct of
policy can help to reduce heterogeneous information. Inflation-targeting central banks, for example, make significant attempts to reduce uncertainty about policy objectives, such as through
the release of detailed inflation and output projections, to ensure the public shares central bank
information about future economic developments. By being transparent about their objectives and their outlook for the economy, central banks help provide the public with guidance about the future.
But providing guidance carries risks. As Poole
(2005, p. 6) has expressed it, “[F]or me the issue is
whether under normal and routine circumstances

forward guidance will convey information or
whether it will create additional uncertainty.”
Because any forecast released by the central
bank is subject to error, being more transparent
may simply lead the private sector to react to what
was, in retrospect, noise in the forecast. The possibility that the private sector may overreact to
central bank announcements does capture a concern expressed by some policymakers. For example, in discussing the release of Federal Open
Market Committee (FOMC) minutes, Janet Yellen
expressed the view that “Financial markets could
misinterpret and overreact to the minutes” (Yellen,
2005, p. 1).
In this paper, I explore the role of economic
transparency—specifically, transparency about
the central bank’s assessment of future economic
conditions—in altering the effectiveness of monetary policy. I do so in a framework in which central
bank projections may convey useful information
but may also introduce inefficient fluctuations
into the economy.
A focus on economic transparency seems
appropriate for understanding the issues facing
many central banks. The recent concerns about

Carl E. Walsh is a professor of economics at the University of California, Santa Cruz, and a visiting scholar at the Federal Reserve Bank of
San Francisco.



the implications of the subprime mortgage market
reflect, in part, private sector uncertainty about
the Fed’s view of the economic outlook and the
way the outlook for inflation and real economic
activity may be affected by financial market conditions. Throughout 2007, for example, many
financial market participants appeared to hold
more pessimistic views than the Federal Reserve
about future economic developments;1 and in
recent months, market participants have often
expected significant interest rate cuts, while some
members of the FOMC have emphasized concerns
about the outlook for inflation, suggesting they
saw less need for rate reductions. News reports
speculating on possible interest rate cuts by the
Fed or the European Central Bank focused very
little on uncertainty about central bank preferences but a great deal on the uncertainty about
the outlook for the economy. These reports reveal
heterogeneity among private forecasters and uncertainty about the Fed’s (or the European Central
Bank’s) outlook for the economy. And public
statements by central bankers were designed to
communicate their views on future economic
developments. Jean-Claude Trichet’s statement
that the markets “have gone progressively back
to normal” (Atkins, Mackenzie, and Davies, 2007,
p. 1) and Ben Bernanke’s (2007) comment that
housing remains a “significant drag” on the economy, both exemplify how central bankers signal
their assessment of economic conditions, and
this assessment is one factor that influences the
(heterogeneous) outlooks among members of the
private sector.
The uncertainty in financial markets in recent
months illustrates clearly the significant differences that can arise between the central bank and
private market participants. This is a classic example of heterogeneous information about the economy. Much of the debate has been focused on the
question of future interest rate cuts, but the underlying issues appear to be related to differing views
among private forecasters and between private
1 “Even as Wall Street analysts ratchet up their worries about a recession, Fed officials are far from convinced that a true downturn is likely” (Andrews, 2007). A more vivid example of disagreement was provided by CNBC commentator Jim Cramer, whose blast that the Fed is clueless about “how bad it is out there” was reportedly seen by more than a million viewers on YouTube.


forecasters and the Fed over the likely impact of
financial market disturbances on the real economy and the likelihood of a future recession.
The next section discusses the two goals of
transparency Bill Poole (2005) has stressed—
accountability and policy effectiveness. The third
section develops a model of asymmetric and heterogeneous economic information that can be
used to model the implications of transparency.
Two policy regimes are considered. In the first,
the public observes the policy instrument of the
central bank but the central bank provides no
further information to the public. In the second,
the central bank provides information on its outlook for future economic developments. The welfare implications of these regimes are discussed
in the fourth section. Within each regime, better
quality central bank information is always welfare
improving (the pro-transparency aspect of Morris
and Shin, 2002, emphasized by Svensson, 2006).
However, across regimes, more central bank
information has ambiguous effects.

THE GOALS OF TRANSPARENCY
Transparency requires asymmetric information, but the nature of this asymmetry can take
many forms. In fact, Geraats (2002) has classified
five types of transparency—political, procedural,
economic, policy, and operational. Briefly, these
correspond to central bank transparency about
objectives, the internal decisionmaking process,
forecasts and models, policy actions, and instrument setting and control errors. Each of these
dimensions of transparency is important and has
been studied extensively (see Geraats, 2002, for a
survey).
In recent years, central banks have become
more transparent along all these dimensions,
and levels of transparency that would have been
viewed as exceptional 20 years ago are today
accepted as best practice among modern central
banks.2 The trend toward independent central
2 See Eijffinger and Geraats (2006) and Dincer and Eichengreen (2007) for indices of central bank transparency. Cukierman (2006) discusses some of the factors that might place limits on how transparent central banks should (or can) be.


banks with explicit mandates assigned to them
and the widespread adoption of inflation targeting
have contributed greatly to political transparency.
The Bank of England is among the most procedurally transparent central banks, publishing
minutes and individual votes of its Monetary
Policy Committee discussions. Central banks,
such as the Federal Reserve, that were formerly
reluctant to communicate policy actions directly
now do so in a clear, timely, and direct manner. The most
transparent central banks, such as the Reserve
Bank of New Zealand and the Bank of Norway,
publish their projections for the policy interest
rate. The use of a short-term interest rate as the
policy instrument has greatly enhanced operational transparency. But although most central
banks today are transparent about their policy
stance and operational procedures—something
hard to avoid when the policy instrument is a
short-term market interest rate—there is much
greater variation in the extent to which central
banks are transparent about their decisionmaking
process, their internal forecasts, and their policy
objectives.
But what is the point of being transparent?
As noted earlier, Poole (2006) has articulated two
goals of transparency: to meet the Fed’s “responsibility to be politically accountable” and “to
make monetary policy more effective.” The next
two subsections discuss each of these goals.

Transparency and Accountability
The role transparency plays in supporting
accountability can differ depending on whether
the ultimate objectives of monetary policy are
observable or unobservable. Consider first the
case in which the objectives of monetary policy
are, ex post, clearly measurable and observable.
For concreteness, assume inflation is the only
objective of the central bank and there is agreement on the appropriate measure of inflation that
the central bank should control. In this environment, it is in principle straightforward to ensure
accountability. Observing the ex post rate of
inflation would seem to provide a simple means
for judging the performance of the central bank.
However, even under the conditions specified (a
single measurable objective), the ex post realization of inflation is not a sufficient performance
measure. The reason is that inflation is not directly
controllable—even under an optimal policy
(where the central bank is doing exactly what it
should be doing), the realized inflation rate can
differ from the desired value. This difference may
be small, but as long as there is any random variation that is beyond the ability of the central bank
to eliminate, public accountability based solely
on inflation outcomes will punish some good
central bankers and reward some lucky ones.
Transparency can help promote accountability
by allowing the public to base its evaluation of
the central bank not just on observed inflation
but on the information that was available to the
central bank when it had to make its policy decision. Having access to internal bank forecasts, for
example, allows outsiders to evaluate the decisions made by the central bank. This can mitigate
some of the problems associated with evaluations
based solely on realized inflation. Having access
to the information on which decisions were based
helps remove the influence of random uncontrollable events that affect inflation and therefore
supports a better system of accountability.3
In general, however, policy objectives are not
directly observable, and they may even be inherently unmeasurable. Certainly, recent theoretical
models, which have emphasized the use of the
welfare of the representative agent as the appropriate objective of policy, have defined optimal
policy in terms of unmeasurable objectives. It is
not clear that we could reach agreement on the
correct way to measure welfare, as that depends
on the specific model we believe characterizes
the economy, even if we could agree on how to
define welfare. It certainly is not observable.
Transparency can be especially critical when
objectives are unobserved. Assessing, or holding
accountable, an economic agent when objectives
are unobservable is not a situation unique to
monetary policy and central banks. Education is
perhaps the most prominent field in which public
3 As Tim Harford (2007, Part 2, p. 3) pointed out in a recent “Dear Economist” column in the Financial Times, it might seem sensible for a company to judge its ice cream sales force on total sales, but having information about the weather allows for a better assessment of the contribution of the sales team to actual sales.


policy must deal with this situation; the objectives
are high-quality education and teaching, but there
exists wide disagreement over how to define and
measure these qualities.
Because social welfare does depend on inflation and inflation can be observed, one might use
inflation as a type of performance measure, holding the central bank accountable for achieving a
low and stable inflation rate. Inflation targeting
can be thought of as defining a performance measure for the central bank. The critical issue in
choosing any performance measure, however, is
how powerful one wants to make the incentives.
If accountability is based strictly on realized inflation and the consequences of missing the target
are large, then the central bank will naturally
focus on achieving the target, even if this means
sacrificing other, more difficult to measure, aspects
of social welfare. The concern that inflation targeting produces too much of a focus on inflation control is at the heart of most criticisms of
inflation targeting in the United States.
But this is where transparency becomes particularly important. Greater transparency can
lessen the need to rely on a single easily measured
performance indicator. When there is greater
transparency, and the public is able to assess the
same information the central bank has used to set
policy, it is no longer necessary to base central
bank accountability on inflation outcomes only
(Walsh, 1999).

Transparency and the Effectiveness of
Monetary Policy
Poole’s second goal of transparency, promoting
policy effectiveness, requires that private sector
decisions be influenced, and influenced systematically, by the information central banks provide.
With the development of New Keynesian models
and their emphasis on the importance of forward-looking behavior, managing expectations to
improve policy effectiveness has taken on a new
importance. Woodford (2005) has gone so far as
to state that “not only do expectations about policy matter, but, at least under current conditions,
very little else matters.”4

The intuition for Woodford’s statement is
straightforward. Policymakers control directly
only a short-term interest rate. Yet rational agents
are forward looking and so base their spending
and pricing decisions on their assessment of future
interest rates, not just current rates. The recognition that expectations matter is not confined to
academics; a recent article in the Financial Times
(Guha, 2007) states that “What really matters,
both for the markets and the economy, is not the
current policy rate but the expected path of future
rates.”
Transparency and its relationship to policy
effectiveness played a key role in the large literature that focused on the average inflation rate
bias that could arise under optimal discretionary
policy. By and large, this literature emphasized
political and operational transparency, and it
employed models in which policy surprises were
the source of the real effects of monetary policy.
Geraats (2002) provides an excellent survey of
the literature.
In these models, the central bank preferences
were generally treated as stochastic and unknown.
The policy instrument was also taken to be
observed with error or subject to a control error.
For example, the central bank might control nonborrowed reserves, but this allowed only imperfect
control of the money supply.5 Observing money
growth would not provide enough information
for the public to disentangle the effects of control
errors from shifts in central bank preferences.
Thus, there was opaqueness about political objectives and operational implementation. Transparency was typically modeled as a reduction in the
noise in the signal on the policy instrument. The
optimal degree of transparency ensured the public
would learn quickly when the central bank preferences shifted, but still left open the possibility
that the bank could create a surprise if one was
needed to aid stability. Cukierman and Meltzer
(1986) showed that the central bank may prefer
to adopt a less efficient operating procedure than
4 Italics in the original.

5 See, for example, Cukierman and Meltzer (1986) and Faust and Svensson (2002).

is technically feasible (i.e., not reduce the control
error variance to its minimum possible level).6
As emphasized in recent discussions of
transparency, however, New Keynesian models
imply that it is predictable monetary policies, not
surprises, that are most effective in achieving
policy goals. In such an environment, transparency, rather than reducing the efficacy of policy,
can actually increase it. Central bank announcements about future policy actions, or about future
economic developments, can affect private sector
expectations of future interest rates, inflation, and
economic activity. With spending and pricing
decisions dependent on these expectations, using
announcements to influence expectations gives
the central bank an additional policy instrument.
As such, it serves to make policy more effective.
The argument that transparency can increase the
effectiveness of monetary policy is certainly more
consistent with the modern practice of central
banks, which has been uniformly to move in the
direction of greater transparency.
But providing information to the public may
have potential costs. These costs are associated
with the conditional nature of any forecast. Some
economists have worried that the public will not
understand the distinction between a conditional
and an unconditional forecast.7 Particularly
because reputation is important, deviating from
a previously announced policy path may be interpreted as a deviation from a commitment equilibrium rather than as an appropriate response based
on new information. If a central bank fails to raise
interest rates after signaling that it planned to, the
private sector may believe the bank has become
less concerned about inflation, causing inflation
expectations to rise. Financial market participants
may underestimate the conditionality of the
announced rate path and so view deviations as
introducing unwarranted uncertainty into financial markets. These factors may make the central
6 See also Faust and Svensson (2002), who show that, when the choice of transparency is made under commitment, patient central banks with small inflation biases will prefer minimum transparency. They argue that this result might account for the (then) relatively low degree of transparency that characterized the U.S. Federal Reserve System.

7 Goodhart (2006).

bank reluctant to adjust rates, producing a lock-in
effect that would reduce flexibility and limit
policy effectiveness.
Even when the public understands the conditional nature of the guidance provided by the
central bank, announcements may introduce new
sources of volatility. The influential paper by
Morris and Shin (2002) has highlighted one channel through which central bank announcements
may have a detrimental effect. Unlike standard
models that assume all private agents share the
same information, Morris and Shin focus on the
more realistic case in which private agents have
individual, heterogeneous sources of information
and must attempt to forecast what others are
expecting.8 Morris and Shin have argued that
there can be a cost to providing more-accurate
public information; agents may overreact to public
information, making the economy more sensitive
to any forecast errors in the public information.
Subsequent research (e.g., Hellwig, 2004, and
Svensson, 2006) has suggested that the Morris-Shin result is not a general one and that better,
more accurate, central bank information is welfare
improving. However, just as the earlier literature
on transparency employed models at odds with
current policy frameworks (only surprises mattered, the money supply was the instrument),
the Morris-Shin analysis is conducted within a
framework that fails to capture important aspects
of actual monetary policy. For example, the issue
facing most central banks is not whether to provide
more-accurate forecasts. Instead, the issue is
whether or not to provide more information by,
for example, announcing forecasts. And even in
the absence of explicit announcements or guidance, central banks already provide information
through the setting of the policy instrument. The
impact of a change in the policy instrument will
depend, in part, on the information that it conveys
about the central bank’s view of the economy.
The work by Morris and Shin has been
extended by Amato and Shin (2003), who cast
the Morris-Shin analysis in a more standard macro
model. In their model, the central bank has perfect information about the underlying shocks.

8 Woodford (2003) has investigated the role of higher-order expectations in inducing persistent adjustments to monetary shocks in the Lucas-Phelps islands model. See also Hellwig (2002).
This ignores the uncertainty policymakers themselves face in assessing the state of the economy.
Nor do Amato and Shin allow the private sector
to use observations on the policy instrument to
draw inferences about central bank information.
They also assume one-period price setting and represent monetary policy by a price level–targeting
rule. In Hellwig (2004), prices are flexible and
policy is given by an exogenous stochastic supply
of money; private and public information consists
of signals on the nominal quantity of money.
The potential costs and benefits of releasing
central bank forecasts have also been analyzed by
Geraats (2005). However, Geraats assumes agents
do not observe the bank’s policy instrument prior
to forming expectations and employs a traditional
Lucas supply function. Her focus is on reputational equilibria in a two-period model with a
stochastic inflation target. Thus, the model and the
issues addressed differ from the focus on the role
of information in a Morris-Shin-like environment.
Rudebusch and Williams (2006) and
Gosselin, Lotz, and Wyplosz (forthcoming) focus
specifically on the provision of future interest rate
projections. Rudebusch and Williams explore
the role of interest rate projections in a model of
political transparency—the asymmetry of information pertains to policy preferences and the
central bank inflation target. Transparency is
modeled as reducing noise in central bank projections. In contrast to the model I develop in the
next section, Rudebusch and Williams incorporate
learning and find that the public’s ability to learn
and welfare increase when interest rate projections are provided.
Gosselin, Lotz, and Wyplosz (forthcoming)
adopt a quite different approach and focus on
what they characterize as creative opacity. In their
model, the private sector learns from the information released by the central bank, but the central
bank also learns about private sector information
by observing long-term interest rates. By providing its projection for the short-term interest rate,
the central bank is able to recover private sector
information from the long-term rate. This aligns
expectations but may require the central bank to
distort its current interest rate setting to achieve
426

J U LY / A U G U S T

2008

the desired long-term rate. If central bank information is poor, it may be better to remain opaque.
Although the role of central bank learning is a
critical one, I ignore it in the model in the next
section in order to focus on the way inflation and
output are affected by central bank announcements.
Thus, several questions remain unresolved
concerning the role of transparency in an environment in which agents have heterogeneous information and central bank actions and announcements
are commonly available. Specifically, how does
the information conveyed by the central bank
instrument affect the central bank’s incentives
and alter the effectiveness of policy?9 What is the
effect of more information as opposed to better
information? And are concerns about the added
uncertainty of greater transparency warranted?
These questions are addressed in the model in
the next section.

WELFARE EFFECTS OF OPAQUENESS
AND TRANSPARENCY
To investigate the role of economic transparency, I employ a simple model motivated by New
Keynesian models based on Calvo-type pricing
adjustment by monopolistic firms and by Morris
and Shin’s (2002) demonstration of the role heterogeneous information can play.10 Like Gosselin, Lotz,
and Wyplosz (forthcoming), I assume the central
bank’s preferences are known. Unlike their model,
however, I incorporate the common-knowledge
effect central to the Morris and Shin model. However, I focus on how the private sector learns from
information provided by the central bank and
ignore the reverse inference, where the central
bank learns from private sector information, which
is key in the Gosselin, Lotz, and Wyplosz model.
The basic model is similar to the one employed
in Walsh (2007a,b).

9 In Walsh (2007b), I show that this incentive effect under discretion can make it socially optimal to appoint a Rogoff-conservative central banker, that is, a central banker who places less weight on output-gap stabilization than society does.

10 As noted earlier, in the basic Morris-Shin model, Svensson (2006) shows that for almost all parameter values, better central bank information is welfare improving.

In these earlier papers, however, only demand and cost shocks were present,
so it was necessary to make just a single projection
(of inflation or the output gap) to fully reveal the
central bank information (because the public also
observed the policy instrument). The primary
focus was also on partial transparency in the
sense of Cornand and Heinemann (2004). The chief
contributions of the present paper are to enrich
the information structure, to account fully for the
welfare costs of relative price dispersion created
by heterogeneous information, and to assess
transparency in terms of both quantity (the role
of providing more information) and quality (the
effect of better information).
Firms receive private signals on the fundamental shocks affecting the economy. Each period,
a fraction of firms adjust their prices. In doing so,
they are concerned with their relative price and
so must attempt to forecast what other price-adjusting firms are doing. But this requires the
individual firm to predict what other firms are
predicting about the shocks hitting the economy.
Hence, higher-order expectations will matter, as
in Morris and Shin (2002).
The central bank, like individual firms, is
assumed to possess potentially noisy information
on the economic outlook. I consider two policy
regimes. In the first, the opaque regime, denoted
by superscript o, the central bank makes no
announcements. However, even in this regime,
the central bank reveals something about its outlook for the economy when it sets its policy instrument. In the absence of other information, the
private sector forms expectations by combining
the observation on the instrument with their own
private information. A rise in the policy interest
rate, for example, will be interpreted partially as
a central bank attempt to offset a projected positive demand shock and partially as an attempt to
contract real output to offset a positive cost shock.
When deciding on its policy, the central bank
needs to take into account how the public will
interpret its actions because the instrument conveys information.
The second regime, denoted by superscript f,
corresponds to full transparency. In this regime,
the central bank releases its projections on future
economic developments. Because it is on this

information that the central bank bases its policy
decision, the actual setting of the instrument conveys no additional information. The benefits of
this regime are that private sector forecasts are
improved and, because there is more common
information across firms, relative price dispersion is reduced. The potential cost is that private
expectations react to what may turn out ex post
to be central bank forecast errors.
While I assume the central bank operates in
a discretionary manner in setting its policy instrument, I also assume it can commit to a policy
regime (opaque or transparent).

The Basic Model
The underlying model of price adjustment is
based on Calvo, combined with the timing assumptions of Christiano, Eichenbaum, and Evans (2005)
and the addition of firm-specific information.
The Christiano, Eichenbaum, and Evans timing
implies that firms who adjust their price for
period t do so based on t–1 information. Expressed
alternatively, firms in period t make decisions
about their prices for period t +1. Because information differs across firms, price-setting firms
will not all set the same price as in the standard
common-information framework that is employed
in most models. In addition, because firms care
about their relative price, they must forecast the
aggregate t +1 price level when they set their
individual price for that period. This also differs
from standard specifications in which firms are
assumed to know the aggregate equilibrium price
level when they set their price level.
Three types of shocks are considered: (i) cost
shocks that are assumed to represent inefficient
volatility in real marginal costs; (ii) aggregate
demand shocks; and (iii) shocks to the gap between
the economy’s flexible-price equilibrium level of
output and its efficient level of output. The last
one will be referred to as a welfare-gap shock.
The model differs from standard New Keynesian
models in that the same information is not commonly available to all firms and firms must set
prices before observing the current realizations
of shocks.
The basic timing is as follows:

(i) At the end of period t, the central bank forms projections about t+1 economic conditions and sets its policy instrument, θ_t.
(ii) Firms observe π_t, x_t, and θ_t as well as individual-specific signals about t+1 shocks. Firms may also observe announcements made by the central bank.
(iii) Those firms that can adjust their price set prices for t+1.
(iv) Period t+1 shocks occur and π_{t+1} and x_{t+1} are realized.
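A schematic sketch of this within-period sequence, written in Python, may help fix the timing; the rules and signal structure used here are placeholders for illustration only and are not the model’s equilibrium objects:

import numpy as np

rng = np.random.default_rng(1)

def one_period(theta_rule, price_rule):
    """Schematic timing for one period t; the two rules are illustrative stand-ins."""
    # (i) End of period t: the central bank receives noisy signals about t+1 shocks
    #     and sets its instrument theta_t.
    cb_signals = rng.normal(size=3)            # signals on (cost, demand, welfare-gap) shocks
    theta_t = theta_rule(cb_signals)
    # (ii) Firms observe theta_t and receive their own private signals
    #      (pi_t and x_t are omitted from this sketch).
    firm_signals = cb_signals + rng.normal(size=3)
    # (iii) Adjusting firms set prices for t+1 before the shocks are realized.
    p_star = price_rule(firm_signals, theta_t)
    # (iv) Period t+1 shocks occur; inflation and the output gap are then realized.
    shocks = rng.normal(size=3)
    return theta_t, p_star, shocks

theta, price, shocks = one_period(lambda s: -s[1], lambda s, th: 0.1 * th + 0.05 * s[0])
print(theta, price, shocks)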
A randomly chosen fraction 1 – ω of firms
optimally set their price for period t +1. If β is the
discount factor (see Walsh, 2007b), one can show
that

(1)   π^*_{j,t+1} = (1 − ω) E^j_t π̄^*_{t+1} + (1 − ωβ)κ E^j_t x_{t+1} + (1 − ωβ) E^j_t e^s_{t+1} + [ωβ/(1 − ω)] E^j_t π_{t+2},

where π^*_{j,t+1} is the log price firm j sets for period t+1 relative to the period t average log price level (i.e., p^*_{j,t+1} − p_t); E^j_t π̄^*_{t+1} is firm j’s expectation about the average π^*_{i,t+1} being set by other adjusting firms; E^j_t x_{t+1} is firm j’s expectation about the output gap in t+1; e^s_{t+1} is the aggregate, common cost shock; and E^j_t π_{t+2} is firm j’s expectation about future inflation. For simplicity, I assume (1) is
linearized around a zero-inflation steady state.
To keep the model simple, I represent the
demand side of the model in a very stylized,
reduced-form manner. Monetary policy is represented by the central bank’s choice of θt and by
any announcements the central bank might make.
I assume θt is observed at the start of the period
so that any firm that sets its price in period t can
condition its choice on the central bank’s policy
action. The output gap is then equal to
(2)   x_{t+1} = θ_t + e^v_{t+1},

where evt+1 is a demand shock. Although I will
call θt the central bank instrument, it essentially
represents the central bank’s intended output gap.
Information. As noted, there are three fundamental disturbances in the model: e^s_t represents cost factors that, for a given output gap and expectations of future inflation, generate inefficient inflation fluctuations; e^v_t the aggregate demand disturbance; and e^u_t a shock to the gap between the flexible-price output gap and the efficient output gap. I assume each is serially and mutually uncorrelated.
Firms must set their prices and the central
bank must set its policy instrument before learning the actual realizations of the aggregate shocks.
Firm j’s idiosyncratic information, e^i_{j,t+1} for i = s, v, u, is related to the aggregate shock according to

e^i_{j,t+1} = e^i_{t+1} + φ^i_{j,t+1},   i = s, v, u.

The φ^i_{j,t+1} terms are identically and independently distributed across firms and time. These signals are private in that they are unobserved by other agents. For convenience, each φ^i_{j,t+1} will be referred to as a noise term, even though φ^s_{j,t+1} is actually the idiosyncratic component of the firm’s cost shock. All stochastic variables are assumed to be normally distributed. Define the signal-to-noise ratio γ^i_j = σ^2_i/(σ^2_i + σ^2_{j,i}), where σ^2_i is the variance of e^i and σ^2_{j,i} is the variance of φ^i_j. Let Ω_{j,t+1} denote the vector of private signals received by firm j, and let Ω_{t+1} = ∫Ω_{j,t+1} be the information aggregated across firms.
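To make the signal-extraction weight concrete, here is a minimal numerical sketch (my own illustration, with arbitrary variance choices rather than the paper’s calibration) of how a firm’s conditional forecast of a shock shrinks its private signal by the signal-to-noise ratio γ^i_j:

import numpy as np

# Illustrative variances (not taken from the paper's calibration).
sigma2_shock = 1.0     # variance of the aggregate shock e^i
sigma2_noise = 1.0     # variance of the firm-specific noise phi^i_j
gamma = sigma2_shock / (sigma2_shock + sigma2_noise)    # signal-to-noise ratio

rng = np.random.default_rng(0)
e = rng.normal(scale=np.sqrt(sigma2_shock))             # realized aggregate shock
signal = e + rng.normal(scale=np.sqrt(sigma2_noise))    # firm j's private signal

# With normally distributed shock and noise, the conditional expectation of the
# shock given the signal is simply gamma times the signal.
forecast = gamma * signal
print(f"gamma = {gamma:.2f}, signal = {signal:.2f}, forecast of the shock = {forecast:.2f}")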
The central bank combines its information,
models, and judgment to obtain forecasts of future
economic disturbances. It will be convenient to
represent this information, in parallel with the
treatment of firm information, as signals on the
three aggregate disturbances:

e^i_{cb,t} = e^i_{t+1} + φ^i_{cb,t},   i = s, v, u.

The noise terms φ^i_{cb} are assumed to be independently distributed and to be independent of φ^i_j for all i, j, and t. Define γ^i_{cb} = σ^2_i/(σ^2_i + σ^2_{i,cb}), where σ^2_{i,cb} is the variance of φ^i_{cb}. Let Ω_{cb,t+1} denote the innovation to the central bank information set. Let Z′_t = [e^s_t  e^v_t  e^u_t]. Then E^cb_t Z_{t+1} = Γ_{cb} Ω_{cb,t+1}, where E^cb denotes expectations conditional on central bank information and

Γ_{cb} = diag(γ^s_{cb}, γ^v_{cb}, γ^u_{cb}).
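In the same spirit, the central bank’s forecast is just its signal vector shrunk element by element by Γ_{cb}; a small numerical sketch (the signal realizations are invented for illustration):

import numpy as np

# Diagonal matrix of central bank signal-to-noise ratios for the
# (cost, demand, welfare-gap) shocks; all set to 0.9 here for illustration.
gamma_cb = np.diag([0.9, 0.9, 0.9])

# An invented realization of the central bank's signal vector Omega_cb,t+1.
omega_cb = np.array([0.5, -1.0, 0.2])

# E_t^cb Z_{t+1} = Gamma_cb @ Omega_cb,t+1: each forecast is the corresponding
# signal shrunk toward zero by its signal-to-noise ratio.
forecast = gamma_cb @ omega_cb
print(forecast)    # approximately [0.45, -0.9, 0.18]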


The central bank’s objective is to minimize,
under discretion, a standard quadratic loss function that depends on inflation variability and output-gap variability. Specifically, loss is given by

(3)   L^cb_t = E^cb_t Σ_{i=0}^{∞} β^i [π^2_{t+i} + λ_x(x_{t+i} − e^u_{t+i})^2],

where e^u_t is equal to stochastic variation in the gap
between the flexible-price output gap (x) and the
welfare-maximizing output gap.
With staggered price adjustment, New
Keynesian models imply that the welfare costs
of inflation variability arise from the dispersion
of relative prices it generates (Rotemberg and
Woodford, 1997, Woodford, 2003a). Relative price
dispersion can arise from inflation (because of
staggered price adjustment) and because of heterogeneous information across firms. It can be shown
(see the appendix) that the variance of relative
prices across firms depends on π t2 and on the
noise in the signals received by individual firms.
Thus, social loss is given by

(4)   L^s_t = E_t Σ_{i=0}^{∞} β^i [π^2_{t+i} + λ_I z^2_{t+i} + λ_x(x_{t+i} − e^u_{t+i})^2],

where z^2_t is relative price dispersion arising from heterogeneous information across individual firms, with the appropriate weight on this source of loss relative to π^2_t given by

λ_I = (1 − ω)^2/ω.
ω

The loss associated with heterogeneous
information can be reduced if the central bank
provides more information. However, this loss is
not affected by the period-by-period policy choice
the central bank makes in setting its instrument
(conditional on the policy regime that defines the
type of announcements the central bank makes).
Thus, under discretion, the central bank takes as
given the term zt2 in (4), which is due to heterogeneous information, and minimizes (3).
We can now evaluate equilibrium under each
policy regime.

Equilibrium Under the Opaque Regime

In regime o, firms observe their own private signals and the central bank instrument. In regime f, the central bank provides its forecasts (equivalently, its signals) directly to the public.11
In the absence of central bank announcements,
firm j ’s new information is given by its private
signals and the policy instrument. The new
information available to firm j consists of Ωj,t+1
and θt . Assume beliefs about monetary policy are

θ_t = δ^o E^cb_t ψ_{t+1} = δ^o Γ_{cb} Ω_{cb,t+1},

where δ^o is 1 × 3. These beliefs are consistent with a rational expectations equilibrium under discretionary monetary policy.
Define Θ^o = [Θ^o_1  Θ^o_2] such that Θ^o_1 is 3 × 3 and Θ^o_2 is 3 × 1, where the ij-th element of Θ^o_1 gives the effect of the firm’s j-th signal on its forecast of the i-th shock. Similarly, the i-th element of Θ^o_2 is the effect of θ_t on the firm’s forecast of the i-th shock. Firm j’s expectation of Z_{t+1} is

E^j_t Z_{t+1} = Θ^o_1 Ω_{j,t+1} + Θ^o_2 θ_t.

Because the firm’s signals on the different shocks
are uncorrelated, Θo1 would, in the absence of the
observation of θt, consist of a diagonal matrix with
signal-to-noise ratios along the diagonal. The off-diagonal elements of Θ^o_1 can be nonzero when the
firm combines its own information with θt to
forecast the shocks. For example, suppose θt > 0.
This might indicate a response by the central bank
to a negative demand shock, a negative cost shock,
or a positive welfare-gap shock. If the firm’s signal
on the demand shock is positive, then given θt,
this makes it less likely the central bank is reacting
to a negative demand shock. The firm will therefore alter its forecast of cost and target shocks.
As shown in the appendix, the equilibrium
strategy for firm j will take the form
(5)   π^*_{j,t+1} = b^o_1 Ω_{j,t+1} + b^o_2 θ_t,

where bo1 is 1 × 3. Under both regimes, the expression for the coefficients on Ωj,t+1 in the firm’s
equilibrium strategy takes the same form.12

11 Alternatively, the central bank could announce its inflation and output-gap forecasts; combined with the observed instrument setting, these announcements would fully reveal the central bank’s signals.

12 Of course, their values differ under the two regimes to the extent that the information available to firms differs.

The appendix shows also that the impact of
the instrument on an individual firm’s pricing
decision is

(6)   b^o_2 = [(1 − ωβ)/ω]κ + (1/ω)[(1 − ω)b^o_1 + (1 − ωβ)(ι_1 + κι_2)]Θ^o_2,

where ιi is a 3 × 1 vector of zeros with a 1 in the
i th place. Equation (6) illustrates the channels
through which a policy action affects the pricing
decisions of firms. The first term, (1 − ωβ)κ/ω, is the standard effect operating through the output gap. Because inflation is (1 − ω) times the pricing decision of the individual firm in a standard New Keynesian model, the effect on aggregate inflation operating through this term would be (1 − ω)(1 − ωβ)κ/ω, which is the normal coefficient on the output gap in a New Keynesian model based on Calvo pricing.
The remaining terms on the right side in (6) represent the informational effects of policy actions. For example, observing θ_t affects the firm’s expectations about cost shocks, given by the term (1 − ωβ)ι_1Θ^o_2, and demand shocks, given by the term (1 − ωβ)κι_2Θ^o_2. Observing θ_t also affects individual pricing decisions through the firm’s expectations of what other firms are expecting, the (1 − ω)b^o_1 term.
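A small numerical sketch shows how the informational terms in equation (6) work against the direct output-gap channel; the vectors b^o_1 and Θ^o_2 below are invented for illustration and are not the model’s equilibrium values:

import numpy as np

omega, beta, kappa = 0.65, 0.99, 1.8        # the calibration used later in the paper

# Invented (non-equilibrium) values, chosen only to illustrate the signs discussed above.
b1_o     = np.array([0.10, 0.30, 0.00])     # response of a firm's price to its own signals
theta2_o = np.array([-0.05, -0.20, 0.05])   # effect of theta_t on the firm's shock forecasts

iota1 = np.array([1.0, 0.0, 0.0])
iota2 = np.array([0.0, 1.0, 0.0])

direct = (1 - omega * beta) * kappa / omega
informational = (1 / omega) * ((1 - omega) * b1_o
                               + (1 - omega * beta) * (iota1 + kappa * iota2)) @ theta2_o
b2_o = direct + informational
print(f"direct term = {direct:.3f}, informational term = {informational:.3f}, b2_o = {b2_o:.3f}")
# The informational term is negative here, so the instrument moves prices by less
# than the direct output-gap channel alone would imply.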
Equilibrium inflation is given by
(7)   π_{t+1} = (1 − ω)π̄^*_{t+1} = (1 − ω)(b^o_1 Ω_{t+1} + b^o_2 θ_t)

and

∂π_{t+1}/∂θ_t = (1 − ω)b^o_2.

The information channel can significantly
affect the extent to which the central bank instrument impacts inflation. I calibrate the model by
setting ω = 0.65 (as a compromise between micro
evidence suggesting ω on the order of 0.5 and
time-series estimates typically on the order of 0.8),
β = 0.99, and κ = 1.8. These values imply (1 − ω)(1 − ωβ)κ/ω = 0.3455. The standard deviations of all shocks are set equal to 1. Figure 1 shows how (1 − ω)b^o_2 varies with the quality of private sector
information, as measured by the signal-to-noise

ratio, γ^i_j. When firms have perfect information on the shocks (γ^i_j = 1), the policy instrument, θ,
conveys no information and its effect on inflation
equals 0.3455, which is shown by the horizontal
line in Figure 1.
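These implications of the calibration are easy to verify directly; a minimal arithmetic check (my own, using the parameter values just given):

omega, beta, kappa = 0.65, 0.99, 1.8

# Effect of theta on inflation when the instrument conveys no information,
# the horizontal line in Figure 1:
full_info_elasticity = (1 - omega) * (1 - omega * beta) * kappa / omega
print(round(full_info_elasticity, 4))    # 0.3455

# Weight on relative price dispersion in the social loss function (4):
lambda_I = (1 - omega) ** 2 / omega
print(round(lambda_I, 4))                # 0.1885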
However, when θ conveys information (i.e.,
when γ ji < 1), its impact on inflation is significantly
reduced. Movements in θ are partially attributed
to the central bank’s response to the various
shocks. A rise in θ, for example, lowers firms’
forecasts of demand shocks. Because the net effect
on the expected output gap is θ_t + E^j_t e^v_{t+1}, the effect
on price-setting behavior and inflation is less than
the change in θ. A rise in θ also leads firms to
reduce their forecast of cost shocks, partially
offsetting the positive impact of a rise in θ on
inflation. For a given quality of private sector
information, the information channel becomes
more important as central bank information
improves and private firms place more weight
on the information conveyed by policy actions.
The informational effects are larger, therefore,
when the central bank has better quality information (in Figure 1, compare the solid line for
γ^i_cb = 0.5 with the dashed line for γ^i_cb = 0.9).
Operating in a discretionary regime, the
central bank sets policy optimally in each period
based on its current forecasts about the future
state of the economy. The first-order condition
for minimizing the expected value of the central
bank’s loss function (3) subject to (2) and (7) is
given in the appendix. This first-order condition
can be solved for the optimal policy responses,
and their values are also given in the appendix.
The solution to the model is obtained numerically by beginning with initial values for the policy coefficients, using these to obtain Θ^o, b^o_1, and b^o_2, and then obtaining new values for the policy coefficients. This process continues until convergence (a schematic sketch of this iteration is given after the equations below). Once the equilibrium values of b^o_1 and b^o_2 and the policy coefficients are obtained, aggregate inflation is given by

π_{t+1} = (1 − ω)[(b^o_1 + b^o_2 δ^o Γ_{cb})ψ_{t+1} + b^o_2 δ^o Γ_{cb} φ_{cb,t+1}],



[Figure 1. Elasticity of Inflation with Respect to the Policy Instrument in the Opaque Regime as a Function of the Quality of Private Information. Vertical axis: (1 − ω)b^o_2 (0 to 0.5); horizontal axis: signal-to-noise ratio for private information (0.2 to 1.0). NOTE: Solid line, γ^i_cb = 0.5; dotted line, γ^i_cb = 0.9.]

whereas the welfare gap is given by

x_{t+1} − e^u_{t+1} = θ_t + e^v_{t+1} − e^u_{t+1}
            = δ^o Γ_{cb} Ω_{cb,t+1} + (ι_2 − ι_3)Z_{t+1}
            = (δ^o Γ_{cb} + ι_2 − ι_3)Z_{t+1} + δ^o Γ_{cb} φ_{cb,t+1}.
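The fixed-point procedure described above can be outlined schematically in Python; the two updating functions are placeholders standing in for the appendix formulas, so this is a sketch of the iteration rather than a reproduction of the paper’s computations:

import numpy as np

def solve_opaque_regime(update_theta_b, update_delta, delta0, tol=1e-8, max_iter=1000):
    """Generic fixed-point iteration of the kind described in the text.

    update_theta_b(delta) should return (Theta_o, b1_o, b2_o) implied by the current
    policy coefficients, and update_delta(Theta_o, b1_o, b2_o) should return the policy
    coefficients implied by the first-order condition; both stand in for the appendix
    formulas.
    """
    delta = np.asarray(delta0, dtype=float)
    for _ in range(max_iter):
        theta_o, b1_o, b2_o = update_theta_b(delta)
        new_delta = np.asarray(update_delta(theta_o, b1_o, b2_o), dtype=float)
        if np.max(np.abs(new_delta - delta)) < tol:
            return new_delta, theta_o, b1_o, b2_o
        delta = new_delta
    raise RuntimeError("fixed-point iteration did not converge")

Convergence of the policy coefficients delivers the equilibrium b^o_1 and b^o_2 that appear in the inflation and welfare-gap expressions above.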

Equilibrium Under a Transparent
Regime
I interpret full transparency as a regime in
which the central bank shares its information on
the economy. Within the context of the model,
this would mean that the central bank publishes
its signals on the various disturbances so that
Ωcb,t+1 becomes known to all firms. Equivalently,
the central bank could publish its forecasts for
inflation and the output gap. In a transparent
regime, the instrument is no longer a source of
information to the private sector. This alters the
impact of θt on inflation and affects the central
bank’s incentives for setting policy. When the central bank provides its information to the public,
the central bank information set is a subset of the
public information set. In this context, Svensson
and Woodford (2003) have shown that certainty
equivalence holds and the policy decision of the
central bank depends only on the expected values
of the shocks. In particular, this implies that the
optimal policy will be independent of the quality
of either central bank information or private sector
information.
Let Θ^f = [Θ^f_1  Θ^f_2] be the appropriate 3 × 6 coefficient matrix such that

E^j_t Z_{t+1} = Θ^f_1 Ω_{j,t+1} + Θ^f_2 Ω_{cb,t+1}.

The appendix shows that the equilibrium strategy for price-setting firms is

π^*_{j,t+1} = b^f_1 Ω_{j,t+1} + b^f_2 θ_t + b^f_3 Ω_{cb,t+1},

where b^f_1 takes the same form as b^o_1 (except that Θ^f_1 replaces Θ^o_1 in the expression for b^f_1). Although the formula for b^f_1 is the same as for b^o_1, their values will differ because Θ^f ≠ Θ^o.
Table 1
Loss Under Alternative Regimes (σ^2_s = σ^2_v = σ^2_u = 1)

                             γ^i_j
                        0.2      0.4      0.6      0.8      1.0
γ^i_cb = 0.5
  Opaque regime        8.83*   10.20*   11.70    13.52    16.79
  Transparent regime   9.52    10.33    11.49*   13.35*   16.79
  π Equivalent         3.32     1.39     1.80     1.66     0

γ^i_cb = 0.9
  Opaque regime        4.56*    5.55*    6.50     7.21     7.64
  Transparent regime   6.11     6.15     6.22*    6.40*    7.64
  π Equivalent         4.97     3.08     2.11     3.60     0

NOTE: An asterisk (*) indicates the regime with the least loss.

The effects of the central
bank instrument and information are given by

b^f_2 = (1 − ωβ)κ/ω

and

b^f_3 = (1/ω)[(1 − ω)b^f_1 + (1 − ωβ)(ι_1 + κι_2)]Θ^f_2.

Inflation will equal (1 − ω)π̄^*_{t+1}, so

∂π_{t+1}/∂θ_t = (1 − ω)b^f_2 = (1 − ω)(1 − ωβ)κ/ω

and is independent of any informational effects.
The exact expressions for the optimal policy
response to each type of signal are given in the
appendix.

THE VALUE OF RELEASING
INFORMATION
We can now compare the effects of providing
information by comparing outcomes under the
opaque regime and the transparent regime. To
assess outcomes under the two regimes, the model
is solved using the same calibrated parameters
as employed earlier (i.e., ω = 0.65, β = 0.99,

κ = 1.8). I initially set the variances of all shocks
equal to 1. For the loss function, I set λx = 1/16,
reflecting the use of quarterly inflation rates.
Table 1 shows the loss under each regime for
different combinations of the signal-to-noise ratios
for both the private sector and the central bank.
The first thing to note is that the loss is increasing in
the quality of private sector information (moving
across rows from left to right) and decreasing in
the quality of central bank information (comparing
the top panel to the bottom panel). Better private
information makes expectations more sensitive
to signals and so increases the volatility of expectations. Greater volatility of expectations produces
more inflation volatility. This is welfare decreasing. Better central bank information is welfare
improving because it allows the central bank to
engage in more effective stabilization policies that
reduce the volatility of inflation and the output
welfare gap. Although Morris and Shin (2002)
suggest that improved commonly available information could reduce welfare, the results in Table 1
are consistent with Hellwig (2004) and Svensson
(2006), who argue that better quality central bank
information generally improves welfare.
When γ ji = 1, firms observe the true shocks
perfectly. In this case, the release of information
or projections by the central bank is irrelevant
and the loss is the same under both regimes, as
shown in the last column of Table 1.
Table 2
Components of Loss (σ^2_s = σ^2_v = σ^2_u = 1)

γ^i_cb = 0.9                  γ^i_j
                        0.2      0.4      0.6      0.8      1.0
Opaque regime
  L                     4.56     5.55     6.50     7.21     7.64
  σ^2_π                 1.82     2.15     2.85     3.72     3.22
  λ_x σ^2_{x−e^u}       1.68     1.71     1.84     2.20     4.42
  λ_I σ^2_z             1.07     1.70     1.81     1.30     0
Transparent regime
  L                     6.11     6.15     6.22     6.40     7.64
  σ^2_π                 1.66     1.66     1.67     1.73     3.22
  λ_x σ^2_{x−e^u}       4.42     4.42     4.42     4.42     4.42
  λ_I σ^2_z             0.02     0.06     0.13     0.26     0

When private information is imperfect, loss differs under the two regimes (the regime with the least loss is marked with an asterisk in Table 1). The rows labeled “π equivalent” express the reduction in loss under the optimal regime in terms of the reduction in average inflation (expressed at annual rates) that would yield a similar reduction in loss. For example, if γ^i_j = 0.8 and γ^i_cb = 0.5, the improvement of moving from an opaque regime to a transparent one is equivalent to a reduction in inflation of 1.66 percentage points. The general results are similar in both the top panel, when central bank information is relatively poor (the signal and the noise have equal variances so that γ^i_cb = 0.5), and the bottom panel, when central bank information is relatively good (γ^i_cb = 0.9). What matters is the quality of private
information. If this is low, then the expectations
of firms (and what individual firms expect that
other firms are expecting) are sensitive to any
commonly available information released by the
central bank.
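The π-equivalent entries can be reproduced approximately under the natural reading that reducing average quarterly inflation by π lowers per-period loss by π², so the annualized equivalent is four times the square root of the loss difference; this back-of-the-envelope check is my own inference and is not spelled out in the text:

# Opaque versus transparent loss for gamma_j = 0.8, gamma_cb = 0.5 (from Table 1).
loss_opaque, loss_transparent = 13.52, 13.35

pi_equivalent = 4 * (loss_opaque - loss_transparent) ** 0.5
print(round(pi_equivalent, 2))    # 1.65, close to the 1.66 reported in Table 1 (rounding)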
The results in Table 1 are robust to different
values for the variances of the underlying shocks.13
The finding that transparency can lower welfare
when private information is poor is suggestive of
the Morris and Shin (2002) argument that noisy
13 For each σ^2_i, the value was changed between 2 and 0.01, whereas the other variances were held fixed at 1.


public information can decrease welfare. To investigate whether this is the effect that accounts for
the relative performance of the two regimes, one
can calculate the sources of loss under each
regime. From (4), loss arises from inflation variability, welfare-gap variability, and relative price
dispersion caused by heterogeneous information.
Table 2 shows each of these components for the case γ^i_cb = 0.9, which corresponds to the lower panel of Table 1 (results are similar for γ^i_cb = 0.5).
Table 2 reveals three differences between the
equilibria for the opaque and transparent regimes
that are independent of the quality of private
information. First, inflation is less volatile when
policy is transparent. Second, the contribution
of welfare-gap volatility to the overall loss is much
larger when policy is transparent. And third, the
welfare cost of relative price dispersion is much
smaller when policy is transparent. When γ ji is
very low, opacity is the preferred regime because
the welfare gap is much more stable. As will be
discussed further below, the informational effects
of policy actions are larger when the quality of
private information is poor and thus these effects
distort the incentive of the central bank such that
policy reacts too little to cost shocks. This makes
inflation more volatile but leaves the welfare gap
more stable. Both inflation and output-gap volatility in the opaque regime increase as γ_j^i rises, so that the transparent regime becomes preferred when private sector information is good.14
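As a quick arithmetic check, each loss entry in Table 2 should equal the sum of the three components reported beneath it. The snippet below simply hard-codes the tabulated values and re-adds them; it is an illustrative check, not code from the paper, and the totals differ from the table only by rounding.

```python
# Hypothetical check (not code from the paper): each loss entry in Table 2 should be
# the sum of the three components reported beneath it.
opaque = {
    "sigma2_pi":  [1.82, 2.15, 2.85, 3.72, 3.22],
    "gap_term":   [1.68, 1.71, 1.84, 2.20, 4.42],
    "dispersion": [1.07, 1.70, 1.81, 1.30, 0.00],
}
transparent = {
    "sigma2_pi":  [1.66, 1.66, 1.67, 1.73, 3.22],
    "gap_term":   [4.42, 4.42, 4.42, 4.42, 4.42],
    "dispersion": [0.02, 0.06, 0.13, 0.26, 0.00],
}
for name, comps in (("opaque", opaque), ("transparent", transparent)):
    totals = [round(sum(vals), 2) for vals in zip(*comps.values())]
    print(name, totals)  # matches the L rows of Table 2 up to rounding
```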

Table 3
Optimal Policy Coefficients (σ²_s = σ²_v = σ²_u = 1)

                             γ_j^i = 0.4                       γ_j^i = 0.8
                       δ^s       δ^v       δ^u         δ^s       δ^v       δ^u
γ_cb^i = 0.5
  Opaque regime      –0.0947   –0.8884    0.7179     –0.2510   –0.9764    0.5246
  Transparent regime –0.3647   –1.0000    0.3436     –0.3647   –1.0000    0.3436
γ_cb^i = 0.9
  Opaque regime      –0.0816   –0.8944    0.7475     –0.1865   –0.9713    0.6356
  Transparent regime –0.3647   –1.0000    0.3436     –0.3647   –1.0000    0.3436

Table 3 shows the optimal policy responses to the three central bank signals for γ_j^i equal to 0.4 and 0.8 and for γ_cb^i equal to 0.5 and 0.9. Response coefficients in the transparent regime are independent of the quality of both private sector and
the central bank information. This result follows
from the demonstration by Svensson and
Woodford (2003) that the central bank’s decision
problem satisfies the conditions for certainty
equivalence if the private sector has more information than the central bank. This is the case in
the transparent regime because the private sector
knows both the central bank signals and their own
private signals. The way informational effects in
the opaque regime distort stabilization policy is
clear from the muted response (in absolute value)
to signals on the cost shock and amplified response
to signals on the welfare-gap shock. The tradeoff
between inflation and welfare-gap volatility is
clearly present—policy under the transparent
regime responds more to stabilize inflation and,
as a result, the welfare gap is more volatile, as
was shown in Table 2.
14. Also apparent in Table 2 is that, in the transparent regime, the volatility of the welfare gap is independent of the quality of private sector information. This reflects the certainty equivalence property that characterizes the policy choice of the central bank in the transparent regime. The central bank's setting of its instrument is independent of γ_j^i and, as a result, so is the behavior of the output and welfare gaps.


In addition, transparency allows the central
bank to more efficiently neutralize the effects of
expected demand shocks. This can be seen by
comparing the policy reaction coefficients under
the two regimes. Under the transparent regime,
expected demand shocks are completely offset
(i.e., δ v = –1) regardless of the quality of private
sector or central bank information. Under the
opaque regime, δ v = –1 only when the public
sector has perfect information on the shocks.
Otherwise, δ v is less than 1 in absolute value and
demand shocks are not fully offset.
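The arithmetic of the partial offset can be made explicit. In the sketch below, which is only an illustration using the δ^v values from Table 3, the portion of a projected demand shock that survives into the output gap is (1 + δ^v) times the central bank's estimate, so δ^v = –1 removes it entirely.

```python
# Illustrative only: residual effect of a projected demand shock on the output gap
# when the instrument responds with coefficient delta_v (values taken from Table 3).
def residual_demand_effect(delta_v, projected_shock=1.0):
    return (1.0 + delta_v) * projected_shock

for label, delta_v in [("opaque, gamma_j=0.4, gamma_cb=0.5", -0.8884),
                       ("opaque, gamma_j=0.8, gamma_cb=0.9", -0.9713),
                       ("transparent (any signal quality)", -1.0000)]:
    print(label, residual_demand_effect(delta_v))
```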
Under the opaque regime, when the policy
instrument is moved, the public will confuse
movements designed to offset forecasted demand
shocks with movements designed to offset either
cost or welfare-gap shocks. As a consequence,
movements aimed at offsetting demand shocks
can affect inflation expectations and cause actual
inflation to fluctuate as the public attributes part
of the instrument change to the other shocks. This
makes it optimal to not offset demand shocks
completely. Once the public can infer the central
bank estimate of demand shocks, as it can under
transparency, there is no longer any reason not to
fully react to insulate the output gap and inflation
from projected demand shocks, so δ^v = –1.
In New Keynesian models, the welfare costs
of inflation are the result of the relative price
dispersion that arises with staggered price adjustment. Heterogeneous information among firms
will also create relative price dispersion. Because
information provided by the central bank is common to all firms, it can help reduce relative price dispersion.

[Figure 2: Relative Price Dispersion Due to Heterogeneous Information. σ²_z (vertical axis, 0 to 0.15) plotted against the quality of private information (horizontal axis, 0.2 to 1.0), for the opaque and transparent regimes with γ_cb low and γ_cb high.]

Figure 2 shows the measure of relative
price dispersion that results from heterogeneous
information among firms. The solid line with asterisks corresponds to the case of poor-quality central bank information (γ_cb^i = 0.5) under the opaque regime, and the unconnected asterisks correspond to the opaque regime with high-quality central bank information (γ_cb^i = 0.9). The diamonds indicate the outcomes under the transparent regime with poor-quality central bank information (the solid line) and high-quality central bank information (the unconnected diamonds).
When γ_j^i = 1, all firms share the same information, so dispersion due to heterogeneous information goes to zero under either policy regime. When firms have very poor-quality information (i.e., for low initial values of γ_j^i), the heterogeneity of the information is high, but because the information is of poor quality, firms do not respond strongly to it. As information quality improves, firms react more strongly to their own private information and this increases price dispersion. Hence, relative price dispersion is initially increasing in γ_j^i.
Now consider the role of the quality of central bank information under the opaque regime. Relative price dispersion is lower when central bank information is good than when it is poor, though the
loss from relative price dispersion actually constitutes a larger fraction of total social loss when
central bank information is good. This is the result
of the better stabilization the central bank can
achieve when it has high-quality information on
the economy. Not surprisingly, relative price dispersion is always lower under the transparent
regime. For the same reason, high-quality central
bank information reduces relative price dispersion
under the transparent regime.

CONCLUSIONS
Under an opaque policy regime, where the
private sector and the central bank do not share
the same information, policy actions become a
source of information to the public. And these
policy actions have both direct effects on the output gap and indirect informational effects. Under
an opaque regime, however, certainty equivalence
does not hold and information channels affect
the central bank’s incentives. Optimal policy will
depend on the quality of both central bank information and public information. In an opaque
regime, the central bank stabilizes inflation less
and the welfare gap more than it would in a transparent regime.
Under a completely transparent regime, the
public sector has access to the central bank assessment of the economy. In this case, policy actions
no longer provide any additional information.
Optimal policy is independent of the quality of
central bank information.
Consistent with the work of Svensson (2006)
and Hellwig (2004), better central bank information was found to improve welfare. With better
information, the central bank can implement
more effective stabilization policies. The effect of
providing more information by making announcements about projected inflation and the output
gap is more ambiguous. Transparency always acts
to lower relative price dispersion across firms by
expanding the set of commonly available information, but central bank announcements can make
expectations more volatile, particularly if firms
have relatively poor information. Transparency
dominates opacity when the private sector has
relatively good information because in this case
firms do not overreact to the information contained in central bank announcements. However,
if private sector information is poor, central bank
announcements can reduce welfare. So although
better central bank information is desirable, more
central bank information may not be.

REFERENCES
Amato, Jeffrey D. and Shin, Hyun Song. “Public and
Private Information in Monetary Policy Models.”
BIS Working Paper No. 138, Bank for International Settlements,
September 2003.
Andrews, Edmund L. “Bad News Puts Political Glare
Onto Economy.” New York Times, September 8, 2007;
http://www.nytimes.com/2007/09/08/business/
08policy.html.
Atkins, Ralph; Mackenzie, Michael and Davies, Paul
J. “ECB Chief Fails to Reassure Markets.” Financial
Times, August 15, 2007, p. 1.
Bernanke, Ben S. “The Recent Financial Turmoil and
its Economic and Policy Consequences.” Presented


at the Economic Club of New York, New York, NY,
October 15, 2007.
Blinder, Alan S. “Monetary Policy Today: Sixteen
Questions and About Twelve Answers.” Presented
at the Banco de España conference Central Banks
in the 21st Century, June 2006.
Christiano, Lawrence J.; Eichenbaum, Martin and
Evans, Charles. “Nominal Rigidities and the
Dynamic Effects of a Shock to Monetary Policy.”
Journal of Political Economy, 2005, 113(1), pp. 1-45.
Cornand, Camille and Heinemann, Frank. “Optimal
Degree of Public Information Dissemination.”
Working Paper No. 1353, CESifo, December 2004.
Cukierman, Alex. “The Limits of Transparency.”
Presented at the session “Monetary Policy
Transparency and Effectiveness” at the American
Economic Association, January 2006.
Cukierman, Alex and Meltzer, Allan H. “A Theory
of Ambiguity, Credibility, and Inflation under
Discretion and Asymmetric Information.”
Econometrica, September 1986, 54(5), pp. 1099-128.
Dincer, Nergiz N. and Eichengreen, Barry. “Central
Bank Transparency: Where, Why, and With What
Effects?” NBER Working Paper No. 13003, National
Bureau of Economic Research, March 2007.
Eijffinger, Sylvester C.W. and Geraats, Petra M. “How
Transparent Are Central Banks?” European Journal
of Political Economy, March 2006, 22(1), pp. 1-21.
Faust, Jon and Svensson, Lars E.O. “The Equilibrium
Degree of Transparency and Control in Monetary
Policy.” Journal of Money, Credit, and Banking,
May 2002, 34(2), pp. 520-39.
Geraats, Petra M. “Central Bank Transparency.”
Economic Journal, November 2002, 112(483),
pp. 532-65.
Geraats, Petra M. “Transparency and Reputation: The
Publication of Central Bank Forecasts.” Topics in Macroeconomics, 2005, 5(1), pp. 1-26.


Goodhart, Charles A.E. “Letter to the Editor.”
Financial Times, June 29, 2006.
Gosselin, Pierre; Lotz, Aileen and Wyplosz, Charles.
“The Expected Interest Rate Path: Alignment of
Expectations vs. Creative Opacity Should Central
Banks Reveal Expected Future Interest Rates?”
International Journal of Central Banking
(forthcoming).
Guha, Krishna. “Debate Unfolds on Likely Impact of
Cut in US Interest Rates.” Financial Times,
September 6, 2007, p. 2.
Harford, Tim. “Dear Economist.” Financial Times,
September 1, 2007, Part 2, p. 3.
Hellwig, Christian. “Public Announcements,
Adjustment Delays and the Business Cycle.” UCLA,
November 2002.
Hellwig, Christian. “Heterogeneous Information and
the Benefits of Transparency.” UCLA, December
2004.
Morris, Stephen and Shin, Hyun Song. “Social Value
of Public Information.” American Economic Review,
December 2002, 92(5), pp. 1521-34.
Poole, William. “Communicating the Fed’s Policy
Stance.” Presented at the HM Treasury/GES conference Is There a New Consensus in Macroeconomics?
London, November 30, 2005; www.stlouisfed.org/
news/speeches/2005/11_30_05.htm.
Poole, William. “Fed Communications.” Presented
to the St. Louis Forum, February 24, 2006;
www.stlouisfed.org/news/speeches/2006/
02_24_06.htm.
Rotemberg, Julio J. and Woodford, Michael. “An
Optimizing-Based Econometric Model for the
Evaluation of Monetary Policy,” in Ben S. Bernanke
and Julio J. Rotemberg, eds., NBER Macroeconomic
Annual 1997. Cambridge, MA: MIT Press, 1997,
pp. 297-346.


Rudebusch, Glenn D. and Williams, John C. “Revealing
the Secrets of the Temple: The Value of Publishing
Central Bank Interest Rate Projections.” October
2006; also in J.Y. Campbell, ed., Asset Prices and
Monetary Policy. Chicago: University of Chicago
Press (forthcoming).
Svensson, Lars E.O. “Social Value of Public
Information: Morris and Shin (2002) Is Actually
Pro Transparency, Not Con.” American Economic
Review, March 2006, 96(1), pp. 448-51.
Svensson, Lars E. O. and Woodford, Michael.
“Optimal Policy With Partial Information in a
Forward-Looking Model: Certainty-Equivalence
Redux.” NBER Working Paper No. w9430. National
Bureau of Economic Research, January 2003.
Walsh, Carl E. “Announcements, Inflation Targeting
and Central Bank Incentives.” Economica, May
1999, 66(262), pp. 255-69.
Walsh, Carl E. “Transparency, Flexibility, and
Inflation Targeting,” in Frederic Mishkin and
Klaus Schmidt-Hebbel, eds., Monetary Policy
Under Inflation Targeting. Santiago, Chile: Banco
Central de Chile, 2007a.
Walsh, Carl E. “Optimal Economic Transparency.”
International Journal of Central Banking, March 2007b, 3(1), pp. 5-30.
Woodford, Michael. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton: Princeton University Press, 2003.
Woodford, Michael. “Central Bank Communications
and Policy Effectiveness.” Presented at the Federal
Reserve Bank of Kansas City symposium The
Greenspan Era: Lessons for the Future, Jackson Hole,
WY, August 2005.
Yellen, Janet L. “Policymaking on the FOMC:
Transparency and Continuity.” Federal Reserve
Bank of San Francisco Economic Letter, No. 2005-22,
September 2, 2005.


APPENDIX
Welfare Weight on Information Dispersion
The welfare loss in New Keynesian models arises from inefficient price dispersion across firms. Let $p_{j,t}$ denote firm $j$'s price and let $P_t$ be the aggregate price level. Then

$$\Delta_t \equiv \mathrm{var}_j\left(\log p_{j,t} - P_{t-1}\right) = E_t\left(\log p_{j,t} - P_{t-1}\right)^2 - \left(E_t \log p_{j,t} - P_{t-1}\right)^2 = E_t\left(\log p_{j,t} - P_{t-1}\right)^2 - \left(P_t - P_{t-1}\right)^2.$$

Using the assumptions of the Calvo model, the first term on the right can be written as

$$\omega E_t\left(\log p_{j,t-1} - P_{t-1}\right)^2 + (1-\omega)E_t\left(\log p^*_{j,t} - P_{t-1}\right)^2 = \omega\Delta_{t-1} + (1-\omega)E_t\left(\log p^*_{j,t} - P_{t-1}\right)^2.$$

Now

$$\log p^*_{j,t} - P_{t-1} = \left(\log p^*_{j,t} - \log \bar{p}^*_t\right) + \left(\log \bar{p}^*_t - P_{t-1}\right),$$

where the first term on the right is zero in the standard New Keynesian model with common information across firms. Hence,

$$E_t\left(\log p^*_{j,t} - P_{t-1}\right)^2 = E_t\left(\log p^*_{j,t} - \log \bar{p}^*_t\right)^2 + \left(\log \bar{p}^*_t - P_{t-1}\right)^2$$

because the idiosyncratic noise is independent of the fundamental shocks. From the definition of inflation,

$$\pi_t = (1-\omega)\left(\log \bar{p}^*_t - P_{t-1}\right),$$

so

$$E_t\left(\log p^*_{j,t} - P_{t-1}\right)^2 = E_t\left(\log p^*_{j,t} - \log \bar{p}^*_t\right)^2 + \left(\frac{1}{1-\omega}\right)^2\pi_t^2.$$

Combining these results,

$$\Delta_t = \omega\Delta_{t-1} + (1-\omega)E_t\left(\log p^*_{j,t} - \log \bar{p}^*_t\right)^2 + \left(\frac{1}{1-\omega}\right)\pi_t^2 - \pi_t^2 = \omega\Delta_{t-1} + (1-\omega)E_t\left(\log p^*_{j,t} - \log \bar{p}^*_t\right)^2 + \left(\frac{\omega}{1-\omega}\right)\pi_t^2.$$

It follows that

$$E_t\sum_{i=0}^{\infty}\beta^i\Delta_{t+i} = \left(\frac{\omega}{1-\omega}\right)\left(\frac{1}{1-\omega\beta}\right)E_t\sum_{i=0}^{\infty}\beta^i\left[\pi_{t+i}^2 + \lambda_I\left(\log p^*_{j,t+i} - \log \bar{p}^*_{t+i}\right)^2\right],$$

where

$$\lambda_I = \frac{(1-\omega)^2}{\omega}.$$
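For a sense of magnitude, the implied weight λ_I can be evaluated directly; the values of ω below are illustrative Calvo parameters chosen only for the example, not parameters used in the paper.

```python
# lambda_I = (1 - omega)^2 / omega, from the derivation above (omega values are illustrative).
for omega in (0.5, 0.66, 0.75):
    lambda_I = (1 - omega) ** 2 / omega
    print(f"omega={omega}: lambda_I={lambda_I:.3f}")
```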


The Opaque Regime
Let

$$\Sigma = \begin{bmatrix}\sigma_s^2 & 0 & 0\\ 0 & \sigma_v^2 & 0\\ 0 & 0 & \sigma_u^2\end{bmatrix}, \qquad
\Sigma_j = \begin{bmatrix}\sigma_s^2+\sigma_{j,s}^2 & 0 & 0\\ 0 & \sigma_v^2+\sigma_{j,v}^2 & 0\\ 0 & 0 & \sigma_u^2+\sigma_{j,u}^2\end{bmatrix},$$

and

$$\Sigma_{cb} = \begin{bmatrix}\sigma_s^2+\sigma_{cb,s}^2 & 0 & 0\\ 0 & \sigma_v^2+\sigma_{cb,v}^2 & 0\\ 0 & 0 & \sigma_u^2+\sigma_{cb,u}^2\end{bmatrix}.$$

In the absence of central bank announcements, firm j's new information is given by

$$\begin{bmatrix} e^s_{t+1}+\phi^s_{j,t+1}\\ e^v_{t+1}+\phi^v_{j,t+1}\\ e^u_{t+1}+\phi^u_{j,t+1}\\ \theta_t\end{bmatrix} = \begin{bmatrix}\Omega_{j,t+1}\\ \theta_t\end{bmatrix},$$

where

$$\theta_t = \delta_o\Gamma_{cb}\Omega_{cb,t+1}.$$

Define

$$\Theta_o = \begin{bmatrix}\Sigma & \Sigma\Gamma_{cb}'\delta_o'\end{bmatrix}
\begin{bmatrix}\Sigma_j & \Sigma\Gamma_{cb}'\delta_o'\\ \delta_o\Gamma_{cb}\Sigma & \delta_o\Gamma_{cb}\Sigma_{cb}\Gamma_{cb}'\delta_o'\end{bmatrix}^{-1}
= \begin{bmatrix}\Theta^o_1 & \Theta^o_2\end{bmatrix},$$

where $\Theta^o_1$ is 3 × 3 and $\Theta^o_2$ is 3 × 1. Thus, firm j's expectation of $Z_{t+1}$ is

$$E^j_t Z_{t+1} = \Theta^o_1\Omega_{j,t+1} + \Theta^o_2\theta_t = \Theta^o_1\Omega_{j,t+1} + \Theta^o_2\delta_o\Gamma_{cb}\Omega_{cb,t+1}.$$

The aggregate information (i.e., aggregated across all firms) is

$$\begin{bmatrix}\bar{\Omega}_{t+1}\\ \theta_t\end{bmatrix} = \begin{bmatrix} e^s_{t+1}\\ e^v_{t+1}\\ e^u_{t+1}\\ \theta_t\end{bmatrix} = \begin{bmatrix} Z_{t+1}\\ \theta_t\end{bmatrix}.$$

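The matrix Θ_o is the coefficient matrix of a linear projection of the shocks on the firm's information. The sketch below is a simplified and purely illustrative version: it ignores the announcement component θ_t, uses made-up noise variances, and computes the analogous projection on the private signals alone.

```python
import numpy as np

# Purely illustrative sketch of the projection behind Theta_o, with made-up variances
# and the announcement term theta_t ignored. With Omega_j = Z + noise,
# Cov(Z, Omega_j) = Sigma and Var(Omega_j) = Sigma_j, so the least-squares coefficients
# of the projection of Z on the private signals alone are Sigma @ inv(Sigma_j).
Sigma   = np.diag([1.0, 1.0, 1.0])                 # shock variances
Sigma_j = Sigma + np.diag([0.25, 0.25, 0.25])      # add idiosyncratic noise (hypothetical)

Theta_private = Sigma @ np.linalg.inv(Sigma_j)
print(Theta_private)  # diagonal weights of 0.8 = 1.0 / (1.0 + 0.25) on each private signal
```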

Defining $\iota_i$ as a 1 × 3 vector with a 1 in the i-th place and zeros elsewhere, we can write (1), a firm's price adjustment, as

$$\pi^*_{j,t+1} = (1-\omega)E^j_t\bar{\pi}^*_{t+1} + (1-\omega\beta)\kappa\theta_t + (1-\omega\beta)(\iota_1+\kappa\iota_2)E^j_t Z_{t+1} + \left(\frac{\omega\beta}{1-\omega}\right)E^j_t\pi_{t+2}$$
$$= (1-\omega)E^j_t\bar{\pi}^*_{t+1} + (1-\omega\beta)\kappa\theta_t + (1-\omega\beta)(\iota_1+\kappa\iota_2)\left(\Theta^o_1\Omega_{j,t+1}+\Theta^o_2\theta_t\right) + \left(\frac{\omega\beta}{1-\omega}\right)E^j_t\pi_{t+2}.$$

An equilibrium strategy for firm j will take the form

$$\pi^*_{j,t+1} = b^o_1\Omega_{j,t+1} + b^o_2\theta_t,$$

where $b^o_1$ is 1 × 3.
In forming expectations about the pricing behavior of other firms adjusting in the current period, firm j's expectation of $\bar{\pi}^*_{t+1}$ is given by

$$E^j_t\bar{\pi}^*_{t+1} = b^o_1 E^j_t\bar{\Omega}_{t+1} + b^o_2\theta_t = b^o_1 E^j_t Z_{t+1} + b^o_2\theta_t = b^o_1\left[\Theta^o_1\Omega_{j,t+1}+\Theta^o_2\theta_t\right] + b^o_2\theta_t = b^o_1\Theta^o_1\Omega_{j,t+1} + \left(b^o_1\Theta^o_2+b^o_2\right)\theta_t.$$

Because

$$\pi_{t+1} = (1-\omega)\bar{\pi}^*_{t+1},$$

it follows that

$$E^j_t\pi_{t+2} = (1-\omega)E^j_t\bar{\pi}^*_{t+2} = (1-\omega)E^j_t\left[b^o_1\Theta^o_1\Omega_{j,t+2} + \left(b^o_1\Theta^o_2+b^o_2\right)\theta_{t+1}\right] = 0.$$

Substituting these into the equation for $\pi^*_{j,t+1}$ and collecting terms,

$$\pi^*_{j,t+1} = (1-\omega)b^o_1\Theta^o_1\Omega_{j,t+1} + (1-\omega\beta)(\iota_1+\kappa\iota_2)\Theta^o_1\Omega_{j,t+1} + (1-\omega\beta)\kappa\theta_t + (1-\omega)\left(b^o_1\Theta^o_2+b^o_2\right)\theta_t + (1-\omega\beta)(\iota_1+\kappa\iota_2)\Theta^o_2\theta_t.$$

Equating coefficients with the proposed solution yields

$$b^o_1 = (1-\omega\beta)(\iota_1+\kappa\iota_2)\Theta^o_1\left[I_3 - (1-\omega)\Theta^o_1\right]^{-1}.$$

The expression for $b^o_2$ is reported in the text.
The objective function under discretion involves minimizing

$$E^{cb}_t\left[\pi_{t+1}^2 + \lambda_x\left(x_{t+1} - e^u_{t+1}\right)^2\right]$$

subject to (2) and (7). The first-order condition for the central bank decision problem under discretion is

$$(1-\omega)b^o_2 E^{cb}_t\pi_{t+1} + \lambda_x\left(\theta_t + E^{cb}_t e^v_{t+1} - E^{cb}_t e^u_{t+1}\right) = 0.$$

From (7),

$$E^{cb}_t\pi_{t+1} = (1-\omega)b^o_1 E^{cb}_t\Omega_{j,t+1} + (1-\omega)b^o_2\theta_t = (1-\omega)b^o_1\Gamma_{cb}\Omega_{cb,t+1} + (1-\omega)b^o_2\theta_t$$

because

$$E^{cb}_t\Omega_{j,t+1} = E^{cb}_t Z_{t+1} = \Gamma_{cb}\Omega_{cb,t+1}.$$

Hence, the first-order condition becomes

$$\left[\lambda_x + (1-\omega)^2\left(b^o_2\right)^2\right]\theta_t = \lambda_x E^{cb}_t e^u_{t+1} - \lambda_x E^{cb}_t e^v_{t+1} - (1-\omega)^2 b^o_2 b^o_1\Gamma_{cb}\Omega_{cb,t+1}.$$

This in turn implies that

$$\left[\lambda_x + (1-\omega)^2\left(b^o_2\right)^2\right]\theta_t = -(1-\omega)^2 b^o_2 b^o_1\Gamma_{cb}\Omega_{cb,t+1} + \begin{bmatrix} 0 & -\lambda_x & \lambda_x\end{bmatrix}\Gamma_{cb}\Omega_{cb,t+1}.$$

Hence,

$$\theta_t = \delta_o\Gamma_{cb}\Omega_{cb,t+1}, \tag{8}$$

where $\delta_o = \begin{bmatrix}\delta^s & \delta^v & \delta^u\end{bmatrix}$ and

$$\delta^s = -\left[\frac{(1-\omega)^2 b^o_2 b^o_{11}}{\lambda_x + (1-\omega)^2\left(b^o_2\right)^2}\right], \qquad
\delta^v = -\left[\frac{\lambda_x + (1-\omega)^2 b^o_2 b^o_{12}}{\lambda_x + (1-\omega)^2\left(b^o_2\right)^2}\right], \qquad
\delta^u = \left[\frac{\lambda_x - (1-\omega)^2 b^o_2 b^o_{13}}{\lambda_x + (1-\omega)^2\left(b^o_2\right)^2}\right]. \tag{9}$$

The Transparent Regime

In regime f, the central bank announces its signals so that firms observe Ωcb,t+1 directly. Firms’
expectations now depend on Ωcb,t+1 and not directly on θt.


Guess an equilibrium strategy of the form

$$\pi^*_{j,t+1} = b^f_1\Omega_{j,t+1} + b^f_2\Omega_{cb,t+1} + b^f_3\theta_t.$$

Then, following the same procedures as used to solve the model without announcements, one finds that

$$b^f_1 = (1-\omega\beta)(\iota_1+\kappa\iota_2)\Theta^f_1\left[I_3 - (1-\omega)\Theta^f_1\right]^{-1},$$
$$b^f_2 = \left(\frac{1}{\omega}\right)\left[(1-\omega)b^f_1 + (1-\omega\beta)(\iota_1+\kappa\iota_2)\right]\Theta^f_2,$$
$$b^f_3 = \frac{(1-\omega\beta)\kappa}{\omega}.$$

Optimal policy in this regime satisfies the first-order condition

$$(1-\omega)b^f_3 E^{cb}_t\pi_{t+1} + \lambda_x\left(\theta_t + E^{cb}_t e^v_{t+1} - E^{cb}_t e^u_{t+1}\right) = 0.$$

Note that

$$E^{cb}_t\pi_{t+1} = (1-\omega)\left[\left(b^f_1\Gamma_{cb} + b^f_2\right)\Omega_{cb,t+1} + b^f_3\theta_t\right]$$

because

$$E^{cb}_t\Omega_{j,t+1} = \Gamma_{cb}\Omega_{cb,t+1}.$$
Solving the first-order condition yields

$$\theta_t = d_f\Gamma_{cb}\Omega_{cb,t+1},$$

with $d_f = \begin{bmatrix} d^s & d^v & d^u\end{bmatrix}$ and

$$d^s = -\frac{(1-\omega)^2 b^f_3\left[\left(b^f_{21}/\gamma^s_{cb}\right) + b^f_{11}\right]}{\lambda_x + (1-\omega)^2\left(b^f_3\right)^2},$$
$$d^v = -\frac{\lambda_x + (1-\omega)^2 b^f_3\left[\left(b^f_{22}/\gamma^v_{cb}\right) + b^f_{12}\right]}{\lambda_x + (1-\omega)^2\left(b^f_3\right)^2},$$
$$d^u = \frac{\lambda_x - (1-\omega)^2 b^f_3\left[\left(b^f_{23}/\gamma^u_{cb}\right) + b^f_{13}\right]}{\lambda_x + (1-\omega)^2\left(b^f_3\right)^2}.$$


Commentary
Marvin Goodfriend

Marvin Goodfriend is a professor of economics at the Tepper School of Business, Carnegie Mellon University.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 443-45.

Back in 1984, I was invited by Bill Poole,
then a member of the President’s
Council of Economic Advisors, to
work as a senior staff economist for
money and banking at the Council. When I
arrived at the Old Executive Office Building that
fall, I brought with me an early draft of a paper
on central bank secrecy that I had just finished.
I gave a copy to Bill, who I knew had a longstanding interest in central bank communications.
I remember his reaction: Bill put the paper in an
envelope, signed it, wrote on it “for my eyes
only,” and had his secretary put it in a safe. How
appropriate, I thought! Later, Bill asked for a
briefing and I described among other things the
substance of the FOMC defense of monetary
policy secrecy in a recently concluded Freedom
of Information Act lawsuit.
My interest in the topic was initiated by a
headline in the American Banker that read
“Secrecy Primary Tool of Monetary Policy.” How
could that be? The assertion seemed at odds with
everything Bill taught us in graduate school at
Brown—that, according to rational expectations
theory, more information should be better than
less. Bill emphasized that private agents have an
incentive to use to their advantage whatever
information they have, whatever its source. I wrote
the paper to explore under what circumstances,
if any, central bank secrecy could be justified.
I would never have predicted that “information policy” would have generated so much interest among central bankers or as much research as

it does today. The main difference is that today we
speak of central bank “transparency” or “policy
guidance” rather than central bank “secrecy.” I
remember thinking that it was unlikely that central
bank secrecy would ever be debated openly, and
if it ever were, then I thought the case for transparency would quickly win the day. I wasn’t quite
correct on either outcome.
In any case, it is useful to recall how far we’ve
come. For the most part, central banks have moved
away from secrecy toward transparency, partly
by being more explicit about longer-run inflation
objectives and partly by communicating short-term policy concerns and intentions more explicitly. Few would now claim that secrecy is a tool
of monetary policy. Quite the contrary, communication is today widely recognized to play a central
role in monetary policy. That said, we have come
to the point where even those who favor transparency in principle worry that excessive forward
guidance on monetary policy might be counterproductive. This is the thrust of the concern
expressed by Bill that motivates Carl Walsh’s
(2008) paper.
The balance of my remarks addresses the
limits of forward guidance by drawing a distinction between two dimensions of information
policy: (i) transparency with regard to a long-run
inflation objective and (ii) discretionary announcements used by central banks to substitute for
transparency about a long-run inflation objective.
Transparency and communication help to
implement interest rate policy in two ways—


first, because a central bank uses a nominal
interest rate policy instrument to manage real
interest rates, and second, because a central bank
uses a short-term interest rate to manage longer-term rates.
The primary role of transparency and communication must be to convey clearly the central
bank’s long-run inflation objective in order to
anchor inflation expectations firmly, so that a central bank can manage short- and longer-term real
interest rates reliably with its nominal interest
rate policy instrument.
The secondary role of communication is to
help exercise leverage over longer-term interest
rates with a short-term interest rate policy instrument. This a central bank can do because financial markets price longer-term interest rates (up
to a possibly time-varying term premium) as an
average of expected future short rates. To manage
expectations of future short rates, a central bank
must take markets into its confidence by communicating its intentions based on its forecast of
economic conditions, its structural view of the
economy, and its medium-term objectives for
employment, financial stability, and inflation.
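A minimal sketch of the pricing logic described in this paragraph, assuming for illustration a constant term premium and an eight-quarter horizon (the numbers are hypothetical, not drawn from any episode discussed here):

```python
# A minimal sketch of the expectations-hypothesis logic described above; the rates,
# horizon, and constant term premium are hypothetical.
def long_rate(expected_short_rates, term_premium=0.0):
    """Long-term rate as the average of expected future short rates plus a premium."""
    return sum(expected_short_rates) / len(expected_short_rates) + term_premium

baseline = [5.25] * 8                                        # no change expected
guided   = [5.25, 5.00, 4.75, 4.50, 4.50, 4.50, 4.50, 4.50]  # guidance shifts expectations down
print(long_rate(baseline), long_rate(guided))
```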
Naturally, central banks are reluctant to reveal
much of their current concerns or intentions
because judgments about such things are necessarily imperfect, tentative, and subject to frequent
revision. And some central banks are reluctant
to announce explicitly their longer-run inflation
objectives too. On the other hand, central banks
recognize that interest rate policy benefits from
transparency and communication, and central
banks are inclined to be ever more revealing of their thinking in an effort to better manage longer-term interest rates.
The reluctance of central bankers to take markets systematically into their confidence creates
a reliance on announcements to convey their concerns about the economy and their intentions for
short-term interest rates. Discretionary announcements employed to guide markets in lieu of systematic transparency about underlying objectives
and concerns would appear to provide a degree
of flexibility in communication policy. The point
I wish to make, however, is that it is an illusion
to think that discretionary announcements can

substitute reliably for systematic strategic transparency. Inevitably, the public will find it hard to
interpret announcements made without strategic
guidance and, therefore, a central bank will find
it hard to predict the public’s reaction to such
announcements.
My point is nothing more than to apply
rational expectations reasoning, made famous by
Robert Lucas, to announcements. It is difficult
for a central bank to predict how either a policy
action or a discretionary announcement will be
interpreted by markets when undertaken with
insufficient strategic guidance, that is, when either
is undertaken independently of a policy rule.
The Federal Reserve’s experience in May and
June 2003 is a case in point. The Fed famously
accompanied a cut in its federal funds rate target
at the May 2003 FOMC meeting with a surprise
announcement that significant further disinflation
would be “unwelcome.” The statement was
intended to alert the market to the fact that the
Fed would act to deter deflation. The Fed was
taken by surprise by what it considered an overreaction in the media and markets to its concern
for deflation. The Fed rectified matters by dropping the federal funds rate by only 25 basis points
at the June FOMC meeting instead of the expected
50 basis points.
The market reaction to the surprise May 2003
FOMC announcement was excessive relative to
what the Fed expected, but it could have been
just as easily insufficient relative to what the Fed
intended. Either way, such misunderstandings
are potentially costly for the implementation of
interest rate policy because they whipsaw markets,
create confusion, and weaken a central bank’s
ability to manage interest rates. Failing to convey
a monetary policy message accurately in the first
place can produce an extended period of policy-induced volatility as the mutual understanding
between markets and the central bank on interest
rate policy is gradually and painfully restored.
Arguably, the confusion in 2003 could have
been avoided if an explicit numerical lower
bound on the Fed’s tolerance range for core personal consumption expenditures inflation had
been in place. Markets would have been prepared
for interest rate actions the Fed would take as


inflation neared the 1 percent lower bound on its
tolerance range. And longer-term interest rates
would have drifted down as inflation drifted
lower in early 2003 in anticipation of the Fed’s
reaction. In that context, announcements could
have reinforced reliably the Fed’s concern about
further disinflation and the credibility of its
commitment to prevent inflation from falling
below 1 percent.
In conclusion, and returning to Bill Poole’s
concern about excessive forward guidance, we
can say this: Forward guidance on interest rate
policy is likely to be most effective when it reinforces a well-articulated monetary policy strategy
anchored by an explicit numerical long-run inflation target. Otherwise, forward guidance should
be undertaken with care and only with good reason given that discretionary announcements are
difficult if not impossible to calibrate consistently
to achieve their intended effect.

REFERENCE
Walsh, Carl E. “Announcements and the Role of
Policy Guidance.” Federal Reserve Bank of St. Louis
Review, July/August 2008, 90(4), pp. 421-42.


Rules-of-Thumb for Guiding Monetary Policy
William Poole
This article was originally published in the Board of Governors of the Federal Reserve System
Open Market Policies and Operating Procedures—Staff Studies, July 1971. It is reprinted here as
an addendum to these conference proceedings.
Federal Reserve Bank of St. Louis Review, July/August 2008, 90(4), pp. 447-97.

William Poole is a former president of the Federal Reserve Bank of St. Louis. At the time this article was written, he was a senior economist in the special studies section of the division of research and statistics at the Board of Governors of the Federal Reserve System. Joan Walton, Lillian Humphrey, and Debra Bellows provided research assistance.

INTRODUCTION

This study has been motivated by the
recognition that the key to understanding
policy problems is the analysis of uncertainty. Indeed, in the absence of uncertainty it
might be said that there can be no policy problems, only administrative problems. It is surprising, therefore, that there has been so little
systematic attention paid to uncertainty in the
policy literature in spite of the fact that policymakers have repeatedly emphasized the importance of the unknown.
In the past, the formal models used in the
analysis of monetary policy problems have almost
invariably assumed complete knowledge of the
economic relationships in the model. Uncertainty
is introduced into the analysis, if at all, only
through informal consideration of how much
difference it makes if the true relationships differ
from those assumed by the policymakers. In this
study, on the other hand, uncertainty plays a key
role in the formal model.
Since this study is so long, a few comments
at the outset may assist the reader in finding his
way through it. The remainder of this introductory section outlines the structure of the study so
that the reader can see how the various parts fit
together. The reader interested only in a summary
of the analysis and empirical findings should
read this introductory section and then turn
directly to the summary in Section V. This summary concentrates on the theoretical analysis
while only briefly stating the most important
empirical findings. It omits completely the technical details of both the theoretical and empirical
work. The reader interested in the technical details
should, of course, turn to the appropriate parts
of Sections I through IV. Insofar as possible these
sections have been written so that the reader can
understand any one section without having to
wade through all of the other sections.
Section I contains the theoretical argument
comparing interest rates and the money stock as
policy-control variables under conditions of uncertainty. The analysis is verbal and graphical, using
the simple Hicksian IS-LM model with random
terms added. This model is general enough to
include both Keynesian and monetarist outlooks,
depending on the specific assumptions as to the
shapes of the functions. Since the theoretical
analysis emphasizes the importance of the relative


stability of the expenditures and money demand
functions, an examination of the evidence on
relative stability appears in Section II.
Given the conclusion of Section II on the
superiority of a policy operating through adjustments in the money stock, the next question is how
the money stock should be adjusted to achieve
the best results. While policymakers generally
look askance at suggestions for policy rules, the
only way that economists can give long-run advice
is in terms of rules. That is to say, the economist
is not being helpful at all if he in effect says, “Look
at the rate of inflation, at the rate of unemployment, at the forecasts of the government budget
deficit, and at other relevant factors, and then act
appropriately.” Advice requires the specification
of exactly how policy should be adjusted, and for
this advice to be more than an ad hoc recommendation for the current situation, it must involve
specification of how the money stock or some
other control variable should be adjusted under
hypothetical future conditions of inflation, unemployment, and so forth. The purpose of Section III
is to develop such a rule-of-thumb, or policy
guideline, based on the theoretical and empirical
analyses of Sections I and II.
A number of technical problems of monetary
control are examined in Section IV. After a short
introduction to the issues, the first part of this
section discusses the relative merits of a number
of monetary aggregates including various reserve
measures, the narrowly and broadly defined
money stocks, and bank credit. The second part
examines whether policy should specify desired
rates of change of an aggregate in terms of weekly,
monthly, or quarterly averages, or in some other
manner. The third part examines in a very incomplete fashion a few of the problems of adjusting
open market operations so as to reach the desired
level of an aggregate.
Finally, Section V consists of a summary of
Sections I through IV. To avoid undue repetition,
woven into this summary section are a number of
general observations not examined in the other
sections.
448

J U LY / A U G U S T

2008

I. THE THEORY OF MONETARY
POLICY UNDER UNCERTAINTY
Basic Concepts
The theory of optimal policy under uncertainty has provided many insights into actual
policy problems (Theil, 1964; Brainard, 1967; Holt,
1962; Poole, 1970). While much of this theory is
not accessible to the nonmathematical economist,
it is possible to explain the basic ideas without
resort to mathematics.
The obvious starting point is the observation
that with our incomplete understanding of the
economy and our inability to predict accurately
the occurrence of disturbing factors such as strikes,
wars, and foreign exchange crises, we cannot
expect to hit policy goals exactly. Some periods
of inflation or unemployment are unavoidable.
The inevitable lack of precision in reaching policy
goals is sometimes recognized by saying that the
goals are “reasonably” stable prices and “reasonably” full employment.
While the observation above is trite, its
implications are not. Two points are especially
important. First, policy should aim at minimizing
the average size of errors. Second, policy can be
judged only by the average size of errors over a
period of time and not by individual episodes.
Because this second point is particularly subject to
misunderstanding, it needs further amplification.
Since policymakers operate in a world that
is inherently uncertain, they must be judged by
criteria appropriate to such a world. Consider the
analogy of betting on the draw of a ball from an
urn with nine black balls and one red ball. Anyone
offered a $2 payoff for a $1 bet would surely bet
on a black ball being drawn. If the draw produced
the red ball, no one would accuse the bettor of a
stupid bet. Similarly, the policymaker must play
the economic odds. The policymaker should not
be accused of failure if an inflation occurs as the
result of an improbable and unforeseeable event.
Now consider the reverse situation from that
considered in the previous paragraph. Suppose
the bettor with the same odds as above bets on the
red ball and wins. Some would claim that the bet
was brilliant, but assuming that the draw was not

rigged in any way, the bet, even though a winning one,
must be judged foolish. It is foolish because, on
the average, such a betting strategy will lead to
substantially worse results than the opposite
strategy. Betting on red will prove brilliant only
one time out of 10, on the average. Similarly, a
particular policy action may be a bad bet even
though it works in a particular episode.
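The expected payoffs behind the urn example can be written out directly; this is a trivial calculation, included only to make the odds explicit.

```python
# The urn arithmetic made explicit: nine black balls, one red, $2 payoff on a $1 bet.
p_black, p_red, payoff = 0.9, 0.1, 2.0
print("expected payoff, bet on black:", p_black * payoff)  # 1.80 per $1 staked
print("expected payoff, bet on red:  ", p_red * payoff)    # 0.20 per $1 staked
```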
There is a well-known tendency for gamblers
to try systems that according to the laws of probability cannot be successful over any length of
time. Frequently, a gambler will adopt a foolish
system as the result of an initial chance success
such as betting on red in the above example. The
same danger exists in economic policy. In fact, the
danger is more acute because there appears to be
a greater chance to “beat the system” by applying
economic knowledge and intuition. There can be
no doubt that it will become increasingly possible
to improve on simple, naive policies through
sophisticated analysis and forecasting and so in
a sense “beat the system.” But even with improved
knowledge some uncertainty will always exist,
and therefore so will the tendency to attempt to
perform better than the state of knowledge really
permits.
Whatever the state of knowledge, there must
be a clear understanding of how to cope with
uncertainty, even though the degree of uncertainty
may have been drastically reduced through the
use of modern methods of analysis. The principal
purpose of this section is to improve understanding of the importance of uncertainty for policy by
examining a simple model in which the policy
problem is treated as one of minimizing errors
on the average. Particular emphasis is placed on
whether controlling policy by adjusting the interest rate or by adjusting the money stock will lead
to smaller errors on the average. The basic argument is designed to show that the answer to which
policy variable—the interest rate or the money
stock—minimizes average errors depends on the
relative stability of the expenditures and money
demand functions and not on the values of parameters that determine whether monetary policy is
in some sense more or less “powerful” than fiscal
policy.

[Figure 1: IS and LM curves in (Y, r) space, with LM1 (fixed money stock), LM2 (interest rate fixed at r0), and full employment income Yf. SOURCE: Originally published version, p. 139.]

Monetary Policy Under Uncertainty in
a Keynesian Model 1
The basic issues concerning the importance
of uncertainty for monetary policy may be examined within the Hicksian IS-LM version of the
Keynesian system. This elementary model has
two sectors, an expenditure sector and a monetary
sector, and it assumes that the price level is fixed
in the short run.2 Consumption, investment, and
government expenditures functions are combined
to produce the IS function in Figure 1, while the
demand and supply of money functions are combined to produce the LM function. If monetary
policy fixes the stock of money, then the resulting
LM function is LM1, while if policy fixes the interest rate at r0 the resulting LM function is LM2. It
is assumed that incomes above “full employment
income” are undesirable due to inflationary pressures while incomes below full employment
income are undesirable due to unemployment.
If the positions of all the functions could be
predicted with no errors, then to reach full
employment income, Yf, it would make no difference

1. For the most part this section represents a verbal and graphical version of the mathematical argument in Poole (1970).

2. Simple presentations of this model may be found in Reynolds (1969, pp. 275-82) and Samuelson (1967, pp. 327-32).


[Figures 2 and 3: IS-LM diagrams showing disturbances to the IS function (Figure 2) and to the LM function (Figure 3). SOURCE: Originally published version, pp. 140-41.]

whether policy fixed the money stock or the
interest rate. All that is necessary in either case
is to set the money stock or the interest rate so
that the resulting LM function will cut the IS function at the full employment level of income.
Significance of Disturbances. The positions
of the functions are, unfortunately, never precisely known. Consider first uncertainty over the
position of the IS function—which, of course,
results from instability in the underlying consumption and investment functions—while
retaining the unrealistic assumption that the
position of the LM function is known. What is
known about the IS function is that it will lie
between the extremes of IS1 and IS2 in Figure 2.
If the money stock is set at some fixed level, then
it is known that the LM function will be LM1, and
accordingly income will be somewhere between
the extremes of Y1 and Y2. On the other hand,
suppose policymakers follow an interest rate
policy and set the interest rate at r0. In this case
income will be somewhere between Y1′ and Y2′,
a wider range than Y1 to Y2, and so the money
stock policy is superior to the interest rate policy.3
3. In Figure 2 and the following diagrams, the outcomes from a money stock policy will be represented by unprimed Y's, while the outcomes from an interest rate policy will be represented by primed Y's.

The money stock policy is superior because an
unpredictable disturbance in the IS function will
affect the interest rate, which in turn will produce
spending changes that partly offset the initial
disturbance.
The opposite polar case is illustrated in
Figure 3. Here it is assumed that the position of
the IS function is known with certainty, while
unpredictable shifts in the demand for money
cause unpredictable shifts in the LM function if
a money stock policy is followed. With a money
stock policy, income may end up anywhere
between Y1 and Y2. But an interest rate policy
can fix the LM function at LM3 so that it cuts the
IS function at the full employment level of income,
Yf . With an interest rate policy, unpredictable
shifts in the demand for money are not permitted
to affect the interest rate; instead, in the process
of fixing the interest rate the policymakers adjust
the stock of money in response to the unpredictable shifts in the demand for money.
In practice, of course, it is necessary to cope
with uncertainty in both the expenditure and
monetary sectors. This situation is depicted in
the outcomes from an interest rate policy will be represented by
primed Y’s.

F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Poole

Figure 4

Figure 5

SOURCE: Originally published version, p. 141.

SOURCE: Originally published version, p. 141.

Figure 4, where the unpredictable disturbances are
larger in the expenditure sector, and in Figure 5
where the unpredictable disturbances are larger
in the monetary sector.
The situation is even more complicated than
shown in Figures 4 and 5 by virtue of the fact
that the disturbances in the two sectors may not
be independent. To illustrate this case, consider
Figure 5 in which the interest rate policy is superior to the money stock policy if the disturbances
are independent. Suppose that the disturbances
were connected in such a way that disturbances
on the LM1 side of the average LM function were
always accompanied by disturbances on the IS2
side of the average IS function. This would mean
that income would never go as low as Y1, but
rather only as low as the intersection of LM1 and
IS2, an income not as low as Y1′ under the interest
rate policy. Similarly, the highest income would
be given by the intersection of LM2 and IS1, an
income not so high as Y2′.4
4. The diagram could obviously have been drawn so that an interest rate policy would be superior to a money stock policy even though there was an inverse relationship between the shifts in the IS and LM functions. However, inverse shifts always reduce the margin of superiority of an interest rate policy, possibly to the point of making a money stock policy superior. Conversely, positively related shifts favor an interest rate policy.

Importance of Interest Elasticities and Other
Parameters. So far the argument has concentrated
entirely on the importance of the relative sizes
of expenditure and monetary disturbances. But
is it also important to consider the slopes of the
functions as determined by the interest elasticities of investment and of the demand for money,
and by other parameters? Consider the pair of IS
functions, IS1 and IS2, as opposed to the pair, IS3
and IS4, in Figure 6. Each pair represents the
maximum and minimum positions of the IS function as a result of disturbances, but the pairs have
different slopes. Each pair assumes the same
maximum and minimum disturbances, as shown
by the fact that the horizontal distance between
IS1 and IS2 is the same as between IS3 and IS4.
For convenience, but without loss of generality,
the functions have been drawn so that under
an interest rate policy represented by LM2 both
pairs of IS functions produce the same range of
incomes. To keep the diagram from becoming


[Figures 6 and 7: IS-LM diagrams comparing pairs of IS and LM functions with different slopes. SOURCE: Originally published version, p. 141.]

[Figures 8 and 9: IS-LM diagrams illustrating the combination policy. SOURCE: Originally published version, p. 143.]


too messy, only one LM function, LM1, under a
money stock policy has been drawn. Now consider disturbances that would shift LM1 back and
forth. From Figure 6 it is easy to see that if shifts
in LM1 would lead to income fluctuations greater
than from Y1′ to Y2′—which fluctuations would
occur under an interest rate policy—then an
interest rate policy would be preferred regardless of
whether we have the pair IS1 and IS2, or the pair
IS3 and IS4.
The importance of the slope of the LM function
is investigated in Figure 7 for the two LM pairs,
LM1 and LM2, and LM3 and LM4. The functions
have been drawn so that each pair represents
different slopes but an identical range of disturbances. It is clear that if shifts in IS1 are small
enough, then an interest rate policy will be preferred regardless of which pair of LM functions
prevails. Conversely, if a money stock policy is
preferred under one pair of LM functions because
of the shifts in the IS function, then a money stock
policy will also be preferred under the other pair
of LM functions.
The upshot of this analysis is that the crucial
issue for deciding upon whether an interest rate
or a money stock policy should be followed is the
relative size of the disturbances in the expenditure and monetary sectors. Contrary to much
recent discussion, the issue is not whether the
interest elasticity of the demand for money is
relatively low or whether fiscal policy is more or
less “powerful” than monetary policy.
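The point can also be illustrated numerically. The sketch below sets up a linear stochastic IS-LM system with assumed parameter values (not values estimated or used in this study) and compares the variance of income under the two policies; reversing which sector's disturbance dominates reverses the ranking.

```python
import random

# Illustrative stochastic IS-LM comparison under assumed parameters (not from this study).
# IS: y = -a*r + u; money demand: m = k*y - c*r + v. A money stock policy fixes m (at 0);
# an interest rate policy fixes r (at 0). Which policy gives the smaller variance of
# income depends on the relative size of the two disturbances.
a, k, c = 1.0, 1.0, 0.5

def income_money_stock_policy(u, v):
    # Solve the IS and LM relations jointly with m fixed at zero.
    return (c * u - a * v) / (c + a * k)

def income_interest_rate_policy(u, v):
    # With r pegged, income is driven entirely by the expenditure disturbance.
    return u

def simulated_variance(policy, sigma_u, sigma_v, n=100_000, seed=0):
    rng = random.Random(seed)
    draws = [policy(rng.gauss(0, sigma_u), rng.gauss(0, sigma_v)) for _ in range(n)]
    mean = sum(draws) / n
    return sum((y - mean) ** 2 for y in draws) / n

for sigma_u, sigma_v in ((1.0, 0.1), (0.1, 1.0)):
    vm = simulated_variance(income_money_stock_policy, sigma_u, sigma_v)
    vr = simulated_variance(income_interest_rate_policy, sigma_u, sigma_v)
    print(f"sigma_u={sigma_u}, sigma_v={sigma_v}: Var(Y) money stock={vm:.3f}, interest rate={vr:.3f}")
```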
To avoid possible confusion, it should be
emphasized that the above conclusion is in terms
of the choice between a money stock policy and
an interest rate policy. However, if a money stock
policy is superior, then the steeper the LM function is, the lower the range of income fluctuation,
as can be seen from Figure 7. It is also clear from
Figure 6 that under an interest rate policy an error
in setting the interest rate will lead to a larger error
in hitting the income target if the IS function is
relatively flat than if it is relatively steep. But
these facts do not affect the choice between
interest rate and money stock policies.
The “Combination” Monetary Policy. Up to
this point the analysis has concentrated on the
choice of either the interest rate or the money
F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

stock as the policy variable. But it is also possible
to consider a “combination” policy that works
through the money stock and the interest rate
simultaneously. An understanding of the combination policy may be obtained by further consideration of the cases depicted in Figures 2 and 7.
In Figure 8 the disturbances, as in Figure 2,
are entirely in the expenditure sector. As was
seen in Figure 2, the result obtained by fixing the
money stock so that LM1 prevailed was superior
to that obtained by fixing the interest rate so that
LM2 prevailed. But now suppose that instead of
fixing the money stock, the money stock were
reduced every time the interest rate went up and
increased every time the interest rate went down.
This procedure would, of course, increase the
amplitude of interest rate fluctuations.5 But if the
proper relationship between the money stock and
the interest rate could be discovered, then the
LM function could be made to look like LM0 in
Figure 8. The result would be that income would
be pegged at Yf . Disturbances in the IS function
would produce changes in the interest rate, which
in turn would produce spending changes sufficient to completely offset the effect on income of
the initial disturbance.
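Continuing the illustrative IS-LM setup from the earlier sketch, the combination policy can be represented by a rule m = μr. With disturbances only in the expenditure sector, making μ more negative shrinks the income response to an IS shock toward zero, which is the pegged-income case described above (the parameter values remain hypothetical):

```python
# Continuation of the illustrative IS-LM sketch above, now with the policy rule m = mu*r.
# With v = 0 (disturbances only in the expenditure sector), income under the rule is
# y = (c + mu)*u / (c + mu + a*k), which shrinks toward zero as mu approaches -c.
a, k, c = 1.0, 1.0, 0.5

def income_combination_policy(u, v, mu):
    return ((c + mu) * u - a * v) / (c + mu + a * k)

for mu in (0.0, -0.25, -0.45):
    print(f"mu={mu}: income response to a unit IS shock = {income_combination_policy(1.0, 0.0, mu):.3f}")
```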
The most complicated case of all to explain
graphically is that in which it is desirable to
increase the money stock as the interest rate rises
and decrease it as the interest rate falls. In Figure 9
the leftmost position of the LM function as a
result of disturbances is LM1 when the money
stock is fixed and is LM2 when the combination
policy of introducing a positive money-interest
relationship is followed. The rightmost positions
of the LM functions under these conditions are not
shown in the diagram. When the interest rate is
pegged, the LM function is LM3. If either LM1 or
LM2 prevails, the intersection with IS1 produces
the lowest income, which is below the Y1′ level
obtained with LM3.

5. The increased fluctuations in interest rates must be carefully interpreted. In this model the IS function is assumed to fluctuate around a fixed average position. However, in more complicated models involving changes in the average position of the IS function, perhaps through the operation of the investment accelerator, interest rate fluctuations may not be increased by the policy being discussed in the text. By increasing the stability of income over a period of time, the policy would increase the stability of the IS function in Figure 8 and thereby reduce interest rate fluctuations.

But in the case of LM2, income
at Y1 is only a little lower than at Y1′, whereas
when IS2 prevails, LM2 is better than LM3 by the
difference between Y2 and Y2′. Since the gap
between Y2 and Y2′ is larger than that between Y1
and Y1′, it is on the average better to adopt LM2
than LM3 even though the extremes under LM2
are a bit larger than under LM3.
Extensions of Model. At this point a natural
question is that of the extent to which the above
analysis would hold in more complex models.
Until more complicated models are constructed
and analyzed mathematically, there is no way of
being certain. But it is possible to make educated
guesses on the effects of adding more goals and
more policy instruments, and of relaxing the rigid
price assumption.
Additional goals may be added to the model
if they are specified in terms of “closer is better”
rather than in terms of a fixed target that must be
met. For example, it would not be mathematically
difficult to add an interest rate goal to the model
analyzed above, if deviations from a target interest
rate were permitted but were treated as being
increasingly harmful. On the other hand, it is
clear that if there were a fixed-interest target,
then the only possible policy would be to peg
the interest rate, and income stabilization would
not be possible with monetary policy alone.
The addition of fiscal policy instruments
affects the results in two major ways. First, the
existence of income taxes and of government
expenditures inversely related to income (for
example, unemployment benefits) provides automatic stabilization. In terms of the model, automatic stabilizers make the IS function steeper than
it otherwise would be, thus reducing the impact
of monetary disturbances, and reduce the variance
of expenditures disturbances in the reduced-form
equation for income. This effect would be shown
in Figure 6 by drawing IS1 so that it cuts LM2 to
the right of Y1′ and drawing IS2 so that it cuts LM2
to the left of Y2′.
The second major impact of adding fiscal
policy instruments occurs if both income and the
interest rate are goals. Horizontal shifts in the IS
function that are induced by fiscal policy adjustments, when accompanied by a coordinated mone454

J U LY / A U G U S T

2008

tary policy, make it possible to come closer to a
desired interest rate without any sacrifice in
income stability. An obvious illustration is provided by the case in which the optimal monetary
policy from the point of view of stabilizing income
is to set the interest rate as in Figure 5. Fiscal
policy can then shift the pair of IS functions, IS1
and IS2, to the right or left so that the expected
value of income is at the full employment level.
If the interest rate is not a goal variable, then
fiscal policy actions that shift the IS function
without changing its slope do not improve income
stabilization over what can be accomplished with
monetary policy alone, provided the lags in the
effects of monetary policy are no longer than those
in the effects of fiscal policy. An exception would
be a situation in which reaching full employment
with monetary policy alone would require an
unattainable interest rate, such as a negative one.
These comments on fiscal policy have been
presented in order to clarify the relationship
between fiscal and monetary policy. While monetary policymakers may urge fiscal action, for the
most part monetary policy must take the fiscal
setting as given and adapt monetary policy to
this setting. It must then be recognized that an
interest rate goal can be pursued only at the cost
of sacrificing somewhat the income goal.6
All of the analysis so far has taken place
within a model in which the price level is fixed
in the short run. This assumption may be relaxed
by recognizing that increases in money income
above the full employment level involve a mixture of real income gains and price inflation.
Similarly, reductions in money income below
the full employment level involve real income
reductions and price deflation (or a slower rate
of price inflation). The model used above can be
reinterpreted entirely in terms of money income
so that departures from what was called above
the “full employment” level of income involve a
mixture of real income and price changes. Stabi6

An interest rate goal must be sharply distinguished from the use
of the interest rate as a monetary policy instrument. By a goal
variable is meant a variable that enters the policy utility function.
Income and interest rate goals might be simultaneously pursued
by setting the money stock as the policy instrument or by setting
the interest rate as the policy instrument.

F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Poole

lizing money income, then, involves a mixture
of the two goals of stabilizing real output and of
stabilizing the price level.
However, interpreted in this way the structure
of the model is deficient because it fails to distinguish between real and nominal interest rates.
Price level increases generate inflationary expectations, which in turn generate an outward shift
in the IS function. The model may be patched up
to some extent by assuming that price changes
make up a constant fraction of the deviation of
income from its full employment level and assuming further that the expected rate of inflation is a
constant multiplied by the actual rate of inflation.
Expenditures are then made to depend on the real
rate of interest, the difference between the nominal rate of interest and the expected rate of inflation. The result is to make the IS function, when
drawn against the nominal interest rate, flatter and
to increase the variance of disturbances to the IS
function. These effects are more pronounced: (a)
the larger is the interest sensitivity of expenditures; (b) the larger is the fraction of price changes
in money income changes; and (c) the larger is
the effect of price changes on price expectations.
The conclusion is that since price flexibility in
effect increases the variance of disturbances in
the IS function, a money stock policy tends to be
favored over an interest rate policy.
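One way to make this flattening argument concrete is the following minimal sketch. The symbols a_0, a_1, γ, k, and Y_f are introduced here purely for illustration (they are not the notation used above): γ is the fraction of a deviation of money income from its full employment level Y_f that takes the form of price change, k is the response of expected to actual inflation, and a_1 is the interest sensitivity of expenditures; the sketch assumes a_1 kγ < 1.

\[
\begin{aligned}
\pi &= \gamma\,(Y - Y_f), \qquad \pi^{e} = k\,\pi = k\gamma\,(Y - Y_f),\\
Y &= a_0 - a_1\,(i - \pi^{e}) + u = a_0 - a_1\,i + a_1 k\gamma\,(Y - Y_f) + u,\\
Y &= \frac{a_0 - a_1 k\gamma\,Y_f - a_1\,i + u}{1 - a_1 k\gamma}.
\end{aligned}
\]

The response of income to the nominal rate i is a_1/(1 - a_1 kγ) rather than a_1, so the IS function drawn against the nominal rate is flatter, and the disturbance u enters scaled by 1/(1 - a_1 kγ), so its variance is larger; both effects increase with a_1, γ, and k, which is the content of points (a) through (c).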

II. EVIDENCE ON THE RELATIVE
MAGNITUDES OF REAL AND
MONETARY DISTURBANCES
Nature of Available Evidence
Little evidence is available that directly tests
the relative stability of the expenditure and money
demand functions. It is necessary, therefore, to
proceed somewhat indirectly. First, simulation of
the FR-MIT model7 is used to show the probable
size of the effect on gross national product (GNP),
the GNP deflator, and the unemployment rate of
an assumed expenditure disturbance. This evidence provides some indication of the extent to
which the impact of an expenditure disturbance
depends on the choice between the money stock
and the Treasury bill rate as monetary policy control variables. This evidence bears only on the
question of what happens if an expenditure disturbance occurs, not on the relative stability of
the expenditure and money demand functions.
However, this approach is useful when combined
with intuitive feelings about relative stability.
The second type of evidence, derived from
reduced-form studies, is more directly related to
the question of relative stability; nevertheless, it
is not entirely satisfactory because the studies
examined were not designed to answer the question at hand. To supplement these studies by
other investigators, there follows a simple test of
the stability of the demand for money function.

Impact of an Expenditure Disturbance
Simulation of the FR-MIT model provides
some insight as to how the size of the impact of
an expenditure disturbance depends on the choice
of the monetary policy instrument. The simulation technique is necessary because the FR-MIT
model is nonlinear, making it impossible to obtain
an explicit expression for the reduced form.8
However, comparison of two sets of simulations
provides some interesting results. Except as indicated below, the simulations all used the actual
historical values of the model’s exogenous variables and all simulations started with 1962-I, a
starting date selected arbitrarily.
The first set of five simulations assumes an
exogenous money stock that grows by 1 percent
per quarter, starting with the actual money stock
in 1961-IV as the base. To investigate the impact
of a disturbance in an exogenous expenditures
variable, the exogenous variable “federal expenditures on defense goods” was set in one simulation at its actual level minus $10 billion; in another
at actual minus $5 billion; and in three further
simulations at actual, actual plus $5 billion, and
actual plus $10 billion.

7 For a general description of the model, see de Leeuw and Gramlich (1968).

8 In a reduced-form equation, an endogenous (that is, simultaneously determined) variable is expressed as depending only on exogenous and predetermined variables (variables taken as given for the current period).

Figure 10
Reduced-Form Parameter Estimates for Federal Defense Expenditures from FR-MIT Model
SOURCE: Originally published version, p. 146.

This procedure produces
four hypothetical observations on “disturbances”
in defense expenditures, of –10, –5, +5, and +10,
and the simulation provides four corresponding
observations for the change in income (and other
endogenous variables). By using income as an
example, the change in an endogenous variable
in response to a disturbance in defense expenditures is the difference between income simulated
by the model when defense expenditures were
set at actual historical values and when set at
actual plus 10, plus 5, and so forth. The income
obtained in the simulations, even when defense
expenditures are set at actual levels, is not the
same as the actual historical level of income both
because the assumed monetary policy differs from
the policy actually followed and because of errors
in the model itself.
By calculating the ratio of the change in an
endogenous variable to the disturbance in defense
expenditures for the four observations, four
estimates of the linear approximation to the
reduced-form parameter, or multiplier, of defense
expenditures are obtained, and these four estimates have been averaged to produce a single
estimate. Since the effects of a disturbance accumulate over time, the reduced-form parameter
estimate has been calculated for the 12 quarters
from 1962-I through 1964-IV. Exactly the same
procedure has been used for the simulations with
a fixed rate for 3-month Treasury bills. Finally, the
ratio of the parameter estimates for the reduced
forms under the money stock and interest rate
policies has been calculated with the parameter
estimates from the simulations with the exogenous money stock in the numerator of the ratio.
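The averaging just described can be restated as a small computational sketch. The income figures below are hypothetical stand-ins for FR-MIT simulation output, not results from the model; only the form of the calculation is taken from the text.

# Average, over the four shocked simulations, of the ratio of the change in
# simulated income to the defense-expenditure disturbance (a reduced-form
# multiplier estimate for a given quarter). Figures are hypothetical.

def defense_multiplier(y_disturbed, y_baseline, shocks):
    ratios = [(y - y_baseline) / shock for y, shock in zip(y_disturbed, shocks)]
    return sum(ratios) / len(ratios)

shocks = [-10.0, -5.0, 5.0, 10.0]            # defense-expenditure disturbances, $ billions

baseline_money_rule = 560.0                  # simulated GNP at actual expenditures
disturbed_money_rule = [545.5, 552.8, 567.3, 574.4]

baseline_bill_rate_rule = 560.0
disturbed_bill_rate_rule = [535.0, 547.5, 572.5, 585.0]

m_money = defense_multiplier(disturbed_money_rule, baseline_money_rule, shocks)
m_rate = defense_multiplier(disturbed_bill_rate_rule, baseline_bill_rate_rule, shocks)
print(m_money, m_rate, m_money / m_rate)     # the ratio plotted in Figure 10

Repeating the calculation quarter by quarter, and for the unemployment rate and the GNP deflator as well as for GNP, gives the series plotted in Figure 10.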
The reduced-form parameter estimates under
the two monetary policies, and the ratios of these
estimates, have been plotted in Figure 10 for 12
quarters for the reduced forms for nominal GNP,
for the unemployment rate, and for the GNP
deflator. The results are striking. A substantial
difference appears in the parameters of reduced
forms for the fourth quarter following the initial
disturbance, and the differences in the parameters
become steady thereafter. By the 12th quarter the
reduced-form parameters for the money stock
policy are only about 40 percent of those for the
interest rate policy.
The interpretation of these results is that
employment, output, and the price level are far
more sensitive to disturbances in defense expenditures under an interest rate policy than under
a money stock policy. This conclusion presumably generalizes to expenditures variables other
than defense expenditures, but the results would
differ in detail because each expenditures variable enters the FR-MIT model in a somewhat different way.
It might be argued that these results suggest
that there is no significant difference between
interest rate and money stock policies because
the reduced-form parameters are essentially identical up to about four quarters. Surely, so this
argument goes, mistakes could be discovered and offset within four quarters. There are two difficulties with this argument. The first is that the
onset within four quarters. There are two difficulties with this argument. The first is that the
FR-MIT model may overstate the length of the
lags and therefore understate the differences in
reduced-form parameters for the two policies for
the quarters immediately following a disturbance.
But the second and more important reason is that
it may not be easy to reverse the effects of the
disturbance after the disturbance has been discovered. With an interest rate policy, a very large
change in the rate might be required to offset the
effects appearing after the fourth quarter, and such
a change might not be feasible, or at least not
desirable in terms of its effects on security markets
and on income in the more distant future.
The numerical results reported above depend,
of course, on the FR-MIT model, and this model
is deficient in a number of respects. But any
model in which, other things being equal, investment and other interest-sensitive expenditures
decline when interest rates rise will show results
in the same direction.
These results may be extended to analyze the
significance of errors in forecasting exogenous
variables. Consider an explicit expression for the
reduced form for income. Let the exogenous variables such as government expenditures, perhaps
certain categories of investment, strikes, weather,
population growth, and so forth, be X1, X2, …, Xn,
and let the coefficients of these variables be α1, α2,
…, αn when the interest rate is the policy instrument, and λ1, λ2, …, λn when the money stock is
the instrument. Then the reduced form for income
when the interest rate is the instrument is
(1) Y = α0 + α1X1 + α2X2 + … + αnXn + αr r + u

where αr is the coefficient of the interest rate and
u is the random disturbance. On the other hand,
when the money stock is the instrument, the
reduced form is
(2) Y = λ0 + λ1X1 + λ2X2 + … + λnXn + λM M + ν

As discussed in Section I, the disturbance νt may have either a larger or a smaller variance than the disturbance ut. One factor tending to make
νt smaller than ut is that a money stock policy
reduces the impact of expenditures disturbances,
but another factor, the introduction into the
reduced form of money demand disturbances,
tends to make νt larger. The net result of these
two factors cannot be determined a priori.
But in formulating policy it is not possible to
reason directly from equations 1 and 2 because
many of the Xi cannot be predicted in advance
with perfect accuracy. For scientific purposes ex
post it may be possible to say that a change in
income was caused by a change in some Xi; for
policy purposes ex ante this scientific knowledge
is useless unless the change in Xi can be predicted.
It is necessary to think of each Xi as being composed of a predictable part, X̂i, and an unpredictable part, Ei.

Xi = X̂i + Ei

For policy purposes the error term in the
reduced form includes both the disturbances to
the equation and the errors in forecasting exogenous variables. The two types of errors ought to
be treated exactly alike in formulating policy.
Equations 1 and 2 can then be rewritten as follows:
(3) Y = α0 + α1X̂1 + α2X̂2 + … + αnX̂n + αr r + α1E1 + α2E2 + … + αnEn + u

(4) Y = λ0 + λ1X̂1 + λ2X̂2 + … + λnX̂n + λM M + λ1E1 + λ2E2 + … + λnEn + ν

For policy purposes the error term in the reduced-form equation 3 is the sum of the terms from α1E1 through u, and in the reduced-form equation 4 it is the sum of the terms from λ1E1 through ν.
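If the forecast errors Ei and the structural disturbance are taken to be mutually uncorrelated (an assumption introduced here only for illustration), the variance of the policy-relevant error term is simply the sum of the component variances, and the comparison between equations 3 and 4 can be sketched with hypothetical numbers.

# Variance of the policy-relevant error term: structural disturbance plus
# forecast errors in the exogenous variables, assumed uncorrelated.
# All numbers are hypothetical.

def policy_error_variance(coeffs, forecast_error_vars, disturbance_var):
    return sum(c ** 2 * v for c, v in zip(coeffs, forecast_error_vars)) + disturbance_var

forecast_error_vars = [4.0, 1.0, 9.0]    # Var(E_1), Var(E_2), Var(E_3)
alphas = [2.0, 1.5, 1.0]                 # reduced-form coefficients, interest rate instrument
lambdas = [0.8, 0.6, 0.4]                # reduced-form coefficients, money stock instrument
var_u, var_v = 3.0, 4.0                  # Var(u) and Var(v); Var(v) may exceed Var(u)

print(policy_error_variance(alphas, forecast_error_vars, var_u))    # 30.25
print(policy_error_variance(lambdas, forecast_error_vars, var_v))   # 8.36

Because the λi are smaller than the αi, the forecast-error component can leave the total variance smaller under a money stock policy even when Var(ν) exceeds Var(u).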
A systematic study of the importance of the
Ei terms cannot be made because no formal record
of errors in forecasting exogenous variables exists
insofar as the author knows. However, some
insight into the problem may be obtained by listing the variables that must be forecast. Which
variables have to be forecast depends, of course,
on the model being used. The larger econometric
models generally have relatively few exogenous
variables that raise forecasting problems because
so many variables are explained endogenously
by the model itself. The FR-MIT model has 63
exogenous variables; some of these are relatively
easy to forecast, but others are subject to considerable forecasting error. The latter include such
variables as exports, number of mandays idle due
to strikes, armed forces, and federal expenditures.
Furthermore, this model involves lagged endogenous variables in many equations; hence an inaccurate forecast of GNP next quarter will increase
the error in forecasting GNP two quarters into the
future, which in turn will lead to errors in forecasting GNP three quarters into the future, and so
forth. Errors in forecasting exogenous variables,
therefore, produce cumulative errors in forecasting GNP in future quarters.
In simpler models the forecasting problem is
more severe. Consider, for example, the opposite
extreme from the large econometric model, the
single-equation model. Convenient representatives
of such models are those spawned in the controversy over the Friedman-Meiselman paper (1963)
on the stability of the money/income relationship.
The various definitions of exogenous, or “autonomous,” spending utilized by the various authors
in this controversy are as follows:
a) Friedman-Meiselman definition: Autonomous expenditures consist of the “net
private domestic investment plus the government deficit on income and product
account plus the net foreign balance”
(Friedman and Meiselman, 1963, p. 184).
b) Ando-Modigliani definition: Autonomous
expenditures consist of two variables which
enter the reduced form with different coefficients. One variable is “property tax portion of indirect business taxes” plus “net
interest paid by government” plus “government transfer payment” minus “unemployment insurance benefits” plus “subsidies
less current surplus of government enterprises” minus “statistical discrepancy”
minus “excess of wage accruals over disbursement.” The second variable is “net
investment in plant and equipment, and
in residential houses” plus “exports” (Ando
and Modigliani, 1965a, pp. 695-96, 702).
c) DePrano-Mayer definition: The basic definition is “investment in producers’ durable
equipment, nonresidential construction,
residential construction, federal government expenditures on income and product
account, and exports. One variant of this
hypothesis subtracts capital consumption
estimates, and the other does not” (DePrano
and Mayer, 1965a, p. 739). DePrano and
Mayer also tested 18 other definitions of
autonomous expenditures (DePrano and
Mayer, 1965a, pp. 739-40).
d) Hester definition: Autonomous expenditures consist of the “sum of government
expenditure, net private domestic investment, and the trade balance” (Hester, 1964a,
p. 366). Hester also experimented with
three other definitions involving alternative treatments of imports, capital consumption allowances, and inventory investment
(Hester, 1964a, pp. 366-67).
To a considerable extent the diversity in these
definitions is misleading because except for the
Friedman-Meiselman definition all the definitions
are in fact rather similar. But whichever definition
is used, it is impossible to escape the feeling that
inaccurate forecasting of exogenous variables is
likely to be a major source of uncertainty. And
while this discussion has taken place within the
context of formal models, exactly the same problem plagues judgmental forecasting. Every forecasting method can be viewed as starting from
forecasts of “input,” or exogenous, variables and
then proceeding to judge the implications of these
inputs for GNP and other dependent, or endogenous, variables.
Regardless of what type of model is used, it
appears that for the foreseeable future it will be
necessary to forecast exogenous variables that
simply cannot be forecast accurately by using
present methods. As a result, it seems very likely
that the error term including forecast errors has
a far smaller variance in equation 4 than in equation 3. Indeed, it might be argued that as a source
of uncertainty the Ei terms are far more important
than the u or ν terms, and therefore that the
smaller size of the λi parameters as compared to
the αi parameters is of great importance. If the parameter estimates from the FR-MIT model are accepted, the standard deviation of the total random term relevant for policy (that is, including
errors in forecasting exogenous variables) would
be over twice as large under an interest rate policy
as under a money stock policy. If this argument
is correct, shifting from the current policy of
emphasizing interest rates to one of controlling
the money stock might cut average errors in half,
where errors are measured in terms of the deviations of employment, output, and price level from
target levels for these variables.

Evidence from Reduced-Form Equations
Additional insight into the relative sizes of
disturbances under interest rate and money stock
policies may be obtained by examining the controversy generated by the Friedman-Meiselman
paper on the stability of the money/income relationship (Friedman and Meiselman, 1963). In this
paper equations almost the same as equations 1
and 2 above were estimated. The equation corresponding to equation 1 differs in that the exogenous variables were assumed to consist only of a
single autonomous spending variable, as defined
above. The equation corresponding to equation 2
has the same disability for our purposes, but it
also did not include an interest rate as a variable.
Before examining the implications of the
Friedman-Meiselman findings for this study, it
should be noted that their approach was sharply
criticized in papers by Donald D. Hester (1964a),
Albert Ando and Franco Modigliani (1965a), and
Michael DePrano and Thomas Mayer (1965a).
These critics particularly attacked the Friedman-Meiselman definition of autonomous expenditures, and proposed and tested the alternative
definitions listed above. However, they also
attacked the single-equation approach and recommended the use of large models instead.
The tests of alternative equations must be
regarded as inconclusive in terms of which variable—the money stock or autonomous spending—
is more closely related to the level of income.9
9 For reasons that need not be explained here, most of this controversy was conducted in terms of equations with consumption rather than GNP as the dependent variable. In the Friedman-Meiselman study, however, results are reported for equations with GNP (Friedman and Meiselman, 1963, p. 227). Such results are also reported in Andersen and Jordan (1968, p. 17).

Both approaches achieve values for R2 of 0.98 or
0.99 so that the unexplained variance is very small
in both cases. It seems very unlikely that the addition of an interest rate variable to the equations
by using autonomous expenditures as the explanatory variable, which addition would make the
equations correspond to equation 1 above, would
make any substantial difference.
From this evidence it appears that ex post
explanations of the level of income are about as
accurate by using autonomous expenditures alone
as are those by using money stock alone. But given
the inaccuracies in forecasting autonomous expenditures, it must be concluded that ex ante explanations by using the money stock are substantially
more accurate than those with forecasts of autonomous expenditures. From this evidence, the total
random term in equation 4 appears to have a
substantially smaller variance than the total random term in equation 3.
For the reasons mentioned by the Friedman-Meiselman critics, evidence from single-equation
studies cannot be considered definitive. But neither can the evidence be ignored, especially in
light of the difficulties encountered in the construction and the use of large econometric models
such as the FR-MIT model.

Evidence on Stability of Demand for
Money Function
One of the shortcomings of the single-equation
studies discussed above is that their authors paid
too little attention to the stability of regression
coefficients over time. Consider the following
statement by Friedman and Meiselman:
The income velocity of circulation of money
is consistently and decidedly stabler than the
investment multiplier except only during the
early years of the Great Depression after 1929.
There is throughout, including those years, a
close and consistent relation between the stock
of money and consumption or income, and
between year-to-year changes in the stock of
money and in consumption or income
(Friedman and Meiselman, 1963, p. 186).

Figure 11
Velocity and Interest Rate Regressions
(regressions fitted to quarterly data, 1947-60)
SOURCE: Originally published version, p. 150.

This conclusion is based on correlation coefficients between money and income (or consumption), but what is relevant for policy is the
regression coefficient, which determines how
much income will change for a given change in
the money stock. In the Friedman-Meiselman
study, a table (Friedman and Meiselman, 1963,
p. 227) reports the regression coefficient for
income on money as being 1.469 for annual data
1897-1958. However, the same table reports regression coefficients for 12 subperiods, some of which
are overlapping, ranging from 1.092 to 2.399.
With a few exceptions, most economists
agree that velocity changes can be explained in
part by interest rate changes.10 Thus, variability
in the regression coefficients when income is
regressed on money is not evidence of the instability of the demand for money function.

10 For a convenient review of evidence on this subject, see Laidler (1969).

To obtain
some evidence on the stability of this function,
the following simple procedure was used. Quarterly data were collected on the money stock, GNP,
and Aaa corporate bond yields for 1947 through
1968. A demand for money function was fitted
by regressing the log of the interest rate on the
log of velocity, and vice versa. The regressions
were run for the four periods, 1947 through 1960,
1947 through 1962, 1947 through 1964, and 1947
through 1966. The results inside each estimation
period were then compared with the results outside the estimation period.
The results of this process for the 1947-60
estimation period are shown in Figure 11. The
observations for 1947 through 1960 are represented by dots, and the observations for 1961
through 1968 by X’s. The two least-squares regressions—log interest rate on log velocity and vice
versa—fitted for the 1947-60 period have been
drawn. From Figure 11 it appears that the relationship since 1960 has been quite similar to the
one prior to 1960.
Table 1 presents the results of applying a
standard statistical test to the regression and
postregression periods to determine whether the
demand for money function was stable. To understand this table, refer first to section A of the table,
and to the 1947-60 estimation period. Section A
reports results from regressing the log of velocity
on the log of the Aaa corporate bond rate, and the
first row refers to the regression for 1947 through
1960. The square of the regression’s standard error
of estimate is 0.00517 with 54 degrees of freedom.
There were 32 quarters in the postregression
period 1961 through 1968, and for this period the
mean-square error of velocity from the velocity
predicted by the regression is 0.00836. The ratio
of the mean-square errors from regression outside
to those inside the estimation period is given in
the column labeled “F.” Since the ratio of two
mean squares has the F distribution under the
hypothesis that both mean squares were produced
by the same process, an F test may be used to test
whether the demand for money function has been
stable. If the function has been stable, then errors
from regression outside the period of estimation
should be, on the average, the same size as the errors inside the period of estimation. For the 1947-60 regression being discussed, F = 1.62 and is significant at the 10 percent level but not at the 5 percent level.

Table 1
Tests of the Stability of the Demand for Money Function by Using Quarterly Data

                         Regression             Postregression
Estimation period     (SEE)2     d.f.         MSE       d.f.       F       Significance level

A. Log velocity regressed on log Aaa corporate bond yield
1947-60               .00517     54           .00836    32         1.62    .10
1947-62               .00484     62           .00746    24         1.54    .10
1947-64               .00509     70           .00587    16         1.15    >.25
1947-66               .00502     78           .00986    8          1.96    .10

B. Log Aaa corporate bond yield regressed on log velocity
1947-60               .00684     54           .00589    32         1.16*   >.25
1947-62               .00614     62           .00723    24         1.18    >.25
1947-64               .00570     70           .01162    16         2.04    .025
1947-66               .00537     78           .02192    8          4.08    .005

NOTE: *MSE < (SEE)2.
Looking at Table 1 as a whole it can be seen
that, for three of the regressions, the errors outside
the period of estimation are not statistically significantly larger than those inside the period of
estimation. Indeed, for the bond rate regression
for the 1947-60 period, the errors outside the
period of estimation were actually smaller, on
the average, than those inside the period of estimation. Overall, however, these results taken at
face value cast some doubt on the stability of the
demand for money function.
However, there is reason to believe that there
are problems in applying the F test in this situation. The reason is that the residuals from regression exhibit a very high positive serial correlation
as indicated by Durbin-Watson test statistics of
around 0.15 for all of the regressions. What this
means is that the effective number of degrees of
freedom is actually less than indicated in the table,
and with fewer degrees of freedom the F ratios
computed have less statistical significance than
the significance levels reported in the table. The
only way around this problem is to run a more
complex regression that removes the serial correlation of the residuals, but there is no general
agreement among economists as to exactly what
variables belong in such a regression. The virtue
of the simple regressions of velocity on an interest rate and vice versa is that this form has been
used successfully by many investigators starting
in 1954 (Latané, 1954).
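The mechanics of this test, though not the original data, can be restated compactly. The series below are synthetic placeholders for the quarterly velocity and Aaa yield observations; the estimation period is taken to be the 56 quarters of 1947-60, and the sketch also reports the Durbin-Watson statistic just discussed.

# Fit log velocity on the log bond yield inside the estimation period, compare
# out-of-period mean-square errors with an F ratio, and report the
# Durbin-Watson statistic of the in-period residuals. Data are placeholders.

import numpy as np
from scipy import stats

def stability_f_test(log_v_in, log_r_in, log_v_out, log_r_out):
    slope, intercept = np.polyfit(log_r_in, log_v_in, 1)             # least-squares fit
    resid_in = log_v_in - (intercept + slope * log_r_in)
    see_sq = (resid_in @ resid_in) / (len(log_v_in) - 2)             # (SEE)^2, d.f. = n - 2
    resid_out = log_v_out - (intercept + slope * log_r_out)
    mse_out = (resid_out @ resid_out) / len(log_v_out)               # postregression MSE
    f_ratio = mse_out / see_sq
    signif = stats.f.sf(f_ratio, len(log_v_out), len(log_v_in) - 2)  # P(F > f_ratio)
    dw = np.sum(np.diff(resid_in) ** 2) / np.sum(resid_in ** 2)      # Durbin-Watson
    return f_ratio, signif, dw

rng = np.random.default_rng(0)
log_r = np.log(np.linspace(2.5, 6.0, 88))            # stand-in for the Aaa yield, 1947-68
log_v = 0.5 * log_r + 0.1 * rng.standard_normal(88)  # stand-in for log velocity
print(stability_f_test(log_v[:56], log_r[:56], log_v[56:], log_r[56:]))

With the actual series this calculation reproduces the F ratios and significance levels of Table 1; with residuals as serially correlated as those described above, the nominal degrees of freedom overstate the precision of the test.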
The appropriate conclusion to be drawn from
this evidence would seem to be that the relationship between velocity and the Aaa corporate bond
rate is too close and too stable to be ignored, but
not close enough and stable enough to eliminate
all doubts. However, the question is not whether
an ironclad case for a money stock policy exists
but rather whether the evidence taken as a whole
argues for the adoption of such a policy. While
there is certainly room for differing interpretations of Figure 11 and Table 1, and of the other
evidence examined above, on the whole all of
these results seem to point in the same direction.
It appears that the money stock rather than interest rates should be used as the monetary policy
control variable.
III. A MONETARY RULE FOR
GUIDING POLICY
Rationale for a Rule-of-Thumb
The purpose of this section is to develop a
rule-of-thumb to guide policy. Such a rule—not
meant to be followed slavishly—would incorporate advice in as systematic a way as possible. The
rule proposed here is based upon the theory and
evidence in Sections I and II and upon a close
examination of post-accord experience.
Individual policymakers inevitably use
informal rules-of-thumb in making decisions.
Like everyone else, policymakers develop certain standard ways of reacting to standard situations. These standard reactions are not, of course,
unchanging over time, but are adjusted and developed according to experience and new theoretical ideas. If there were no standard reactions to
standard situations, behavior would have to be
regarded as completely random and unpredictable. The word “capricious” is often, and not
unfairly, used to describe such unpredictable
behavior.
There are several difficulties with relying on
unspecified rules-of-thumb. For one thing, the
rules may simply be wrong. But an even more
important factor, because formally specified rules
may also be wrong, is that the use of unspecified
rules allows little opportunity for cumulative
improvements over time. A policymaker may have
an extremely good operating rule in his head and
excellent intuition as to the application of the
rule but unless this rule can be written down
there is little chance that it can be passed on to
subsequent generations of policymakers.
An explicit operating rule provides a way of
incorporating the lessons of the past into current
policy. For example, it is generally felt that monetary policy was too expansive following the imposition of the tax surcharge in 1968. Unless the
lesson of this experience is incorporated into an
operating rule, it may not be remembered in 1975
or 1980. How many people now remember the
overly tight policy in late 1959 and early 1960
that was a result of miscalculating the effects of
the long steel strike in 1959? Since the FOMC
membership changes over time, many of the current members will not have learned firsthand the
lesson from a policy mistake or a policy success
10 years ago. If the FOMC member is not an economist, he may not even be aware of the 10-year-old lesson.
It is for these reasons that an attempt is made
in this section to develop a practical policy rule
that incorporates the lessons from past experience.
The rule is not offered as one to be followed to
the last decimal place or as one that is good for
all time. Rather, it is offered as a guide—or as a
benchmark—against which current policy may
be judged.
A rule may take the form of a formal model
that specifies what actions should be taken to
achieve the goals decided upon by the policymakers. Such a model would provide forecasts
of goal variables, such as GNP, conditional on
the policy actions taken. The structure of the
model and the estimates of its parameters would,
of course, be derived from past data and in that
sense the model would incorporate the lessons
of the past.
But in spite of advances in model building
and forecasting, it is clear that forecasts are still
quite inaccurate on the average. In a study of the
accuracy of forecasts by several hundred forecasters between 1953 and 1963, Zarnowitz concluded that the mean absolute forecast error was
about 40 percent of the average year-to-year
change in GNP (Zarnowitz, 1967, p. 4). He also
reported, “there is no evidence that forecasters’
performance improved steadily over the period
covered by the data” (Zarnowitz, 1967, p. 5).
Not only are forecasts several quarters ahead
inaccurate but also there is considerable uncertainty at, and after, the occurrence of business-cycle turning points as to whether a turning point
has actually occurred. In a study of FOMC recognition of turning points for the period 1947-60,
Hinshaw concluded that (Fels and Hinshaw, 1968,
p. 122):
The beginning date of the Committee's recognition pattern varied from one to nine months
before the cyclical turn…On the other hand,
the ending of the recognition pattern varied
from one to seven months after the turn…With

the exception of the 1948 peak, the Committee
was certain of a turning point within six
months after the NBER date of the turn. At the
date of the turn, the estimated probability was
generally below 50; it reached the vicinity of
50 about two months after the turn.

This recognition record, which is as good as that
in 10 widely circulated publications whose
forecasts were also studied in (Friedman and
Meiselman, 1963), casts further doubt on the value
of placing great reliance on the forecasts.11
Given the accuracy of forecasts at the current
state of knowledge,12 it seems likely that for some
time to come forecasts will be used primarily to
supplement a policy-decisionmaking process that
consists largely of reactions to current developments. Only gradually will policymakers place
greater reliance on formal forecasting models.13
While a considerable amount of work is being
done on such models, essentially no attention is
being paid to careful specification of how policy
should react to current developments. While
sophisticated models will no doubt in time be
developed into highly useful policy tools, it
appears that in the meantime relatively simple
approaches may yield substantial improvements
in policy. Given that knowledge accumulates
rather slowly, it can be expected that carefully
specified but simple methods will be successful
before large-scale models will be. Careful specification of policy responses to current developments is but a small step beyond intuitive policy
responses to current developments. This step
surely represents a logical evolution of the policy-formation process.
11 For further analysis of forecasting accuracy, see Mincer (1969).

12 The accuracy of forecasts may now be better than in the periods examined in the studies cited above. But without a number of years of data there would be no way of knowing whether forecasts have improved, and so forecasts must in any case be assumed to be subject to a wide margin of error at the present time.

13 It may be objected that great reliance is already placed on forecasts, at least on judgmental forecasts. However, these forecasts typically involve a large element of extrapolation of current developments. It seems fair to say that in most cases in which conditions forecast a number of quarters ahead differ markedly from current conditions, policy has followed the dictates of current conditions rather than of the forecasts.
Post-Accord Monetary Policy
That an operating guideline is needed can be
seen from the experience since the Treasury–
Federal Reserve accord. In order that this experience may be understood better, subperiods were
defined in terms of “stable,” “easing,” or “firming”
policy as determined from the minutes of the
Federal Open Market Committee. The minutes
used are those published in the Annual Reports
of the Board of Governors of the Federal Reserve
System for 1950 to 1968. The definitions of “stable,” “easing,” and “firming” periods are necessarily subjective as are the determinations of
dates when policy changed.14 The dating of policy changes was based primarily on the FOMC
minutes, although the dates of changes in the
discount rate and in reserve requirements were
used to supplement the minutes. “Stable” periods
are those in which the policy directive was
unchanged except for relatively minor wording
changes. In some cases the directive was essentially unchanged although the minutes reflected
the belief that policy might have to be changed in
the near future. While the Manager of the System
Open Market Account might change policy
somewhat as a result of such discussions, the
unchanged directive was taken at face value in
defining policy turning points.
More difficult problems of interpretation
were raised by such directives as “unchanged
policy, but err on the side of ease,” or “resolve
doubts on the side of ease.” Such statements were
used to help in defining several periods during
which policy was progressively eased (or tightened). For example, in one meeting the directive
might call for easier policy, the next meeting might
call for unchanged policy but with doubts to be
resolved on the side of ease, and a third meeting
might call for further ease. These three meetings
would then be taken together as defining an “easing” period.

14 The author was greatly assisted in these judgments by Joan Walton of the Special Studies Section of the Board's Division of Research and Statistics. Miss Walton, who is not an economist, carefully read the minutes of the entire period and in a large table recorded the principal items that seemed important at each FOMC meeting. Having a noneconomist read the minutes tempered the inevitable tendency for an economist to read either too much or too little into the minutes. However, the final interpretation of the minutes rested with the author.

However, unless accompanied
by other FOMC meetings clearly calling for a
policy change, statements such as those calling
for an “unchanged policy with doubts resolved
on the side of ease” were interpreted as not calling for a policy change.
Some important monthly economic time
series for the post-accord period are plotted in
Figure 12. The heavy vertical lines represent
periods of “stable,” “easing,” and “firming” policy
as indicated by “S,” “E,” and “F” at the bottom
of the figure. Except for the unemployment rate,
the average of each series for each policy period
has been plotted as a horizontal line.
Two features of the post-accord experience are especially noteworthy. First, decisions
to change policy have been taken about as close
to the time when, in retrospect, policy changes
were needed as could be expected in the light of
existing knowledge.15 There have been mistakes
in timing, but the overall record is impressive.
The second major feature of this period is that
policy actions, as opposed to policy decisions,
have been in the correct direction if policy actions
are defined by either free reserves or interest rates,
but not if policy actions are defined in terms of
either the money stock or bank credit.
To examine the timing question in more
detail, a useful comparison is that between business cycle turning points (as defined by the
National Bureau of Economic Research) and
decisions to change policy. The post-accord period
begins at a time when the U.S. economy was beset
by inflation stemming from the war in Korea. The
dates of the principal changes in policy and of
the business cycle peaks and troughs are listed
in Table 2. The policy dates are those that define
the beginning of the “stable,” “easing,” and “firming” periods indicated in Figure 12.
The decision to ease policy was made prior
to the business cycle peaks of July 1953 and May
1960. The decision in 1957 was made in the fourth
month following the cycle peak in July, but as can
be seen from Figure 12, the unemployment rate
had not risen very much through October.

15 For additional views on the timing of Federal Reserve decisions, see Brunner and Meltzer (1964) and Fels and Hinshaw (1968).

Given
the amount of uncertainty always present in interpreting business conditions, this lag must be
considered to be well within the margin of error
to be expected for stabilization policy. However,
the easing policy decision in 1968 was clearly a
mistake in retrospect but not in prospect given
the expectations held by the majority of economists that the tax increase would significantly
temper the economic boom.
Firming policy decisions were also generally
well timed. Following the 1953-54 recession,
decisions to firm policy in small steps were taken
from December 1954 to September 1955, as unemployment declined to about 4 percent of the labor
force. During the recovery period after the 1957-58
recession, firming decisions were taken from July
1958 to May 1959. There was also a series of firming decisions taken from the end of 1961 to 1966.
Especially noteworthy are those taken from
December 1965 to August 1966, in response to
the beginning of inflation associated with the
escalation of military activity in Vietnam. The
easing policy decisions taken in late 1966 and
early 1967 were fully appropriate in light of the
economic slack that developed in 1967.
Even from the point of view of those who
doubt the importance of fiscal policy, this record
of the timing of policy decisions in the post-accord
period is remarkably good. The timing record
does not suggest that much attention was paid to
forecasts, but this lack of attention was perhaps
not unfortunate given the accuracy of forecasts
during the period. From this point of view, the
only real mistake was the easing decision taken
in 1968. Of course, those who believe that a steady
rate of growth of the money stock is better than
any discretionary policy likely to be achieved in
practice may read this record as supporting their
thesis. But the post-accord record of the timing
of policy decisions is certainly encouraging to
those who believe that the lags in the effects of
policy are short enough, and the effects predictable enough, to make discretionary monetary
policy a powerful stabilization tool if only decisions can be made promptly.
While the System’s performance in the timing
of policy decisions has been commendable, the
same cannot be said for the actions taken in
response to the decisions.
Table 2
Dates of Principal Monetary Policy Decisions and of Business Cycle Peaks and Troughs

Business cycle turning points: Peak, July 1953; Trough, August 1954; Peak, July 1957; Trough, April 1958; Peak, May 1960; Trough, February 1961.

FOMC policy decisions (starting date and policy):
1951   March 1-2       Accord
1952   September 25    Firming
1952   December 8      Stable
1953   June 11         Easing
1953   December 15     Stable
1954   December 11     Firming
1955   October 4       Stable
1957   November 12     Easing
1958   April 15        Stable
1958   July 29         Firming
1959   June 16         Stable
1960   March 1         Easing
1960   August 16       Stable
1961   October 24      Firming
1961   November 14     Stable
1962   June 19         Firming
1962   July 10         Stable
1962   December 18     Firming
1963   January 8       Stable
1963   May 7           Firming
1963   August 20       Stable
1964   August 18       Firming
1965   March 2         Stable
1965   December 14     Firming
1966   September 13    Stable
1966   November 1      Easing
1967   May 2           Stable
1967   November 27     Firming
1968   April 30        Stable
1968   July 16         Easing
1968   August 13       Stable
1968   December 17     Firming
1969   April 29        Stable
Figure 12
Post-Accord Monetary Policy

SOURCE: Originally published version, p. 154.

Figure 12, cont’d
Post-Accord Monetary Policy

SOURCE: Originally published version, p. 155.

In the earlier discussion
the purposely vague terms “easing,” “firming,”
and “stable” were used to describe policy decisions. These terms were meant to convey the
notions that policymakers wanted, respectively,
to accelerate, decelerate, or maintain the pace of
economic advance. The question that must now
be examined is whether policy actions did in fact
tend to accelerate, decelerate, or maintain the
level of economic activity.
Policy actions were in accord with policy
decisions if these actions are measured by either
the 3-month Treasury bill rate or free reserves. The
bill rate rose in “firming” periods, fell in “easing”
periods, and tended to remain unchanged in
“stable” periods. However, there was some tendency for the bill rate to rise in “stable” periods
following “firming” periods, and to fall in “stable”
periods following “easing” periods, a pattern not
inconsistent with the interpretation of policy
being offered in this study. Similar comments
apply to free reserves.
But the picture is quite different if policy
actions are measured by the rate of growth of the
money stock. Careful study of Figure 12 will make
this point clear. The growth rate declined in
response to the “firming” policy decision in late
1952, and again in the “stable” period in early
1953. This behavior was, of course, consistent
with the “firming” decision. But the rate of growth
declined further following the “easing” decision
in June 1953 and remained low until the middle
of 1954. The unemployment rate rose rapidly
from its low of 2.6 percent at the cycle peak in
July 1953 to 6.0 percent in August 1954, the cycle
trough; the money stock was at the same level in
April 1954, 9 months following the cycle peak
and 10 months following the decision to adopt
an “easing” policy, as it had been at the peak.
The same pattern that had appeared during
the 1953-54 recession appeared again at the time
of the 1957-58 recession. The rate of growth of
the money stock declined in 1957 prior to the
cycle peak. (The Treasury bill rate also rose substantially.) But after the decision to adopt an
“easing” policy in November 1957, the growth
rate of the money stock declined further. From
October 1957 to January 1958, the money stock
fell at a 2.9 percent annual rate; from the cycle
peak in July to October it had fallen at a 1.5 percent annual rate.
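The annual-rate figures quoted in this discussion are growth rates of the money stock expressed at an annual rate. A minimal sketch of the arithmetic, using hypothetical dollar levels and assuming the rates are compounded (the text does not say how the original figures were annualized), is:

# Growth of the money stock between two observations, expressed at a
# compound annual rate. The dollar levels are hypothetical.

def annual_rate(level_start, level_end, months):
    return ((level_end / level_start) ** (12.0 / months) - 1.0) * 100.0

print(round(annual_rate(136.9, 135.9, 3), 1))   # about -2.9 percent per year over 3 months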
The rate of growth of the money stock
increased substantially in February 1958, and it
remained at the higher level during the “stable”
policy period April to July. There followed a
period of “firming” policy decisions from the end
of July 1958 to May 1959; however, the average
growth rate of the money stock during this period
was virtually identical to the average in the preceding “stable” period. But in the “stable” period
from June 1959 to February 1960, the rate of
growth of money, at –2.2 percent, was much lower
than in the preceding “firming” period. This rate
of growth of money can hardly be considered
appropriate in the light of the fact that except for
one month the unemployment rate was continuously above 5 percent. However, the picture was
confused by a long steel strike.
The decision to ease policy was taken on
March 1, 1960, but the rate of growth of the money
stock remained negative until July. The rate of
growth of money fell following the “firming”
policy decisions of October 1961 and June 1962.
In spite of another firming decision in December
1962 the rate of growth then increased, and it
continued to rise during the “firming” period in
1963, maintaining the same rate in the following
“stable” period. In August 1964, another “firming”
decision was taken, and the growth rate trended
down during the “firming” period from August
1964 to February 1965.
During the “stable” period from March to
November 1965, the Vietnam war heated up. In
the second half of 1965 the growth rate of money
was 6.1 percent compared with 3.0 percent during
the first half. The “firming” policy decision came
in December, but the rate of growth of money
averaged over 6 percent for the months December
through April 1966. At this point monetary growth
ceased. In January 1967 the money stock was
actually less than in May 1966—there having
been no increase in the growth rate in the months
immediately following the “easing” decision of
November 1, 1966.
The growth rate of money then accelerated
during the “stable” period from May through

October 1967; for the period as a whole growth
averaged 8.7 percent. In the following “firming”
period November 1967 through April 1968, the
rate of growth of the money stock was lower but it
was still relatively high at 5.1 percent. The growth
rate then rose to 9.6 percent in the “stable” period
May through July 1968 and thereafter fell to a little
less than 6 percent in the July-November 1968
period following the “easing” decision of July 16,
1968.
There ensued a “firming” period from
December 1968 through April 1969. Although
original figures indicated that monetary growth
was relatively little during this period, a revision
in the money stock series showed that the rate
averaged 5.5 percent for the period as a whole.
The rate following April was lower, especially in
the June-December 1969 period, which saw no
net growth in the money stock.
A broadly similar view of the timing of policy
actions is obtained from a careful examination of
the rate of growth of total bank credit. However,
as shown in Figure 12, this series is quite erratic
and much more difficult to interpret than the
series on the rate of growth of the money stock.
The proper way to interpret these results
would seem to be as follows. When interest rates
fell in a recession, policy was easier than it would
have been if interest rates had not been permitted
to fall. But if the money stock was also falling, or
growing at a below-average rate, policy was
tighter than it would have been had money been
growing at its long-run average rate. Similar statements apply to rising interest rates and above-average monetary growth in a boom.

A Monetary Rule
Given the arguments of Sections I and II on
the advantages of controlling the money stock as
opposed to interest rates, a logical first step in
developing a policy guideline is to examine cases
clearly calling for ease or restraint. Consider first
a recession. To insure that monetary policy is
expansionary, the rule might be that interest rates
should fall and the money stock should rise at
an above-average rate. This policy avoids two
possible errors.
The first is illustrated in Figure 13. If the IS
function shifts down from IS1 to IS2 while the LM
function shifts from LM1 to LM2, the interest rate
will fall from r1 to r2. The shift from LM1 to LM2
could be caused by a shift in the demand for
money with the stock of money unchanged. But
this shift could also be caused by a decline in the
stock of money, perhaps because of an attempt
by policymakers to keep the interest rate from
falling too rapidly. However, in terms of income
it is clearly better to permit the interest rate to fall
to r3 by maintaining the stock of money fixed, and
better yet to shift the LM function to the right of
LM1 by increasing the stock of money.
The point is the simple one that monetary
policy should not rely simply on a declining
interest rate in recession but should also insure
that the money stock is growing at an adequate
rate. The LM function may still shift to LM2 in
spite of monetary growth because of an increased
demand for money; without the monetary growth,
however, this shift in the demand for money
would push the LM function to the left of LM2 and
income would be even lower.
The second type of error avoided by the proposed policy rule is illustrated in Figure 14. Again,
it is assumed that the situation is one of recession.
With a fixed money stock, an increase in the
demand for money will shift the LM function
from LM1 to LM2, tending to reduce income. However, if the interest rate is prevented from rising
above r1, the increased demand for money is met
by an increased supply of money.
Maintaining monetary growth and a declining
interest rate in recession insures that the contribution of monetary policy is expansive. Increases
in the demand for money, unless accompanied
by a falling IS function, are fully offset by preventing increases in the interest rate. The greater the
fall in the IS function the smaller the offset to an
increased demand for money. However, in no
case should a fall in the IS function be permitted
to cause a fall in the money stock.
The policy proposed does not, of course,
guarantee an expansion of income. No such guarantee is possible because downward shifts in the
IS function may exceed any specified shift in the
LM function.

Figure 13
SOURCE: Originally published version, p. 159.

Figure 14
SOURCE: Originally published version, p. 159.

But more important than theoretical
possibilities are empirical probabilities. For all
practical purposes the problem is not how to
insure expansion in a recession but how to trade
off the risks of too much expansion against too
little. The discussion of Figures 13 and 14 was
entirely in terms of encouraging income expansion, or limiting further declines, in the face of
depressing disturbances. But disturbances may
be expansionary in a recession, and such disturbances may combine with expansionary policy
to create overly rapid recovery from the recession.
Consider again Figure 13, but suppose the
initial position is as shown by IS2 and LM2. If the
interest rate is not permitted to rise, a shift to IS1
will lead to a large increase in income to the level
given by the intersection of IS1 with a horizontal
LM function drawn at r2. This situation can be
avoided only if the interest rate is permitted to
rise. The natural question is how the interest rate
can be permitted to rise within a recession policy
of pushing the interest rate down and maintaining above-average monetary growth. The answer
is that the recession policy should be followed
only if the interest rate can be kept from rising
with a monetary growth rate below some upper
bound.
Exactly the same analysis running in reverse
applies to a policy for checking an inflationary
boom. In a boom interest rates should rise and
monetary growth should be below average. However, there must be a lower limit on monetary
growth to avoid an unduly contractionary policy.
Having presented the basic ideas behind the formulation of a monetary rule, it is now necessary
to become more specific about the rule. After
specifying the rule in detail, it will be possible to
discuss the considerations behind the specific
numbers chosen.
The proposed monetary policy rule-of-thumb
is given in Table 3. The rule assumes that full
employment exists when unemployment is in the
4.0 to 4.4 percent range and that monetary growth
in the 3 to 5 percent range is consistent with price
stability. At full employment the Treasury bill rate
may rise or fall, either because of market pressures
or because of small adjustments in monetary
policy; however, monetary growth should remain in the 3 to 5 percent range.

Table 3
Proposed Monetary Policy Rule-of-Thumb (Percent)

                                       Rule for month*
Unemployment rate           Direction of Treasury bill rate     Growth of money stock
previous month              (3-month)                           (annual rate)
0-3.4                       Rising                              1-3†
3.5-3.9                     Rising                              2-4†
4.0-4.4                     Rising or falling                   3-5
4.5-4.9                     Falling                             4-6‡
5.0-5.4                     Falling                             5-7‡
5.5-5.9                     Falling                             6-8‡
6.0-100.0                   Falling                             6-8

NOTE: *The 3-month bill rate is to be adjusted in the indicated direction provided that monetary growth is in the indicated range. If the bill rate change cannot be achieved within the monetary growth rate guideline, then the bill rate guideline should be abandoned. †If the bill rate the previous month was below the bill rate 3 months prior to that, then the upper and lower limits on monetary growth are both increased by 1 percent. ‡If the bill rate the previous month was above the bill rate 3 months prior to that, then the upper and lower limits on monetary growth are both reduced by 1 percent.
When unemployment drops below 4 percent,
the rule calls for a restrictive monetary policy.
The bill rate should rise and monetary growth
should be reduced. If the bill rate and monetary
growth guidelines are not compatible, then the
monetary guideline should be binding. For example, suppose that unemployment is in the 3.5 to
3.9 percent range. If monetary growth below 2
percent would be required to obtain a rising bill
rate, then monetary growth should be 2 percent
and the bill rate be permitted to fall. If this situation persists so that the bill rate falls for several
months in spite of the low monetary growth, then
the limits on monetary growth should be increased
as indicated in the † note to Table 3. The reason
for this prescription is that the bill rate on the
average turns down 1 month before the peak of
the business cycle (Holt, 1962, p. 111). Unemployment, on the other hand, may increase relatively
little in the early months following a cycle peak.
Tying monetary growth to the bill rate in the way
indicated in the † note to Table 3 produces a more
timely adjustment of policy than relying on the
unemployment rate alone.
The proposed rule calls for a falling bill rate
and a relatively higher rate of monetary growth
as unemployment rises above the 4.0 to 4.4 percent range. The rule for high unemployment situations calls for adjusting the monetary growth
rate downward when the bill rate is consistently
rising as indicated by the ‡ note to Table 3. The
reasoning behind this adjustment is exactly parallel to the reasoning above for low unemployment
situations.
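Because the rule depends only on the previous month's unemployment rate and on the recent direction of the bill rate, Table 3 and its daggered notes can be restated as a short lookup. The sketch below is only a restatement of the table (the function and variable names are illustrative); the asterisked provision that the monetary guideline binds whenever the two guidelines conflict is left to the user of the rule.

# Restatement of the Table 3 rule-of-thumb. Inputs are the previous month's
# unemployment rate and the bill-rate comparison used in the daggered notes
# (previous month's bill rate versus the rate three months before that).

def monetary_rule(unemployment, bill_rate_last_month, bill_rate_three_months_prior):
    if unemployment < 3.5:
        direction, low, high, note = "rising", 1.0, 3.0, "dagger"
    elif unemployment < 4.0:
        direction, low, high, note = "rising", 2.0, 4.0, "dagger"
    elif unemployment < 4.5:
        direction, low, high, note = "rising or falling", 3.0, 5.0, None
    elif unemployment < 5.0:
        direction, low, high, note = "falling", 4.0, 6.0, "double dagger"
    elif unemployment < 5.5:
        direction, low, high, note = "falling", 5.0, 7.0, "double dagger"
    elif unemployment < 6.0:
        direction, low, high, note = "falling", 6.0, 8.0, "double dagger"
    else:
        direction, low, high, note = "falling", 6.0, 8.0, None

    # Dagger note: at low unemployment, a bill rate below its level of three
    # months earlier raises both monetary growth limits by 1 percentage point.
    if note == "dagger" and bill_rate_last_month < bill_rate_three_months_prior:
        low, high = low + 1.0, high + 1.0
    # Double-dagger note: at high unemployment, a bill rate above its level of
    # three months earlier lowers both limits by 1 percentage point.
    if note == "double dagger" and bill_rate_last_month > bill_rate_three_months_prior:
        low, high = low - 1.0, high - 1.0
    return direction, (low, high)

# Example: 3.7 percent unemployment with the bill rate below its level of three
# months earlier calls for a rising bill rate and 3 to 5 percent monetary growth.
print(monetary_rule(3.7, 4.10, 4.35))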
The proposed monetary rule has the virtues
of simplicity and dependence on relatively well-established economic doctrine. Because of its
simplicity, the basic ideas behind the rule can be
explained to the noneconomist. The simplicity of
the rule also will make possible relatively easy
evaluations of the rule’s performance in the future
if the rule is followed. With more complicated
rules it would be much more difficult to know
how to improve the rule in the future because it
would be difficult to judge what part of the rule
was unsatisfactory. Since, as has been repeatedly
emphasized above, the rule is not proposed as
being good for all time, it is best to start with a
simple rule and then gradually to introduce more
variables into the rule as experience accumulates.
In designing the rule, the attempt was made to
base the rule on fairly well-established economic
knowledge. There is, of course, a great deal of
debate as to just what is and what is not well
established. What can be done, and must be done,
is to explain as carefully as possible the assumptions upon which the rule is based, with full recognition that other economists may not accept these
assumptions.
First, the evidence for the importance of
money is impressive. It seems fair to say that very
few economists believe today that changes in the
stock of money have nothing to do with business
fluctuations. Rather, the argument is over the
extent to which monetary factors are important.
Some no doubt will feel that the 2-percentage-point ranges on monetary growth specified by the
rule are excessively narrow; however, it should
be noted that a 4 percent growth rate is double a
2 percent growth rate. Also important is the fact
that the rule is meant to serve as a guideline rather
than be absolutely binding. Since policy should
deviate from the rule if there is good and sufficient
reason—such as wartime panic buying—a further
element of flexibility exists within the framework
of the rule.
The rule is specified in terms of changes in
the bill rate and the monetary growth rate, with
the monetary growth rate being tied to the unemployment rate and to changes in the bill rate in the
recent past. This formulation has been designed
to avoid what seem to be the most obvious errors
of the past. Over the years the monetary growth
rate has been lowest at business cycle peaks and
in the early stages of business contractions, and
highest at cycle troughs and in the middle stages
of business expansions. The highest rate of monetary growth since the Treasury–Federal Reserve
accord has been during the inflation associated
with escalation of military operations in Vietnam.
For purposes of smoothing the business cycle, so
far as this author knows, there is no theory propounded by any economist that would call for
high monetary growth during inflationary booms
and low monetary growth during recessions. Such
behavior of the money stock could only be optimal
within a theory in which money had little or no
effect on business fluctuations and in which other
goals such as interest rate stability were important.
Being based on the unemployment rate and
bill rate changes in the recent past, the proposed
monetary rule does not rely on forecasting. Nor
does the rule depend on the current and projected
stance of fiscal policy. Both of these factors ought
to be included in applying the rule by adjusting
the rate of growth of the money stock within the
rule limits, or even by going outside the limits.
But given the accuracy of economic forecasts
under present methods, and given the current
uncertainty over the size of the impact of fiscal
policy (not to mention the hazards in forecasting
federal receipts and expenditures), it does not
appear that these variables can be systematically
incorporated into a rule at the current state of
knowledge.

Tests of the Proposed Rule
Three types of evidence on the value of the rule
are examined below. The first approach involves
a simple comparison of the rule with the historical record to show that the rule would generally
have been more expansionary (contractionary)
than actual policy when actual policy—in the light
of subsequent economic developments—might be
judged to have been too contractionary (expansionary). The second approach examines the
cyclical behavior of the estimated residuals from
a simple demand for money function to show that
it is unlikely that the proposed rule would interact with the disturbances to produce an excessively inflationary or deflationary impact. Both
these approaches are deficient because they rely
heavily on the historical record, a record that
would have been quite different had the rule been
followed in the past. To avoid this difficulty, a
third approach uses simulation of the FR-MIT
model, but the results do not appear very useful
because of shortcomings in this model.
An Impressionistic Examination of the Rule.
Broadly speaking, the results of comparing the
rule with the historical record since the Treasury–
Federal Reserve accord in March 1951 are these.
The rule would have provided a substantially
tighter monetary policy than the actual during
the inflationary period from the accord until
about September 1952. At that point, actual
policy as measured both by the rate of growth
of the money stock and by the 3-month bill rate
became considerably tighter. In the last quarter
of 1952, actual policy was in accord with the
rule, but thereafter it tightened even further. In
the 9 months following the cyclical peak in July
1953, the money stock had a zero rate of growth
while the unemployment rate rose from 2.6 percent to 5.9 percent. Under the rule the rate of
growth of the money stock would never have
gone below 1 percent and would have steadily
increased as unemployment rose.
Actual policy became more expansive in the
second quarter of 1954, and the cycle trough was
reached in August. However, the rule would have
been considerably more expansive, and it would
have remained more expansive than the actual
all through the 1955-56 boom. Inasmuch as the
unemployment rate remained near 4.0 percent
from May 1955 through August 1957, the rule
would have been too inflationary during this
period. However, it can be argued that monetary
policy was overly restrictive before the cycle peak
in July 1957, since in the year prior to the peak
the money stock grew only by 0.7 percent. Less
subject to dispute is the fact that policy was far
too restrictive after the peak; in the 6 months
following the peak the money stock fell at an
annual rate of 2.2 percent, and at the same time
the unemployment rate rose from 4.2 percent to
5.8 percent.
The rule would have been considerably more
expansive all during the high unemployment
period of 1958-59, and it would have prevented
the declines in the money stock in late 1959 and
early 1960. At the peak in May 1960 the unemployment rate was 5.1 percent, and the money
stock had fallen by 2.1 percent in the previous
12 months. Unlike the periods following peaks
in 1954 and 1957, policy became more expansive
immediately after the May 1960 peak, although
not so expansive as called for by the proposed
rule.
From the trough in February 1961 through
June 1964, the unemployment rate never declined
below 5 percent. Under the rule, policy would
have been more expansive than the actual policy
followed throughout this period, especially as
compared with the March-September 1962 period,
during which the money stock fell slightly. Unemployment fell rapidly in 1965 with the Vietnam
build-up; the rule would have been more expansive than actual through July 1965 and then less
expansive than actual through April 1966. Indeed,
in the 9-month period prior to April 1966, with
the unemployment rate falling from 4.4 percent
to 3.8 percent, monetary growth accelerated to a
6.6 percent annual rate; the proposed rule would
have first called for monetary growth in the 3 to
5 percent range, and then in the 2 to 4 percent
range starting in February 1966, following the
drop in the unemployment rate below 4.0 percent
in January. Finally, the negative growth rates of
money in the 1966 credit crunch would have
been avoided under the rule, as would the high
rates of growth in 1967 and 1968.
This impressionistic look at the proposed
rule may be supplemented by a simple scoring
system for judging when the rule would have
been in error. For each month during the sample
period it was determined whether the rule would
have been more or less expansive than the actual
policy, or about the same as the actual policy. The
unemployment rate 12 months from the month
in question was used to indicate whether or not
the policy was correct, with a desired range of
unemployment of 4.0 to 4.4 percent. The rule was
deemed to have made an error if: (1) the actual
policy was in accord with the rule, but unemployment 12 months later was not in the desired range;
(2) the rule called for a more expansive policy
than the actual, and unemployment 12 months
later was below the desired range; and (3) the
rule called for a less expansive policy than the
actual, and unemployment 12 months later was
above the desired range.
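Restated in code, the scoring criterion looks roughly as follows; the function name and the "more/less/same" encoding of the rule's stance relative to actual policy are illustrative assumptions, not part of the original analysis.

```python
# Illustrative restatement of the monthly scoring criterion (assumed encoding).
# stance:   how the rule compares with actual policy in a given month,
#           one of "more" (more expansive), "less", or "same".
# u_future: unemployment rate 12 months (or 9, or 6) after that month.
DESIRED_LOW, DESIRED_HIGH = 4.0, 4.4   # desired unemployment range, percent

def rule_error(stance, u_future):
    """Return True if the rule is scored as having made an error."""
    in_range = DESIRED_LOW <= u_future <= DESIRED_HIGH
    if stance == "same":
        return not in_range                      # criterion (1)
    if stance == "more":
        return u_future < DESIRED_LOW            # criterion (2)
    if stance == "less":
        return u_future > DESIRED_HIGH           # criterion (3)
    raise ValueError("stance must be 'more', 'less', or 'same'")

# Example: the rule was more expansive than actual policy and unemployment a
# year later stood at 3.8 percent -> counted as an error.
print(rule_error("more", 3.8))   # True
```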
Since the latest data used in this analysis
were for July 1969, comparison of the rule with
actual policy ends July 1968. Starting the sample
with 1952, the first full year after the accord, provides a total of 199 months. Based on the criterion
described above, the rule would have been in
error in 63 months. If the criterion is changed by
substituting the unemployment rate 9 months
ahead instead of 12 months ahead, the rule has
62 errors; using the unemployment rate 6 months
ahead yields 59 errors.
Some of these errors are of negligible import.
For example, in March 1953 the rule calls for a
money growth rate of 2 to 4 percent, but the actual
was 1.9 percent. Thus, the rule would have been
more expansive than the actual this particular
month, a mistake since unemployment was too
low and inflation too high during this period.
However, the rule would have been less expansive than actual in every one of the preceding 6
months and in all but one of the 6 months following this “mistake.” Except for scattered errors
such as the one just discussed, most of the rule
errors occurred in two separate periods. The first
is the 2-year period following the cycle trough in
August 1954, during which time the rule would
have been too expansive. The second is the last
half of 1964 and the first half of 1965, when the
rule would have been too expansive in light of
the subsequent sharp decline in unemployment.
Unless one has completed a careful examination of the data, there is a tendency to underestimate how rapidly the economy can change. For
example, from the cycle peak in July 1953 to the
cycle trough 13 months later, the unemployment
rate rose by 3.4 percentage points; and from the
peak in July 1957 to the trough 9 months later in
April 1958, it rose by 3.2 percentage points.
Changes in the other direction have tended to be
somewhat less rapid, but significant nonetheless.
In the year following the trough in August 1954,
the unemployment rate declined 2.0 percentage
points, and it declined 2.2 percentage points in
the year following the trough in April 1958. In
January 1965 unemployment was 4.8 percent and
the problem was still one of how to reach full
employment. A year later the rate was 3.9 percent
and the problem was inflation.
Thus, it appears that for the most part the rule
would have been superior to policy actually followed. Of course, the rule is not infallible and
would have erred on a number of occasions. But
in spite of these errors—and it should be recognized that some errors are inevitable no matter
what rule or which discretionary policymakers
are in charge—the proposed rule has the great
virtue of turning policy around promptly as
imbalances develop.
Relationship of the Rule to Monetary
Disturbances. Since the rule was developed on
the basis of the theoretical and empirical analysis
of Sections I and II, which emphasized the relative stability of the demand for money, it is appropriate to conduct a systematic examination of
the disturbances in the demand for money. It
will be recalled that the rule was formulated in
such a way as to insure expansionary policy
action in a recession and contractionary policy
action in a boom. However, it was recognized
that disturbances in the expenditure sector and/or
in the monetary sector might reinforce policy
actions leading to an excessively expansionary
or contractionary effect on income. If there were
a significant chance of these excessive effects
occurring, then the rule proposed would be overly
“aggressive” and a rule involving a smaller range
of monetary growth rates would be in order.
To provide some evidence of the effect of
disturbances in the money demand function, the
residuals from the simple velocity function tested
in Section II were examined carefully. The technique involved regressing velocity on the Aaa corporate bond rate, and vice versa, for the 1947-68
period and then comparing the residuals with
turning points in the business cycle. The reader
may make these comparisons visually from
Figure 15. At the bottom of this figure cycle
peaks and troughs are identified by “P” and “T,”
respectively.
The residuals from the estimated equations
suggest that the demand for money has contractionary disturbances near business cycle peaks
and expansionary disturbances near cycle troughs.
The residuals have the same turning points for
the regression of velocity on the interest rate as
for the regression of the interest rate on velocity.
The residual peaks occur at or before the cycle
peaks, while the residual troughs occur at or
after the cycle troughs.
Figure 15
Residuals from Velocity Regression Compared with Business-Cycle Turning Points
SOURCE: Originally published version, p. 164.

To assess the significance of these findings,
consider the following simple view as to the
dynamics of monetary effects. In the short run,
income is a predetermined variable in the demand
for money function. An increase in the money
stock makes the interest rate lower than it would
be otherwise, and this eventually leads to expansion in investment and income. A downward
disturbance in the demand for money function
has the same effect.
Given this view of monetary dynamics,
Figure 15 suggests the following conclusions.
Shifts in the demand for money tend to be contractive in their effect on income in the late stages
of a business cycle expansion, implying that a
restrictive monetary policy must not be pushed
too hard. Then, shortly before the cycle peak, the
shifts apparently tend to become expansive. This
effect is fortunate since it is only after the cycle
peak that rising unemployment would trigger a
policy change under the proposed rule. However,
there appears to be little danger that the rule would
be overly expansionary because after the cycle
trough, while policy is still expansionary, contractive shifts in the demand for money occur.
Simulations of the FR-MIT Model. The final
technique used to test the proposed monetary
rule was to simulate the FR-MIT model under
the rule. As explained below, the results are of
questionable value but are presented anyway
for the sake of completeness and in order not to
suppress results unfavorable to the proposed rule.

Figure 16
Simulations of Unemployment in FR-MIT Model
SOURCE: Originally published version, p. 165.

To simplify the computer programming, the
rule used in the simulations is not exactly the
same as the one proposed in Table 3 above. The
proposed rule, it will be recalled, involved a bill
rate guideline and a money stock guideline. If, for
example, the bill rate cannot be pushed up without pushing monetary growth below the lower
limit in the money guideline, the proposed rule
calls for setting monetary growth at its lower limit.
The simulation rule, on the other hand, ignores
the bill rate guideline and simply sets the monetary growth rate at the midpoint of the range specified by the proposed rule.
Another difference, and no doubt a more
important one, between the proposed rule and the
simulation rule is that the simulation rule had to
be specified in terms of quarterly data since the
FR-MIT model uses quarterly data. In the simulation rule, the growth rate of the money stock
depends on the level of unemployment determined by the model in the previous quarter. The
growth rate of the money stock was modified by
past changes in the bill rate, as in footnotes † and ‡
to Table 3, except that the relevant bill rate change
was in terms of the previous quarter and the quarter before that.
The simulation rule, then, reacts somewhat more
slowly to unemployment trends than does the
proposed rule.
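A rough sketch of the simulation rule, under the assumption that it uses the same unemployment brackets as Table 3 with the growth rate set at the midpoint of each range, might look as follows; the variable names and the exact form of the bill-rate adjustment are illustrative, not taken from the original program.

```python
# Illustrative sketch of the quarterly simulation rule; helper names are made up.
MIDPOINTS = [            # (unemployment bracket, midpoint of money growth range)
    ((0.0, 3.4), 2.0), ((3.5, 3.9), 3.0), ((4.0, 4.4), 4.0),
    ((4.5, 4.9), 5.0), ((5.0, 5.4), 6.0), ((5.5, 5.9), 7.0), ((6.0, 100.0), 7.0),
]

def simulation_rule(u_prev_q, bill_rate_lag1, bill_rate_lag2):
    """Money growth for the current quarter, percent per year.

    u_prev_q       -- unemployment rate generated by the model last quarter
    bill_rate_lag1 -- bill rate one quarter back
    bill_rate_lag2 -- bill rate two quarters back (the extra lag noted in the text)
    """
    for (lo, hi), mid in MIDPOINTS:
        if lo <= u_prev_q <= hi:
            growth = mid
            break
    else:
        raise ValueError("unemployment outside tabulated ranges")
    # Adjustments analogous to the Table 3 footnotes, applied to the midpoint.
    if u_prev_q < 4.0 and bill_rate_lag1 < bill_rate_lag2:
        growth += 1.0
    if u_prev_q > 4.4 and bill_rate_lag1 > bill_rate_lag2:
        growth -= 1.0
    return growth

print(simulation_rule(5.2, 4.1, 3.9))   # high unemployment, bill rate rising -> 5.0
```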
In order to investigate the importance of the
starting point, simulations were run with starting
dates in the first quarters of 1956, 1958, 1960,
1962, and 1964. The simulated unemployment
rate for the five simulations is shown in the five
panels of Figure 16 by the curves marked “S.”
The actual unemployment rate is shown by the
curves marked “A” and control simulations, to
be explained below, by the unconnected points.
It is clear from Figure 16 that the simulation
rule for money growth produces an unstable
unemployment rate. However, because of deficiencies in the model this result is probably not
very meaningful. That the model is defective can
be seen by comparing unemployment in the control simulations with the actual unemployment.
In the control simulations all of the model’s exogenous variables, including the money stock, were
set at their actual levels.16 Even with the exogenous variables set at their actual levels, the simulated level of unemployment at times differs from the actual level.

16 The FR-MIT model was estimated with the money stock as an endogenous variable. There are separate equations for currency and demand deposits, both of which are endogenous, while unborrowed reserves are exogenous. In the simulations the money stock was made exogenous by suppressing the equation that makes demand deposits depend on unborrowed reserves. To simulate the effects of a particular rate of growth of money, the currency equation was retained, but demand deposits were set at whatever level was required to obtain the desired rate of growth of demand deposits plus currency. In the control simulations demand deposits were set at their actual levels, but currency remained an endogenous variable and differed somewhat from actual since simulated GNP differed somewhat from actual GNP.
Because of the role of the stochastic disturbances in the model, especially as they feed
through lagged endogenous variables, it cannot
be expected that control simulations will exactly
duplicate the actual results. But the fact that the
control simulations differ from the actual by
considerable margins over long periods of time
strongly suggests that the money rule simulations
do not provide much useful information on the
properties of the proposed rule.
The simulations are valuable in one respect,
however. An examination of Figure 16 strongly
suggests that the money rule is interacting with
the rest of the model to produce a cycle of 5 to 6
years. Such a cycle is particularly evident in the
simulations starting in 1956 and 1958. That the
monetary rule has very powerful effects in the
model is shown by the simulations beginning in
1960 and 1962. In both simulations unemployment reaches a trough in 1964 and then rises in
spite of the 1964-65 tax cuts and the stimulus of
spending for military operations in Vietnam starting at the end of 1965.
There is no doubt that the monetary rule is
too aggressive within the context of the FR-MIT
model. A simulation of a perfectly steady rate of
growth of money is shown in Figure 17. The rate
of growth in this simulation is 2.76 percent per
year, the same as the actual rate of growth over
the period 1955-IV through 1969-I. In Figure 17,
the curve labeled S2 is the simulated unemployment rate with the steady rate of growth of money.
The simulated unemployment rate under the
monetary rule is shown by S1, which is the same
as S in panel A of Figure 16. The unconnected
points show the same control simulation as shown
in panel A of Figure 16.


Figure 17
Simulations of FR-MIT Model

SOURCE: Originally published version, p. 166.

It appears impossible to draw any firm conclusions from the simulations. However, the
simulations clearly raise the possibility that the
proposed monetary rule may produce economic
instability. If anything, the proposed rule is too
aggressive, and so policy should probably err on
the side of producing growth rates in money closer
to a steady 3 to 5 percent rather than farther from
the extremes in the proposed rule.

IV. SELECTION AND CONTROL
OF A MONETARY AGGREGATE
Basic Issues
Up to this point, the analysis has been entirely
in terms of optimal control of the money stock.
The theoretical analysis has been general enough
that no precise definition of the money stock has
been required. The empirical work, however, has
used the narrow definition of demand deposits
adjusted plus currency, for the simple reason that
this definition seems to be the most appropriate
one.
In principle there is no reason not to look
simultaneously at all of the aggregates and, of
course, at all other information as well. But in
practice, at the present state of knowledge, there
simply is no way of knowing how all of these
various measures ought to be combined.17 Furthermore, the selection of a single aggregate for operating purposes would permit the FOMC to be far
more precise in its policy deliberations and in its
instructions to the Manager of the Open Market
Account. Thus, the best procedure would seem
to be to select one aggregate as the policy control
variable, and insofar as the state of knowledge
permits, to incorporate other information into
policy by making appropriate adjustments in the
rate of growth of the aggregate selected.

17 This point is an especially important one since those favoring simple approaches are frequently castigated for ignoring relevant information, and for applying “simplistic solutions to inherently complex problems.” For this charge to be upheld, it must be shown explicitly and in detail how this other information is to be used, and evidence must be produced to support the proposed complex approach. As far as this author knows, there is essentially no evidence sorting out the separate effects of various components of monetary aggregates.
In principle the aggregate singled out as the
control variable should be subject to exact determination by the Federal Reserve. The reason is
that errors in reaching an aggregate that cannot
be precisely controlled may interact with disturbances in the relationships between the aggregate
and goal variables such as GNP to produce a suboptimal policy. However, as argued later in this
section, this consideration is likely to be quite
unimportant in practice for any of the aggregates
commonly considered. Therefore, the analysis of
which aggregate should be singled out will be
conducted under the assumption that all of the
various aggregates can be precisely controlled by
the Federal Reserve.

Selection of a Monetary Aggregate
At the outset it must be emphasized that the
various aggregates frequently discussed are all
highly correlated with one another in the postwar period. This is true for total bank credit, the
narrow money stock, the broad money stock (narrow money stock plus time deposits), the bank
credit proxy (total member bank deposits), the
monetary base (member bank reserves plus currency held by the public and nonmember banks),
and several other figures that can be computed.
While these various aggregates are highly
correlated over substantial periods of time, they
show significantly different trends for short
periods. In selecting an aggregate, the most important considerations are the theoretical relevance
of the aggregate and the extent to which the theoretical notions have been given empirical support. Both of these considerations point to the
selection of the narrowly defined money stock.
The most important theoretical dispute is
between those who emphasize the importance of
bank deposit liabilities—the “monetary” view—
and those who emphasize the importance of
banks’ earning assets—the “credit” view. This
controversy, which dates back well into the 19th
century, is difficult to resolve because historically
banks have operated on a fractional reserve basis
and so have had both earning assets and deposit
liabilities. Since balance sheets must balance,
bank credit and bank deposits are perfectly correlated except insofar as there are changes in nonearning assets—such as reserves—or nondeposit
liabilities—such as borrowing from the Federal
Reserve System. If these factors never changed,
the perfect correlation between bank deposits and
bank credit would make it impossible ever to
obtain evidence to distinguish between the monetary and the credit views. Since the correlation,
while not perfect, has historically been very high,
it has been very difficult to obtain evidence.
Hence, it is still necessary to place major reliance
on theoretical reasoning.
There would be little reason to examine the
issue closely if we could be confident that the
very high correlation between deposits and bank
credit would continue into the indefinite future.
But there are already substantial differences in
the short-run movements of bank credit and bank
deposits, and these differences are likely to become
greater and of a longer-term character in the future.
Banks are raising increasingly large amounts of
funds through nondeposit sources such as sales
of commercial paper and of capital certificates
and through borrowing from the Euro-dollar market and the Federal Reserve System. (Borrowings
from the System would probably expand significantly if proposed changes in discount-window
administration were implemented.)
The easiest way to examine the theoretical
issues is to consider some hypothetical experiments. Consider first the experiment in which
the Federal Reserve raises reserve requirements
by $10 billion at the initial level of deposits but
simultaneously buys $10 billion in U.S. government securities in the open market. Deposits need
not change, but banks must hold more reserves
and fewer earning assets. Under the monetary
view the effects would be nil (except for very
minor effects examined below) because deposits
would be unchanged, but under the credit view
the effect would be a tendency for income to contract because bank credit would be lower.
The monetary view is easily explained. Suppose first that the banks initially hold U.S. government securities in excess of $10 billion. When
reserve requirements are raised, the banks simply
sell $10 billion of these securities, and this is
exactly the amount being purchased by the Federal
Reserve. Thus, since deposits are unchanged
and bank loans to the nonbank private sector—
hereinafter called simply the “private sector”—
are also unchanged, there should be no effects
on that sector.
Now suppose that the banks do not have $10
billion in government securities. In this case they
must sell private securities, say corporate bonds,
to the private sector. The private sector obtains
the funds to buy these bonds from the sale of $10
billion of government securities to the Federal
Reserve. The amount of credit in the private sector
is again unchanged. The banks own fewer private
securities, while the public owns more private
securities and fewer government securities.
Thus, the amount of credit extended to the
private sector need not change at all even though
bank credit falls. However, two minor effects are
possible: First, the Federal Reserve purchase of
government securities changes the composition
of portfolios. Thus, even if banks have over $10
billion of government securities, they may be
expected to adjust their portfolios by selling some
government securities and some private securities.
For ease of exposition, run-offs of loans may be
included in the sale of private securities. The net
result, then, is that the banks have more reserves,
fewer government securities, and fewer private
securities; the private sector has fewer government
securities and fewer liabilities to the banks. The
private sector may have—but it will not necessarily have—fewer claims within the sector. It is
quite possible that private units may substitute
claims on other private units for the government
securities sold to the Federal Reserve.
Looked at from the liability side, those units
initially with liabilities outstanding to banks may
have those liabilities shifted to other private sector units. This occurs, of course, when banks sell
securities to the private sector or allow loans to
run off that are then replaced by firms selling commercial paper to other firms, drawing on sources
of trade credit, and/or borrowing from nonbank
financial institutions. A net effect can occur only
when the combined portfolios of banks and the
private sector contain fewer government securities,
though more reserves, than before; such a change
may be looked upon as a reduction in liquidity
and thereby lead to a greater demand for money
and a reduced willingness to undertake additional
expenditures on goods and services.
The second effect of the hypothetical experiment being discussed is that bank earnings will
be reduced by the increase in reserve requirements. Banks will eventually adjust by raising
service charges on demand deposits and/or reducing interest paid on time deposits. For simplicity,
assume that the change in reserve requirements
applies only to demand deposits so that there is
no reason for banks to change the interest paid
on time deposits. With higher service charges on
demand deposits, lower interest rates on securities are required if people are to hold the same
stock of money as before. Since the hypothetical
experiment assumed that deposits did not change,
interest rates must fall by the same amount as the
increase in service charges, an effect that will tend
to expand investment and national income.
The portfolio effect tends to contract income
while the service charge effect tends to expand
income. These effects individually seem likely
to be small, and the net effect may well be nil. In
this regard, it is interesting to note that the relationship of velocity to the Aaa corporate bond rate
is about the same for observations in the 1950’s
as in the 1920’s (Latané, 1954, 1960) in spite of
the enormous changes in financial structure and
in government bonds outstanding.
Consider another hypothetical experiment—
one that is in fact not so hypothetical at the current time. Suppose that banks suddenly start
issuing large amounts of commercial paper and
investing the proceeds in business loans. It is
possible that the loans simply go to corporations
that have stopped issuing their own commercial
paper. In this case the bank would be purely a
middleman with no effect on the aggregate amount
of commercial paper outstanding. The increase
in bank credit would not represent an increase
in total credit.
But, of course, banks issuing commercial
paper must perform some function. This function
is clearly that of increasing the efficiency of the
financial sector in transferring funds from the
ultimate savers to the ultimate borrowers. The
efficiencies arise in several ways. First, under
fractional reserve banking, banks have naturally
developed expertise in lending. It is efficient to
make use of this expertise by permitting banks to
have more lendable funds than they would have
if restricted to demand deposits alone. The efficiency takes the form of fewer administrative
resources being required to transfer funds from
savers to borrowers.
The second form of efficiency results from
the fact that financial markets function best when
there is a large amount of trading in a standardized instrument. For example, the shares of large
corporations are much more easily marketed than
those of small corporations. Many investors want,
and require, readily marketable securities, and
they can be persuaded to buy securities in small
firms only if the yields are high. As a result funds
may go to large corporations to finance relatively
low-yielding investment projects while high-yielding projects available to small firms cannot
be financed. Commercial banks, and other financial intermediaries, improve the allocation of
capital by issuing relatively standardized securities with good markets and lending the proceeds
to small firms.

The question is whether there is any effect on
economic activity from an increase in bank credit
financed by commercial paper—assuming that
the money stock is not affected. To begin with, it
must be emphasized that an increase in the efficiency of investment does not necessarily affect
the total of investment. The same resources may
be absorbed either in building a factory that will
produce a product that cannot be sold or in building a factory to produce a highly profitable product in great demand.
Banks, and financial intermediaries in general, have the effect of reducing somewhat the
cost of capital for small firms. Because intermediaries bid funds away from large corporations,
the cost of capital for large corporations tends to
be somewhat higher than it would be if there were
no intermediaries. At this stage in the analysis
the net effect on investment is impossible to predict since it depends on whether the reduction
in investment by large corporations is larger or
smaller than the increase in investment by small
corporations.
In examining the effects of intermediation,
however, another factor must be considered. Suppose it is assumed that the interest rates relevant
for the demand for money are rates on high-quality
securities. It was argued above that intermediation
tends unambiguously to raise the yields on high-quality securities above what they otherwise
would be. Since the assumption throughout has
been that the stock of money is unchanged, the
level of income must increase if the quantity of
money demanded is to be unchanged with the
higher interest rate of high-quality securities. The
conclusion, therefore, is that the increase in bank
credit is expansionary in the hypothetical experiment being discussed.
This conclusion, however, does not warrant
the further conclusion that bank credit is the
appropriate monetary aggregate for policy purposes. The effect examined above occurs when
any financial intermediary expands. Not only is
there the problem that data for all intermediaries
are simply not available on a current basis but
also there are serious problems in even defining
an intermediary. A particularly good example of
this difficulty is afforded by trade credit. A large
nonfinancial corporation may advance trade credit
to customers, many of whom may be small, and
may also advance funds to suppliers through
prepayments. The large corporation finances
these forms of credit through the sale of securities,
or through retained earnings diverted from its
own investment opportunities and/or from dividends. In this case the large corporation is serving
exactly the same function as the financial intermediaries are. But tracing these credit flows is
obviously impossible at the present time.
Another problem with bank credit as a guide
to policy is that changes in bank credit depend
both on changes in bank deposits and on changes
in nondeposit sources of funds. As demonstrated
by the hypothetical experiments examined above,
the effect of a change in bank credit depends
heavily on whether or not deposits change.
One final hypothetical experiment will be
considered. Suppose the U.S. Treasury sells additional government securities to the public to
finance an increase in cash balances at commercial banks. Since banks have received no additional reserves, total deposits cannot change.
Deposits owned by the public are transferred to
the Treasury. Bank credit is unchanged, but the
impact on the private sector is clearly contractionary. The private sector holds more government
bonds and fewer deposits. Equilibrium can be
restored only through some combination of a
rise in interest rates and a decline in income.
The conclusion is that it appears to be fundamentally wrong for policymakers to place primary
reliance on bank credit. This is not to say that
there is no information to be gained from analysis
of bank and other credit flows. However, selection
of bank credit as the monetary aggregate would
be a mistake. Instead, information on credit flows
may be used to adjust the desired rate of growth of
the money stock, however it is defined, although
it is not clear that the knowledge presently exists
as to how to interpret credit flows.
From this analysis it appears that neither
bank credit nor any deposit total that includes
Treasury deposits is an appropriate monetary
aggregate for monetary policy purposes. Before
considering the narrow and broad definitions of
the money stock, let us examine the monetary
base, total reserves, and unborrowed reserves.
It is clear that different levels of the money
stock may be supported by the same level of the
monetary base. Given the monetary base, different
levels of the money stock result from changes in
reserve requirement ratios; from shifts of deposits
between demand and time, which of course are
subject to different reserve requirement ratios;
from shifts of deposits among classes of banks
with different reserve ratios; and from shifts
between currency and deposits. These effects are
widely understood, and they have led to the construction of monetary base figures adjusted for
changes in reserve requirements. Similar adjustments are applied to total and nonborrowed
reserves. If enough adjustments are made, the
adjusted monetary base is simply some constant
fraction of the money stock, while adjusted
reserves are some constant fraction of deposits.
It is obviously much less confusing to adopt some
definition of the money stock as the appropriate
aggregate rather than to use the adjusted monetary
base or an adjusted reserve figure.
There can be no doubt that FOMC instructions
to the Manager in terms of nonborrowed reserves
would be more precise and more easily followed
than instructions in terms of the money stock.
But the simplicity of reserve instructions would
disappear if adjusted reserves were used, for then
the Manager would have to predict such factors
as shifts between demand and time deposits, the
same factors that must be predicted in controlling
the money stock. No one would argue that such
factors—and others such as changes in bank borrowings and shifts in Treasury deposits—should
be ignored. If the FOMC met daily, instructions
could go out in unadjusted form with the FOMC
making the adjustments. But surely this technical
matter should be handled not by the FOMC but
by the Manager and his staff in order to permit
the FOMC to concentrate on basic policy issues.
The only aggregates left to consider are the
narrowly and broadly defined money stocks.
There is a weak theoretical case favoring the
narrow definition because time deposits must be
transferred into demand deposits or currency
before they can be spent. The case is weak because
the cost of this transfer is relatively low. If the
cost were zero, then there would be no effective
distinction between demand and time deposits.
Indeed, since time deposits earn interest, all funds
would presumably be transferred to time deposits.
No strong empirical case exists favoring one
definition over the other. The broad and narrow
money stocks are so highly correlated over time
that it is impossible to distinguish separate effects.
It appears, however, that there is a practical case
favoring the adoption of the narrow money stock.
Time deposits include both passbook accounts,
which can be readily transferred into demand
deposits, and certificates of deposit, which cannot. Since CD’s appear to be economically much
more like commercial paper than like passbook
time accounts, they ought to be excluded from
the broadly defined money stock.
There is, of course, no reason why CD’s cannot
be excluded from the definition of money. The
problem is that banks may in the future invent
new instruments that will be classified as time
deposits for regulatory purposes but that are not
really like passbook accounts. In retrospect it
may be clear how the new instrument should be
treated, but the situation may be confused for a
time. The same sort of problem exists with
demand deposits—consider the compensating
balance requirements imposed by many banks—
but it seems likely that the problem will remain
more serious for time deposits.
In summary, there is a strong case favoring
the selection of some definition of the money
stock as the monetary aggregate, and there appears
to be a marginal case for preferring the narrowly
defined money stock.

Technical Problems of Controlling
Money Stock
In the preceding sections it has been argued
that the monetary policy control instrument
should be the money stock. The purpose of this
section is to investigate some of the technical
problems in controlling the money stock. The
first topic examined is that of the form of instructions to the Manager of the System Open Market
Account. Following this discussion is an examination of the feedback method of control. Finally,
there is an examination of the significance of
data revisions. All of this discussion is in terms
of the narrowly defined money stock, but much
of it also applies to other aggregates.
Specification of the Desired Money Stock.
There are two major issues connected with the
form of FOMC instructions to the Manager. The
first is whether the desired money stock should
be expressed in seasonally adjusted or unadjusted
form, while the second is whether the desired
money stock should be expressed in terms of a
complete path week by week over time or of an
average over some period of time. The first issue
turns out to be closely related to the question
of data revisions, and so its discussion will be
deferred for the moment. It is to the second issue
that we now turn.
Since required reserves are specified in terms
of a statement-week average, the statement week
is the natural basic time unit for which to measure
the money stock, and the measure takes the form
of the average of daily money stock figures over
the statement week. The fact that daily data may
not be available on all components of the money
stock does not affect the argument; however estimated, the weekly-average figure is the most
appropriate starting point in the analysis.
The weekly money stock is clearly not subject
to precise control because of data lags and uncontrollable random fluctuations. Furthermore, no one
believes that these weekly fluctuations have any
significant impact. The natural conclusion to be
drawn is that there is no point in specifying
instructions in terms of weekly data but rather that
some average level over a period of weeks should
be used. Upon closer examination, however, this
conclusion can be shown to be unjustified.
The difficulty in expressing the instructions
in terms of averages can be explained very simply
by two examples. To keep the examples from
becoming too complicated, it will be assumed
that instructions take the form of simple rates of
growth on a base money stock of $200 billion.
The neglect of compounding makes no essential
difference to the argument.
For the first example, assume that the policy
instruction is for a growth rate of 4 percent per
annum, which is $8 billion per year or about
$154 million per week. If the money stock grew
by $154 million per week for 8 weeks, then the
figure for the eighth week would be above the
base week figure by an amount representing a 4
percent annual growth rate. The average of weeks
5 through 8 would be above the average of weeks
1 through 4 by $616 million, an amount also representing a 4 percent annual growth rate. So far,
there is no reason to favor the path specification
over a specification in terms of 4-week averages.
Now suppose that the increase in weeks 1
through 4 was on schedule, but that a large uncontrollable increase of $500 million occurred in the
fifth week. Starting from a base-week figure of
$200 billion, the average money stock for weeks
1 through 4 would be $200.385 billion, and if the
instruction were in terms of 4-week averages it
would specify an average money stock of $201.001
billion for weeks 5 through 8.
Since by hypothesis the money stock grew
by $154 million in each of the first 4 weeks, in
the fourth week the level was $200.616 billion.
The jump of $500 million in the fifth week would
take the level to $201.116 billion, a figure already
above the desired average of $201.001 billion for
weeks 5 through 8. To reach this desired average
given the jump in week 5, the money stock in
weeks 6 through 8 would have to average less
than $201.001 billion, and so the money stock
would have to be forced below the level of the
fifth week for weeks 6 through 8. Furthermore,
as the reader may calculate, it would be necessary
to have higher than normal weekly growth in
weeks 9 through 12 if the average of these weeks
were to be above the average of weeks 5 through 8
by $616 million. On the other hand, if the instruction were in terms of the desired weekly path, the
instruction would read that the desired money
stock in the eighth week was $201.232 billion,
and therefore the Manager would not have to
force the money stock down in weeks 6 through
8. Instead, he could aim for a growth of about
$39 million in each of the weeks 6 through 8 to
bring the level in week 8 to the desired figure of
$201.232 billion.
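The arithmetic of this example can be checked with a few lines of Python (all figures in millions of dollars; the variable names are introduced here for illustration):

```python
# Worked arithmetic behind the first averaging example: $200 billion base,
# $154 million per week, and a $500 million uncontrollable jump in week 5.
base, step = 200_000, 154

weeks_1_4 = [base + step * w for w in range(1, 5)]      # on-schedule path for weeks 1-4
avg_1_4 = sum(weeks_1_4) / 4                            # 200,385
avg_target_5_8 = avg_1_4 + 616                          # 201,001 under an average-level rule

week_5 = weeks_1_4[-1] + 500                            # 201,116: already above 201,001
needed_avg_6_8 = (4 * avg_target_5_8 - week_5) / 3      # ~200,963: below the week-5 level,
                                                        # so the stock must be forced down
path_target_week_8 = base + 8 * step                    # 201,232 under a weekly-path rule
growth_6_8 = (path_target_week_8 - week_5) / 3          # ~39 per week, no reversal needed

print(round(avg_1_4), round(needed_avg_6_8), round(growth_6_8, 1))
```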
From this example it can be seen that specification in terms of averages of levels of the money
stock forces the Manager to respond to random
fluctuations in a whipsawing fashion. Since
week-by-week fluctuations have essentially no
significance, there is no point in wrenching the
financial markets in order to undo a random
fluctuation. If averaging is to be used, the average
should be specified in terms of the desired average
weekly change over, say, the next 4 weeks rather
than in terms of the average level of the next 4
weeks. Specification in terms of the average
weekly change is equivalent to a specification
stating that the Manager should aim for a particular target level in the fourth week.
The second example illustrating the hazards
of specification in terms of the average level will
show what happens when policy changes. As
before, assume that the money stock in the base
week is $200 billion and that the desired growth
is at a 4 percent rate in weeks 1 through 4. In this
example it is assumed that there are no errors in
hitting the desired money stock. Thus, the money
stock is assumed to grow by $154 million per
week, reaching a level of $200.616 billion in the
fourth week and an average level of $200.385
billion for weeks 1 through 4.
Now suppose that in week 4 the FOMC
decides on a policy change and specifies a 1 percent growth rate for the money stock for weeks 5
through 8. If the specification were in terms of the
average level, then it would require an increase
in the average level of $154 million, which would
bring the average level to $200.539 billion for
weeks 5 through 8. But the figure for week 4 is
already $200.616 billion, and so the money stock
in weeks 5 through 8 would have to average less
than the figure already achieved in week 4.
Thus, after a steady 4 percent growth week
by week, an average-level policy specification
would actually require a negative week-by-week
growth before the new 1 percent growth rate could
be achieved. On the other hand, a policy specification in terms of the weekly path would require
a weekly growth of $38.5 million each week for
weeks 5 through 8.
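Again, a few lines suffice to verify the figures in this second example (millions of dollars; variable names are illustrative):

```python
# Arithmetic behind the policy-change example: 4 percent growth for weeks 1-4,
# then a shift to 1 percent (about $38.5 million per week) for weeks 5-8.
base, step_4pct = 200_000, 154
step_1pct = 38.5

week_4 = base + 4 * step_4pct                       # 200,616 after four weeks at 4 percent
avg_1_4 = base + 2.5 * step_4pct                    # 200,385

avg_target_5_8 = avg_1_4 + 154                      # 200,539 under an average-level rule
shortfall = avg_target_5_8 - week_4                 # -77: the average must fall below week 4,
                                                    # forcing negative week-to-week growth
path_week_8 = week_4 + 4 * step_1pct                # 200,770 under a weekly-path rule
print(week_4, avg_target_5_8, shortfall, path_week_8)
```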
To make the point clear, this example was
constructed so that the policy shift from a 4 to a
1 percent growth rate would actually require a
negative growth rate for a time on a week-by-week
basis when the instructions are in terms of average
levels. In general, when average levels are used,
a policy shift to a lower growth rate will require
in the short term a growth rate lower than the
new policy rate set, and a policy shift to a higher
growth rate will require a short-term growth rate
above the new policy rate. Since policymakers will
typically want to shift policy gradually, the levels
specification is especially damaging because it
in fact instructs the Manager to shift policy more
rapidly than the policymakers had desired. It
should be noted that the larger the number of
weeks included in the average-level specification,
the more severe this problem becomes.
Because the money stock cannot be controlled
exactly, there is a natural tendency to feel that
instructions stated in terms of averages are more
attainable. In actuality, of course, this effect is
illusory; averaging produces a smaller number to
measure the errors, but does not improve control.
Nevertheless, if averages are to be used in the
instructions, the above examples demonstrate
that the averages should be calculated in terms
of weekly (or perhaps monthly) changes but not
in terms of averages of levels.
Use of average changes does have one advantage, however. An instruction in this form permits
the Manager to correct an error in week 1 over the
next few weeks rather than instructing him to
correct the error entirely in week 2. As explained
above, an instruction in terms of the average
weekly change over the next 4 weeks is equivalent
to an instruction in terms of the desired level in
week 4, leaving unspecified the desired levels in
weeks 1 through 3.
Control Through the Feedback Principle. It
is useful to begin by comparing the problems of
controlling the money stock with the problems
of controlling interest rates. In controlling interest
rates, the availability of continuous readings on
rates makes it possible for the Manager to exercise very accurate control without understanding
the causes of rate changes. Being in continuous
contact with the market, the Manager can intervene with open market purchases or sales as
soon as the federal funds rate, the Treasury bill
rate, or any other rate starts to change in an undesirable fashion. This feedback control is not exact

since interest rate information arrives with some
lag, and there are other lags such as the time
required to decide upon and execute an open
market transaction and the time it takes for the
market to react to the transaction.
More precise control over interest rates
could be achieved if the Manager were willing to
announce Federal Reserve buying and selling
prices for, say, 3-month Treasury bills available
to all comers. This is essentially the way in which
government securities were pegged during World
War II. In principle, there is no reason why such a
peg could not be operated in peacetime, although
it would certainly be desirable to change the peg
frequently, perhaps as often as every day or even
every hour. However, in terms of actual behavior
of interest rates there is no significant difference
between a frequently adjusted peg and continuous
intervention by the Manager as described in the
previous paragraph.
The main point of this discussion of interest
rate control is to emphasize that with frequent
interest rate readings it is not necessary to know
exactly what causes interest rate changes. In time
the Manager develops a feel for the market that
enables him to guess accurately which interest
rate changes are temporary and which are likely
to be “permanent” and so require offsetting open
market operations. Furthermore, his feel for the
market will enable him to know how large the
operations should be. Finally, when he guesses
wrongly on these matters, his continuous contact
with the market enables him to correct mistakes
rapidly.
The same arguments apply to controlling the
money stock. The difference between interest rate
control and money stock control is a matter of
degree rather than kind. Data on the money stock
become available with a greater lag, and the data
are more subject to revision. But since it is not
necessary to control the money stock down to
the last dollar, the question is whether it is technically possible to have control that is accurate
enough for policy purposes. The answer to this
question would certainly appear to be in the
affirmative.
The weekly-average figure for the money stock
is released to the public 8 days following the end
of the week to which the average refers. Of course,
data are available internally with a shorter lag.
Since the policy rule in the previous section is
based on controlling the monthly-average money
stocks, it would appear that the data are at the
present time available with a short enough lag
that feedback methods of control are feasible.
To see how feedback control would work,
suppose that the Manager were instructed to come
as close as possible to a target money stock of M4*
in week 4 of a 4-week operating horizon. The
Manager knows that the weekly change in the
money stock depends on open market purchases,
P, which he controls, and many other factors as
well, which for simplicity of exposition will be
denoted by one factor, z. These factors cannot be
predicted exactly, and so the Manager will think
of z as consisting of a predictable part, ẑ, and an
unpredictable part, u. These relationships may
be expressed as
(5)    ΔM = αP + z = αP + ẑ + u

where α is the coefficient giving the change in
money per dollar of open market purchases.
If there were no errors in measuring the money stock, the analysis could be completed on the basis of equation 5. But of course there are errors in measuring the money stock. To analyze the significance of measurement errors, let M_i be the money stock for week i as measured at the end of week i.18 Also, let M_i^f be the final “true” money stock figure for week i, and let e_i = M_i^f − M_i.

18 If a money stock estimate is not directly available at the end of week i, one can be constructed by taking the estimate from actual deposit data for week i − 1 and adding to it a projection for the effects of open market operations and other factors for week i. This projection would, of course, come from equation 5.
The Manager starts out the 4-week period
with an estimated money stock of M0 for week
zero. Of course, the figure for M0 is a preliminary
one, but revisions in this figure as more data accumulate will affect the estimates for the money
stock in later weeks and so affect the Manager’s
actions in later weeks. It will be assumed that
he wants to increase the money stock by equal
amounts in each week to reach the desired figure of
M_4^* in week 4. In week 1, therefore, he wants to produce a change in the money stock of (1/4)(M_4^* − M_0). Substituting this figure into equation 5 we obtain

       (1/4)(M_4^* − M_0) = αP_1 + ẑ_1 + u_1

Thus, the Manager sets P_1 according to

(6)    P_1 = (1/α)[(1/4)(M_4^* − M_0) − ẑ_1]


At the end of the first week the Manager has the estimate, M_1, for the money stock for that week, and again it is assumed that he wants to spread the desired change M_4^* − M_1 equally over the next 3 weeks. Thus, the Manager sets P_2 according to

(7)    P_2 = (1/α)[(1/3)(M_4^* − M_1) − ẑ_2]


Similarly, he sets P3 and P4 according to equations
8 and 9.

(8)    P_3 = (1/α)[(1/2)(M_4^* − M_2) − ẑ_3]

(9)    P_4 = (1/α)[M_4^* − M_3 − ẑ_4]

From equations 9 and 5 it can be seen that
the actual money stock in week 4 is
(10)   M_4^f = M_3^f + M_4^* − M_3 − ẑ_4 + z = M_4^* + e_3 + u_4
This expression for the fourth week of a planning
period generalizes to the nth week of a planning
period of any length merely by replacing the
subscript 4 by the subscript n. We can, therefore,
express the annual rate of growth, g, over an n-week period by

(11)  g = (52/n)[(Mnf − M0f)/M0f]
        = (52/n)[(Mn* − M0f)/M0f] + (52/n)(en−1 + un)/M0f


From equation 11 it can be seen that the actual
growth rate, g, equals the desired growth rate plus
an error term that becomes smaller as n becomes
larger.
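To make the mechanics concrete, the sketch below simulates the feedback procedure of equations 5 through 9 over a single 4-week horizon. It is only an illustration: α, the target, and the sizes of the disturbances and measurement errors are assumed values, not magnitudes taken from the study. The logic follows the equations above, and the final miss is e3 + u4, as in equation 10.

```python
import random

random.seed(0)

alpha = 2.5         # change in money per dollar of open market purchases (assumed)
M_true = 200.0      # true ("final") money stock in week 0 (assumed, in billions)
e_prev = 0.3        # measurement error in the week-0 estimate (assumed)
M_est = M_true - e_prev   # measured M0; e_i = M_i(final) - M_i(measured)
M_target = 204.0    # M4*, the desired money stock for week 4 (assumed)

weeks = 4
M_final = M_true
for week in range(1, weeks + 1):
    remaining = weeks - week + 1
    z_hat = random.uniform(-0.2, 0.2)   # predictable part of the other factors (assumed)
    u = random.gauss(0.0, 0.1)          # unpredictable disturbance (assumed)
    # Equations 6-9: spread the remaining desired change evenly over the remaining weeks.
    P = ((M_target - M_est) / remaining - z_hat) / alpha
    # Equation 5: the true money stock changes by alpha*P + z_hat + u.
    M_final += alpha * P + z_hat + u
    # End-of-week measurement of the new money stock, with measurement error e_i.
    e_prev = random.gauss(0.0, 0.2)
    M_est = M_final - e_prev
    print(f"week {week}: P = {P:6.3f}, true M = {M_final:8.3f}, measured M = {M_est:8.3f}")

# By equation 10 the week-4 miss equals e3 + u4: the error in measuring M3 (which fed
# the week-4 decision) plus the unpredictable disturbance of week 4 itself.
print(f"miss in week 4: {M_final - M_target:+.3f}")
```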
This analysis shows that a feedback control
system that continuously adjusts open market
operations as data on the money stock in the
recent past become available can achieve a target
rate of growth with a margin of error that is smaller
the longer the period over which the rate of growth
is calculated. It also provides a framework in
which to examine the relative importance of
operating errors, the ui , and data errors, the ei .
To obtain an accurate estimate of the sizes of
these errors is beyond the scope of this study.
However, a very crude method may be used to
obtain an estimate of the maximum size of the
total error. Monthly money stock changes at annual
rates were computed for the period January 1951
through September 1969 on the basis of seasonally
adjusted data. This time period yields a total of
225 monthly changes. Then each monthly change
was expressed in terms of its deviation from the
average of the changes for the previous 3 months.
For example, the September deviation was calculated by subtracting from the September monthly
change the average of the changes for August, July,
and June. The use of deviations allows in part
for longer-run trends in the money stock, which
trends are assumed to be readily controllable.
Since the deviations were calculated over a period
during which little or no attention was paid to
controlling the money stock, they surely represent
an upper limit to the degree of volatility in the
money stock to be expected under a policy
directed at control of the money stock.
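As a concrete illustration of this construction, the sketch below computes such deviations for a short hypothetical series of monthly changes at annual rates; the figures are invented, not the 225 monthly changes actually used.

```python
# Hypothetical monthly money stock changes at annual rates, in percent (assumed).
changes = [3.0, 5.5, 1.0, 7.0, 4.5, 2.0, 6.5]

deviations = []
for i in range(3, len(changes)):
    prior_average = sum(changes[i - 3:i]) / 3   # average of the previous 3 months' changes
    deviations.append(changes[i] - prior_average)

print("deviations from the prior 3-month average:",
      [round(d, 2) for d in deviations])
```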
These monthly deviations have a standard
deviation of 3.12 percent per annum. Applying
equation 11, except for replacing 52 by 1 to reflect
the fact that the rates of change were expressed
at annual rates in the first place, it is found that
the standard deviation over a 3-month period
would be 1.04 percent per annum. If it is assumed
that these deviations are normally distributed,
the conclusion is that over 3-month periods the
actual growth rate would be within plus or minus
1.04 percent of the desired growth rate about 68
percent of the time, and would be within plus or
minus 2.08 percent about 95 percent of the time.
Inasmuch as these limits would be cut in half
over 6-month periods, the actual growth rate 95
percent of the time would be in the range of plus
or minus 1.04 percent of the desired growth rate.19
When it is recalled that these calculations are
based on an estimate of variability over a period
in which very little attention was paid to stabilizing money stock growth rates, it is clear that
fears as to the ability of the Federal Reserve to
control the money stock accurately are completely
unfounded.20
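The arithmetic behind these bands is easy to verify. The short calculation below is a sketch of the scaling used in the text (equation 11 with 52 replaced by 1, so the error shrinks in proportion to 1/n); only the 3.12 percent figure comes from the deviations computed above.

```python
sigma_monthly = 3.12   # percent per annum: standard deviation of the monthly deviations

for n_months in (3, 6):
    sigma_n = sigma_monthly / n_months   # the 1/n scaling of equation 11, with 52 replaced by 1
    print(f"{n_months}-month horizon: 68% band = +/-{sigma_n:.2f}, "
          f"95% band = +/-{2 * sigma_n:.2f} percent per annum")
```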
This conclusion justifies the approach used
at the beginning of this section on the selection
of a monetary aggregate, at least for the narrowly
defined money stock and most probably for other
aggregates as well. That approach, it will be
recalled, analyzed the selection issue on the
assumption that every one of the aggregates considered could be precisely controlled for all practical purposes. There can be no doubt that errors
in reaching targets for goal variables such as GNP,
at the present state of knowledge, are due almost
entirely to incomplete knowledge of the relationships between instrument variables (such as various aggregates and interest rates) and the goal
variables, and hardly at all to errors in setting
instrument variables at desired levels.
Problems of Data Revisions and Changing
Seasonality. Another topic that needs examination is the effect of data revisions. While weekly-average data are released with an 8-day lag, these
figures are subject to revision. Not much weight
can be given to early availability of data that are
later revised substantially. To investigate this
problem, two money stock series were compared,
one “preliminary” and one “final.” Since the
analysis below is based on published monthly
data, it obviously provides little insight into the

accuracy of weekly data. However, since policy
instructions may be based on monthly data, the
analysis is of some value in assessing data accuracy. Furthermore, the conclusions on the importance of revisions in seasonal factors can be
expected to hold for the weekly data.
A “preliminary” series of monthly growth
rates of the money stock was constructed by calculating the growth rate for each month from data
reported in the Federal Reserve Bulletin for the
following month. For example, the Bulletin dated
September reports money stock data for 13 months
through August; it is the annual rate of change of
August over July that is called the “preliminary”
August rate-of-change observation. The “final”
series is the annual rate of growth calculated from
the monthly money stock series covering 1947
through September 1969, reported in the Federal
Reserve Bulletin for October 1969, pp. 790-93.
Data were gathered on both a seasonally adjusted
basis and an unadjusted basis for January 1961
through August 1969.
The correlation between the preliminary and
final seasonally adjusted series is 0.767, while
for the unadjusted series the correlation is 0.997.
Another way to compare the preliminary and
final series is to examine the differences in the
two series.21 For the seasonally adjusted data, the
differences have a mean of 0.122 and a standard
deviation of 3.704, and the mean absolute difference is 2.891. On the other hand, for the seasonally unadjusted data the differences have a mean
of 0.150 and a standard deviation of 1.366, and
the mean absolute difference is 0.955.22
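The comparison statistics reported in this paragraph are straightforward to compute. The sketch below shows one way to do so for any pair of aligned preliminary and final growth-rate series; the series in it are hypothetical placeholders, not the 1961-69 observations, so the output illustrates the procedure rather than the results.

```python
from statistics import mean, pstdev

def compare(preliminary, final):
    """Correlation plus summary statistics of the differences, final minus preliminary."""
    diffs = [f - p for p, f in zip(preliminary, final)]
    mp, mf = mean(preliminary), mean(final)
    cov = sum((p - mp) * (f - mf) for p, f in zip(preliminary, final)) / len(diffs)
    return {
        "correlation": cov / (pstdev(preliminary) * pstdev(final)),
        "mean difference": mean(diffs),
        "std. deviation of differences": pstdev(diffs),
        "mean absolute difference": mean(abs(d) for d in diffs),
    }

# Hypothetical monthly growth rates (percent per annum), preliminary and final.
preliminary = [2.1, 4.0, -1.5, 6.2, 3.3]
final = [2.4, 3.8, -1.0, 5.9, 3.6]
for name, value in compare(preliminary, final).items():
    print(f"{name}: {value:.3f}")
```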
19. If the calculations are based on the variability of the monthly changes themselves rather than on the deviations of the monthly changes, the results are not greatly changed. The standard deviation of the monthly changes over the same period used before is 3.53 percent per annum, which yields a 95 percent chance of the growth rate being in a range around the desired rate of plus or minus 2.36 (1.18) percent per annum for 3-month (6-month) periods.

20. Compare "First, however, it may be worthwhile to touch on the extensively debated subject whether the Federal Reserve, if it wanted to, could control the rate of money supply growth. In my view, this lies well within the power of the Federal Reserve to accomplish provided one does not require hair-splitting precision and is thinking in terms of a time span long enough to avoid the erratic, and largely meaningless, movements of money supply over short periods" (Holmes, 1969, p. 75).

21. The analysis of the differences inadvertently runs from February 1961 through August 1969 while the correlation analysis runs from January 1961 through August 1969.

22. To take account of the fact that the "final" money stock series may be further revised for months near the October 1969 publication date of this series, the analysis of differences between the preliminary and final series was also run on the period February 1961 through December 1968. The mean difference, the standard deviation of the differences, and the mean absolute difference are, respectively, for the seasonally adjusted data 0.026, 3.779, and 2.922, while the figures for the seasonally unadjusted data are 0.038, 1.280, and 0.890. In spite of the fact that the "final" series is not really final for 1969 data, the average differences are generally larger for the longer period due to the relatively large data revisions in the middle of 1969.

These results make it abundantly clear that the major reason why the preliminary and final figures on the money stock differ is revision of
seasonal adjustment factors. While such revisions
may produce substantial differences between
preliminary and final monthly growth rates, the
differences must be lower for the average of several
months’ growth rates. The reason, of course, is
that revision of seasonal factors must make the
figures for some months higher and those for other
months lower, leaving the annual average about
unchanged.
The significance of revisions in seasonal factors can be understood only after a discussion of
the significance of seasonality for a money stock
rule. If the monetary rule were framed in terms
of the seasonally unadjusted money stock, the
result would be to introduce substantially more
seasonality into short-term interest rates than now
exists. It can be argued not only that greater seasonality in interest rates would not be harmful
but also that it would be positively beneficial.
Greater seasonality in interest rates would presumably tend to push production from busy, highinterest seasons into slack, low-interest seasons.
Although the argument for seasonality in
interest rates could be pushed further, there is an
important practical reason for not initially adopting a money rule stated in terms of the seasonally
unadjusted money stock. The reason is that the
rule ties the growth rate of the money stock to the
seasonally adjusted unemployment rate and to
the interest rate. The rule has been developed
through an examination of past experience. If the
seasonal were taken out of the money stock, a
different seasonal would be put into interest rates,
and possibly into the unemployment rate as well.
Seasonal factors for these variables, especially for
the unemployment rate, determined from past data
would no longer be correct if the money stock
seasonal were removed. Seasonally adjusting the
unemployment index by the old factors could
produce considerable uncertainty over the application of the monetary rule. Thus, application of
the rule through the seasonally unadjusted money
stock, if desirable at all, should only come about
through gradual reduction rather than immediate
elimination of seasonality. A further reason for a
gradual approach would be to permit the financial markets to adjust more easily to changed
seasonality.
The point of this discussion is not to urge
acceptance of a rule framed in terms of the unadjusted money stock, since this step would not be
initially desirable in any case. Rather, the point
is to emphasize that seasonality is in the money
stock only in order to reduce the seasonality of
other variables, primarily interest rates. The seasonality of the money stock, unlike that of variables such as agricultural production, is not inherent in the
workings of the economy but rather exists because
the Federal Reserve wants it to exist. The money
stock can be made to assume any seasonal pattern
the Federal Reserve wants it to assume.
The monetary rule should be framed, at least
initially, in terms of the seasonally adjusted
money stock—using the latest estimated seasonal
factors. In subsequent years changes in these
seasonal factors should not result from mechanical application of seasonal adjustment techniques
to the money stock data but rather should be the
result of a deliberate policy choice. The policy
choice would be based on the desire to change
seasonality of other variables. For example, if it
were thought desirable to take the seasonality out
of short-term interest rates, the seasonal factors
for the money stock would then be changed to
take account of changes in tax dates and other
factors.
Under a money stock policy, whether or not
guided by a monetary rule, revised seasonal factors
cannot properly be applied to past data. If the
changes are applied to past data with the result
that some monthly growth rates of adjusted data
become relatively high while others become relatively low, the conclusion to be drawn is not that
policy was mistaken as a result of using faulty
seasonal factors. Instead, the conclusion is merely
that seasonal policy differed in the past from current policy or from the seasonal pattern assumed
by the investigator who computed the seasonal
factors. Seasonal policy can be shown to be
“wrong” only by showing that undesirable seasonals exist in other variables.
One final problem deserves discussion. While
it appears from the analysis of seasonally unadjusted money stock data that revisions of the data
are relatively unimportant, at least from the evidence for 1961-69, how should the policy rule be
adjusted when there are major data revisions—
as in the middle of 1969? For example, suppose
that revisions indicate that monetary growth has
been much higher than had been expected, and
higher than was desirable. On the one hand, policy
could ignore the past high rate of growth and
simply maintain the current rate of growth of the
revised series in the desired range. On the other
hand, the policy could be to return the money
stock to the level implied by applying the desired
growth rate to the money stock in some past base
period. The first alternative involves ratifying an
undesirable high past rate of growth, while the
second may involve a wrenching change in the
money stock to return it to the desired growth
path. The proper policy would no doubt have to
be decided on a case-by-case basis. However, a
useful presumption might be to adopt the second
alternative, but to set as the base the money stock
6 months in the past and to return to the desired
growth path over a period of several months.
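The suggested presumption can be illustrated with a small numerical sketch. None of the numbers below come from the study; they simply assume a revised base level six months back, a desired growth rate, and a catch-up horizon, and compute interim monthly targets that return the money stock to the desired path gradually rather than all at once.

```python
base_money = 200.0            # revised money stock 6 months ago (assumed)
desired_annual_growth = 0.04  # desired growth rate, 4 percent per annum (assumed)
current_money = 210.0         # current revised money stock, above the desired path (assumed)
catch_up_months = 4           # months over which to return to the path (assumed)

monthly_factor = (1 + desired_annual_growth) ** (1 / 12)
for month in range(1, catch_up_months + 1):
    # Level of the desired path 6 + month months after the base period.
    path_level = base_money * monthly_factor ** (6 + month)
    # Move a fraction month/catch_up_months of the way from the starting level toward
    # the path level for that month, so the gap is fully closed by the last month.
    target = current_money + (path_level - current_money) * month / catch_up_months
    print(f"month {month}: desired path = {path_level:.2f}, interim target = {target:.2f}")
```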
Improving Control Over the Money Stock.
The analysis above has shown that under present
conditions the money stock can be controlled
quite accurately. However, it should be emphasized that there are numerous possibilities for
improving control. Although detailed treatment
of this subject is beyond the scope of this study,
a few very brief comments appear appropriate.
There are three basic methods for improving
control. The first method is that of improving the
data. The more quickly the deposit data are available, the more quickly undesirable movements
in the money stock can be recognized and corrected. And the more accurate the deposit data,
the fewer the mistakes caused by acting on erroneous information. It is clear that expenditures of
money on expanding the number and coverage
of deposit surveys and on more rapid processing
of the raw survey data can improve deposit data.
The second method of improving control is
through research, which increases our understanding of the forces making for changes in the
money stock. For example, transfers between
demand and time deposits might be more accurately predicted through research into the causes
of such transfers.
The third method of improving control is
through institutional changes. To reduce fluctuations in excess reserves and thereby achieve a
more dependable relationship between total
reserves and deposits, the federal funds market
might be improved by making possible transfers
between the East and West Coasts after East Coast
banks are closed. Also helpful would be a change
from lagged to contemporaneous reserve requirements. More radical reforms such as equalization
of reserve requirements for city, country, and
nonmember banks and elimination of reserve
requirements on time deposits should also be
considered.

V. SUMMARY
Purposes of the Study
The primary purpose of this study has been
to argue that a major improvement in monetary
policy would result through a systematic policy
approach based on adjustments in the money
stock. Equal emphasis has been placed on the
“systematic” part and the “money stock” part of
this approach. The analysis has proceeded first
by showing why policy adjustments should be
made through money stock adjustments, and
second by showing how these policy adjustments
might be systematically linked to the current
business situation through a policy guideline or
rule-of-thumb. A third, and subsidiary, part of
this study is an analysis of the reasons for preferring the money stock over other monetary aggregates, and of some of the problems in reaching
desired levels of the money stock.
It has been emphasized throughout that this
policy approach is one that is justified for the
intermediate-term future on the basis of knowledge
now available. The specific recommendations
are not intended to be good for all time. Indeed,
the approach has been designed to encourage
evaluation of the results so that the information
obtained thereby can be incorporated into policy
decisions in the future.
The Theory of Monetary Policy Under
Uncertainty
Since policymakers have repeatedly emphasized the importance of uncertainty, it is necessary
to analyze policy problems within a model that
explicitly takes uncertainty into account. In particular, only within such a model is it possible to
examine the important current issue of whether
policy adjustments should proceed through
interest rate or money stock changes.
A monetary policy operating through interest
rate changes sets interest rates either through
explicit pegging as was used in World War II or
through open market operations directed toward
the maintenance of rates in some desired range.
Under such a policy the money stock is permitted
to fluctuate to whatever extent is necessary to keep
interest rates at the desired levels. On the other
hand, a policy operating through money stock
changes uses open market operations to set the
money stock at its desired level while permitting
interest rates to fluctuate freely.
If there were perfect knowledge of the relationships between the money stock and interest rates,
the issue of money stock versus interest rates
would be nonexistent. With perfect knowledge,
changes in interest rates would be perfectly predictable on the basis of policy-induced changes
in the money stock, and vice versa. It would, therefore, be a matter of preference or prejudice, but
not of substance, whether policy operated through
interest rates or the money stock.
To analyze the interest versus money issue,
then, it is necessary to assume that there is a stochastic link between the two variables. And, of
course, this is in fact the case. There are two fundamental reasons for the stochastic link. First, the
demand for money depends not only on interest
rates and the level of income but also on other
factors, which are not well understood. As a result,
the demand for money fluctuates in a random
fashion even if income and interest are unchanged.
If the stock of money is fixed by policy, these
random demand fluctuations will force changes
in interest and/or income in order to equate the
amount demanded with the fixed supply.
The second source of disturbances between
money and interest stems from disturbances in
the relationship between expenditures—especially
investment-type expenditures—and interest rates.
Given an interest rate fixed by policy, these disturbances produce changes in income through
the multiplier process, and these income changes
in turn change the quantity of money demanded.
With interest fixed by policy, the stock of money
must change when the demand for money changes.
On the other hand, if the money stock were fixed
by policy, since the expenditure disturbance
changes the relationship between income and
interest, some change in the levels of income
and/or interest would be necessary for the quantity of money demanded to equal the fixed stock.
Money stock and interest rate policies are
clearly not equivalent in their effects, given that
disturbances in money demand and in expenditures do occur. Since the effects of these policies
are different, which policy to prefer depends on
how the effects differ and on policy goals. At this
level of abstraction, it is clearly appropriate to
concentrate on the goals of full employment and
price stability. Unfortunately, the formal model
that has been worked out, which is examined carefully in Section I above, applies only to the goal
of stabilizing income. If “income” is interpreted
to mean “money income,” then the goals of
employment and price level stability are included
but are combined in a crude fashion.
The basic differences in the effects of money
stock and interest rate policies can be seen quite
easily by examining extreme cases. Suppose first
that there are no expenditure disturbances, so
there is a perfectly predictable relationship
between the interest rate and the level of income.
In that case, a policy that sets the interest rate
sets income, and policymakers can choose the
level of the interest rate to obtain the level of
income desired. When the interest rate is set by
policy, disturbances in the demand for money
change the stock of money but not the level of
income. On the other hand, if policy sets the
money stock, then the money demand disturbances would affect interest and income leading
to less satisfactory stabilization of income than
would occur under an interest rate policy.
The other extreme case is that in which there
are disturbances in expenditures but not in money
demand. If policy sets the interest rate, expenditure disturbances will produce fluctuations in
income. But if the money stock is fixed, these
income fluctuations will be smaller. This point
can be seen by considering a specific example such
as a reduction in investment demand. This disturbance reduces income. But given an unchanged
money demand function, with the fall in income,
interest rates must fall so that the amount of
money demanded will equal the fixed stock of
money. The decline in the interest rate will stimulate investment expenditures, thus offsetting in
part the impact on income of the initial decline
in the investment demand function. With expenditures disturbances, then, to stabilize income, it
is clearly better to follow a money stock policy
than an interest rate policy.
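These two extreme cases can be illustrated by simulating an assumed linear stochastic IS-LM model of the kind analyzed in Section I. The coefficients and disturbance sizes below are arbitrary illustrative choices, not estimates from the study; the sketch is meant only to show that income varies less under a money stock policy when expenditure disturbances dominate, and less under an interest rate policy when money demand disturbances dominate.

```python
import random

random.seed(1)
a0, a1 = 100.0, -2.0           # IS curve: Y = a0 + a1*r + u (assumed coefficients)
b0, b1, b2 = 10.0, 0.5, -1.0   # LM curve: M = b0 + b1*Y + b2*r + v (assumed coefficients)
r_fixed, M_fixed = 5.0, 50.0   # the two alternative policy settings (assumed)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def income_variances(sigma_u, sigma_v, draws=10000):
    y_rate_policy, y_money_policy = [], []
    for _ in range(draws):
        u = random.gauss(0.0, sigma_u)   # expenditure disturbance
        v = random.gauss(0.0, sigma_v)   # money demand disturbance
        # Interest rate policy: r is pegged and money passively adjusts, so income
        # is determined by the IS curve alone.
        y_rate_policy.append(a0 + a1 * r_fixed + u)
        # Money stock policy: M is fixed; solve the IS and LM curves jointly for income.
        y = (a0 + a1 * (M_fixed - b0 - v) / b2 + u) / (1 + a1 * b1 / b2)
        y_money_policy.append(y)
    return variance(y_rate_policy), variance(y_money_policy)

for label, su, sv in [("expenditure disturbances dominate", 2.0, 0.2),
                      ("money demand disturbances dominate", 0.2, 2.0)]:
    var_r, var_m = income_variances(su, sv)
    print(f"{label}: var(Y) = {var_r:.2f} under an interest rate policy, "
          f"{var_m:.2f} under a money stock policy")
```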
The conclusion is that the money versus
interest issue depends crucially on the relative
importance of money demand and expenditures
disturbances. It is especially important to note
that nothing has been said about the size of the
interest elasticity of the demand for money, or
of the interest elasticity of investment demand.
These coefficients, and others, determine the relative impacts of changes in money demand and
in investment and government expenditures when
the changes occur. The interest versus money
issue does not depend on these matters, however,
but only on the relative size and frequency of
disturbances in the money demand and expenditures functions.23
The analysis above is modified in detail by
considering possible interconnections between
money demand and expenditures disturbances.
It is also true that in general the optimal policy
is not a pure interest or pure money stock policy,
but a combination of the two. These matters, and
a number of others, are discussed in Section I.

Evidence on Relative Magnitudes of
Real and Monetary Disturbances
Resolution of the money versus interest issue
depends on the relative size of real and monetary
disturbances. Unfortunately, there is no completely satisfactory body of evidence on this matter. Indeed, because of the conceptual difficulties of designing empirical studies to investigate the issue, the evidence is unlikely to be fully satisfactory for some time to come. Nevertheless, by examining a number of different types of evidence, a substantial case can be built favoring the use of the money stock as the policy control variable.

23. For a full understanding of this important point, the reader should refer to the analysis of Section I.
Before discussing the evidence, it is necessary
to define in more detail what is meant by “disturbance.” Consider first a money demand disturbance. The demand for money depends on the
levels of income and of interest rates, and on other
variables. The simplest form of such a function
uses GNP as the income variable, and one interest
rate—say the Aaa corporate bond rate—and all
other factors affecting the demand for money are
treated as disturbances. To the extent possible,
of course, these other factors should be allowed
for, but for policy purposes these factors must be
either continuously observable or predictable in
advance so that policy may be adjusted to offset
any undesirable effects on income of these other
factors. Factors not predictable in advance must
be treated as random disturbances.
Similarly, expenditures disturbances are
defined as the deviations from a function linking
income to the interest rate and other factors. These
other factors would include items such as tax rates,
government expenditures, strikes, and population
changes. Again, for policy purposes these factors
must be forecast, and so errors in the forecasts of
these items must be included in the disturbance
term. It is important to realize that the disturbances
will be defined differently for scientific purposes
ex post because the true values of government
spending and so forth can be used in the functions
once data on these items are available.
In the discussion of the theoretical issues
above it was noted that an expenditure disturbance
would have a larger impact on income under an
interest rate policy than under a money stock
policy. Simulation of the FR-MIT model provides
the estimate that the impact on income of an
expenditures disturbance, say in government
spending, is over twice as large under an interest
rate policy as under a money stock policy. An
error in forecasting government spending, then,
would lead to twice as large an error in income
under an interest rate policy. Since there is no
systematic record of forecasting errors for variables such as government spending and strikes,
there is no way of producing evidence on the size
of such forecasting errors. However, after listing
the variables that must be forecast, as is done in
Section II, it is difficult to avoid feeling that errors
in forecasting are likely to be quite significant.
These real disturbances, including forecast
errors in government expenditures, strikes, and
so forth, must be compared with the disturbances
in money demand. The reduced-form studies
conducted by a number of investigators provide
some evidence on this issue. These studies compare the relative predictive power of monetarist
and Keynesian approaches in explaining fluctuations in income. From these studies the predictive power of both approaches appears about
equal. However, the predictive power of the
Keynesian approach relies on ex post observation
of “autonomous” expenditures, and it is clear
that these expenditures are subject to forecasting
errors ex ante whereas the money stock can be
controlled by policy.
The evidence from the reduced-form studies
suggests that when forecast errors of autonomous
expenditures are included in the disturbance
term, the disturbances are larger on the real side
than on the monetary side. There are many difficulties with the reduced-form approach and so
these results must be interpreted cautiously.
Nevertheless, the results cannot be ignored.
The final piece of evidence offered in Section II
is a study by the author of the stability of the
demand for money function over time. Using a
very simple function relating the income velocity
of money to the Aaa corporate bond rate, he found
that a function fitted to quarterly data for 1947-60
also fits data for 1961-68 rather well. The reader
interested in the precise meaning of “rather well”
should turn to the technical discussion in
Section II.
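The kind of stability check described here can be sketched as follows: fit the simple velocity relation on an early sample and measure how well the fitted line tracks a later one. The observations below are hypothetical placeholders, not the 1947-68 quarterly data used in Section II, so the output illustrates only the procedure.

```python
def ols(xs, ys):
    """Ordinary least squares for a one-variable linear relation; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

# Hypothetical (bond rate, velocity) pairs for the fitting period and the later test period.
early = [(2.6, 2.0), (2.9, 2.2), (3.1, 2.3), (3.4, 2.5), (3.8, 2.7)]
late = [(4.3, 3.0), (4.6, 3.1), (5.1, 3.4), (5.8, 3.8)]

intercept, slope = ols([r for r, _ in early], [v for _, v in early])
errors = [v - (intercept + slope * r) for r, v in late]
rmse = (sum(e ** 2 for e in errors) / len(errors)) ** 0.5
print(f"fitted on the early sample: velocity = {intercept:.2f} + {slope:.2f} * bond rate")
print(f"out-of-sample root-mean-square error on the later period: {rmse:.3f}")
```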
Evidence on relative stability is difficult to
obtain and subject to varying interpretations. No
single piece of evidence is decisive, but all the
various scraps point in the same direction. The
evidence is not such that a reasonable man can
say that he has no doubts whatsoever. But since
policy decisions cannot be avoided, the reasonable decision based on the available evidence is
to adopt the money stock as the monetary policy
control variable.

A Monetary Rule for Guiding Policy
The conclusion from the theoretical and
empirical analysis is that the money stock ought
to be the policy control variable. For this conclusion to be very useful, it must be shown in detail
how the money stock ought to be used. It is not
enough simply to urge policymakers to make the
“appropriate” adjustments in the money stock in
the light of all “relevant” information.
There is no general agreement on exactly what
types of adjustments are appropriate. However, it
would probably be possible to obtain agreement
among most economists that ordinarily the money
stock should not grow faster than its long-run average rate during a period of inflation and should
not grow slower than its long-run average rate
during recession. But many economists would
want to qualify even this weak statement by saying
that there may at times be special circumstances
requiring departures from the implied guideline.
Others would say that there is no hope at present
of gauging correctly the impact of special circumstances (or even of “standard” circumstances) so
that policy should maintain an absolutely steady
rate of growth of the money stock.
The basic issues are, first, whether policymakers can forecast disturbances well enough to
adjust policy to offset them, and second, the
extent to which money stock adjustments to offset short-run disturbances will cause undesirable
longer-run changes in income and other variables.
The theoretical possibilities are many, but the
empirical knowledge does not exist to determine
which theoretical cases are important in practice.
It is for this reason that a systematic policy
approach is needed so that policy can be easily
evaluated and improved with experience.
Policy could be linked in a systematic way to
a large-scale model of the economy. Target values
of GNP and other goal variables could be selected
by policymakers, and then the model solved for
the values of the money stock and other control
variables (for example, discount rate) needed to
achieve policy goals. While this approach may
be feasible in the future, it is not feasible now
because a sufficiently accurate model does not
exist. Instead, policy decisions are now made
largely on the basis of intuitive reactions to current business developments.
Given this situation, the obvious approach is
to specify precisely how policy decisions ought
to depend on current developments, and this is
the approach taken in Section III. The specification there takes the form of a policy guideline, or
rule-of-thumb. The proposed rule is purposely
simple so that evaluation of its merits would be
relatively easy. Routine evaluation of an operating guideline would over time produce a body of
evidence that could be used to modify and complicate the rule. But it is necessary to begin with
a simple rule because the knowledge that would
be necessary to construct a sophisticated rule
does not exist.
The proposed rule assumes that full employment exists when the unemployment rate is in the
4.0 to 4.4 percent range. The rule also assumes that
at full employment, a growth rate of the money
stock of 3 to 5 percent per annum is consistent
with price stability. Therefore, when unemployment is in the full employment range, the rule calls
for monetary growth at the 3 to 5 percent rate.
The rule calls for higher monetary growth
when unemployment is higher, and lower monetary growth when unemployment is lower.
Furthermore, when unemployment is relatively
high the rule calls for a policy of pushing the
Treasury bill rate down provided monetary growth
is maintained in the specified range; similarly,
when unemployment is relatively low the rule
calls for a policy of pushing the bill rate up provided monetary growth is in the specified range.
Finally, the rule provides for adjusting the rate of
growth of money according to movements in the
Treasury bill rate in the recent past. The exact rule
proposed is in Table 3 and the detailed rationale
for the various components of the rule is explained
in the discussion accompanying that table.
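Because Table 3 is not reproduced in this summary, any code rendering of the rule can only be a rough sketch. In the fragment below only the 4.0 to 4.4 percent unemployment range, the 3 to 5 percent full-employment growth range, and the 2-percentage-point width of the ranges are taken from the text; the other ranges are assumptions, and the Treasury bill rate adjustments are omitted entirely.

```python
def money_growth_range(unemployment_rate):
    """Return an illustrative target range for annual money growth, in percent."""
    if 4.0 <= unemployment_rate <= 4.4:   # full-employment range given in the text
        return (3.0, 5.0)                 # full-employment growth range given in the text
    if unemployment_rate > 4.4:           # slack labor market: higher monetary growth
        return (5.0, 7.0)                 # assumed range
    return (1.0, 3.0)                     # tight labor market: lower monetary growth (assumed)

for u in (3.6, 4.2, 5.1):
    low, high = money_growth_range(u)
    print(f"unemployment {u:.1f} percent: money growth {low:.0f} to {high:.0f} percent per annum")
```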
The rule is specified throughout in terms of
2 percent ranges for the rate of growth of the
money stock on a month-by-month basis. By
expressing the rule in terms of a range, leeway is
provided for smoothing undesirable interest rate
fluctuations and for minor policy adjustments in
response to other information. Furthermore, it is
not proposed that this rule-of-thumb or guideline
be followed if there is good reason for a departure.
But departures should be justified by evidence
and not be based on vague intuitive feelings of
what is needed since the rule was carefully
designed from the theoretical and empirical
analysis of Sections I and II, and from a careful
review of post-accord policy.
There is no way of really testing the proposed
rule short of actually using it. However, it is useful to compare the rule with post-accord policy.
A detailed comparison may be found in Section III.
A summary comparison suggests, however, that
for the period January 1952 through July 1968
the rule would have provided a less appropriate
policy than the actual policy in only 63 of the
199 months in the period. The rule was judged
to be less appropriate if it called for a higher—
lower—rate of monetary growth than actually
occurred and unemployment 12 months hence
was below—above—the desired range of 4.0 to
4.4 percent. The rule was also judged less appropriate than the actual policy if actual policy was
not within the rule but unemployment nevertheless was in the desired range 12 months hence.
The rule actually has slightly fewer errors if the
criterion is unemployment either 6 or 9 months
following the months in question.
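The scoring criterion just described translates directly into code. The sketch below encodes it for a few invented months, each summarized by the rule's growth range, the actual growth rate, and the unemployment rate 12 months later; the data are hypothetical, not the 199 post-accord months actually examined.

```python
def rule_less_appropriate(rule_low, rule_high, actual_growth, unemployment_later,
                          desired_low=4.0, desired_high=4.4):
    actual_in_rule = rule_low <= actual_growth <= rule_high
    # The rule called for higher growth than occurred, yet unemployment a year
    # later was below the desired range (or the mirror image of this case).
    if actual_growth < rule_low and unemployment_later < desired_low:
        return True
    if actual_growth > rule_high and unemployment_later > desired_high:
        return True
    # Actual policy was outside the rule, but unemployment nevertheless ended up
    # inside the desired range a year later.
    if not actual_in_rule and desired_low <= unemployment_later <= desired_high:
        return True
    return False

# Hypothetical months: (rule range, actual growth rate, unemployment 12 months hence).
months = [((3.0, 5.0), 2.0, 3.7), ((5.0, 7.0), 6.1, 5.3), ((3.0, 5.0), 6.5, 4.2)]
misses = sum(rule_less_appropriate(lo, hi, g, u) for (lo, hi), g, u in months)
print(f"rule judged less appropriate than actual policy in {misses} of {len(months)} months")
```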
The rule has the great virtue of turning policy
around promptly as imbalances develop and of
avoiding cases such as the 2.2 percent rate of
decline in the money stock from July 1957 through
January 1958, during which time the unemployment rate rose from 4.2 percent to 5.8 percent.
Furthermore, it seems most unlikely that the rule
would produce greater instability than the policy
actually followed. Actual policy has, as measured
by the money stock, been most expansionary
during the early and middle stages of business
cycle expansions and most contractionary during
the last stages of business expansions and early
stages of business contractions. Unless a very
improbable lag structure exists, the rule would
surely be more stabilizing than the actual historical pattern of monetary growth.

Selection and Control of a Monetary
Aggregate
The analysis in this study is almost entirely
in terms of the narrowly defined money stock.
The reasons for using the narrowly defined money
stock as opposed to other monetary aggregates
may be stated fairly simply.
Some economists favor the use of bank credit
as the monetary aggregate because they view
policy as operating through changes in the cost
and availability of credit. The major difficulty
with this view is that there is no unambiguous
way of defining the amount of credit in the economy. And even if a satisfactory definition could
be worked out, there is no current possibility of
obtaining timely data on the total amount of credit
or of controlling the total amount.
The definitional problem arises largely from
the activities of financial intermediaries. Suppose,
for example, that an individual sells some corporate debentures and invests the proceeds in a
fixed-income type of investment fund, which in
turn uses the funds to buy the very same debentures sold by the individual. If both the debentures
and the investment fund shares are counted as
part of total credit, then in this example total credit
has risen without any additional funds being
made available to the corporation to finance new
facilities and so forth.
As another example, it is difficult to see that
it would make any substantial difference to aggregate economic activity whether a corporation
financed inventories through sales of commercial
paper to the public or through borrowing from
banks that raised funds through sales of CD’s to
the public. Since there are numerous close substitutes for bank credit, the amount of bank credit
is most unlikely to be an appropriate figure to
emphasize. Furthermore, since bank credit is only
a small part of total credit there is essentially no
possibility of controlling total credit, however
defined, through adjustments in bank credit.
Ultimately the issue again becomes that of
the stability of various functions. If the demand
and supply functions for all of the various credit
instruments, including those of financial intermediaries, were stable and were known, then it
would be possible to focus on any aggregate that
was convenient. For if all the functions were
known, then there would be known relationships
among various credit instruments, the money
stock, and stocks and flows of goods. But the
demand and supply functions for the various
credit instruments are not known, and it is
unlikely that they ever will be known with any
degree of precision. There are two basic reasons
for this state of affairs. The first, and less important, is that given the great degree of substitutability among credit instruments, substitutions are
constantly taking place as a result of changes in
regulations, including tax regulations. But second
and more important, individual credit instruments
are greatly influenced by changes in tastes and
technology, factors that economists do not understand well.
As an example of the effects of regulations,
consider the substitution in recent years of debentures for preferred stock as a result of the tax laws
permitting deduction of interest. As examples of
the effects of changes in tastes and technology,
consider the inventions of new instruments such
as CD’s and the shares in dual-purpose investment
funds. Furthermore, the relationships among
credit instruments will change as attitudes toward
risk change due to numerous factors including
perhaps fading memories of the last recession or
depression.
Money viewed as the medium of exchange
seems to be substantially less subject to changes
in tastes and technology than do other financial
assets. Of course, money is not immune to these
problems, as shown by the uncertainty presently
existing over the impact of credit cards. But a
great deal of empirical work on money has been
completed and the major findings have been substantiated by a number of different investigators.
And the interpretation of the empirical findings
is usually clear because the empirical work has
been conducted within the framework of a well-developed theory of money. There is, on the other
hand, no satisfactory theory of bank credit to
guide empirical work and to permit interpretation
of the significance of empirical findings.
For these reasons, and others, bank credit
does not appear to be an appropriate monetary
aggregate for policy to control. However, because
bank credit and the money stock were so highly
correlated in the past, it must be admitted that it
probably would not have made much difference
which one was used. From recent experience,
however, it appears that changes in banks’ nondeposit sources of funds are likely to become more,
rather than less, important, and so in the future the
correlation between money and bank credit is
likely to be lower than in the past. If this prediction is correct, then the issue is a significant one.
As a monetary aggregate, to be used for policy
adjustments, the money stock has clear advantages over the monetary base and various reserve
measures. These aggregates are almost always
examined in adjusted form, where the adjustments
allow for such factors as changes in the currency/
deposit ratio, in reserve requirements, and in
shifts between time and demand deposits. The
adjustments are made because the effects of these
various factors are understood and are thought
to be worth offsetting. The adjustments have the
effect of making the base an almost constant fraction of the money stock, or making total reserves
an almost constant fraction of demand deposits.
It obviously makes more sense to look directly at
the money stock, especially since given the nature
of the adjustments it is no easier to control the
adjusted base or adjusted total reserves than to
control the money stock.
The final aggregate to be considered is the
broadly defined money stock—the narrow stock
plus time deposits. No strong case can be made
against the broad money stock. From existing
empirical work both definitions of money appear
to work equally well. The theoretical distinction
between demand deposits and passbook savings
deposits depends on the costs of transferring
between the two types of deposits, and these costs
appear to be quite low. However, CD’s do appear
to be theoretically different and probably should
be excluded from the definition of money. The
major reason for excluding all time deposits from
the definition is that in the future banks may
invent new instruments that will be classified as
time deposits for regulatory purposes but for
which the matter of definition as money may not
be at all clear.
The issue of controllability is a technical one
and need not be discussed carefully in this summary. However, two conclusions may be stated.
First, instructions from the FOMC to the Manager
of the Open Market Account should take the form
of a specified average weekly change in the money
stock over the period between FOMC meetings.
Such an instruction must be distinguished from
one in terms of the average level of the money
stock over the period between FOMC meetings.
The average-level specification has several technical difficulties and should be avoided.
The second conclusion is that it is possible
to control the rate of growth of the money stock
over a 3-month period in a range of 1 percent on
either side of a desired rate of growth. This conclusion is based on an analysis of monthly changes
in the money stock over the 1951-68 period, a
period during which little or no attention was
paid to stabilizing monetary growth, and it takes
the historical record at face value. Assuming that
efforts to control the money stock would in fact
succeed in part rather than make money growth
less stable than in the past, the estimate of plus
or minus 1 percent is an upper limit to the errors
in controlling the growth rate of money over 6-month periods.

Concluding Remarks
The orientation throughout this study has
been the redirection of monetary policy on the
basis of currently available theory and evidence.
The recommendations are not utopian; in the
author’s view they are supported by current
knowledge and are operationally feasible. The
approach has been in terms of what ought to be
done in the near future, rather than in terms of
what might be done eventually if enough information accumulates.
No effort has been made to slide over gaps in
our knowledge; rather, the emphasis has been on
how policy should be formed given the huge gaps
in our knowledge. Indeed, it is precisely these
gaps in our knowledge that lead to the conclusion
favoring policy adjustments through the money
stock.
It is the contention of this study that policy
can be improved if there is explicit recognition
of the importance of uncertainty. As much attention should be given to the consequences of errors
in projections as to the projections themselves.
Policy may be improved more by “don’t know”
answers to questions than by projections believed
by no one.
This is the static view. If policy can be
improved now through greater attention to uncertainty, in the long run it can be improved further
only through a reduction in uncertainty. This
longer view underlies the proposal for a policy
rule-of-thumb. Policy successes and failures ought
to be incorporated into a policy design in a form
that will repeat the successes and prevent the
recurrence of the failures. Policymaking will
always require judgment, but the judgment will
be applied to changing problems at a moving
frontier of knowledge. A systematic formulation
of policy will speed the accumulation of knowledge so that the policy problems of today will
become the technical staff problems of tomorrow.

REFERENCES
Andersen, Leonall C. and Jordan, Jerry L. “Monetary
and Fiscal Actions: A Test of Their Relative
Importance in Economic Stabilization.” Federal
Reserve Bank of St. Louis Review, November 1968,
pp. 11-24.
Ando, Albert and Modigliani, Franco. “The Relative
Stability of Monetary Velocity and the Investment
Multiplier.” American Economic Review,
September 1965a, 55, pp. 693-728.
Ando, Albert and Modigliani, Franco. “Rejoinder.”
American Economic Review, September 1965b, 55,
pp. 786-90.
Brainard, William. “Uncertainty and the Effectiveness
of Policy.” American Economic Review: Papers
and Proceedings of the 79th Annual Meeting of the
American Economic Association, May 1967, 57,
pp. 411-25.

Brunner, Karl and Meltzer, Allan H. “The Federal
Reserve’s Attachment to the Free Reserve Concept.”
Subcommittee on Domestic Finance, Banking and
Currency Committee, House of Representatives,
88th Congress, 2nd Session. Washington, DC:
Government Printing Office, 1964.
de Leeuw, Frank and Gramlich, Edward. “The Federal
Reserve–MIT Econometric Model.” Federal Reserve
Bulletin, January 1968, 54, pp. 11-40.
DePrano, Michael and Mayer, Thomas. “Tests of the
Relative Importance of Autonomous Expenditures
and Money.” American Economic Review,
September 1965a, 55, pp. 729-52.
DePrano, Michael and Mayer, Thomas. “Rejoinder.”
American Economic Review, September 1965b, 55,
pp. 791-92.
Fels, Rendigs and Hinshaw, C. Elton. Forecasting
and Recognizing Business Cycle Turning Points.
New York: National Bureau of Economic Research,
1968.
Friedman, Milton and Meiselman, David. The
Relative Stability of Monetary Velocity and the
Investment Multiplier in the United States, 1897-1958. Commission on Money and Credit,
Stabilization Policies. Englewood Cliffs, NJ:
Prentice-Hall, Inc., 1963.
Friedman, Milton and Meiselman, David. “Reply to
Donald Hester.” Review of Economics and Statistics,
November 1964, 46, pp. 369-76.
Friedman, Milton and Meiselman, David. “Reply to
Ando and Modigliani and to DePrano and Mayer.”
American Economic Review, September 1965, 55,
pp. 753-85.
Hester, Donald D. “Keynes and the Quantity Theory:
A Comment on the Friedman-Meiselman CMC
Paper.” Review of Economics and Statistics,
November 1964a, 46, pp. 364-68.
Hester, Donald D. “Rejoinder.” Review of Economics
and Statistics, November 1964b, 46, pp. 376-77.
Holmes, Alan R. “Operational Constraints on the
Stabilization of Money Supply Growth,” Controlling

Monetary Aggregates. Boston: Federal Reserve
Bank of Boston, 1969.
Holt, Charles C. “Linear Decision Rules for Economic
Stabilization and Growth.” Quarterly Journal of
Economics, February 1962, 76, pp. 20-45.
Laidler, David E.W. The Demand for Money: Theories
and Evidence. Scranton, PA: International Textbook
Company, 1969.
Latané, Henry A. “Cash Balances and the Interest
Rate—A Pragmatic Approach.” Review of Economics
and Statistics, November 1954, 36, pp. 456-60.
Latané, Henry A. “Income Velocity and Interest
Rates—A Pragmatic Approach.” Review of
Economics and Statistics, November 1960, 42,
pp. 445-49.
Mincer, Jacob, ed. Economic Forecasting and
Expectations. New York: National Bureau of
Economic Research, 1969.
Moore, Geoffrey H. and Shiskin, Julius. “Indicators
of Business Expansions and Contractions.”
Occasional Paper 103, National Bureau of Economic
Research, 1967.
Poole, William. “Optimal Choice of Monetary Policy
Instruments in a Simple Stochastic Macro Model.”
Quarterly Journal of Economics, May 1970, 84,
pp. 197-216.
Reynolds, Lloyd G. Economics. Third Edition.
Homewood, IL: Richard D. Irwin, Inc., 1969.
Samuelson, Paul A. Economics. Seventh Edition.
New York: McGraw-Hill, 1967.
Theil, Henri. Optimal Decision Rules for Government
and Industry. Amsterdam: North-Holland Publishing
Company, 1964.
Zarnowitz, Victor. “An Appraisal of Short-Term
Economic Forecasting.” Occasional Paper 104,
National Bureau of Economic Research, 1967.
