
Working Paper Series

Optimized Taylor Rules for Disinflation
When Agents are Learning

WP 14-07

Timothy Cogley
New York University
Christian Matthes
Federal Reserve Bank of Richmond
Argia M. Sbordone
Federal Reserve Bank of New York

This paper can be downloaded without charge from:
http://www.richmondfed.org/publications/

Optimized Taylor Rules for Disinflation When Agents are Learning

Timothy Cogley*
Christian Matthes†
Argia M. Sbordone‡

March 2014
Working Paper No. 14-07
Abstract
Highly volatile transition dynamics can emerge when a central bank disinflates while operating without full transparency. In our model, a central bank commits to a Taylor rule whose form is known but whose coefficients are not. Private agents learn about policy parameters via Bayesian updating. Under McCallum's (1999) timing protocol, temporarily explosive dynamics can arise, making the transition highly volatile. Locally-unstable dynamics emerge when there is substantial disagreement between actual and perceived feedback parameters. The central bank can achieve low average inflation, but its ability to adjust reaction coefficients is more limited.
Key words: Inflation, monetary policy, learning, policy reforms, transitions
JEL Codes: E31, E52
Acknowledgement: For comments and suggestions, we thank Klaus Adam, Martin Ellison, George Evans, Boyan Jovanovic, Thomas Sargent, Michael Woodford, a referee, and seminar participants at the Banque de France, the Centre for Dynamic Macroeconomics conference at the University of St Andrews, Duke, the European Central Bank, FRB Atlanta, FRB Philadelphia, FRB Richmond, the Federal Reserve Board, the Hungarian Central Bank, London Business School, the National Bank of Poland conference "DSGE and Beyond," the Norges Bank, the Norwegian School of Management, NYU, Oxford, Pompeu Fabra, Rutgers, the 2011 SCE and SED meetings, the Toulouse School of Economics, and UQAM. The views expressed here do not necessarily reflect the position of the Federal Reserve Banks of New York or Richmond or the Federal Reserve System.
* Department of Economics, New York University, 19 W. 4th St., 6FL, New York, NY 10012, USA. Email: tim.cogley@nyu.edu. Tel. 212-992-8679.
† Research Department, Federal Reserve Bank of Richmond, 701 East Byrd Street, Richmond, VA 23219, USA. Email: christian.matthes@rich.frb.org. Tel. 804-697-4490.
‡ Macroeconomic and Monetary Studies Function, Federal Reserve Bank of New York, 33 Liberty Street, New York, NY 10045, USA. Email: argia.sbordone@ny.frb.org. Tel. 212-720-6810.

1 Introduction

We examine the problem of a newly-appointed central bank governor who inherits a high average inflation rate from the past. The bank has no official inflation target and lacks the political authority unilaterally to set one, but it has some flexibility in choosing how to implement a vague mandate. We assume the new governor's preferences differ from those of his predecessor and that he wants to disinflate. We seek an optimal Taylor-type rule and study how learning affects the choice of policy parameters.

Sargent (1982) studies an analogous problem in which the central bank not only has a new governor but also undergoes a fundamental institutional reform. He argues that by suitably changing the rules of the game, the government can persuade the private sector in advance that a low-inflation policy is its best response. In that case, the central bank can engineer a sharp disinflation at low cost. Sargent discusses a number of historical examples that support his theory, emphasizing the institutional changes that establish credibility.

Our scenario differs from Sargent's in two ways. We take institutional reform off the table, assuming instead just a change of personnel. We also take away knowledge of the new policy and assume that the private sector must learn about it. This is tantamount to assuming that the private sector does not know the new governor's preferences.
Our scenario is more like the Volcker disinflation than the end of interwar hyperinflations. Erceg and Levin (2003) and Goodfriend and King (2005) explain the cost of the Volcker disinflation by pointing to a lack of transparency and credibility. Erceg and Levin contend that Volcker's policy lacked transparency, and they develop a model in which the private sector must learn the central bank's long-run inflation target. In their model, learning increases inflation persistence relative to what would occur under full information, thereby raising the sacrifice ratio and producing output losses like those seen in the early 1980s. Goodfriend and King claim that Volcker's disinflation lacked credibility because no important changes were made in the rules of the game. Because the private sector was initially unconvinced that Volcker would disinflate, the new policy collided with expectations inherited from the old regime and brought about a deep recession.
The analysis of Erceg, Levin, Goodfriend, and King is positive and explains why the Volcker disinflation was costly. In contrast, we address normative questions, such
as how learning alters the central bank's choices and what policy is optimal in that case. Our problem is motivated by the Volcker disinflation, and a stylized version of that episode serves as the vehicle for our analysis, but our objective is not to explain the Volcker disinflation. On the contrary, our goal is to illustrate a powerful force that arises when a new policy must be learned and to describe how the bank's choices are affected.

We study this problem in the context of a dynamic new Keynesian model modified in two ways. Following Ascari (2004) and Sbordone (2007), we assume that target inflation need not be zero. We also replace rational expectations with Bayesian learning. We assume the central bank commits to a simple Taylor-type rule whose functional form is known but whose coefficients are not. Private agents learn those coefficients via Bayesian updating. The bank chooses policy-rule parameters by minimizing a discounted quadratic loss function, taking learning into account. Thus our model can be interpreted as representing the consequences of incomplete transparency about policy coefficients when the central bank is committed to a simple rule.
The main quantitative results can be summarized as follows. The optimal simple rule under the full-information benchmark brings inflation down from 4.6% to about zero in four quarters, with reaction coefficients of 1.05 and 0.11, respectively, on inflation and output growth. The sacrifice ratio, defined as the cumulative loss in output divided by the change in target inflation, is approximately 0.5% in this benchmark case. The optimal simple rule under learning reduces target inflation to 1 percent with reaction coefficients of 0.25 on inflation and 0.15 on output growth. The transition takes about 10 quarters with much more oscillation, and the sacrifice ratio is about three times as large as under full information.
The reason why the bank's choice under learning differs substantially from the full-information optimum is that the equilibrium law of motion under learning can be a temporarily explosive process, i.e., one that is asymptotically stationary but which has unstable autoregressive roots during the transition. When locally-unstable dynamics emerge, the transition is highly volatile and dominates expected loss. The central bank's main challenge is to find a way to manage this potential for explosive volatility.
Uncertainty about the inflation target is a lesser evil. In our examples, the bank always achieves low average inflation, though sometimes it stops short of zero – the optimum under full information¹ – because the transition cost would be too great. Uncertainty about policy feedback parameters is more problematic because this is what creates the potential for temporarily-explosive dynamics. Locally-unstable dynamics emerge when there is substantial disagreement between actual and perceived feedback parameters. It follows that one way for the bank to cope is to adopt a policy that is close to the private sector's prior. By choosing feedback parameters sufficiently close to the private sector's prior mode, the bank can ensure that the equilibrium law of motion is nonexplosive throughout the transition, sacrificing better long-term performance for lower transitional volatility. For the model described below, this approximates the optimal strategy. Thus the bank's choice of feedback parameters is more constrained by the private sector's initial beliefs.

A lack of transparency can therefore make disinflation very costly even under commitment to a simple rule. Furthermore, although conventional wisdom emphasizes the value of an explicit long-run inflation target, our analysis says that transparency about reaction coefficients is equally important, perhaps even more so.
Our approach to learning differs from much of the macro-learning literature, in particular from the branch emanating from Marcet and Sargent (1989a, 1989b), Cho, Williams, and Sargent (2002), and Evans and Honkapohja (2001, 2003). Models in that tradition typically assume that agents use reduced-form statistical representations such as vector autoregressions (VARs) for forecasting. They also commonly assume that agents update parameter estimates by recursive least squares. In contrast, we assume that agents update beliefs via Bayes' theorem. The agents who inhabit our model utilize VARs for forecasting, but their VARs satisfy cross-equation restrictions analogous to those in rational-expectations models. As a consequence, the actual and perceived laws of motion (ALM and PLM, respectively) are tightly linked. In our model, agents know the ALM up to the unknown policy coefficients, and their PLM is the perceived ALM (i.e., the ALM evaluated at their current estimate of the policy coefficients). Because agents know the ALM's functional form, they can use Bayes' theorem to update beliefs. Nevertheless, the assumption that agents are Bayesian is not critical. We also examine whether our insights are robust to alternative forms of learning, and we find that they are.
The remainder of the paper is organized as follows. Section 2 describes the model, and section 3 explains how agents update beliefs. Section 4 presents results for our baseline specification, and section 5 examines a number of perturbations to that specification. Section 6 explains how our paper relates to a number of others in the literature, and section 7 concludes. A series of online appendices contains additional results.²

¹ We abstract from the zero lower bound on nominal interest.

2 A dynamic new-Keynesian model with positive target inflation

We begin by describing the timing protocol, a critical element in learning models.
Then, taking beliefs as given, we describe our behavioral assumptions and the model’s
structure. A discussion of how beliefs are updated is deferred to section 3.

2.1 The timing protocol

Private agents enter period t with beliefs about policy coefficients inherited from t − 1. They treat estimated parameters as if they were known with certainty and formulate plans accordingly. Following McCallum (1999), we assume that the central bank sets the systematic part of its instrument rule at the beginning of the period based on information inherited from t − 1.³ Then period-t shocks are realized. Agents observe the central bank's policy action and infer a perceived policy shock ε̃_{i,t}. They also observe realizations of the private-sector shocks. Current-period outcomes are then determined in accordance with beginning-of-period plans. After observing those outcomes, private agents update their estimates of policy coefficients and carry them forward to t + 1.

2.2 The model

Our model is a dynamic new Keynesian model in which agents form expectations using a subjective forecasting model that can differ from the equilibrium law of motion. Monetary policy is determined according to a Taylor-type rule that allows target inflation to differ from zero. Private-sector behavior is characterized by two blocks
² Appendices are posted at http://files.nyu.edu/tc60/public/.
³ McCallum (1999) contends that monetary policy rules should be specified in terms of lagged variables because the Fed lacks good current-quarter information about inflation, output, and other arguments of policy reaction functions. For instance, the Bureau of Economic Analysis released the advance estimate of 2013.Q4 GDP on January 30, 2014, one month after the end of the quarter.

of equations, an intertemporal IS curve and an Ascari-Sbordone version of the new Keynesian Phillips curve. The model features staggered price setting and habit persistence in consumption. A log-linearized version is presented here. For details on how this representation was derived, see appendix A.
2.2.1 Monetary policy

The baseline model assumes that the central bank commits to a Taylor rule in difference form,

i_t − i_{t−1} = ψ_π(π_{t−1} − π*) + ψ_y(y_{t−1} − y_{t−2}) + ε_{i,t},   (1)

where i_t is the nominal interest rate, π_t is inflation, y_t is log output, and ε_{i,t} is an i.i.d. normal policy shock with mean zero and variance σ_i². The policy coefficients are collected in a vector θ = [π*, ψ_π, ψ_y, σ_i]', where π* represents the central bank's long-run inflation target and ψ_π and ψ_y are feedback parameters on the inflation gap and output growth, respectively.
We adopt this form because it seems promising for environments like ours. For instance, Coibion and Gorodnichenko (2011) establish that a rule of this form ameliorates indeterminacy problems in Calvo models with positive target inflation, and Orphanides and Williams (2007) demonstrate that it performs well under least-squares learning. More generally, a number of economists have argued that the central bank should engage in a high degree of interest smoothing (e.g., Woodford (1999)). In addition, Erceg and Levin (2003) contend that output growth, rather than the output gap, is more appropriate for estimated policy reaction functions for the U.S.
Private agents know the form of the policy rule but not its coefficients. At any given date, their perceived policy rule is

i_t − i_{t−1} = ψ_{π,t}(π_{t−1} − π*_t) + ψ_{y,t}(y_{t−1} − y_{t−2}) + ε̃_{i,t},   (2)

where θ_t = [π*_t, ψ_{π,t}, ψ_{y,t}, σ_{i,t}]' represents the beginning-of-period-t estimate of θ, and

ε̃_{i,t} = ε_{i,t} + (ψ_π − ψ_{π,t})π_{t−1} + (ψ_y − ψ_{y,t})(y_{t−1} − y_{t−2}) + ψ_{π,t}π*_t − ψ_π π*   (3)

is a perceived policy shock. Private agents believe that ε̃_{i,t} is white noise, but it actually depends on lags of inflation and output growth and errors in estimates of policy coefficients.
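To make (3) concrete, here is a small numerical sketch (the coefficient values are hypothetical, not the paper's estimates): the perceived shock is the residual that reconciles the observed rate change with the perceived rule, and it equals the actual shock plus terms in the coefficient and target gaps.

```python
# Hypothetical actual and perceived policy coefficients (pi_star, psi_pi, psi_y).
actual = {"pi_star": 0.00, "psi_pi": 1.00, "psi_y": 0.10}
perceived = {"pi_star": 0.0116, "psi_pi": 0.25, "psi_y": 0.15}

def rate_change(coef, pi_lag, dy_lag, shock):
    """Difference-form Taylor rule: i_t - i_{t-1}."""
    return coef["psi_pi"] * (pi_lag - coef["pi_star"]) + coef["psi_y"] * dy_lag + shock

pi_lag, dy_lag, eps = 0.01, 0.005, 0.001

# The bank sets the rate with the actual rule...
di = rate_change(actual, pi_lag, dy_lag, eps)

# ...and agents infer the perceived shock as the residual of the perceived rule.
eps_tilde = di - rate_change(perceived, pi_lag, dy_lag, 0.0)

# Equation (3): the perceived shock equals the actual shock plus terms in the
# coefficient gaps -- it is serially correlated, not white noise.
eps_tilde_formula = (eps
                     + (actual["psi_pi"] - perceived["psi_pi"]) * pi_lag
                     + (actual["psi_y"] - perceived["psi_y"]) * dy_lag
                     + perceived["psi_pi"] * perceived["pi_star"]
                     - actual["psi_pi"] * actual["pi_star"])

assert abs(eps_tilde - eps_tilde_formula) < 1e-12
```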


The perceived law of motion depends on the perceived policy (2). The actual law
of motion depends on actions taken by the central bank and decisions made by the
private sector, so it involves both the actual policy (1) and the perceived policy (2).
The central bank minimizes a discounted quadratic loss function,

L = E₀ Σ_t βᵗ [π_t² + λ_y(y_t − ȳ)² + λ_i(i_t − ī)²],   (4)

that penalizes variation in inflation and the output gap, and deviations of the nominal interest rate from its steady state. We assume that the central bank arbitrarily sets σ_i and optimizes with respect to π*, ψ_π, and ψ_y, taking private-sector learning into account.⁴
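A minimal sketch of evaluating a realized value of (4) on a simulated path (the path values and discount factor here are hypothetical placeholders; the weights λ_y = 1/16 and λ_i = 0.5 match the calibration described below in section 2.2.5):

```python
def discounted_loss(pi, y, i, y_bar, i_bar, lam_y, lam_i, delta):
    """Discounted quadratic loss as in (4): sum over t of
    delta^t * [pi_t^2 + lam_y*(y_t - y_bar)^2 + lam_i*(i_t - i_bar)^2]."""
    loss = 0.0
    for t, (p, yt, it) in enumerate(zip(pi, y, i)):
        loss += delta**t * (p**2 + lam_y * (yt - y_bar)**2 + lam_i * (it - i_bar)**2)
    return loss

# Toy two-period path with hypothetical values: inflation falls to zero,
# output and the interest rate stay at their steady states.
L = discounted_loss(pi=[0.01, 0.0], y=[0.0, 0.0], i=[0.02, 0.02],
                    y_bar=0.0, i_bar=0.02, lam_y=1/16, lam_i=0.5, delta=0.99)
assert abs(L - 0.01**2) < 1e-15
```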
2.2.2 Behavioral assumptions

The agents who inhabit the private sector are boundedly-rational DSGE modelers who know a lot about their environment but not quite as much as agents in a full-information rational-expectations model. They understand the structure of the economy and the form of the monetary-policy rule, but they do not know its coefficients. They build a structural model of the economy and use it for forecasting, decision making, and learning.
Their behavior is boundedly rational in two respects. Their first-order conditions take the form of nonlinear expectational difference equations that they cannot solve. Instead, they log-linearize around a steady state and work with the resulting system of linear expectational difference equations. Not knowing the economy's true steady state, however, they expand around the perceived steady state in period t. The true steady state x̄ is the deterministic steady state associated with the true policy coefficients θ. The perceived steady state x̄_t is defined as the long-horizon forecast associated with the current estimate θ_t. The private sector's long-run forecast x̄_t varies through time because changes in π* have level effects on nominal variables and on some real variables (Ascari 2004). Since perceptions of π* change as agents update their beliefs, so do their long-run forecasts. Although nonstandard, expanding around the perceived steady state better reflects the agents' knowledge and state of mind at date t.
⁴ The central bank does not experiment because it knows everything. Private agents do not experiment because they are atomistic and cannot unilaterally influence the bank's actions. For both, the marginal cost of experimentation would be positive and the marginal benefit zero.

Private agents also behave as anticipated-utility modelers (Kreps 1998), treating the current estimate θ_t as if it were known with certainty. In the context of a single-agent decision problem, Cogley and Sargent (2008) compare the resulting decision rules with exact Bayesian decision rules and demonstrate that the approximation is good as long as precautionary motives are not too strong. Like a log-linear approximation, this imposes a form of certainty equivalence, for it implies that decision rules are the same regardless of the degree of parameter uncertainty. The anticipated-utility approach is standard in the macro-learning literature.
2.2.3 A new-Keynesian IS curve

As usual, a representative household maximizes expected utility subject to a flow budget constraint. The household's period-utility function is

U_t = b_t log(C_t − ηC_{t−1}) − ξ_t H_t^{1+ν}/(1 + ν),

where C_t is consumption of a final good, H_t represents hours of work, b_t and ξ_t are preference shocks, and η measures the degree of habit persistence in consumption. The first-order condition is a conventional consumption Euler equation. After log-linearizing, agents obtain a version of the new Keynesian IS curve,

λ_t − λ̄_t = Ê_t[(λ_{t+1} − λ̄_t) − (μ_{t+1} − μ̄) + (i_t − π_{t+1} − r̄)],   (5)

where λ_t is a transformation of the marginal utility of consumption,

λ_t − λ̄_t = ψ_1(y_t − ȳ_t) + ψ_2[y_{t−1} − ȳ_t − (μ_t − μ̄)] + ψ_2 Ê_t[y_{t+1} − ȳ_t + (μ_{t+1} − μ̄)] + ε_{y,t}.   (6)

The parameter β is a subjective discount factor, r̄ and μ̄ are steady-state values for the real-interest rate and the growth rate of technological progress, respectively, and ȳ_t is the private sector's beginning-of-period long-run forecast for output. The coefficients ψ_1 and ψ_2 are combinations of preference and technology parameters, and μ_t and ε_{y,t} are technology and preference shocks, respectively. Further details can be found in appendix A.
This representation differs in three ways from standard IS equations. One difference concerns the choice of the expansion point. As mentioned above, agents expand around the perceived steady state ȳ_t instead of the actual steady state ȳ. In addition, the anticipated-utility assumption implies that Ê_t ȳ_{t+1} = ȳ_t, explaining the appearance of ȳ_t on the right-hand side of equations (5) and (6). A second difference concerns the expectation operator Ê_t, which represents forecasts formed with respect to the private sector's perceived law of motion. In contrast, the central bank takes expectations with respect to the actual law of motion, which we denote by E_t.⁵ Finally, two shocks appear, a persistent shock μ_t to the growth rate of technology,

μ_t = (1 − ρ_μ)μ̄ + ρ_μ μ_{t−1} + ε_{μ,t},   (7)

and a white-noise shock ε_{y,t}.
2.2.4 A new-Keynesian Phillips curve

Following Calvo (1983), we assume that a continuum of monopolistically competitive firms produce a variety of differentiated intermediate goods that are sold to a final-goods producer. Intermediate-goods producers reset their prices at random intervals, with α representing the probability that their price remains the same. Thus we abstract from indexation or other backward-looking pricing influences, in accordance with the estimates of Cogley and Sbordone (2008). Since pricing and supply decisions depend on the beliefs of private agents, they again log-linearize around perceived steady states, obtaining the following block of equations,
π_t − π̄_t = b̄_t Ê_t(π_{t+1} − π̄_t) + κ_t(y_t − ȳ_t) + ς_t Δ_t + γ_{1,t} Ê_t[(θ − 1)(π_{t+1} − π̄_t) + φ_{t+1}] + u_t + ε_{π,t},   (8)

φ_t = γ_{2,t} Ê_t[(θ − 1)(π_{t+1} − π̄_t) + φ_{t+1}],   (9)

Δ_t = δ_{1,t} Δ_{t−1} + δ_{2,t}(π_t − π̄_t).   (10)

This representation differs in four ways from standard versions of the NKPC. First, the NKPC coefficients,

(b̄_t, κ_t, ς_t, γ_{1,t}, γ_{2,t}, δ_{1,t}, δ_{2,t}) = f(β, α, θ, ν; π̄_t),   (11)

depend on deep parameters and estimates of target inflation π̄_t. The deep parameters are the subjective discount factor β, the probability 1 − α that an intermediate-goods producer can reset its price, the elasticity of substitution across varieties θ, and the Frisch elasticity of labor supply 1/ν. As Cogley and Sbordone (2008) emphasize, even though the deep parameters are invariant to changes in policy, the NKPC coefficients are not. The latter change as beliefs about π* are updated. Equation (11) collapses to the usual expressions when π̄_t = 0.

⁵ We assume that the central bank knows the private sector's prior over θ. Because the central bank's information set subsumes that of the private sector, the law of iterated expectations implies Ê_t(E_t x_{t+j}) = Ê_t(x_{t+j}) for any random variable x_{t+j} and j ≥ 0 such that both expectations exist. Because the central bank can reconstruct private forecasts, it also follows that E_t(Ê_t x_{t+j}) = Ê_t(x_{t+j}). But Ê_t x_{t+j} ≠ E_t x_{t+j}.

Second, a variable

Δ_t ≡ ln ∫₀¹ (p_t(i)/P_t)^{−θ} di,   (12)

measuring the resource cost of cross-sectional price dispersion, has first-order effects on inflation and other variables. If π̄_t were zero, this variable would drop out of a first-order expansion.
Third, higher-order leads of inflation appear on the right-hand side of (8). To retain a first-order form, we introduce an intermediate variable φ_t that has no interesting economic interpretation and add equation (9). This is simply a device for obtaining a convenient representation.

Finally, two cost-push shocks are present, a persistent shock u_t that follows an AR(1) process,

u_t = ρ_u u_{t−1} + ε_{u,t},   (13)

and a white-noise shock ε_{π,t}.
2.2.5 Calibration

Parameters of the pricing model are taken from estimates in Cogley and Sbordone (2008),

α = 0.6,  β = 0.99,  θ = 10.   (14)

Preference parameters are calibrated as follows. The parameter ν is the inverse of the Frisch elasticity of labor supply. The literature provides a large range of values for this elasticity, typically high in the macro literature and low in the labor literature. We set ν = 0.5, which implies a Frisch elasticity of 2 and represents a compromise between the two. We think our calibration is reasonable, given that the model abstracts from wage rigidities. The parameter η that governs habit formation in consumption is calibrated to 0.7, a value close to those estimated in Smets and Wouters (2007) and Justiniano, Primiceri and Tambalotti (2010).

We also adopt a standard calibration for loss-function parameters. We assume the central bank assigns equal weights to annualized inflation and the output gap. Since the model expresses inflation as a quarterly rate, this corresponds to λ_y = 1/16. We also set λ_i to 0.5, which implies that the weight on fluctuations of the annualized nominal interest rate is half the weights attached to fluctuations in annualized inflation and the output gap.⁶
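The 1/16 weight is a units conversion from the equal annualized weights; a short sketch of the arithmetic (writing annualized inflation as π_t^ann):

```latex
% Annualized quarterly inflation: \pi^{ann}_t = 4\pi_t.
% Equal weights on annualized inflation and the output gap give
(\pi^{ann}_t)^2 + (y_t - \bar y)^2
  = 16\Big[\pi_t^2 + \tfrac{1}{16}(y_t - \bar y)^2\Big],
% so, normalizing the weight on quarterly inflation to one,
\lambda_y = \tfrac{1}{16}.
% Similarly, \lambda_i = 0.5 places half that weight on the
% annualized nominal interest rate.
```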
Turning to parameters governing the shocks, we set μ̄ = 0, thereby abstracting from average growth. For the persistent shocks u_t and μ_t, estimates are taken from Cogley, Primiceri, and Sargent (2010),

ρ_u = 0.4,  100σ_μ = 0.27,  100σ_u = 0.12,  ρ_μ = 0.5.   (15)

Last but not least, the standard deviations of the white-noise shocks ε_{y,t} and ε_{π,t} are set equal to

σ_y = σ_π = 0.01/4.   (16)

3 Learning about monetary policy

Everyone knows the model of the economy and the form of the policy rule, but private agents do not know the policy coefficients. Instead, they learn about them by solving a signal-extraction problem. If θ entered linearly, they could do this with the Kalman filter. Because θ enters non-linearly, however, agents must solve a nonlinear filtering problem. This section describes how this is done. We first conjecture a perceived law of motion (PLM) and then derive the actual law of motion (ALM) under the PLM. After that, we verify that the PLM is the perceived ALM. Having verified that private agents know the ALM up to unknown policy coefficients, we use the ALM to derive the likelihood function. Agents combine the likelihood with a prior over policy parameters and use the posterior mode as their point estimate.

3.1 The perceived law of motion

⁶ Results for economies with learning are not sensitive to the choice of λ_i.

By stacking the IS equations, the aggregate supply block, exogenous shocks, and perceived monetary-policy rule, the private sector's model of the economy can be represented as a system of linear expectational difference equations,

A_t S_t = B_t Ê_t S_{t+1} + C_t S_{t−1} + D_t ε̃_t,   (17)

where S_t is the model's state vector, ε̃_t is a vector of perceived innovations, and A_t, B_t, C_t, and D_t depend on the model's deep parameters (see appendix A.5 for details). These matrices have time subscripts because they depend on estimates of the policy coefficients θ_t. We conjecture that the PLM is the reduced-form VAR associated with (17),

S_t = F_t S_{t−1} + G_t ε̃_t,   (18)

where F_t solves B_t F_t² − A_t F_t + C_t = 0 and G_t = (A_t − B_t F_t)^{−1} D_t.⁷ As in a conventional rational-expectations model, (18) serves two functions, describing agents' current-quarter plans and how they forecast future outcomes.
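For intuition, in the scalar case the matrix quadratic B_t F_t² − A_t F_t + C_t = 0 reduces to an ordinary quadratic whose stable root is selected; a toy sketch with hypothetical coefficients (not the paper's model):

```python
import numpy as np

# Scalar version of B*F^2 - A*F + C = 0 from the PLM construction.
# Hypothetical coefficients of a univariate expectational difference equation
# a*s_t = b*E_t s_{t+1} + c*s_{t-1} + d*e_t.
a, b, c, d = 1.0, 0.5, 0.3, 1.0

# Roots of b*F^2 - a*F + c = 0; the nonexplosive solution picks |F| < 1.
roots = np.roots([b, -a, c])
F = min(roots, key=abs)
assert abs(F) < 1.0

# Perceived-shock loading, G = (a - b*F)^(-1) * d.
G = d / (a - b * F)

# Check: s_t = F*s_{t-1} + G*e_t satisfies the difference equation when
# forecasts are formed with the same law of motion, E_t s_{t+1} = F*s_t.
s_lag, e = 1.0, 0.2
s = F * s_lag + G * e
assert abs(a * s - (b * F * s + c * s_lag + d * e)) < 1e-12
```

In the full model the same selection is done with a generalized Schur decomposition, as footnote 7 notes.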

3.2 The actual law of motion

To find the actual law of motion, we stack the actual policy rule (equation 1) with equations governing private-sector behavior (5-7, 8-13). This results in another system of expectational difference equations,

A_t S_t = B_t Ê_t S_{t+1} + C_t^a S_{t−1} + D_t ε_t.   (19)

The state vector and the matrices A_t, B_t, and D_t are the same as in (17). In addition, all rows of C_t^a agree with those of C_t except for the one corresponding to the monetary-policy rule. In that row, the true policy coefficients θ replace the estimated coefficients θ_t (see appendix A.5).

A solution for the ALM can be found as follows. Since outcomes are determined in accordance with agents' plans (equation 18), they depend on the perceived shocks ε̃_t. A relation between perceived and actual innovations can be found by subtracting (19) from (17),

D_t ε̃_t = D_t ε_t + (C_t^a − C_t)S_{t−1}.   (20)

Substituting this relation back into agents' plans expresses outcomes in terms of actual shocks,

S_t = H_t S_{t−1} + G_t ε_t,   (21)

where

H_t = F_t + (A_t − B_t F_t)^{−1}(C_t^a − C_t).   (22)

The ALM depends on both actual policy coefficients, because that is what governs central bank behavior, and on perceived policy coefficients, because that is what guides private-sector behavior.

When there is a unique nonexplosive solution for (F_t, G_t), the solution for H_t is also unique but not necessarily nonexplosive. When multiple nonexplosive solutions for (F_t, G_t) exist, there are also multiple solutions for H_t, and our programs choose one of them. However, this kind of multiplicity never occurs in our simulations.

⁷ Following Sims (2001), this matrix quadratic equation is solved using a generalized Schur decomposition.
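The construction of H_t in (22) is easy to illustrate with small matrices; a sketch with hypothetical arrays, including a check that when C_t^a = C_t (beliefs are correct) the ALM coincides with the PLM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Posit a stable PLM transition F and hypothetical B, C; back out A so that
# F solves the matrix quadratic B F^2 - A F + C = 0, as in the PLM section.
F = np.diag([0.9, 0.5, 0.2])
B = 0.1 * rng.standard_normal((3, 3))
C = 0.2 * rng.standard_normal((3, 3))
A = (B @ F @ F + C) @ np.linalg.inv(F)

# The actual-policy matrix C_a differs from the perceived C only in the
# monetary-policy row (here row 0, with a hypothetical perturbation).
C_a = C.copy()
C_a[0] += 0.05

# Equation (22): H = F + (A - B F)^(-1) (C_a - C).
H = F + np.linalg.solve(A - B @ F, C_a - C)

# When beliefs are correct (C_a = C), the ALM transition equals the PLM's.
H_correct = F + np.linalg.solve(A - B @ F, C - C)
assert np.allclose(H_correct, F)
```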

3.3 The PLM is the perceived ALM

The reduced-form ALM and PLM are both VAR(1) processes with conditionally Gaussian innovations. Under the ALM, the conditional mean and variance are⁸

m_{t|t−1}(θ_true) = H_t(θ_true)S_{t−1},
V_{t|t−1}(θ_true) = G_t V_ε(θ_true)G_t',   (23)

where H_t(θ_true) and V_ε(θ_true) are the ALM conditional mean and variance arrays evaluated at the true value θ_true. If the agents in the model were interviewed and asked their view of the ALM, they would answer by replacing θ_true in C_t^a with θ_t, thus obtaining C_t, implying

m̃_{t|t−1}(θ_t) = F_t S_{t−1},
Ṽ_{t|t−1}(θ_t) = G_t V_ε(θ_t)G_t'.   (24)

These expressions coincide with the conditional mean and variance under the PLM. Hence the PLM is the perceived ALM. This is true not only asymptotically but for every date during the transition.⁹

⁸ According to the timing protocol, H_t and G_t can be regarded either as beginning-of-period-t estimates or end-of-period-(t − 1) estimates, which explains why it is legitimate to use them to calculate the conditional mean and variance.
⁹ Among other things, this implies that private-sector forecasts are consistent with contingency plans for the future. For instance, for j > 0, log-linear consumption Euler equations between periods t + j and t + j + 1 hold in expectation at t.

3.4 The likelihood function

The observables are stacked in a vector X_t = [π_t, u_t, y_t, μ_t, i_t]' = e_X S_t, where e_X is an appropriately defined selection matrix (see appendix A.5). The other elements of S_t allow us to express the model in first-order form but convey no additional information beyond that contained in the history of X_t. Using the prediction-error decomposition, the likelihood function for data through period t can be expressed as

p(X^t | θ) = ∏_{j=1}^t p(X_j | X^{j−1}, θ).   (25)

Since the private sector knows the ALM up to the unknown policy parameters, they can use it to evaluate the terms on the right-hand side of (25). According to the ALM, X_t is conditionally normal with mean and variance

m^X_{t|t−1}(θ) = e_X H_t(θ)S_{t−1},
V^X_{t|t−1}(θ) = e_X G_t V_ε(θ)G_t' e_X',   (26)

where H_t(θ) and V_ε(θ) are the ALM conditional mean and variance, respectively, evaluated at some value of θ. It follows that the log-likelihood function is

ln p(X^t | θ) = −(1/2) Σ_{j=1}^t { ln |V^X_{j|j−1}(θ)| + [X_j − m^X_{j|j−1}(θ)]' V^X_{j|j−1}(θ)^{−1} [X_j − m^X_{j|j−1}(θ)] }.   (27)
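A sketch of the prediction-error log-likelihood (27) for a generic conditionally Gaussian VAR(1), with hypothetical stand-ins for the transition and innovation-variance arrays (the additive normalizing constant is dropped, as in (27)):

```python
import numpy as np

def gaussian_loglik(X, H, V):
    """Log-likelihood of data X (T x n) under X_t | X_{t-1} ~ N(H X_{t-1}, V),
    via the prediction-error decomposition, dropping the constant term."""
    Vinv = np.linalg.inv(V)
    _, logdet = np.linalg.slogdet(V)
    ll = 0.0
    for j in range(1, len(X)):
        err = X[j] - H @ X[j - 1]
        ll += -0.5 * (logdet + err @ Vinv @ err)
    return ll

# Toy example with hypothetical H and V and a three-observation sample.
H = np.array([[0.8, 0.1], [0.0, 0.5]])
V = np.diag([0.01, 0.02])
X = np.array([[0.0, 0.0], [0.1, -0.05], [0.05, 0.02]])
ll = gaussian_loglik(X, H, V)
```

In the model, H and V would vary with t and with the candidate θ, exactly as in (26).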

3.5 The private sector's prior and posterior

Private agents have a prior p(θ) over the policy coefficients. At each date t, they find the log posterior kernel by summing the log likelihood and log prior. Because of the anticipated-utility assumption, their decisions depend only on a point estimate, not on the entire posterior distribution. Among the various point estimators from which they can choose, they adopt the posterior mode,

θ_t = arg max_θ [ln p(X^t | θ) + ln p(θ)].   (28)

Notice that agents take into account that past outcomes were influenced by past beliefs. By inspecting the ALM and PLM, one can verify that past values of the conditional mean m^X_{j|j−1} and the conditional variance V^X_{j|j−1} depend on past estimates as well as the current candidate θ. Past estimates are bygones at t and are held constant when agents update the posterior mode.

Notice also that the estimates are based not just on the policy rule but also on equations for inflation and output. The agents exploit all information about θ, taking advantage of cross-equation restrictions implied by the ALM. How much the cross-equation restrictions matter in this context is examined below.

4 Quantitative analysis of a backward-looking rule

A new governor appears at date 0 and formulates a new policy rule that becomes operative at date 1. After observing the private sector's prior, the governor chooses the long-run inflation target π* and reaction coefficients ψ_π, ψ_y to minimize expected loss under the new policy, with the standard deviation of policy shocks σ_i being set exogenously. We initially assume that σ_i = 0.001 (10 basis points per quarter) and later examine what happens when σ_i is zero.

4.1 Initial conditions

The economy is initialized at the steady state under the old regime. To create a scenario like the end of the Great Inflation, we calibrate the old regime to match estimates of the policy rule for the period 1966-1981. We assume that the policy rule for that period had the same functional form as in equation (1), and we estimate π*, ψ_π, ψ_y, and σ²_i by OLS. Point estimates and standard errors are reported in table 1.
Table 1: The Old Regime

            π*        ψ_π       ψ_y       σ_i
          0.0116     0.043     0.12     0.0033
         (0.013)    (0.08)    (0.04)    (0.01)

Note: Estimates of policy coefficients, 1966-1981, with standard errors in parentheses.

The estimate for π* implies an annualized inflation target of 4.6 percent. The reaction coefficients are both close to zero, with ψ_y being slightly larger than ψ_π. Policy shocks are large in magnitude and account for a substantial fraction of the total variation in the nominal interest rate. Standard errors are large, especially for ψ_π. The economy is initialized at the steady state associated with this policy rule, π_0 = 0.0116, y_0 = 0.0732, and i_0 = 0.0217, where inflation and nominal interest are expressed as quarterly rates.

4.2 Evaluating expected loss and finding the optimal simple rule

If the model fell into the linear-quadratic class, the loss function could be evaluated and optimal policy computed using methods developed by Mertens (2009a, 2009b). The central bank has quadratic preferences, and many elements of the transition equation are linear, but learning introduces a nonlinear element. Since this element is essential, we use other methods for evaluating expected loss.

We proceed numerically. We start by specifying a grid of values for π*, ψ_π, and ψ_y. Then, for each node on the grid, we simulate 100 sample paths, updating private-sector estimates θ_t by numerical maximization at each date. The sample paths are each 25 years long, and the terminal continuation value is set to zero, representing a decision maker with a long but finite horizon. Realized loss is calculated for each sample path, and expected loss is the cross-path average of realized loss. The optimal rule among this family is the node with smallest expected loss.
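The grid-search procedure just described can be summarized schematically. The sketch below is ours and purely illustrative: `simulate_path` and `realized_loss` are hypothetical stand-ins for the model simulation (with Bayesian updating inside) and the central bank's loss function, neither of which is spelled out in code in the paper.

```python
import itertools
import numpy as np

def expected_loss(policy, simulate_path, realized_loss, n_paths=100, seed=0):
    """Average realized loss across simulated transition paths for one
    policy node (pi_star, psi_pi, psi_y)."""
    rng = np.random.default_rng(seed)
    losses = [realized_loss(simulate_path(policy, rng)) for _ in range(n_paths)]
    return float(np.mean(losses))

def optimal_simple_rule(grid_pi_star, grid_psi_pi, grid_psi_y,
                        simulate_path, realized_loss):
    """Evaluate every node on the grid and return the node with the
    smallest expected loss, as in section 4.2."""
    best, best_loss = None, np.inf
    for node in itertools.product(grid_pi_star, grid_psi_pi, grid_psi_y):
        L = expected_loss(node, simulate_path, realized_loss)
        if L < best_loss:
            best, best_loss = node, L
    return best, best_loss
```

Using a common seed across nodes (common random numbers) reduces the Monte Carlo noise in comparisons between policies.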

4.3 A full-information benchmark

To highlight the role of learning, we begin by describing the optimum under full information. When private agents know the new policy coefficients, the optimized Taylor rule sets π* = 0, ψ_π = 1.05, and ψ_y = 0.11. Figure 1 depicts average responses of inflation, output, and nominal interest gaps, which are defined as deviations from the steady state of the new regime. Recall that the economy is initialized in the steady state of the old regime and that the disinflation commences at date 1.10

The nominal interest rate rises at date 1, causing inflation to decline sharply and overshoot the new target. After that, inflation converges from below. This rolls back the price level, partially counteracting the effects of high past inflation. As Woodford (2003) explains, a partial rollback of the price level is a feature of optimal monetary policy under commitment because a credible commitment on the part of the central bank to roll back price increases restrains a firm's incentive to increase its price in the first place. Under full information, the optimal simple rule shares this property. The initial increase in the nominal interest rate causes the output gap to fall below zero. Since inflation and output growth are below target at date 1, the central bank

10 Inflation and nominal interest gaps at date 0 coincide because the steady-state real interest rate is the same under the two regimes.


[Figure: line plot over 12 quarters of the inflation gap, nominal interest gap, and output gap.]

Figure 1: Response of inflation, output, and interest rate under full information
cuts the interest rate at date 2, damping the output loss and initiating a recovery. Convergence to the new steady state is rapid, with inflation, output, and interest gaps closing in about a year. After 4 periods, inflation is close to its new target, which is 4.6 percentage points below the old target. The cumulative loss in output is approximately 2.6 percent. The sacrifice ratio, defined as the cumulative loss in output divided by the change in target inflation, is 0.56 percent. The sacrifice ratio is small under full information because the model has no indexation. Although prices are sticky, the absence of indexation means that inflation is weakly persistent. The absence of indexation also explains why the bank seeks a substantial rollback in the price level.
Under full information, the economy is highly fault tolerant11 with respect to policies away from the optimum. Figure 2 portrays iso-expected loss contours as a function of π*, ψ_π, and ψ_y. Each panel involves a different setting for π*, ranging from 0 to 3 percent per annum, and ψ_π and ψ_y are shown on the horizontal and vertical axes, respectively. Expected loss is normalized by dividing by the loss under the optimal rule so that contour lines represent gross deviations from the optimum. The diamond in the upper left panel depicts the optimal simple rule. Expected loss increases slowly as policy moves away from the optimum. For instance, when π* = 0,

11 Levin and Williams (2003) introduced the term "fault tolerance" to describe the extent to which expected loss increases as policies move away from the optimum.

[Figure: four contour-plot panels, one for each setting of π*, with ψ_π on the horizontal axis and ψ_y on the vertical axis; the diamond in the upper left panel marks the optimal simple rule.]

Figure 2: Iso-expected loss contours under full information
relative loss remains below 2 for most combinations of ψ_π and ψ_y and rises above 10 only when ψ_π approaches zero. Although expected loss is higher for higher values of π*, the surface remains relatively flat. Later we contrast this with an absence of fault tolerance under learning.

4.4 A Taylor rule optimized for learning

We assume that private agents initially anticipate a continuation of the old regime, and we calibrate their priors using the estimates of policy coefficients for 1966-1981 shown in table 1. In particular, they believe that policy coefficients are independent a priori,

p(θ) = p(π*) p(ψ_π) p(ψ_y) p(σ_i),   (29)

and they adopt truncated normal priors for π*, ψ_π, and ψ_y and a gamma prior for σ²_i. For π*, ψ_π, and ψ_y, the mean and standard deviation of an untruncated normal density are set equal to the numbers in table 1. To enforce nonnegativity, the unrestricted priors are truncated at zero and renormalized so that transformed priors integrate to unity. For σ²_i, hyperparameters are chosen so that the implied mode and standard deviation match the numbers in the table. The results are shown in figure 3.
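The truncate-at-zero-and-renormalize construction can be written down directly. The sketch below is an illustrative reconstruction, not the authors' implementation: it uses scipy's truncnorm (which takes bounds in standard-deviation units) with the ψ_π and ψ_y numbers from table 1, and it omits the gamma prior for σ²_i.

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_normal_prior(mean, sd, lower=0.0):
    """Normal(mean, sd) truncated below at `lower` and renormalized to
    integrate to one. scipy parameterizes bounds in sd units."""
    a = (lower - mean) / sd
    return truncnorm(a=a, b=np.inf, loc=mean, scale=sd)

# Priors centered on the 1966-1981 OLS estimates in table 1.
prior_psi_pi = truncated_normal_prior(0.043, 0.08)
prior_psi_y = truncated_normal_prior(0.12, 0.04)
```

Consistent with the description of figure 3, these densities place little mass above 0.25 and none on negative values.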

[Figure: prior densities in four panels, one each for π*, ψ_π, ψ_y, and σ_i.]

Figure 3: A prior based on the old regime
Priors for ψ_π and ψ_y concentrate slightly to the right of zero, and little mass is assigned to values greater than 0.25. On the other hand, priors for π* and σ_i assign non-negligible probability to a broad range of values. According to this specification, private agents are open to persuasion about π* and σ_i but are skeptical that the central bank will react aggressively to inflation or output. Overcoming that skepticism will be a major challenge for the central bank.
Figure 4 portrays iso-expected loss contours as a function of π*, ψ_π, and ψ_y. As before, σ_i is held constant at 10 basis points per quarter. The left-hand column depicts the results of a broad search over a coarse grid, while the column on the right portrays calculations based on a finer grid that focuses on the low expected-loss region of the policy-coefficient space. Expected loss is again normalized by dividing by the loss for the rule optimized for learning.
In the left-hand column, regions of low expected loss concentrate in the southwest quadrant of the panels, near the prior mode for ψ_π and ψ_y. Expected loss increases rapidly as the feedback coefficients move away. Indeed, in the northeast quadrant, expected loss is more than 100 times greater than under the optimal simple rule. The optimal simple rule under full information is marked by an asterisk and lies in the


[Figure: two columns of contour-plot panels with ψ_π on the horizontal axis and ψ_y on the vertical axis; the left column shows a coarse grid, the right column a finer grid, with the full-information (FI) optimum and the learning optimum marked.]

Figure 4: Iso-expected loss contours under learning
high-loss region.
The reason why the economy loses fault tolerance under learning is that the equilibrium law of motion can be a temporarily explosive process, i.e., one that is asymptotically stationary but which has explosive autoregressive roots during the transition. The agents in our model want to be on the stable manifold, but they don't know where it is. Their plans are based on the PLM, which depends on F_t, but outcomes are governed by the ALM, which involves H_t. The eigenvalues of F_t are never outside the unit circle, but the eigenvalues of H_t can be explosive even when those of F_t are not. Thus, actions that would be stable under the PLM can be unstable under the ALM.

The matrices H_t and F_t differ because of disagreement between the actual policy θ and the perceived policy θ_t (see equation 22). The eigenvalues of H_t are close to those of F_t (hence are nonexplosive) when θ_t is close to θ. Explosive eigenvalues emerge when there is substantial disagreement between θ_t and θ. On almost all simulated paths, the private sector eventually learns enough about θ to make explosive eigenvalues vanish, but the transition is highly volatile and dominates expected loss when the initial disagreement is large and/or learning is slow.

[Figure: shaded region in the (ψ_π, ψ_y) plane for which the eigenvalues of H_1 are nonexplosive, with the learning optimum marked.]

Figure 5: Nonexplosive region for H_1
The shaded area in figure 5 depicts the region of the policy-coefficient space for which the eigenvalues of H_1 are nonexplosive. Since the nonexplosive region is similar for all settings of π*, we just show it for π* = 0. This region is sensitive to ψ_π and ψ_y, however, and concentrates near the prior mode. The central bank can move π* far from the private sector's prior mode without generating locally-unstable dynamics, but moving ψ_π and/or ψ_y far from their prior modes makes the transition turbulent.
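The diagnostic behind figure 5 can be checked numerically: a candidate policy activates temporarily explosive dynamics when the date-t ALM matrix H_t has a root outside the unit circle even though the PLM matrix F_t does not. The sketch below is generic; the 2x2 matrices in the usage example are illustrative stand-ins, not the model's actual H_1 and F_1.

```python
import numpy as np

def spectral_radius(M):
    """Largest eigenvalue modulus of a square matrix."""
    return float(np.max(np.abs(np.linalg.eigvals(np.asarray(M, dtype=float)))))

def is_temporarily_explosive(H_t, F_t):
    """True when outcomes (governed by H_t) are locally explosive even
    though agents' perceived law of motion (F_t) is stable."""
    return spectral_radius(H_t) > 1.0 and spectral_radius(F_t) <= 1.0

# Illustrative stand-ins: agents perceive a stable law, outcomes are not.
F1 = [[0.9, 0.0], [0.0, 0.5]]
H1 = [[1.1, 0.0], [0.0, 0.5]]
```

Mapping such a check over a grid of (ψ_π, ψ_y) values, with H_1 rebuilt at each node, traces out a nonexplosive region of the kind shown in figure 5.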

To locate the optimum under learning, we search on a finer grid in the southwest quadrant of the (ψ_π, ψ_y) space. Isoloss contours are shown in the right column of figure 4, and the optimum is marked by a diamond, π* = 0.01, ψ_π = 0.25, and ψ_y = 0.15. Relative to the full-information solution, target inflation is slightly higher, and the reaction to output growth is a bit more aggressive. The main difference, however, is that the central bank responds less aggressively to inflation. Since the full-information optimum ψ_π = 1.05 lies outside the nonexplosive region, the transition would be initially very turbulent. Furthermore, since the private sector is prejudiced against such large values of ψ_π, explosive eigenvalues would remain active for too long. For these reasons, the optimal policy puts ψ_π and ψ_y only slightly outside the nonexplosive region. The bank can adjust π* more freely, however, thereby achieving low average inflation.


Because the location of the nonexplosive regions depends more on ψ_π and ψ_y than on π*, uncertainty about reaction coefficients is more problematic than uncertainty about target inflation. As shown in appendix B, when uncertainty about ψ_π, ψ_y, and σ_i is deactivated and π* is the only uncertain policy parameter, the initial nonexplosive region expands to fill most of the (ψ_π, ψ_y) space. Since the ALM becomes nonexplosive for most policies, the economy becomes highly fault tolerant, and private agents learn π* very quickly. For these reasons, the model behaves much as it does under full information. The optimal policy is similar, and impulse response functions resemble those in figure 1. In contrast, when uncertainty about π* is deactivated and ψ_π, ψ_y, and σ_i are uncertain, the results are qualitatively similar to those shown here. Uncertainty about feedback parameters is more costly because it activates locally-explosive dynamics.
A second loss of fault tolerance emerges in the right column of figure 4. For small values of ψ_π, estimates occasionally stray too close to zero, pushing the PLM close to the indeterminacy region. Outcomes are highly volatile when this occurs, causing expected loss to rise. For an effective stabilization, the bank must choose a value for ψ_π that not only achieves determinacy under full information but also guards against estimates straying too close to zero during the transition.
Figure 6 portrays impulse response functions for inflation, output, and nominal interest gaps for the optimal simple rule under learning. The transition is longer and more volatile than under full information. Inflation again declines at impact, overshooting π* and partially rolling back past increases in the price level, but now inflation oscillates as it converges to its new long-run target. The transition takes about two and a half years, with inflation remaining below target for most of that time. There is also a shallow but long-lasting decline in output. The output gap reaches a trough of -0.9 percent in quarter 5 and remains negative for 3 years. The cumulative output gap during this time is -6.6 percent. Since inflation falls permanently by 3.6 percentage points, the sacrifice ratio amounts to 1.8 percent of lost output per percentage point of inflation, 3 times larger than under full information. According to Ascari and Ropele (2011), most estimates of the sacrifice ratio for the Volcker disinflation lie between 1 and 3, so our model is in the right ballpark.


[Figure: line plot over 20 quarters of the inflation gap, nominal interest gap, and output gap under learning.]

Figure 6: Average responses under the Taylor rule optimized for learning
[Figure: four panels plotting average estimates (solid lines) against true values (dashed lines) over 40 quarters, one panel each for ψ_π, π*, ψ_y, and σ_i.]

Figure 7: Average estimates of policy coefficients


Figure 7 portrays mean estimates of the policy coefficients, again averaged across 100 sample paths. The true coefficients are shown as dashed lines, while average estimates are portrayed as solid lines. The estimates move quickly toward their respective true values and are not far off after 10 quarters. Rapid convergence of ψ_π and ψ_y is crucial for eliminating locally-explosive dynamics. Beliefs about target inflation and the policy shock variance also quickly approach neighborhoods of their respective true values, but this seems secondary for transitional volatility.

5 Perturbations to the baseline learning model

To highlight aspects of the baseline model, we now turn to a number of perturbations.
For the sake of brevity, the main points are summarized here, and a full presentation
of results is relegated to a series of appendices.

5.1 McCallum's information constraint

McCallum's information constraint plays a critical role in our analysis. To highlight its importance, we contrast the backward-looking Taylor rule in equation (1) with one involving contemporaneous feedback to inflation and output growth,

i_t − i_{t−1} = ψ_π (π_t − π*) + ψ_y (y_t − y_{t−1}) + ε_{i,t}.   (30)

There is also a slight change in the timing protocol. Private agents still enter period t with beliefs about policy coefficients inherited from t − 1, and they treat estimated parameters as if they were known when updating their decision rules. But now the central bank and private sector simultaneously execute their contingency plans when period t shocks are realized. After observing current-quarter outcomes, private agents update estimates and carry them forward to t + 1. All other aspects of the model remain the same, including the prior on (π*, ψ_π, ψ_y, σ_i).

Because actual central banks cannot observe current-quarter output or the price level, they would not be able to implement this policy. We examine it here in order to isolate the consequences of lags in the central bank's information flow.
As shown in appendix C, locally-explosive dynamics vanish in this case, and the learning economy becomes highly fault tolerant. The model therefore behaves more like its full-information counterpart than did the economy with a backward-looking rule. For instance, while the full-information optimum sets π* = 0, ψ_π = 2.4, and ψ_y = 0.1, the rule optimized for learning sets π* = 0, ψ_π = 1.4, and ψ_y = 0.1. The learning rule has the same inflation target and reaction coefficient on output growth as under full information, but it responds to inflation gaps a bit less aggressively. Compared with the baseline model, however, the central bank is less constrained by initial beliefs and freer to adjust its reaction coefficients. The learning transition is also shorter and less volatile than for the backward-looking rule, and the sacrifice ratio is about the same (1.9 percent for the contemporaneous rule as opposed to 1.8 percent for the backward-looking policy). Learning is slower than for the backward-looking rule, but that is because there is less transitional volatility in inflation and output growth.
Many of the difficulties reported for the baseline case follow from the fact that central banks cannot observe current-quarter output and inflation. If a rule with contemporaneous feedback to inflation and output were feasible, it would be superior. The difference between contemporaneous and backward-looking Taylor rules is more pronounced under learning than under full information. More important, though, even when allowing contemporaneous feedback in the policy rule, the optimal simple rule under learning responds less aggressively to inflation, and disinflation carries a nontrivial cost. Learning precludes a sharp low-cost disinflation for the contemporaneous rule as well as the backward-looking policy.

5.2 Policy shocks

The baseline calibration for σ_i reflects a tension between two considerations. On the one hand, estimated policy reaction functions never fit exactly, implying σ_i > 0. On the other, a fully optimal policy would presumably be deterministic, implying σ_i = 0. The baseline specification compromises with a small positive value (σ_i = 10 basis points per quarter).

If the true value of σ_i were zero and known with certainty, the signal-extraction problem would unravel, with agents perfectly inferring the other three policy coefficients after just three periods. This would not happen in our model even if σ_i were zero because the agents' prior on σ_i encodes a belief that monetary-policy shocks are present. Prior uncertainty about σ_i is enough to preserve a nontrivial signal-extraction problem.
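To see why three periods would suffice, note that with ε_{i,t} ≡ 0 the rule holds exactly and is linear in (ψ_π, ψ_y, ψ_π π*), so three observations pin the coefficients down via a 3x3 linear system. The sketch below is a hypothetical illustration of this unraveling, written for a shock-free version of the contemporaneous rule (30); it is not the paper's signal-extraction algorithm.

```python
import numpy as np

def infer_policy(delta_i, pi, delta_y):
    """Recover (psi_pi, psi_y, pi_star) exactly from three observations of
    a shock-free rule delta_i = psi_pi*(pi - pi_star) + psi_y*delta_y.
    Writing c = psi_pi*pi_star makes the system linear in (psi_pi, psi_y, c)."""
    A = np.column_stack([pi, delta_y, -np.ones(3)])
    psi_pi, psi_y, c = np.linalg.solve(A, delta_i)
    return psi_pi, psi_y, c / psi_pi
```

With σ_i > 0, or with a prior that insists shocks are present, no finite sample delivers this exact inversion, which is why the learning problem stays nontrivial.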
Furthermore, since the initial nonexplosive region depends neither on σ_i nor on prior beliefs about σ_i, the central bank's main challenge in a σ_i = 0 economy would be the same. It follows that the optimized rule should be similar. Appendix D confirms this intuition: when σ_i = 0 and all other aspects of the baseline economy are held constant, the optimized rule sets π* = 0.01, ψ_π = 0.15, and ψ_y = 0.1. Thus target inflation and the response to output growth are about the same, and the response to inflation is a bit weaker. The same is true when σ_i = 0 and the prior standard deviation for σ_i is reduced by half. In both cases, impulse response functions under the optimized rule resemble those in figure 6.

That agents entertain a belief that policy shocks are present is critical. Whether actual policy shocks are small or zero is secondary.

5.3 A two-tier approach12

In the baseline model, the central bank introduces two reforms at once, reducing target inflation and strengthening stabilization by responding more aggressively to inflation and output growth. Appendix E contrasts this with a two-tier approach that separates the reforms, with policymakers first switching to a rule designed to bring target inflation down and thereafter changing feedback parameters to stabilize the economy around the new target.

We formulate this approach as follows. We assume that for a certain period whose length is exogenously set, the policymaker reduces π* but continues with response coefficients inherited from the old regime. After this initial period, when beliefs about π* have had a chance to adjust, the policymaker adjusts the reaction coefficients. Once again, all other aspects of the baseline specification remain the same. Appendix E considers models in which the first stage lasts 10 and 20 quarters, respectively.
Alas, the two-tier approach prolongs the transition and makes matters worse. Delaying the second reform postpones but does not circumvent the problem of coping with locally-explosive dynamics. This challenge now emerges at the end of stage 1 rather than at the beginning of the disinflation, but it does not go away.

A separation of reforms also retards learning. During stage 1, beliefs about ψ_π and ψ_y harden around old-regime values because agents observe more weak responses to inflation and output growth. This hinders learning about ψ_π and ψ_y in stage 2. Less obviously, the separation of reforms also retards learning about target inflation in stage 1. Wherever π* appears in the likelihood function, it is multiplied by ψ_π. Since ψ_π remains close to zero during stage 1, π* is weakly identified and hard to learn about. One of the purposes of a simultaneous reform is to strengthen identification of π* by increasing ψ_π. The two-tier approach also postpones this until stage 2.

12 We thank Klaus Adam for suggesting this exercise.

As shown in appendix E, optimized Taylor rules set π* = 2 percent per annum, ψ_π = 0.15, and ψ_y = 0.15 or 0.2. Target inflation is therefore slightly higher than for simultaneous reforms, the inflation response is a bit weaker, and the reaction to output growth is about the same. Learning is slower, the transition is longer and more volatile, and expected loss is substantially higher.

5.4 Single-equation learning

Agents in the baseline model exploit cross-equation restrictions on the ALM when estimating policy coefficients. This places a heavy computational burden on decision makers who are supposed to be boundedly rational. Appendix F lightens their burden by assuming that agents estimate equation (1) by recursive least squares with either constant or decreasing gain. All other aspects of the baseline specification remain the same.
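A constant-gain recursive least squares update of the kind appendix F assumes can be sketched as follows. This is a generic textbook recursion, not the appendix's exact algorithm: each period the coefficient estimate is nudged toward the latest prediction error, with the gain controlling how fast old data are discounted (a decreasing-gain version would replace `gain` with 1/t).

```python
import numpy as np

def rls_constant_gain(y, X, theta0, gain=0.02):
    """Constant-gain recursive least squares for y[t] = X[t] @ theta + noise.
    R tracks the second moments of the regressors; theta moves toward the
    latest prediction error at a fixed rate, discounting old observations."""
    theta = np.array(theta0, dtype=float)
    R = np.eye(len(theta))
    for t in range(len(y)):
        x = np.asarray(X[t], dtype=float)
        R = R + gain * (np.outer(x, x) - R)
        theta = theta + gain * np.linalg.solve(R, x * (y[t] - x @ theta))
    return theta
```

Applied to equation (1), y would be the policy interest rate and X the lagged inflation and output-growth regressors plus a constant.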
Although estimates of policy coefficients sometimes differ from those in the baseline model, optimized Taylor rules are essentially the same. That results are similar for constant- and decreasing-gain algorithms is not surprising because the samples are small and the rates at which the respective algorithms discount past data are almost the same. That the results are similar to those for full-system learning means that cross-equation restrictions are less informative than in a full-information rational-expectations model. In the latter, private decision rules are predicated on knowledge of the true policy coefficients and therefore convey a lot of information about them. In a learning model, private decision rules are predicated on estimates of policy coefficients and encode less information about the true policy. Somewhat to our surprise, little is to be gained by exploiting cross-equation restrictions in the learning economy. Single-equation learning is almost as good.

6 Discussion of related literature

The literature on monetary policy with adaptive learning is vast, and good surveys can be found in Evans and Honkapohja (2009) and Gaspar, et al. (2011). Here we discuss a few papers that are especially relevant to ours.
A number of papers identify attributes of the equilibrium law of motion that are influential in our analysis. For instance, Erceg and Levin (2003), Orphanides and Williams (2005), Milani (2006, 2007), and Slobodyan and Wouters (2012) examine new Keynesian models with adaptive learning and demonstrate that learning enhances inflation persistence.13 Orphanides and Williams emphasize that central banks should take steps to counteract this increase in persistence. In their model, this is done by reacting more aggressively to inflation. In ours, adverse initial beliefs that reaction coefficients are close to zero prevent the bank from responding aggressively, and inflation persistence is restrained, at least during the transition, by keeping reaction coefficients close to prior beliefs.
The conclusion that knowledge of the inflation target affords little benefit for stabilization when other aspects of monetary policy are uncertain also appears in Eusepi and Preston (2010). The reasons supporting this conclusion differ, however. First and foremost, our notions of stability differ. Eusepi and Preston examine whether learning dynamics eventually converge to a rational-expectations equilibrium (REE). For a model with least-squares learning, they demonstrate that learning dynamics can fail to converge to REE when the central bank's inflation target is known but other aspects of monetary policy are not, and that convergence to REE is restored if the central bank can credibly communicate the variables upon which nominal interest decisions are conditioned. Our model is closer to the latter case: private agents know the arguments and form of the policy rule, and their estimates eventually converge to the true policy coefficients.14 Our conclusion depends not on limiting beliefs but on the nature of the transition: uncertainty about reaction coefficients is more costly in our environment because this is what activates temporarily explosive dynamics. Among the above-cited papers, only Erceg and Levin consider the implications for disinflation, and their analysis focuses on uncertainty about target inflation and takes the policy rule as given. None analyze how the potential for explosive transitional dynamics influences the central bank's strategy.
Hagedorn (2011) examines optimal disinflation in a new Keynesian model with perfect credibility and rational expectations. He demonstrates that the transition path for the nominal interest rate is uniformly lower than it would be under the original inflation target. Hagedorn stops short of characterizing optimal policy under learning, however, commenting that this would require solving a challenging signal-extraction problem. His notion of optimality is broader than ours, but we tackle the signal-extraction problem. The price of extending the model in this direction was narrowing the family of policies to Taylor rules. Embracing a broader notion of optimality would be an important extension.

13 Milani and Slobodyan and Wouters estimate DSGE models with learning and show that structural sources of inflation persistence such as indexation lose empirical support when learning is introduced and that exogenous shocks become less persistent.
14 We have no theorem to this effect, but this is what happens in the simulations.
For a stylized, small-scale new Keynesian model, Gaspar, et al. (2006, 2011) show how to do this. They study optimal monetary policy in an environment where agents learn adaptively and the central bank takes the learning process into account when formulating its policy.15 The optimal rule shares some features of optimal policy under commitment and rational expectations, but commitment plays no role, and the bank relies instead on its ability to influence estimated inflation persistence. Like Hagedorn, their notion of optimality is broader than ours, and they characterize the optimal policy by numerically solving a dynamic program. Their approach is feasible in models with a low-dimensional state vector but would run afoul of the curse of dimensionality in ours. We chose to enrich the economic environment at the expense of narrowing our focus to Taylor rules. Scaling their methods to a larger model would be another important extension.

7 Conclusion

We model transitional dynamics that emerge when a central bank tries to disinflate while operating without full transparency. The bank commits to a simple Taylor rule whose form is known but whose coefficients are not. Private agents learn about those coefficients via Bayesian updating. Under a McCallum timing protocol, locally-explosive dynamics can emerge when the new policy lacks transparency, making the transition highly volatile. The potential for locally explosive outcomes dominates expected loss and materially alters the bank's choice of feedback parameters relative to what would be chosen if operating under complete transparency and credibility. The bank copes by choosing feedback parameters close to the private sector's initial beliefs. Uncertainty about target inflation is secondary, and the bank can reduce average inflation substantially without generating much turbulence. Its ability to achieve greater stability by adjusting reaction coefficients is more limited.

15 They do not study disinflation, however.


References
[1] Ascari, G., 2004. Staggered prices and trend in‡ation: some nuisances. Review
of Economic Dynamics 7, 642–667.
[2] Ascari, G. and T. Ropele. 2011. Disin‡ations in a Medium-Scale DSGE Model:
Money Supply versus Interest Rate Rules, unpublished manuscript.
[3] Calvo, G. 1983. Staggered Prices in a Utility-Maximizing Framework. Journal of
Monetary Economics 12, 383-398.
[4] Cho, I.K., N. Williams, and T.J. Sargent. 2002. Escaping Nash In‡ation. Review
of Economic Studies 69, 1-40.
[5] Cogley, T., G. Primiceri, and T.J. Sargent. 2010. In‡ation-Gap Persistence in
the U.S. American Economics Journal - Macroeconomics 2, 43-69.
[6] Cogley, T. and T.J Sargent, 2008. Anticipated Utility and Rational Expectations
as Approximations of Bayesian Decision Making. International Economic Review
49, 185-221.
[7] Cogley, T. and A.M. Sbordone. 2008. Trend In‡ation, Indexation, and In‡ation
Persistence in the New Keynesian Phillips Curve. American Economic Review
98, 2101-2126.
[8] Coibion, O. and Y. Gorodnichenko. 2011. Monetary policy, trend in‡ation and
the great moderation: an alternative interpretation. American Economic Review
101, 341-370.
[9] Erceg, C. and A. Levin. 2003. Imperfect credibility and in‡ation persistence.
Journal of Monetary Economics, 50 (4): 915-944.
[10] Eusepi, S. and B. Preston. 2010. Central bank communication and expectations
stabilization. American Economic Journal: Macroeconomics 2: 235-271.
[11] Evans, G.W. and S. Honkapohja. 2001. Learning and Expectations in Macroeconomics. Princeton University Press: Princeton, N.J.
[12] Evans, G.W. and S. Honkapohja. 2003. Expectations and the stability problem
for optimal monetary policies. Review of Economic Studies, 70, 807-824.
30

[13] Evans, G.W. and S. Honkapohja. 2009. Expectations, Learning and Monetary
Policy: An Overview of Recent Research. in K. Schmidt-Hebbel and C. Walsh
(eds.) Monetary Policy under Uncertainty and Learning, Central Bank of Chile:
27-76.
[14] Gaspar, Vitor, Frank Smets and David Vestin. 2006. Adaptive Learning, Persistence, and Optimal Monetary Policy. Journal of the European Economic Association 4, 376-385.
[15] Gaspar, Vitor, Frank Smets and David Vestin. 2011. In‡ation Expectations,
Adaptive Learning and Optimal Monetary Policy, in B. Friedman and M. Woodford (eds) Handbook of Monetary Economics, vol 3B, North-Holland: 1055-1095.
[16] Goodfriend, M. and R.G. King, 2005. The Incredible Volcker Disin‡ation. Journal of Monetary Economics, 52 (5): 981-1016.
[17] Justiniano, A., G. Primiceri, and A. Tambalotti. 2010. Investment Shocks and
Business Cycles. Journal of Monetary Economics, 57(2): 132-145.
[18] Kreps, D. 1998. Anticipated Utility and Dynamic Choice, in D.P. Jacobs, E.
Kalai, and M. Kamien (eds.), Frontiers of Research in Economic Theory, Cambridge University Press: Cambridge, 242-274.
[19] Levin, A.T. and J.C. Williams. 2003. Robust monetary policy with competing
reference models. Journal of Monetary Economics, 50 (5): 945-975.
[20] Marcet, A. and T.J. Sargent. 1989a. Convergence of least-squares learning mechanisms in self-referential linear stochastic models. Journal of Economic Theory
48, 337-368.
[21] Marcet, A. and T.J. Sargent. 1989b. Convergence of least-squares learning in
environments with hidden state variables and private information. Journal of
Political Economy 97, 1306-1322.
[22] McCallum, B.T. 1999. Issues in the design of monetary policy rules. In: Taylor,
J.B., Woodford, M. (Eds.), Handbook of Macroeconomics, vol. 1C. Elsevier,
Amsterdam.


[23] Mertens, Elmar. 2009a. Managing Beliefs about Monetary Policy under Discretion. Unpublished manuscript, Federal Reserve Board.
[24] Mertens, Elmar. 2009b. Discreet Commitments and Discretion of Policymakers
with Private Information. Unpublished manuscript, Federal Reserve Board.
[25] Milani, Fabio. 2006. A Bayesian DSGE Model with Infinite-Horizon Learning: Do
'Mechanical' Sources of Persistence Become Superfluous? International Journal
of Central Banking 2 (3): 87-106.
[26] Milani, Fabio. 2007. Expectations, Learning and Macroeconomic Persistence.
Journal of Monetary Economics, 54 (7): 2065-2082.
[27] Hagedorn, Marcus. 2011. Optimal disinflation in new Keynesian models. Journal
of Monetary Economics 58: 248-261.
[28] Orphanides, A. and J.C. Williams. 2005. Imperfect Knowledge, Inflation Expectations and Monetary Policy, in B. Bernanke and M. Woodford (eds.), The Inflation-Targeting Debate, University of Chicago Press: Chicago.
[29] Orphanides, A. and J.C. Williams. 2007. Robust monetary policy with imperfect
knowledge. Journal of Monetary Economics, 54: 1406-1435.
[30] Sargent, T.J. 1982. The Ends of Four Big Inflations, in Inflation: Causes and
Effects, edited by Robert Hall, University of Chicago Press, pp. 41-97.
[31] Sbordone, A.M. 2007. Inflation persistence: Alternative interpretations and policy implications. Journal of Monetary Economics 54, 1311-1339.
[32] Sims, C.A. 2001. Solving Linear Rational Expectations Models. Computational
Economics 20, 1-20.
[33] Slobodyan, Sergey and Raf Wouters. 2012. Learning in a Medium-Scale DSGE
Model with Expectations Based on Small Forecasting Models. American Economic Journal: Macroeconomics 4(2): 65-101.
[34] Smets, F. and R. Wouters. 2007. Shocks and Frictions in US Business Cycles: A
Bayesian DSGE Approach. American Economic Review, 97 (3): 586-606.
[35] Woodford, M. 1999. Optimal Monetary Policy Inertia. NBER Working Paper 7261.

[36] Woodford, M. 2003. Interest and Prices. Princeton University Press: Princeton,
NJ.
