Working Paper Series

Indeterminacy and Learning:
An Analysis of Monetary Policy in the
Great Inflation

WP 14-02

Thomas A. Lubik
Federal Reserve Bank of Richmond
Christian Matthes
Federal Reserve Bank of Richmond

This paper can be downloaded without charge from:
http://www.richmondfed.org/publications/

Indeterminacy and Learning:
An Analysis of Monetary Policy in the Great Inflation

Thomas A. Lubik
Federal Reserve Bank of Richmond†

Christian Matthes
Federal Reserve Bank of Richmond‡

January 2014
Working Paper No. 14-02

Abstract
We argue in this paper that the Great Inflation of the 1970s can be understood as the result of equilibrium indeterminacy in which loose monetary policy engendered excess volatility in macroeconomic aggregates and prices. We show, however, that the Federal Reserve inadvertently pursued policies that were not anti-inflationary enough because it did not fully understand the economic environment it was operating in. Specifically, it had imperfect knowledge about the structure of the U.S. economy and it was subject to data misperceptions. The real-time data flow at that time did not capture the true state of the economy, as large subsequent revisions showed. It is the combination of learning about the economy and, more importantly, the use of data riddled with measurement error that resulted in policies that the Federal Reserve believed to be optimal but that, when implemented, led to equilibrium indeterminacy in the economy.

JEL Classification: C11; C32; E52
Keywords: Federal Reserve; Great Moderation; Bayesian Estimation; Least Squares Learning

The views expressed in this paper are those of the authors and should not be interpreted as those of the Federal Reserve Bank of Richmond or the Federal Reserve System. We wish to thank our discussant Ryan Chahrour, Roberto Billi, Tim Cogley, Martin Ellison, Zheng Liu, Mark Watson, and seminar participants at UC Davis, the Federal Reserve Bank of San Francisco, Oxford University, Warwick University, the Bank of England, the 2013 meeting of the Society for Economic Dynamics in Seoul, and the 2013 Federal Reserve System Meeting in Macroeconomics for useful comments.
† Research Department, P.O. Box 27622, Richmond, VA 23261. Tel.: +1-804-697-8246. Email: thomas.lubik@rich.frb.org.
‡ Research Department, P.O. Box 27622, Richmond, VA 23261. Tel.: +1-804-697-4490. Email: christian.matthes@rich.frb.org.


1 Introduction

There are three strands of narratives about the Great Inflation and the Great Moderation in the academic literature. At opposite ends of the spectrum are the good/bad luck and good/bad policy stories. The 1970s were a time of economic upheaval with strong and persistent exogenous shocks that occurred with high frequency. It was simply bad luck to have been a central banker at that time since despite best intentions the incidence of shocks was too much for the central banker's arsenal to handle. When the 1980s came around, however, the reduced incidence and persistence of shocks rang in the Great Moderation. This view is exemplified by Sims and Zha (2006). An almost orthogonal narrative argues that the Federal Reserve conducted bad policy in the 1970s in that it was not aggressive enough in fighting inflation. It is only through Volcker's disinflation engineered through a high-interest rate policy that the Great Inflation was reined in. This bad policy view has been advocated by Clarida, Gali, and Gertler (2000) and subsequently Lubik and Schorfheide (2004). A third narrative, typically associated with Orphanides (2001), relies on the idea that the Federal Reserve did not perceive the economic scenario of the 1970s correctly. Data misperceptions led it to implement policies that delivered bad outcomes and that were only abated in the 1980s with a better understanding of the state of the world.
Our paper attempts to integrate the bad policy narrative with the data misperception narrative. More specifically, we provide an explanation for why the Federal Reserve, almost unwillingly, engaged at first in monetary policy that led to bad outcomes (the Great Inflation), but subsequently pursued a policy that resulted in good outcomes (the Great Moderation). We show that what appears in the data as good and bad outcomes is the result of an optimal policy problem under imperfect information. In doing so, we also integrate various recent contributions to the empirical and theoretical macroeconomic literature on learning.
We take as a starting point the observation by Orphanides (2001) that the Federal Reserve did not perceive the productivity slowdown as it was occurring during the 1970s. We capture this misperception of the data by assuming that the Federal Reserve observes all data with error. In addition, we assume that the central bank does not know the true data-generating process. It gathers information by estimating a reduced-form VAR and then updates its beliefs about the state of the world and the underlying economic model using least-squares learning. The linear-quadratic optimal policy problem and its solution follow Primiceri (2006).


Private sector behavior is captured by a typical New Keynesian framework that is close to that in Lubik and Schorfheide (2004) for reference purposes. The private sector knows the current monetary policy rule, forms rational expectations conditional on that rule, and assumes that the central bank's policy rule is time-invariant. The optimal rule derived from the central bank's policy problem thus combines with the private sector system into a rational expectations model. The original source of indeterminacy, that is, of the multiplicity of solutions that arises from the rational expectations system, is the same as in Bullard and Mitra (2002), Woodford (2003), and Lubik and Schorfheide (2004); to wit, a violation of the Taylor principle. In the face of inflationary pressures, the central bank is not aggressive enough in raising the real rate of interest through its control of the nominal interest rate. As shown in these papers, the violation of the Taylor principle can be tied to the value of the policy coefficients in a (linear) interest rate rule.
In this paper, we thus provide a rationale for why the central bank may choose policy coefficients that inadvertently induce indeterminate outcomes. Given the learning mechanism and the misperception of the data due to measurement issues, the estimated coefficients of the central bank's reduced-form model, and thus the optimal policy coefficients, change period by period. The rational expectations equilibrium that arises each period is either unique or indeterminate given the policy rule in place. It is the endogenous shifts of the policy coefficients for fixed private sector parameters that move the economy across the threshold between the determinate and indeterminate regions of the parameter space. 'Bad policy', that is, indeterminacy, arises not because of intent but because of data mismeasurement and incomplete knowledge of the economy on the part of the central bank.
We estimate the model on real-time and final data using Bayesian methods. Our findings confirm the pattern of indeterminacy and determinacy during, respectively, the Great Inflation and the Great Moderation as identified by Lubik and Schorfheide (2004). Yet, this pattern is rationalized by data misperception, as was argued by Orphanides (2001), and by central bank learning, as was argued by Primiceri (2006). Federal Reserve policy led to indeterminate outcomes especially during the second half of the 1970s and before the disinflation under Volcker's chairmanship took hold. Afterwards, during the Volcker-Greenspan period, endogenous policy under learning with measurement error led to determinate outcomes in the Great Moderation.
The driver for these results is the extent of data revisions, and, thus, the ex post implied data misperception. We identify two especially prominent turning points when the initially observed output decline turned out to be much less dramatic following the revision. In other words, the Federal Reserve was confronted with a situation where a decline in growth implied a lessening of inflationary pressures and a commensurately softer policy. Since the real economy was in better shape than originally believed, the Federal Reserve unwittingly violated the Taylor principle. Intriguingly, the largest change in policy, based on our estimated policy coefficients, occurred at the end of 1974, at the height of stagflation in the wake of the abandonment of price controls earlier that year. We find that the Federal Reserve under Burns pursued an aggressively anti-inflationary policy that resulted in a determinate equilibrium in the middle of the Great Inflation decade. This set in motion a shift toward an increasingly less accommodative policy stance that culminated in what has come to be known as the Volcker disinflation.
Traditionally, DSGE models for the analysis of monetary policy have been estimated using final data. It is only very recently that the importance of real-time data for understanding monetary policy decisions is being considered in this literature.1 Collard and Dellas (2010) demonstrate in an, albeit calibrated,2 New Keynesian DSGE model that monetary misperceptions, interpreted as the difference between real-time and revised data, are an important driver of observed economic fluctuations through a monetary policy transmission channel. They also show that this type of error imparts endogenous persistence on inflation dynamics without the need to introduce exogenous sources, such as price indexation. Neri and Ropele (2011) substantiate these insights by estimating a similar model for Euro area real-time data using Bayesian methods. They find that data misperceptions lead to estimated interest-rate smoothing coefficients that are higher than in the standard model. This finding parallels our results since an increasingly more inertial policy rule was one of the drivers of the switch from indeterminacy to determinacy in the early 1980s.
These papers model monetary policy in terms of an ad-hoc interest-rate feedback rule. This specification is by definition not designed to address the question that is central to the Lubik and Schorfheide (2004) interpretation of the Great Inflation, namely, why a central bank would, in fact, choose an apparently suboptimal policy that leads to indeterminacy. For this to happen, as we show in this paper, the central bank needs to face both model and data uncertainty. Pruitt (2012) develops a model along these lines by modifying Sargent, Williams, and Zha (2006) to take account of the real-time data issue that the Federal Reserve faced in the 1970s and 1980s. He shows that data misperceptions introduce sluggishness into the learning process which can jointly explain the persistent rise of inflation in the 1970s and the ensuing fall in the 1980s as the Federal Reserve gained a better understanding of the underlying true model. Pruitt's model is reduced form, in which the central bank chooses inflation and unemployment directly by minimizing quadratic loss in these two variables subject to a backward-looking and not micro-founded Phillips-curve relationship. He therefore cannot address the issue of indeterminacy during the Great Inflation. Moreover, he does not link his results to observed interest rate policies, that is, the Volcker disinflation in terms of a sharp Federal Funds rate hike is absent.3
1 This is notwithstanding earlier contributions, such as Orphanides and Williams (2005), which use reduced-form models and nonsystem-based empirical methods to understand the implications of data misperceptions.
2 Collard, Dellas, and Smets (2009) estimate this model using Bayesian methods and find strong support for the data mismeasurement specification in terms of overall fit. However, they do not use real-time data in their estimation. Consequently, measurement error takes on the role of a residual that is not disciplined by the relevant data concept in the empirical model.
Our paper also connects with the recent and emerging literature on regime-switching in macroeconomics. Following the contributions of Sims and Zha (2006), Davig and Leeper (2007), and Farmer, Waggoner, and Zha (2009), who study Markov-switching in the parameters of a structural VAR and in the coefficients of a monetary policy rule, Liu, Waggoner, and Zha (2011), Bianchi (2013), and Davig and Doh (2013) estimate regimes and coefficients within the context of New Keynesian models. Generally, they find evidence of a regime shift in the early 1980s, thus supporting the argument in Lubik and Schorfheide (2004), who imposed this break date exogenously. What these papers do not allow for is the possibility of indeterminacy. High inflation is the outcome of a higher inflation target and a weaker policy response. Moreover, in this line of research the emphasis is on identifying the break endogenously within the confines of a DSGE model, whereas our paper proposes an explanation and a microfoundation for why these regime switches occurred.
The paper is structured as follows. The next section presents a simple example of the mechanism that we see at work. We first discuss determinate and indeterminate equilibria in a simple rational expectations model and then show how a least-squares learning mechanism can shift the coefficient that determines equilibrium outcomes across the boundary between determinacy and indeterminacy. We present our theoretical model in section 3 and discuss the timing and information assumptions in detail. We also explain how we compute equilibrium dynamics in our framework, and how we choose indeterminate equilibria. Section 4 elaborates on our choice of data and estimation issues. Section 5 presents the baseline estimation results, while section 6 contains a bevy of robustness checks. Section 7 concludes and lays out a path for future research.
3 In a more recent contribution, Givens and Salemi (2013) estimate a simple forward-looking New Keynesian framework with real-time data and data misperception. The central bank solves optimal policy under discretion, but does not have to learn the structure of the economy. They only estimate the model from the early 1980s on and do not consider indeterminate equilibria.

2 A Primer on Indeterminacy and Learning

Methodologically, the argument in our paper rests on two areas in dynamic macroeconomics, namely the determinacy properties of linear rational expectations models and the dynamic properties of least-squares learning. In this section, we introduce and discuss these issues by means of a simple example. The key points that we want to emphasize are: first, whether a rational expectations equilibrium is determinate or indeterminate depends on the values of structural parameters; second, in a learning environment the values of the inferred underlying structural parameters are varying over time. By connecting these two concepts we can develop a rationale for the behavior of the Federal Reserve during the Great Inflation of the 1970s and in later periods. The discussion of equilibrium determinacy borrows a simple framework from Lubik and Surico (2010), while the exposition of least-squares learning is based on Evans and Honkapohja (2001).

2.1 Determinate and Indeterminate Equilibria

We consider a simple expectational difference equation:

x_t = a E_t x_{t+1} + ε_t,    (1)

where a is a structural parameter, ε_t is a white noise process with mean zero and variance σ², and E_t is the rational expectations operator conditional on information at time t. A solution to this equation is an expression that does not contain any contemporaneous endogenous variables and that depends only on exogenous shocks and lagged values of the variables in the information set. The type of solution depends on the value of the parameter a.

If |a| < 1, there is a unique ('determinate') solution, which is simply:

x_t = ε_t.    (2)

This solution can be found by iterating equation (1) forward. Imposing covariance stationarity and utilizing transversality arguments results in this expression. Substituting the determinate solution into the original expectational difference equation verifies that it is, in fact, a solution.
On the other hand, if |a| > 1, there are multiple solutions, and the rational expectations equilibrium is indeterminate. In order to derive the entire set of solutions we follow the approach developed by Lubik and Schorfheide (2003). We rewrite the model by introducing endogenous forecast errors η_t = x_t − E_{t−1} x_t, which by definition have the property that E_{t−1} η_t = 0. This imposes restrictions on the set of admissible solutions. Define ξ_t = E_t x_{t+1} so that equation (1) can be rewritten as:

ξ_t = (1/a) ξ_{t−1} − (1/a) ε_t + (1/a) η_t.    (3)

We note that under the restriction |a| > 1 this is a stable difference equation, where the process for ξ_t is driven by the exogenous shock ε_t and the endogenous error η_t. Any covariance-stationary stochastic process for η_t is a solution for this model since there are no further restrictions on the evolution of η_t.4

In general, the forecast error can be expressed as a linear combination of the model's fundamental disturbances and extraneous sources of uncertainty, typically labeled 'sunspots'. We can therefore write:

η_t = m ε_t + ζ_t,    (4)

where the sunspot ζ_t is a martingale-difference sequence, and m is an unrestricted parameter.5 Substituting this into equation (3) yields the full solution under indeterminacy:

x_t = (1/a) x_{t−1} + m ε_t − (1/a) ε_{t−1} + ζ_t.    (5)

The evolution of x_t now depends on an additional (structural) parameter m that indexes specific rational expectations equilibria.
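For readers who want to see the difference in the data, the short simulation below is our own illustrative sketch (it is not part of the paper): it generates series from the determinate solution (2) and from the indeterminate solution (5) for hypothetical values of a, m, and the shock variances, and compares their first-order autocorrelation.

import numpy as np

rng = np.random.default_rng(0)
T = 200
a, m, sigma_eps, sigma_zeta = 2.0, 0.5, 1.0, 0.5   # illustrative values; |a| > 1

eps = rng.normal(0.0, sigma_eps, T)      # fundamental shock
zeta = rng.normal(0.0, sigma_zeta, T)    # sunspot shock (martingale difference)

# Determinate solution (eq. 2): x_t = eps_t (a white-noise benchmark).
x_det = eps.copy()

# Indeterminate solution (eq. 5): x_t = (1/a) x_{t-1} + m eps_t - (1/a) eps_{t-1} + zeta_t
x_ind = np.zeros(T)
for t in range(1, T):
    x_ind[t] = x_ind[t - 1] / a + m * eps[t] - eps[t - 1] / a + zeta[t]

# First-order autocorrelation: near zero under determinacy, nonzero under indeterminacy.
def ac1(x):
    return np.corrcoef(x[1:], x[:-1])[0, 1]

print("autocorrelation, determinate:  ", round(ac1(x_det), 3))
print("autocorrelation, indeterminate:", round(ac1(x_ind), 3))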
Indeterminacy affects the behavior of the model in three main ways. First, indeterminate solutions exhibit a richer lag structure and more persistence than the determinate solution. This feature can be exploited for distinguishing between the two types of rational expectations equilibria for a given model. In the simple example, this is fairly obvious: under determinacy the solution for x_t is white noise, while under indeterminacy the solution is described by an ARMA(1,1) process. Specifically, the (composite) error term exhibits both serial correlation and a different variance when compared with the determinate solution. Second, under indeterminacy sunspot shocks can affect equilibrium dynamics. Other things being equal, data generated by sunspot equilibria are inherently more volatile than their determinate counterparts. The third implication is that indeterminacy affects the response of the model to fundamental shocks, whereas the response to sunspot shocks is uniquely determined. In the example, innovations to ε_t could either increase or decrease x_t depending on the sign of m.
4 In the case of determinacy, the restriction imposed is that ξ_t = 0 for all t, which implies η_t = ε_t.
5 There is a technical subtlety in that ζ_t is, in the terminology of Lubik and Schorfheide (2003), a reduced-form sunspot shock: ζ_t = m_ζ ζ̃_t, with m_ζ a structural parameter and ζ̃_t a structural sunspot shock. Setting m_ζ = 0 would therefore result in a sunspot equilibrium without sunspots. Moreover, in less simple models, there would be additional restrictions on the coefficients since they depend in general on other structural parameters.
What is important for the purposes of our paper is that the nature and properties of the equilibrium can change when the parameter a changes in such a way that it moves across the boundary between determinacy and indeterminacy, which is given by |a| = 1. The simple example assumes that the parameter a is fixed. Our argument about indeterminacy, namely that it is caused by the central bank's data misperceptions, relies on the idea that parameters that affect the type of equilibrium, such as coefficients in a monetary policy rule, move around. We capture this rationale formally by means of a learning mechanism. The next section introduces this idea by discussing a simple example of how least-squares learning in combination with measurement error can result in time variation of the parameters that determine the type of equilibrium.6

2.2 Indeterminacy through Learning

We illustrate the basic mechanism by means of a simple example. The true data-generating process is equation (1), where we assume for illustration purposes that a = 0.01. The solution under rational expectations is therefore x_t = ε_t, and thus determinate. In the environment with learning we assume that the agents have the perceived law of motion:

x_t = b x_{t−1} + ν_t,    (6)

which they estimate by least squares in order to gain knowledge about the underlying structural model. In our full model, this would be equivalent to the VAR that the central bank estimates to understand the evolution of the economy. In the determinate case b = 0, while under indeterminacy b = 1/a. The least-squares estimate of the lag coefficient in the perceived law of motion, b̂_t, is varying over time as the information changes under constant-gain learning. It is this mechanism that introduces persistence in the actual evolution of the economy. We note that in this special case any deviation from b̂_t = 0 indicates an indeterminate equilibrium.

We can derive the actual law of motion, that is, the evolution of the data-generating process under the application of the perceived law of motion, by substituting the latter into (1). In our full model framework, the counterpart is the Federal Reserve's announcement of the policy rule to the private sector each period. Since E_t(b̂_t x_t + ν_{t+1}) = b̂_t x_t for given b̂_t, we find:7

x_t = (1 − a b̂_t)^{−1} ε_t.    (7)

Although the rational expectations solution is i.i.d., the learning mechanism by itself introduces persistence into the actual path of x_t. An econometrician would therefore see persistent data and might erroneously conclude that they were generated from an indeterminate equilibrium, where |a| > 1.
6 There is a subtlety here that we abstract from in this paper. We assume that the private sector operates under rational expectations in an environment where structural and policy parameters are believed to be fixed forever. The private sector is myopic in the sense that it does not realize that the policy parameters are time-varying and can change period by period. Moreover, the private sector does not take into account that the central bank solves a learning problem. These assumptions considerably simplify our computational work since, with respect to the latter assumption, we do not have to solve an additional private sector learning problem.

A central element of our argument is how mismeasured data influence beliefs and, thus, economic outcomes.8 We now demonstrate in a simple example how data mismeasurement can lead agents astray in that they come to believe that they are in an indeterminate equilibrium. Empirical procedures that agents use in learning models, such as recursive least-squares algorithms, rely on forming second-moment matrices of observed data, which then enter the calculation of estimated coefficients. If the data are measured with error, even if that error has zero mean, this will lead to a biased estimate of this second-moment matrix and will therefore induce a bias in the parameter estimates. We present results from a simulation exercise along these lines in Figure 1. We draw i.i.d. shocks for 180 periods and estimate the lag coefficient in the perceived law of motion. Panel A of Figure 1 shows the estimate and the 5th and 95th percentile bands for b̂_t in the case when there is no measurement error. The estimates are centered at zero.

In a second simulation for the same draws of the shocks we add measurement error. From period 80 to 100 we force the learning agent to observe the actual data with error, which we assume to be equal to 2 standard deviations of the innovations in the model. After period 100, the measurement error disappears. As Panel B of Figure 1 shows, agents believe that there is substantial persistence in the economy, as there would be under indeterminacy. The estimate of the perceived autoregressive reduced-form parameter b reaches values as high as 0.4, which would indicate a structural parameter of a = 2.5 and therefore an indeterminate solution to (1). Given the time series from Panel A, an econometrician tasked with deciding between a determinate and an indeterminate equilibrium would likely favor the latter because of the higher observed persistence.9
7 We abstract from the subtlety that under indeterminacy the rational expectations solution is an ARMA(1,1) process, which could be reflected in the perceived law of motion.
8 Pruitt (2012) elaborates in more detail why measurement error can have important consequences for models of learning.
9 This intuition is discussed in more detail in Lubik and Schorfheide (2004). Figure 1 on p. 196 shows the likelihood functions for both cases.

We want to emphasize that in our simulation the true value of a = 0.01. The incorrect inference stems from the combination of least-squares learning and, more importantly, the introduction of measurement error. The simple example simulation thus shows that an economy can inadvertently drift into the indeterminacy region of the parameter space. We now turn to our full modelling framework, where we add an optimal policy problem to capture the idea that an optimizing central bank, despite best intentions, can inadvertently generate an indeterminate equilibrium in the economy.
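The following sketch mimics the simulation exercise just described: it generates data from the determinate model with a = 0.01, estimates the perceived law of motion by constant-gain least squares, and contaminates the observations with an error of two standard deviations between periods 80 and 100. The gain value and the initialization are our own illustrative choices and need not match those used for Figure 1.

import numpy as np

rng = np.random.default_rng(1)
T, a, gain = 180, 0.01, 0.05            # gain is an illustrative choice
eps = rng.normal(0.0, 1.0, T)

# True (determinate) data: x_t = eps_t, so the PLM coefficient b should be zero.
x = eps.copy()

# Observed data: contaminated by measurement error in periods 80-99.
x_obs = x.copy()
x_obs[80:100] += 2.0 * np.std(eps)

# Constant-gain recursive least squares for the PLM x_t = b x_{t-1} + nu_t.
b_hat, R = 0.0, 1.0                     # simple initialization
b_path = np.zeros(T)
for t in range(1, T):
    p = x_obs[t - 1]                    # regressor: lagged observed x
    R = R + gain * (p * p - R)          # second-moment estimate
    b_hat = b_hat + gain * p * (x_obs[t] - p * b_hat) / R   # coefficient update
    b_path[t] = b_hat

print("largest estimated |b| over the sample:", round(np.abs(b_path).max(), 3))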

3 The Model

3.1 Overview and Timing Assumptions

Our model consists of two agents, a central bank and a private sector. The central bank is learning about the state of the economy. It only has access to economic data that are measured with error and it is not aware of the mismeasurement. The central bank treats the observed data as if they are measured without error.10 Furthermore, the central bank does not know the structure of the data-generating process. Instead, it uses a reduced-form specification to conduct inference. The central bank's policy is guided by an ad-hoc quadratic loss function, which is minimized every period to derive a linear optimal policy rule. The private sector knows the central bank's current period policy rule and determines inflation and output accordingly. It is aware of the mismeasurement problem that the central bank faces and the stochastic process that governs the measurement errors. The private sector itself does not face the mismeasurement problem; it observes the data perfectly. At the same time, the private sector is myopic in that it treats the policy coefficients, which are varying period by period, as fixed indefinitely.11
The timing of the model is such that the central bank estimates its model of the economy at the beginning of period t using data up to and including period t − 1. The central bank then minimizes its loss function subject to its estimated law of motion for the private sector, treating the parameter estimates as fixed. This results in optimal policy coefficients, which are then communicated to the public. The private sector observes the true state of the world and the policy coefficients. Then, shocks are realized and equilibrium outcomes are formed. The central bank's policy rule, taken as given by the private sector, and the structural equations of the private sector form a linear rational expectations model that can have a determinate or an indeterminate solution, depending on which region of the parameter space the estimates fall in. The central bank observes these new outcomes and updates its estimates at the beginning of the next period.
10 We consider alternative specifications (in which the central bank has access to final data) as a robustness exercise.
11 We will discuss this “anticipated utility” assumption that the private sector shares with the central bank in more detail below.

3.2 The Central Bank

The central bank deviates from rational expectations in two critical aspects. First, it does not know the structure of the economy. Hence, it conducts inference based on a reduced-form model. We follow the learning literature and endow the central bank with a VAR, which we restrict in such a way that it resembles the specification in Primiceri (2006), which serves as a benchmark. However, we explicitly focus on the nominal interest rate as the central bank's policy instrument.12 The central bank employs a learning mechanism, namely least-squares learning with constant gain, to update its model of the economy. The second key aspect of our approach is that the central bank observes the actual data with error. This is designed to mimic the problems central banks face when data arrive in real time but are potentially riddled with error.
We assume that the central bank observes X_t, a noisy measurement of the true state X_t^true:

X_t^true = X_t + μ_t,    (8)

where μ_t is a measurement error independent of the true outcome X_t^true. We assume that the error is serially correlated of order one:

μ_t = ρ_μ μ_{t−1} + ε_t^μ,    (9)

where the Gaussian innovation ε_t^μ has zero mean and is independent of X_t^true. While it may be problematic to justify autocorrelated measurement errors on a priori grounds, we note that it is a key finding in Orphanides' (2001) analysis of monetary policy during the Great Inflation. Perhaps more importantly, we also assume that the central bank does not learn about the measurement error, which therefore persists during the estimation period. We consider alternative assumptions in a robustness exercise below.
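As a small illustration of the measurement structure in equations (8) and (9), the snippet below simulates an AR(1) measurement error and the implied real-time observations; the persistence and volatility values are placeholders, not estimates from the paper.

import numpy as np

rng = np.random.default_rng(3)
T, rho_mu, sigma_mu = 200, 0.5, 0.3     # illustrative persistence and volatility

# True data (here just white noise for illustration) and AR(1) measurement error, eq. (9)
X_true = rng.normal(0.0, 1.0, T)
mu = np.zeros(T)
for t in range(1, T):
    mu[t] = rho_mu * mu[t - 1] + rng.normal(0.0, sigma_mu)

# Real-time observations implied by eq. (8): X_true = X + mu, so X = X_true - mu
X_obs = X_true - mu
print("standard deviation of the implied revision:", round(np.std(X_true - X_obs), 3))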
The central bank sets the interest rate target:

i_t^CB = i_t + ε_t^i,    (10)

based on a policy rule of the form:

i_t^CB = Σ_{k=1}^K θ_t^k X_{t−k} + ρ_t i_{t−1},    (11)

where ε_t^i is a zero-mean monetary policy implementation error. The policy coefficients θ_t and ρ_t are chosen from an optimal policy problem. Time variation in the coefficients arises from the learning problem described below. We follow Primiceri (2006) and Sargent, Williams, and Zha (2006) in assuming that the central bank chooses the policy coefficients to minimize the quadratic loss function:

W_t = E_t Σ_{j=t}^∞ β^{j−t} [ (π_j − π^target)² + λ_y (Δy_j − Δy^target)² + λ_i (i_j − i_{j−1})² ],    (12)

subject to estimated laws of motion for the relationship between the state variables, inflation π_t and output growth Δy_t, the policy variable i_t^CB, and the definition of the policy instrument. 0 < β < 1 is the constant discount factor, and λ_y, λ_i ≥ 0 are weights in the loss function that we treat as structural parameters.13 π^target and Δy^target are fixed target values for inflation and output growth, respectively.
12 This is also a key difference to the approach in Pruitt (2012).
In order to learn about the structure of the economy, the central bank estimates the following VAR:

X_j = Σ_{k=1}^n A_{t,k} X_{j−k} + Σ_{l=0}^m B_{t,l} i_{j−l} + u_j.    (13)

The set of matrices A and B carry t-subscripts since they are re-estimated every period. They are, however, taken as fixed by the central bank when it minimizes its loss function. This leads to a standard linear-quadratic decision problem that the central bank needs to solve every period for a varying set of coefficient matrices. Similar to Primiceri (2006), we restrict the matrices in the central bank's model further so that we have one equation that resembles a backward-looking Phillips curve and another that resembles a dynamic IS-equation. Specifically, the central bank estimates the two-equation model:

π_j = c_{π,t} + a_t(L) π_{j−1} + b_t(L) Δy_{j−1} + u_j^π,    (14)
Δy_j = c_{y,t} + d_t(L) Δy_{j−1} + γ_t i_{j−1} + u_j^y.    (15)

We thus have X_t^true = [π_t, Δy_t]′ as the nominal interest rate is not observed with error. All coefficients in the lag-polynomials a_t(L), b_t(L), and d_t(L), and the interest rate coefficient γ_t are potentially changing over time, as are the intercepts c_{π,t} and c_{y,t}.
13 A loss function of this kind can be derived from a representative household's utility function within a New Keynesian framework. In this case, λ_y and λ_i would be functions of underlying structural parameters. While it is conceptually possible to derive a loss function within our learning framework, it is beyond the scope of our paper. Nevertheless, using a welfare-based loss function with a reduced-form model of the economy might be problematic since it raises the question how the central bank can calculate the welfare-based loss function without knowledge of the structure of the economy.
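To illustrate the linear-quadratic decision problem mentioned above, the following sketch iterates on the Riccati equation for a generic discounted LQ problem; the state-space matrices are placeholders rather than the mapping implied by equations (13)-(15).

import numpy as np

def solve_discounted_lqr(F, G, Q, R, N, beta, tol=1e-10, max_iter=10_000):
    """Minimize E sum_t beta^t (s' Q s + 2 s' N u + u' R u)
    subject to s_{t+1} = F s_t + G u_t.  Returns feedback K with u = -K s."""
    P = np.zeros_like(Q)
    for _ in range(max_iter):
        M = R + beta * G.T @ P @ G
        K = np.linalg.solve(M, beta * G.T @ P @ F + N.T)
        P_new = Q + beta * F.T @ P @ F - (beta * F.T @ P @ G + N) @ K
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    return K

# Tiny placeholder example: one-dimensional state and control.
F = np.array([[0.9]]); G = np.array([[0.5]])
Q = np.array([[1.0]]); R = np.array([[0.1]]); N = np.zeros((1, 1))
K = solve_discounted_lqr(F, G, Q, R, N, beta=0.99)
print("optimal feedback coefficient:", K.round(3))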

Given the estimates of the empirical model, the central bank needs to update its beliefs about the state of the economy. In line with much of the learning literature (see Evans and Honkapohja, 2001), we assume that it uses least-squares learning. The algorithm works as follows. Suppose the central bank wants to estimate an equation of the following form:

q_t = p_{t−1}′ θ_t + ε_t,    (16)

where q_t is the dependent variable or a vector of dependent variables, p_{t−1} a vector or matrix of regressors, ε_t the residual(s), and θ_t the vector of parameters of interest. The least-squares learning algorithm can be written as:

R_t = R_{t−1} + g_t (p_{t−1} p_{t−1}′ − R_{t−1}),    (17)
θ_t = θ_{t−1} + g_t R_t^{−1} p_{t−1} (q_t − p_{t−1}′ θ_{t−1}),    (18)

which are the updating formulas for recursive least-squares estimation. R_t is an estimate of the second-moment matrix of the data. A key parameter is the gain g_t. The standard assumption in the literature, as in Primiceri (2006) and Sargent, Williams, and Zha (2006), is to use a constant gain g_t = g. This amounts to assuming that the agents who estimate using a constant gain think that parameters drift over time. The size of this gain determines by how much estimates are updated in light of new data. It encodes a view about how much signal (about the coefficients) and how much noise is contained in a data point. We initialize R_t and θ_t using a training sample, which we assume to consist of 10 quarters of real-time data.14 The central bank in our model estimates its two-equation model equation by equation, which is a standard assumption in the literature.
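A compact rendering of one constant-gain update, following equations (17) and (18), might look as follows; the regressor construction in the example is hypothetical.

import numpy as np

def constant_gain_update(theta, R, p_lag, q, gain):
    """One step of constant-gain recursive least squares, as in (17)-(18):
    R_t     = R_{t-1} + g (p_{t-1} p_{t-1}' - R_{t-1})
    theta_t = theta_{t-1} + g R_t^{-1} p_{t-1} (q_t - p_{t-1}' theta_{t-1})"""
    R_new = R + gain * (np.outer(p_lag, p_lag) - R)
    theta_new = theta + gain * np.linalg.solve(R_new, p_lag) * (q - p_lag @ theta)
    return theta_new, R_new

# Hypothetical example: regress q_t on a constant and one lag of q.
rng = np.random.default_rng(2)
q_series = rng.normal(size=50)
theta, R = np.zeros(2), np.eye(2)
for t in range(1, 50):
    p_lag = np.array([1.0, q_series[t - 1]])
    theta, R = constant_gain_update(theta, R, p_lag, q_series[t], gain=0.01)
print("estimated coefficients:", theta.round(3))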

3.3 The Private Sector

The behavior of the private sector is described by a New Keynesian Phillips curve that captures inflation dynamics using both forward- and backward-looking elements:

π_t − π_t^* = β [ δ E_t π_{t+1} + (1 − δ) π_{t−1} − π_t^* ] + κ y_t − z_t.    (19)

0 ≤ δ ≤ 1 is the coefficient determining the degree of inflation indexation, while κ > 0 is a coefficient determining the slope of the Phillips curve. z_t is a serially correlated shock with law of motion z_t = ρ_z z_{t−1} + ε_t^z. Output dynamics is governed by an Euler equation:

y_t = −(1/τ) [ (i_t − i_t^*) − E_t (π_{t+1} − π_t^*) ] + E_t y_{t+1} + g_t,    (20)

where τ > 0 is the coefficient of relative risk aversion. g_t is a serially correlated shock with law of motion g_t = ρ_g g_{t−1} + ε_t^g. The innovations to both AR(1) processes are Gaussian. y_t can be interpreted as output relative to a stochastic trend. Shocks to the latter are captured by the generic process g_t. We connect y_t in the structural private sector equations to output growth Δy_t in the central bank's VAR via the measurement equation in the model's state-space representation as in An and Schorfheide (2007).
14 An alternative is to use a decreasing gain. For instance, a recursive version of OLS would set the gain equal to a decreasing function of t.
The private sector equations share the same structure as in Lubik and Schorfheide (2004) for reference purposes. The equations can be derived from an underlying utility and profit maximization problem of, respectively, a household and a firm. Since these steps are well known we do not report these derivations explicitly. We deviate from the standard specification in that we include the time-varying inflation target π_t^* separately in these equations because the views the private sector holds about the steady-state level of inflation change as the central bank changes its policy rule. The private sector knows the steady-state real interest rate and can thus infer the implied steady-state level of inflation from the current period monetary policy rule.

The private sector equation system is closed by the monetary policy reaction function (10). This results in the three-equation model that forms the backbone for the standard DSGE model used in the analysis of monetary policy (Smets and Wouters, 2003). The policy rule is communicated to the private sector after the central bank has solved its optimal policy problem. The private sector thus knows the time t policy rule when making its decision at time t. We assume that the private sector believes the policy rule will not change in the future. This is akin to the anticipated utility assumption that the central bank is making above and that is more generally often made in the learning literature.15 More specifically, the private sector realizes that the central bank makes a mistake in terms of basing the policy rule decision on mismeasured data. Yet, it is myopic in the sense that it does not assign any positive probability to changes in that policy rule when making decisions.

3.4 Deriving the Equilibrium Dynamics

Conditional on the central bank's reaction function, the private sector generates the final data. The central bank observes these with error and uses them as an input in the next period's optimal policy problem under learning. The behavior of the two agents, the central bank and the private sector as a data-generating process for the final data, is thus intricately linked in an essentially nonlinear manner. We now describe how to combine the two systems into a state-space format that can be used for likelihood-based inference.
15 Cogley and Sargent (2008) present an extensive discussion of the game-theoretic concept of 'anticipated utility' and how it relates to Bayesian decisionmaking in a macroeconomic context.
In order to derive the equilibrium dynamics we define the vectors Q_t and Z_t. Q_t contains all variables that directly enter the private agents' equilibrium conditions: Q_t = [X_t^true′, i_t, z_t, g_t]′. Z_t adds to that vector the variables that are needed for the central bank's reaction function:16 Z_t = [Q_t′, μ_t′, μ_{t−1}′, μ_{t−2}′, X_{t−1}^true′, X_{t−2}^true′]′. The private sector's equilibrium conditions and the definition of GDP growth can be stacked to give the following set of forward-looking equations:

A Q_t = B E_t Q_{t+1} + C Q_{t−1} + ε_t^Q.    (21)

ε_t^Q contains all exogenous innovations that appear in the private sector equations described above. It is worth noting that the private sector structural equations do not feature time variation. It is only the time-varying nature of the central bank's decision rules (and the private sector's knowledge of those time-varying decision rules) that will make the private sector decision rules vary over time and allow the resulting rational expectations equilibrium to possibly drift between determinate and indeterminate regions.
16 To economize on notation, we derive the equilibrium dynamics for our benchmark case in which the central bank reacts to three lags of inflation and output growth.
Equation (21) cannot yet be solved: there is no equation determining the nominal interest rate. In other words, A, B, and C do not have full row rank. We therefore combine equation (21) with the central bank's decision rule and the definition of the mismeasured economic data X_t:

A_t^Z Z_t = B_t^Z E_t Z_{t+1} + C_t^Z Z_{t−1} + ε_t^Z.    (22)

We define 1_i to be a selector vector that selects i_t from Q_t. A_t^Z is then given by:
A_t^Z =
[ A      0   0       0       0        0       ]
[ 0      I   0       0       0        0       ]
[ 0      0   I       0       0        0       ]
[ 0      0   0       I       0        0       ]
[ 0      0   0       0       I        0       ]
[ 0      0   0       0       0        I       ]
[ 1_i    0   θ_t^1   θ_t^2   −θ_t^1   −θ_t^2  ]    (23)

It is not immediately obvious that A_t^Z is a square matrix. The 0 and I arrays are always assumed to be of conformable size. B_t^Z is a matrix of zeroes except for the upper left-hand corner where B resides. C_t^Z is given by:
C_t^Z =
[ C         0     0   0        0   0       ]
[ 0         ρ_μ   0   0        0   0       ]
[ 0         I     0   0        0   0       ]
[ 0         0     I   0        0   0       ]
[ I         0     0   0        0   0       ]
[ 0         0     0   0        I   0       ]
[ ρ_t 1_i   0     0   −θ_t^3   0   θ_t^3   ]    (24)

ε_t^Z contains all the i.i.d. Gaussian innovations in the model. At each date t, we can use standard tools to solve equation (22). The reduced form of the model is then given by:

Z_t = S_t Z_{t−1} + T_t ε_t^Z.    (25)

In order to compute a model solution when the equation solver indicates nonexistence of equilibrium, we use a projection facility. That is, if a policy rule in a certain period implies nonexistence of a stationary equilibrium, the policy rule is discarded and last year's policy rule is carried out.17 If the policy rule implies an indeterminate equilibrium, we pick the equilibrium chosen by the rational expectations solver as in Sims (2002).
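The per-period determinacy check can be illustrated schematically as below: count the unstable generalized eigenvalues of a stacked system and compare them with the number of expectational errors, in the spirit of Lubik and Schorfheide (2003). The matrices and the count of forward-looking variables are placeholders, and the purely backward-looking structure is a simplification relative to equation (22).

import numpy as np
from scipy.linalg import eigvals

def count_unstable_roots(A, C):
    """Generalized eigenvalues of the pencil (C, A) for A z_t = C z_{t-1} + ... ;
    infinite roots are counted as unstable."""
    lam = eigvals(C, A)
    finite = np.isfinite(lam)
    return int(np.sum(~finite) + np.sum(np.abs(lam[finite]) > 1.0))

def classify(A, C, n_expectational):
    n_unstable = count_unstable_roots(A, C)
    if n_unstable > n_expectational:
        return "no stable equilibrium"
    if n_unstable == n_expectational:
        return "determinate"
    return "indeterminate"

# Placeholder 2x2 example.
A = np.eye(2)
C = np.array([[0.5, 0.0], [0.0, 1.5]])
print(classify(A, C, n_expectational=1))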

4 Data and Estimation

4.1 Data

In our model, there are two data concepts. The key assumption we make is that the central bank only has access to real-time data. That is, its decisions are based on data releases as they first become available. These are then subject to data revisions later on. We therefore use real-time data from the Federal Reserve Bank of Philadelphia for the estimation problem of the central bank. Our sample period starts in 1968:Q3 based on data availability. The last data point is 2012:Q2. We use the first 10 quarters of data for a pre-sample analysis to initialize the agents' prior. The effective sample period over which the model is estimated therefore starts in 1970:Q2. The data are collected quarterly.

A central assumption in our framework is that the private sector serves as data-generating process for the final data. Our estimation combines real-time and final observations on output growth and the inflation rate in addition to the nominal interest rate, which is observed without error (since it is the policy instrument of the central bank). We use as policy rate the Federal Funds rate, whereas output growth is measured as the growth rate of real GDP, and inflation is the percentage change in the GDP deflator. Figure 2 depicts the real-time and the final data for the growth rate in real GDP and in the GDP deflator. The Appendix contains further details on the construction of the data series.
17 This is based on the approach used by Cogley, Matthes, and Sbordone (2011).
In our estimation exercise, we find it convenient to calibrate some parameters. Table 1 lists the calibrated parameters and their source. We set the inflation target π^target in the central bank's loss function to an annual rate of 2%. While the Federal Reserve did not have an official inflation target for much of the sample period, we take it to be commonly understood, and even mandated by the (revision to the) Federal Reserve Act of 1977, that it pursued stable prices, a proxy for which we consider an inflation rate of 2%. The output growth target Δy^target is set to a quarter-over-quarter rate of 0.75%, which is roughly the sample average. We fix the discount factor at β = 0.99.

The model estimation turned out to be sensitive to the specification of the backward-looking New Keynesian Phillips curve and Euler equations. For instance, Sargent and Surico (2011) find almost purely backward-looking dynamics in their rational-expectations model. We therefore experimented with various specifications of the lag terms in these equations. The best-fitting specification was one with a backward-looking coefficient of 0.5 in the New Keynesian Phillips curve and no backward-looking dynamics for the output gap in the Euler equation. We thus fix the respective coefficients at these values in our estimation. We assume that the lag length in all central bank regressions is 3. Based on a preliminary investigation, we found that for shorter lag lengths most of the draws from the posterior distribution would have implied indeterminacy throughout the sample, which we did not find plausible. We therefore fix the gain for the regressions at 0.01, which is at the lower end of the values used in the learning literature. When we estimated this parameter (while restricting it to be no smaller than 0.01) all estimates clustered around this value. As in Primiceri (2006) we therefore chose to calibrate it.

4.2 Likelihood Function and Bayesian Inference

We use the Kalman filter to calculate the likelihood function. Let Y_t denote the observables. Our solution method for solving linear rational expectations models, the Gensys algorithm from Sims (2002), adapted by Lubik and Schorfheide (2003) for the case of indeterminacy, delivers a law of motion for each time period for the vector of variables as a solution to the expectational difference equations given before. The state-space system to calculate the likelihood function is then given by:

Y_t = R Z_t + ε_t^y,    (26)
Z_t = S_t Z_{t−1} + T_t ε_t^Z,    (27)

where S_t is the time t solution to the above equation system.
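A bare-bones version of the likelihood recursion for the state space (26)-(27) with time-varying transition matrices is sketched below; the matrix names, initialization, and dimensions are schematic.

import numpy as np

def log_likelihood(Y, R_obs, S_list, T_list, Sigma_eps_z, Sigma_eps_y):
    """Kalman filter log likelihood for
    Y_t = R_obs Z_t + eps_y_t,   Z_t = S_t Z_{t-1} + T_t eps_z_t."""
    n = S_list[0].shape[0]
    z = np.zeros(n)                     # prior mean of the state
    P = np.eye(n)                       # prior covariance (schematic choice)
    loglik = 0.0
    for t, y in enumerate(Y):
        S, T = S_list[t], T_list[t]
        # Predict
        z = S @ z
        P = S @ P @ S.T + T @ Sigma_eps_z @ T.T
        # Update
        v = y - R_obs @ z                           # forecast error
        F = R_obs @ P @ R_obs.T + Sigma_eps_y       # forecast error variance
        K = P @ R_obs.T @ np.linalg.inv(F)          # Kalman gain
        z = z + K @ v
        P = P - K @ R_obs @ P
        loglik += -0.5 * (len(v) * np.log(2 * np.pi)
                          + np.log(np.linalg.det(F)) + v @ np.linalg.solve(F, v))
    return loglik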
A key element of the learning algorithm is the specification of the initial beliefs held by the central bank. In order to pin down the agent's beliefs, we follow Primiceri (2006) and use real-time data from a training sample, together with a fixed gain parameter. The training sample only includes the information available to the central bank at the end of the period, not the final data releases. We prefer this approach since otherwise the number of parameters to estimate becomes very large.
In our benchmark specification, we also assume that the central bank never has access to updated data and never learns the true values of the variables. This assumption is made for convenience but also parsimony: it relieves us from having to model the process by which data get updated over time. In order to avoid stochastic singularity, when we use the full set of real-time and final data we add a monetary policy shock to the model. The central bank's decision problem is unaffected by this because of certainty equivalence. Furthermore, we assume that the measurement errors in the central bank's observation of output growth and inflation are AR(1) processes, the parameters of which we estimate along with the model's other structural parameters. In any case, the private sector knows the structure of the measurement errors and understands the central bank's informational shortcomings.

We use a standard Metropolis-Hastings algorithm to take 300,000 draws, from which we discard the first 50,000 as burn-in. The estimation problem is computationally reasonably straightforward, but time consuming since we have to solve a linear-quadratic dynamic programming problem and a linear rational expectations model every period for every draw.18

5 Estimation Results

5.1 Parameter Estimates, Impulse Responses and Equilibrium Determinacy

Figure 3 shows the marginal posterior distributions for each parameter that we estimate, while Table 2 reports their median estimates and the 5th and 95th percentiles. The dotted line in each graph in Figure 3 represents the prior distribution. The data appear quite informative as the posteriors are generally more concentrated and often shifted away from the priors. The estimation algorithm seems to capture the behavior around the posterior mode reasonably well, with parameters being tightly estimated. The “supply” and “demand” shocks, z_t and g_t, respectively, show a high degree of persistence at ρ̂_z = 0.93 and ρ̂_g = 0.73. These numbers are very close to those found by Lubik and Schorfheide (2004) and other papers in the literature for this sample period. While the measurement error in the inflation rate is small, not very volatile, and especially not very persistent (an AR(1) coefficient of 0.08), the picture is different for output growth. Its median AR(1) coefficient is estimated to be 0.48, which is considerable. This observation appears to confirm the notion that the Federal Reserve missed the productivity slowdown in the 1970s and thus misperceived the state of the business cycle in its real-time observations of output growth. Finally, the estimates of the weights in the central bank's loss function reveal a low weight on output growth and a considerably stronger emphasis on interest rate smoothing. The latter especially generates the observed persistence in interest rate data.
18 We also estimated the model using the adaptive Metropolis-Hastings algorithm of Haario, Saksman, and Tamminen (2001) to safeguard against any pathologies. The results remain unchanged.
Figure 4 contains the key result in the paper. It shows our model-based evaluation of which type of equilibrium the U.S. economy was in over the estimated sample period. For this purpose, we define a determinacy indicator as follows. A value of '1' indicates a unique equilibrium, while a value of '0' means indeterminacy. The indicator is computed by drawing from the posterior distribution of the estimated model at each data point, whereby each draw results in either a determinate or an indeterminate equilibrium. We then average over all draws, so that the indicator can be interpreted as a probability similar to the concept of a transition probability in the regime-switching literature. As it turns out, our estimation results are very unequivocal as far as equilibrium determinacy is concerned since the indicator attains either 0 or 1.

Two observations stand out from Figure 4. First, the U.S. economy has been in a unique equilibrium since the Volcker disinflation of 1982:Q3, which, according to conventional wisdom, implemented a tough anti-inflationary stance through sharp interest rate increases. In the literature, these are commonly interpreted as a shift to a policy rule with a much higher feedback coefficient on the inflation term (see Clarida, Galí, and Gertler, 2000). The second observation is that before the Volcker disinflation the economy alternated between a determinate and an indeterminate equilibrium. The longest indeterminate stretch was from 1977:Q1 until 1980:Q4, which covers the end of Burns' chairmanship of the Federal Reserve, Miller's short tenure, and the early Volcker period of a policy of nonborrowed reserve targeting. This was preceded by a short determinacy period starting at the end of 1974. The U.S. economy was operating under an indeterminate equilibrium at the beginning of our effective sample period.
We report impulse response functions to a monetary policy shock (an innovation to the central bank's interest rate target in equation (10)) in Figure 5.19 Since the optimal policy rule changes period by period, there is a set of impulse responses for each data point. We focus on four time periods, the first quarters of 1975, 1979, 1990, and the last data point in 2012. We established that the U.S. economy was operating in a determinate equilibrium in 1975, 1990, and 2012. In these periods, a monetary policy shock raises the Federal Funds rate, lowers inflation, and lowers output growth, just as the intuition for the basic New Keynesian framework would suggest.20 The strength of the individual responses depends solely on the policy coefficients since the other structural parameters of the model are treated as fixed for the entire sample.

The pattern for 1979 is strikingly different, however. In response to a positive interest rate shock, inflation and output growth both increase, with a prolonged adjustment pattern for the former variable. Moreover, the Federal Funds rate remains persistently high for several years, as opposed to its response in 1975. The key difference is that the equilibrium in 1979 is indeterminate. This finding is consistent with the observation in Lubik and Schorfheide (2003) that indeterminacy changes the way a model's variables respond to fundamental shocks. This can be seen in our simple example, where in the indeterminate solution (5) the response of x_t to ε_t depends on the sign of the indeterminacy parameter m. Furthermore, a quick calculation shows that the Taylor principle, in terms of the response of the real interest rate (that is, the nominal rate less one-step-ahead inflation), is violated in 1979 despite the strong and persistent Federal Funds rate response.
Our benchmark results show that the analysis provided by Clarida, Galí, and Gertler (2000) and Lubik and Schorfheide (2004) is essentially correct. The U.S. economy was in an indeterminate equilibrium for much of the 1970s, which it escaped from only in the early 1980s during a period that coincided with changes in the Federal Reserve's operating procedures under Volcker's chairmanship; hence the moniker the Volcker disinflation. The mechanism through which the indeterminate equilibria arose and switches between the types of equilibria occurred is consistent with Orphanides' (2001) view that the Federal Reserve misperceived the incoming data. We attempt to capture this idea by means of a central bank learning framework. We now dig deeper into our model's mechanism to understand the origins of the Great Inflation and Volcker's disinflation.
19 Impulse responses to the other shocks are available on request. They are consistent with the pattern displayed in this Figure. Supply shocks raise output growth and lower inflation, while demand shocks lead to increases in both. The interest rate has a stabilizing effect by increasing in accordance with the feedback mechanism embodied in equation (11). The exception is the pattern for 1979, the reason for which we discuss in this section.
20 This statement obviously has to be qualified for 2012 since the U.S. economy was operating under a zero lower bound for the Federal Funds rate, which we do not impose in order to stay within a linear framework. In this specific case, the interest rate response can be seen as that of a shadow interest rate that would obtain in the absence of the lower bound.

5.2 The Volcker Disinflation of 1974

What drives the switches between determinate and indeterminate equilibria is the policy chosen by the central bank. Depending on the incoming data, the optimal policy coefficients can change every period. This feature of our framework is a marked difference from much of the literature, which describes policy via a time-invariant rule that is possibly subject to exogenous breaks or regime switches. Moreover, we treat the structural parameters of the private sector as invariant over the entire sample period. This allows us to focus on the changing nature of the policy coefficients as the source of changes in equilibrium.
We contrast the optimal Federal Funds rate from the model with the actual rate in Figure 6. We note that for much of the sample period optimal policy is tighter than the actual realized policy rate. This obviously explains the determinate outcomes after the Volcker disinflation of 1982, as the Federal Reserve wanted to implement a tighter policy than was eventually realized. At the same time, we also note that during the indeterminacy period during the second half of the 1970s the perceived optimal Federal Funds rate path was considerably above the realized path. This illustrates the main point of our paper. The chosen policy appears quite aggressively anti-inflationary, but, as we know from the results above, it resulted in an indeterminate equilibrium for much of the 1970s and, hence, in the Great Inflation. Figure 6 also shows a quite dramatic interest rate hike in late 1974, when the Federal Reserve intended to raise the Federal Funds rate to almost 30%. As Figure 4 shows, this generated a switch from indeterminacy to determinacy that persisted for a year despite a sharp reversal almost immediately. For the period of the Great Moderation the optimal policy tracks the realized path quite closely, thus confirming the anti-inflationary credentials of the Volcker-Greenspan period.21
21 Overall, our model captures the realized data quite well, which is admittedly not a very high bar to cross. A more stringent test of our model is whether it produces reasonable forecasts. Carboni and Ellison (2009) demonstrate that the model of Sargent, Williams, and Zha (2006) fails along this dimension. We report one measure along this dimension in Figure 10, where we show one-step-ahead inflation forecasts of the private sector and the central bank. Either series lines up reasonably well with alternative inflation expectation measures.

In Figure 7, we plot the measurement errors in inflation and output growth against the determinacy indicator. This gives insight into the underlying determinants of the optimal policy choice. The measurement error is the difference between the real-time data and the final data. A positive measurement error thus means that the data are coming in stronger than they actually are. Consider the inflation picture in the third quarter of 1974. The Federal Reserve observes inflation that is two percentage points higher than in the final revision, which is, of course, not known to the policymakers. We note that the true data, i.e., the final data, are generated by the private sector equations. The seemingly high inflation thus prompts the Fed to jack up the policy rate, as shown in Figure 6. This is quickly reversed, however, as the next data points indicate a negative measurement error in inflation and a considerable one in output growth. Because of the persistence in the learning process and the sluggishness of the Federal Reserve's backward-looking model, determinacy switches tend to last for several periods.
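For concreteness, the sign convention behind Figure 7 can be summarized as follows; the notation is ours and purely illustrative:

\[
  e^{\pi}_{t} = \pi^{\text{real-time}}_{t} - \pi^{\text{final}}_{t}, \qquad
  e^{\Delta y}_{t} = \Delta y^{\text{real-time}}_{t} - \Delta y^{\text{final}}_{t},
\]

so that a positive value means the real-time release overstates the final figure.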
Figure 7 also shows that the volatility and extent of the measurement errors seemingly declined after the early 1980s. Arguably, this could be an underlying reason why the Great Moderation period is, in fact, one of equilibrium determinacy. Moreover, as time goes by, the learning central bank develops a better understanding of the underlying structural model and the nature of the measurement error simply because of longer available data series. Nevertheless, data misperception issues can still arise, as evidenced by the spike in late 2008 in output growth and the increased volatility of the inflation error during the Great Recession.
Whether an equilibrium is determinate or indeterminate is determined by the private sector equations once the central bank has communicated the policy rule for this period.22 The switches should therefore be evident from changes in the chosen policy parameters. We can back out time series for the policy coefficients from the estimated model. Since the specified form of the policy rule contains more lags, namely three, than is usual for the simple New Keynesian framework upon which most of our intuition is built, we report the normalized sum of those coefficients, that is, the long-run coefficients, to gauge the effective stance of policy in Figure 8.
22 This is where the assumption of anticipated utility bears most weight, since we can solve the linear rational expectations model in the usual manner (Sims, 2002) and do not have to account for potential future switches in policy in every period.
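To fix ideas, for a rule with three lags each of the nominal interest rate, inflation, and output growth, the normalization we have in mind takes the following form; the notation is illustrative and not necessarily the exact parameterization used in estimation:

\[
  \Psi_{\pi} = \frac{\sum_{j=1}^{3}\psi_{\pi,j}}{1-\sum_{j=1}^{3}\rho_{j}},
  \qquad
  \Psi_{y} = \frac{\sum_{j=1}^{3}\psi_{y,j}}{1-\sum_{j=1}^{3}\rho_{j}},
\]

where the $\rho_{j}$ are the coefficients on the lagged nominal interest rate and $\psi_{\pi,j}$ and $\psi_{y,j}$ are the feedback coefficients on lagged inflation and output growth. Dividing by one minus the sum of the smoothing coefficients is what makes the lagged interest-rate coefficient the "normalizing factor" discussed below.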
At the beginning of the sample, the inflation coefficients are essentially zero. With only mild support from positive output coefficients, the resulting equilibrium in the economy is indeterminate. The switch to a determinate equilibrium in 1974:Q3 is evident from the sharp rise in the long-run inflation coefficient. This is accompanied by a smaller, yet still substantial, increase in the output coefficient. The switch back to an indeterminate equilibrium during the late Burns-Miller period seems a knife-edge case, as both the inflation and output coefficients come down, but to levels that might not be considered a priori inconsistent with a determinate equilibrium. The behavior of the coefficient on the lagged interest rate is interesting in this respect. It is well known (Woodford, 2003) that highly inertial policy rules support determinate equilibria even if the inflation coefficients are not large. The rule becomes less inertial as the early 1970s progress, reaching almost zero in 1976. It then climbs only gradually, which is consistent with the indeterminate equilibrium occurring in the late 1970s.
After 1980 the policy coefficients gradually move upward. Almost all of this movement is driven by the normalizing factor, that is, the coefficient on the lagged nominal interest rate. Individual policy coefficients show virtually no variation after the 1980s. What is striking from the graphs is that the Volcker disinflation is essentially absent from the output and inflation coefficients. It appears only as the endpoint of the gradual rise in the lagged interest-rate coefficient in 1982. We can therefore interpret the Volcker disinflation not as an abrupt change in the Federal Reserve's responsiveness to inflation, but rather as the culmination of a policy move toward a super-inertial policy.23 A more pointed explanation is that the Volcker disinflation happened in 1974 under Burns' chairmanship. The sudden spike in inflation prompted the Federal Reserve to act tough by sharply increasing its feedback coefficients and gradually implementing a more inertial regime, which is evident from the continuous rise of the coefficient on the lagged interest rate. It reached its long-run value right in time for what the literature has identified as the onset of the Volcker disinflation. The groundwork was prepared, however, by Burns in 1974.
23 Coibion and Gorodnichenko (2011) offer a similar interpretation.

The analysis of Liu, Waggoner, and Zha (2011) offers an interesting contrast to our interpretation of the determinacy episode of 1974-75. They find a short-lived switch to a high inflation-target regime (with a level of the inflation target around 5% in annual terms; see their Figure 4 on p. 281) that coincides with our switch to a determinate equilibrium. The inflation target in their model is the steady-state or long-run level of inflation if no further regime changes were to occur. The central bank in our model always targets 2% annual inflation, but as its views change, so does its perceived long-run level of inflation. It has only one instrument to balance two goals, an inflation target and a GDP growth target. Consequently, the long-run level of inflation is not necessarily equal to its target. Figure 9 depicts the perceived long-run view of inflation held by the Federal Reserve in our framework. We see that the estimated long-run level of inflation is in the ballpark of the Liu, Waggoner, and Zha (2011) estimate just before the switch to determinacy. This is driven in both frameworks by the inflationary spike of the first oil-price shock. What follows afterward is interpreted differently. Our Federal Reserve tightens policy and switches to an aggressively anti-inflationary regime, while the Liu-Waggoner-Zha Federal Reserve accommodates the spike by switching to a high inflation target. Within a year, however, the long-run perceived level of inflation in our model decreases, as does the inflation target in the model of Liu, Waggoner, and Zha (2011).
Finally, we can also contrast the Federal Reserve's and the private sector's one-period-ahead inflation expectations. These are reported in Figure 10. The private sector's expectations are the rational expectations from within the structural model given the policy rule, while the Fed's expectations are computed from its reduced-form model as a one-period-ahead forecast. The latter are noticeably less volatile and smoother than the former, which reflects the different nature of the expectation formation. Moreover, the Federal Reserve's expectations were consistently higher than the private sector's expectations during the Great Inflation, whereas in the Great Moderation the respective expectations line up more closely and fluctuate around a 2% inflation target. This is therefore further evidence of data misperceptions as the underlying source of indeterminate outcomes. The Federal Reserve consistently expected higher inflation than actually materialized and chose policy accordingly.
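Schematically, with $x_{t}$ the vector of variables in the Fed's reduced-form model and $\hat{B}_{t}$ its coefficient estimates as of period $t$, the central bank forecast plotted in Figure 10 is of the form (the notation here is ours):

\[
  E^{CB}_{t}\left[\pi_{t+1}\right] = e_{\pi}' \hat{B}_{t}\, x_{t},
\]

where $e_{\pi}$ selects the inflation equation of the recursively estimated forecasting model.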
Our results thus confirm those of Clarida, Galí, and Gertler (2000), Lubik and Schorfheide (2004), and others who have argued that the Great Moderation was kick-started by a shift to a more anti-inflationary policy under Volcker, whereas the Great Inflation was largely the outcome of weak policy. Much of this literature rests, however, on sub-sample estimation with exogenous break dates. Moreover, it also often lacks a rationale as to why a central bank would pursue ostensibly sub-optimal policies. Our approach is not subject to the former concern, and we provide an answer to the latter question. Our model is estimated over the whole sample period, while the shifts between determinacy and indeterminacy occur endogenously as the central bank changes its behavior in the light of new data. The seemingly sub-optimal behavior is rationalized by the signal extraction problem the policymakers face. From their perspective policy is chosen optimally. Where our results deviate from the previous literature is that they show that the 1970s also exhibited determinate equilibria, especially for an extended period in the middle of the decade.
The middle of the 1970s, that is, the time period for which we identified switches between indeterminacy and determinacy, coincides with one of the most tumultuous periods in U.S. economic history. The sharp run-up in inflation throughout 1974 that is visible in the data (see Figure 2) is commensurate with the end of price controls on April 30, 1974. The devaluation of the U.S. Dollar and the first oil price shocks were also contributing factors to the inflationary picture. Moreover, the winter of 1974-75 marked the most acute period of stagflation in U.S. history. The Burns Federal Reserve rose to the occasion by hiking interest rates to combat inflation, which is acknowledged by the literature (see Hetzel, 2008, p. 108). Incoming data on real GDP in 1974:Q4 and 1975:Q1 showed a much weaker picture of the economy than was ultimately revealed. Facing political pressure for further stimulus to combat the recession, the Federal Reserve relented and relaxed its tightening stance in mid-1975, which we pick up in our framework as a switch back to indeterminacy. Yet, throughout the remainder of the 1970s monetary policy, as Hetzel (2008, p. 113) argues, remained disinflationary and exhibited an increasing focus on money growth targets, which we plausibly pick up in terms of a continuous shift to a more inertial policy rule. While Burns' record as chairman of the Federal Reserve may not quite deserve the exalted status attributed to Volcker, we argue that his performance during the 1970s warrants a second look.

6 Robustness

It is well known that models with learning are quite sensitive to specification assumptions. We therefore conduct a broad range of robustness checks to study the validity of our interpretation of the Great Inflation in the benchmark model. We find that our results are broadly robust. We begin by assessing the sensitivity of the baseline results to changes in individual parameters based on the posterior median estimates. This gives us an idea of how significant, in a statistical sense, our determinacy results are. The second exercise involves changing the central bank's forecasting model to make it closer to the underlying structural model of the private sector. Both exercises confirm the robustness of our benchmark findings. These are sensitive, however, to a modification of how we capture the central bank's initial beliefs at the beginning of the sample. We show how alternative, but arguably equally plausible, assumptions change the determinacy pattern considerably over the full sample period. Finally, we also consider alternative information sets of the central bank, specifically the length of time after which it gains access to the final data.


6.1 Sensitivity to Parameters

Our results in terms of the determinacy indicators are fairly unequivocal in terms of which equilibrium is obtained at each data point over the sample period. Probabilities of a determinate equilibrium are either zero or one. As we pointed out above, the determinacy indicator is an average over the draws from the posterior distribution at each point, which appears highly concentrated in either the determinacy or the indeterminacy region of the parameter space. A traditional coverage region to describe the degree of uncertainty surrounding the determinacy indicator would therefore not be very informative.
To give a sense of the robustness of the indicator with respect to variations in the parameters, we perform the following exercise. We fix all parameters at their estimated posterior means. We then vary each parameter one by one for each data point and each imputed realization of the underlying shocks and measurement errors, and record whether the resulting equilibrium is determinate or indeterminate. As the results of, for instance, Bullard and Mitra (2002) indicate, the boundary between determinacy and indeterminacy typically depends on all parameters of the model. In the New Keynesian framework specifically, it depends on the Phillips curve parameter κ and the indexation parameter. While this certainly is the case in our model as well,24 we find, however, that the determinacy indicator is not sensitive to variations in almost any of the parameters of the model, the exception being the two weights in the central bank's loss function, λ_y and λ_i.25
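A minimal sketch of this one-parameter-at-a-time scan is given below; the function names, such as solve_and_check, are placeholders for our actual solution routine and not part of any particular library.

import numpy as np

def sensitivity_scan(posterior_means, dates, param_name, grid, solve_and_check):
    """Vary one parameter at a time and record determinacy for every date."""
    results = np.zeros((len(grid), len(dates)), dtype=bool)
    for i, value in enumerate(grid):
        params = dict(posterior_means)   # all other parameters stay at their posterior means
        params[param_name] = value
        for t, date in enumerate(dates):
            # solve_and_check is a placeholder: it should rebuild the period-t policy
            # rule (given the imputed shocks and measurement errors), solve the linear
            # rational expectations system, and return True if the equilibrium is determinate.
            results[i, t] = solve_and_check(params, date)
    return results   # rows index parameter values, columns index dates

# Illustrative call, mirroring Figure 11:
# grid = np.linspace(0.0, 1.0, 101)
# det_map = sensitivity_scan(post_means, dates, "lambda_y", grid, solve_and_check)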

We report the simulation results for the two parameters in Figures 11 and 12, respectively. We vary each parameter over the range [0, 1]. Each point in the underlying grid in these figures is a combination of a quarterly calendar date and a value of the parameter within this range. We depict indeterminate equilibria in blue and determinate equilibria in red. The posterior median of λ_y is 0.065. The horizontal cross-section at this value replicates Figure 4. Indeterminacy in the early 1970s was followed by a determinate period around 1975, after which another bout of indeterminacy toward the late 1970s was eradicated by the Volcker disinflation.
the Volcker disin‡ation.
Figure 11 shows that a higher weight on output growth in the Federal Reserve's loss function would generally tilt the economy toward indeterminacy, other things being equal. The reason is that a higher weight on output reduces the relative weight on inflation, so that in the presence of inflation surprises, be they in the actual or in the real-time data that are subject to measurement error, the central bank responds with less vigor in the implied policy rule. A second observation is that the indeterminacy and determinacy regimes in the early to mid-1970s are largely independent of the central bank's preferences. Similarly, the pattern of determinate equilibria from the mid-1990s on appears unavoidable in the sense that even a relatively stronger preference for output growth would not have resulted in indeterminacy. The pattern for variations in the weight on interest rate smoothing λ_i is similar. At the posterior median of 0.65 the determinacy indicator is not sensitive to large variations in this parameter.

24 New analytical results by Bhattarai, Lee, and Park (2013) in a New Keynesian model with a rich lag structure support this conjecture.
25 This finding is reminiscent of the results in Dennis (2006), who estimates these weights using likelihood-based methods in a similar model, albeit without learning and measurement error. He finds that the main determinant of fit and of the location of the likelihood function in the parameter space is the central bank's preference parameters.

6.2 Model Structure

Our results obviously depend on the specification of the model used by the private sector, that is, the structural model that we regard as the data-generating process for the final data, and on the empirical model used by the central bank to learn about the private sector's model. In our baseline specification we chose a restricted VAR for the central bank's forecasting model, where we follow the by now standard specification of Primiceri (2006). It is restricted in the sense that we included a one-period lagged nominal interest rate in the output equation, but not lagged values of the inflation rate. Moreover, lag lengths were chosen by the usual criteria and not with respect to the ARMA structure implied by the structural model.
We therefore consider an alternative specification that removes some of these restrictions. Specifically, we include lag polynomials for the inflation rate and the nominal interest rate in the empirical output equation.26 This brings the empirical model closer to the reduced-form structural model since output growth in the Euler equation (20) depends on the real interest rate path, whereas the New Keynesian Phillips curve (19) only depends on output. The results from this specification (not reported, but available upon request) are generally the same as for our benchmark. The determinacy period during the mid-1970s lasts longer; a determinate equilibrium now obtains at the beginning of the effective sample period. This is driven by a comparatively large measurement error in inflation at the beginning of the sample, which prompted a sharp initial interest rate hike. As we have seen in the baseline specification, switching dates between determinacy and indeterminacy are associated with large measurement errors.
26 We are grateful to Alejandro Justiniano for suggesting this specification to us.


6.3 The Role of Initial Beliefs

A key determinant of the model's learning dynamics is the choice of initial beliefs held by the central bank. Since updating the parameter estimates in the face of new data can be quite slow, initial beliefs can induce persistence and therefore make switching less likely, everything else equal. There is no generally accepted way to choose initial beliefs. In our baseline specification we pursued what we believe to be the most plausible approach in that we use a training sample to estimate initial beliefs as part of the overall procedure. We set the initial mean beliefs before the start of the training sample to zero and initialize R (the recursively estimated second-moment matrix of the data) to be of the same order of magnitude as the second-moment matrix in the training sample. As an alternative, we pursue a variant of the model where we estimate the scale of the initial second-moment matrix by estimating a scale factor that multiplies both initial R matrices. Results (not reported, but available on request) are unchanged from our benchmark.
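To make the role of the initial beliefs and of R concrete, a stripped-down sketch of the constant-gain recursive least-squares updating we have in mind is given below; the actual system is multivariate, and only the constant gain g = 0.01 is taken from Table 1, so the rest of the code is illustrative.

import numpy as np

def rls_update(beta, R, x, y, gain=0.01):
    """One constant-gain recursive least-squares step for a single equation.

    beta : current coefficient beliefs
    R    : recursively estimated second-moment matrix of the regressors
    x, y : current regressor vector and observation
    """
    R_new = R + gain * (np.outer(x, x) - R)
    beta_new = beta + gain * np.linalg.solve(R_new, x * (y - x @ beta))
    return beta_new, R_new

# The scale of the initial R matters: a large initial R makes the inverse in the
# update small, so new data move the beliefs only slowly away from their starting values.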
When we substantially change the magnitude of R by making the initial values an order of magnitude larger, we do get changes in the indeterminacy indicator, but the value at the posterior mode of that specification is 30 log points lower than in our benchmark. The determinacy indicator for this specification is depicted in Figure 13. Indeterminacy lasts throughout the 1970s and well into the middle of the 1980s. Initial beliefs are such that policy is too accommodative, and the data pattern in the 1970s is not strong enough to lead to different central bank policies. Moreover, the learning mechanism moves slowly, so that initial beliefs are not dispelled quickly. In this specification it takes a while for the Federal Reserve to catch up after the period that is commonly associated with the Volcker disinflation. For the rest of the Volcker-Greenspan period, determinate equilibria obtain.

6.4 The Information Set of the Central Bank

In our benchmark case, we do not allow the central bank to use final data. This is obviously an extreme assumption since data revisions occur frequently and the revised data generally get closer to the final data. In our analysis, we treat the data vintage of 2012:Q3 as final, which it may not necessarily be since the Bureau of Economic Analysis periodically revises its procedures. In any case, the actual Federal Reserve during the third quarter of 2012 was certainly aware of the final data as of this date, as opposed to the central bank in our stylized environment. We therefore assess the robustness of our findings with respect to the central bank's information set, namely which data it observes and when it does so.
First, we ask what would happen if the central bank had access to real-time data with a one-period lag. The indeterminacy indicator behaves quite erratically in this case (not reported). Moreover, the model's posterior mode takes on a value that is 600 log points lower than for our benchmark case. This implies that the alternative timing assumption is rejected by the data in favor of the benchmark specification. Next, we allow the central bank to use final data throughout. Since this specification features a smaller set of observables (mis-measured data do not enter the model anymore), we cannot directly compare posterior values. The implied indeterminacy indicator in Figure 14, however, is closer to our benchmark case than the previous robustness check. The second switch to determinacy happens later, in 1992, but we still see the temporary switch in the middle of the 1970s. Naturally, this assumption on the immediate availability of final data is a priori less reasonable since it implies knowledge that the central bank could not have had.
This result shows the centrality of data misperceptions in understanding the transition from the Great Inflation to the Great Moderation. As argued by Beyer and Farmer (2007), models with indeterminate equilibria have the tendency to dominate the data since they offer a route for estimation algorithms to capture the observed persistence in the data. This is evident from Figure 14, as indeterminacy lasts into the 1990s, well beyond a break date that most researchers would consider plausible. Data misperception is therefore the critical element that unifies the different strands of interpretation of the Great Inflation and the subsequent Great Moderation.

7 Conclusion

We argue in this paper that the Great Inflation of the 1970s can be understood as the result of equilibrium indeterminacy in which loose monetary policy engendered excess volatility in macroeconomic aggregates and prices. We show, however, that the Federal Reserve inadvertently pursued policies that were not anti-inflationary enough because it did not fully understand the economic environment it was operating in. Specifically, it had imperfect knowledge about the structure of the U.S. economy and it was subject to data misperceptions since the real-time data flow did not capture the true state of the economy, as large subsequent revisions showed. It is the combination of learning about the economy and, more importantly, signal extraction to filter out measurement noise that resulted in policies that the Federal Reserve believed to be optimal, but when implemented led to an indeterminate equilibrium in the economy.
This paper combines the insights of Clarida, Galí, and Gertler (2000) and Lubik and Schorfheide (2004) about the susceptibility of New Keynesian modelling frameworks to sub-optimal interest rate rules with the observation of Orphanides (2001) that monetary policy operates in a real-time environment with an imperfect understanding of the same. It is only the passage of time that improves the central bank's understanding of the economy through learning. Additionally, a reduction in measurement error, that is, a closer alignment of the real-time data with their final revisions, reduces the possibility of implementing monetary policies that imply indeterminacy. Consequently, and in this light, the Volcker disinflation and the ensuing Great Moderation can be understood as just the result of better data and improved Federal Reserve expertise.
The key contributions of our paper are therefore twofold. First, we offer an interpretation of the Great Inflation and the Great Moderation that combines and reconciles the good policy/bad policy viewpoint with the data misperception argument. The weakness of the former is that it offers no explanation for why the Burns-Miller Federal Reserve behaved in a manner that ostensibly led to an indeterminate equilibrium. We provide this explanation by introducing measurement error through data misperceptions into the methodological framework of Lubik and Schorfheide (2004). Interestingly, our results should also offer comfort to the good luck/bad luck viewpoint as espoused by, for instance, Sims and Zha (2006), since we find that the switches between determinacy and indeterminacy are largely driven by the good or bad luck of obtaining real-time data that are close, or not close, to the final data. The second contribution, and one that follows from the previous one, is that of a cautionary tale for policymakers. The possibility of slipping into an indeterminate equilibrium is reduced with better knowledge about the structure of the economy and the quality of the data.
The main criticism to be leveled against our approach is that the private sector behaves in a myopic fashion despite forming expectations rationally. In order to implement our estimation algorithm, we rely on the anticipated utility assumption of Sargent, Williams, and Zha (2006). This means that the private sector in the model maintains the belief, despite all evidence to the contrary, that policy, which is changing period by period, will be fixed forever. A key extension of our paper would therefore be to include private sector learning about the central bank's learning problem.


References

[1] An, Sungbae, and Frank Schorfheide (2007): "Bayesian Analysis of DSGE Models". Econometric Reviews, 26(2-4), pp. 113-172.
[2] Beyer, Andreas, and Roger E. A. Farmer (2007): "Testing for Indeterminacy: An Application to U.S. Monetary Policy: Comment". American Economic Review, 97(1), pp. 524-529.
[3] Bhattarai, Saroj, Jae Won Lee, and Woong Yong Park (2013): "Price Indexation, Habit Formation, and the Generalized Taylor Principle". Manuscript.
[4] Bianchi, Francesco (2013): "Regime Switches, Agents' Beliefs, and Post-World War II U.S. Macroeconomic Dynamics". Review of Economic Studies, 80(2), pp. 463-490.
[5] Bullard, James, and Kaushik Mitra (2002): "Learning About Monetary Policy Rules". Journal of Monetary Economics, 49(6), pp. 1105-1129.
[6] Carboni, Giacomo, and Martin Ellison (2009): "The Great Inflation and the Greenbook". Journal of Monetary Economics, 56(6), pp. 831-841.
[7] Clarida, Richard, Jordi Gali, and Mark Gertler (2000): "Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory". Quarterly Journal of Economics, 115, pp. 147-180.
[8] Cogley, Timothy, Christian Matthes, and Argia Sbordone (2011): "Optimal Disinflation under Learning". Federal Reserve Bank of New York Staff Reports 524.
[9] Cogley, Timothy, and Thomas J. Sargent (2008): "Anticipated Utility and Rational Expectations as Approximations of Bayesian Decision Making". International Economic Review, 49(1), pp. 185-221.
[10] Coibion, Olivier, and Yuriy Gorodnichenko (2011): "Monetary Policy, Trend Inflation and the Great Moderation: An Alternative Interpretation". American Economic Review, 101, pp. 341-370.
[11] Collard, Fabrice, and Harris Dellas (2010): "Monetary Misperceptions, Output, and Inflation Dynamics". Journal of Money, Credit and Banking, 42(2-3), pp. 483-502.
[12] Collard, Fabrice, Harris Dellas, and Frank Smets (2009): "Imperfect Information and the Business Cycle". Journal of Monetary Economics, 56(Supplement), pp. S38-S56.
[13] Davig, Troy, and Taeyoung Doh (2013): "Monetary Policy Regime Shifts and Inflation Persistence". Forthcoming, Review of Economics and Statistics.
[14] Davig, Troy, and Eric M. Leeper (2007): "Generalizing the Taylor Principle". American Economic Review, 97(3), pp. 607-635.
[15] Dennis, Richard A. (2006): "The Policy Preferences of the US Federal Reserve". Journal of Applied Econometrics, 21, pp. 55-77.
[16] Evans, George W., and Seppo Honkapohja (2001): Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, NJ.
[17] Farmer, Roger E. A., Daniel F. Waggoner, and Tao Zha (2009): "Understanding Regime-Switching Rational Expectations Models". Journal of Economic Theory, 144, pp. 1849-1867.
[18] Givens, Gregory E., and Michael K. Salemi (2013): "Inferring Monetary Policy Objectives with a Partially Observed State". Manuscript.
[19] Haario, Heikki, Eero Saksman, and Johanna Tamminen (2001): "An Adaptive Metropolis Algorithm". Bernoulli, 7(2), pp. 223-242.
[20] Hetzel, Robert L. (2008): The Monetary Policy of the Federal Reserve: A History. Cambridge University Press, New York, NY.
[21] Liu, Zheng, Daniel Waggoner, and Tao Zha (2011): "Sources of Macroeconomic Fluctuations: A Regime-Switching DSGE Approach". Quantitative Economics, 2(2), pp. 251-301.
[22] Lubik, Thomas A., and Frank Schorfheide (2003): "Computing Sunspot Equilibria in Linear Rational Expectations Models". Journal of Economic Dynamics and Control, 28(2), pp. 273-285.
[23] Lubik, Thomas A., and Frank Schorfheide (2004): "Testing for Indeterminacy: An Application to US Monetary Policy". American Economic Review, 94(1), pp. 190-217.
[24] Lubik, Thomas A., and Paolo Surico (2010): "The Lucas Critique and the Stability of Empirical Models". Journal of Applied Econometrics, 25, pp. 177-194.
[25] Neri, Stefano, and Tiziano Ropele (2011): "Imperfect Information, Real-Time Data and Monetary Policy in the Euro Area". Economic Journal, 122, pp. 651-674.
[26] Orphanides, Athanasios (2001): "Monetary Policy Rules Based on Real-Time Data". American Economic Review, 91(4), pp. 964-985.
[27] Orphanides, Athanasios, and John C. Williams (2005): "The Decline of Activist Stabilization Policy: Natural Rate Misperceptions, Learning, and Expectations". Journal of Economic Dynamics and Control, 29, pp. 1927-1950.
[28] Primiceri, Giorgio (2006): "Why Inflation Rose and Fell: Policymakers' Beliefs and US Postwar Stabilization Policy". Quarterly Journal of Economics, 121, pp. 867-901.
[29] Pruitt, Seth (2012): "Uncertainty Over Models and Data: The Rise and Fall of American Inflation". Journal of Money, Credit and Banking, 44(2-3), pp. 341-365.
[30] Sargent, Thomas J., and Paolo Surico (2011): "Two Illustrations of the Quantity Theory of Money: Breakdowns and Revivals". American Economic Review, 101(1), pp. 109-128.
[31] Sargent, Thomas J., Noah Williams, and Tao Zha (2006): "Shocks and Government Beliefs: The Rise and Fall of American Inflation". American Economic Review, 96(4), pp. 1193-1224.
[32] Sims, Christopher A. (2002): "Solving Linear Rational Expectations Models". Computational Economics, 20, pp. 1-20.
[33] Sims, Christopher A., and Tao Zha (2006): "Were There Regime Switches in U.S. Monetary Policy?" American Economic Review, 96(1), pp. 54-81.
[34] Smets, Frank, and Raf Wouters (2003): "An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area". Journal of the European Economic Association, 1(5), pp. 1123-1175.
[35] Woodford, Michael (2003): Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton University Press, Princeton, NJ.

Appendix: Data Construction

All data we use are quarterly. The Federal Funds rate is the average rate in a quarter obtained from the Board of Governors. For quarterly inflation and quarterly output growth data, we use the real-time database at the Federal Reserve Bank of Philadelphia. The inflation data are constructed using a GDP deflator-based price index since this index gives us the longest available time series. The real-time output growth series is constructed using the real output series. Both the output and price level series are seasonally adjusted. As a proxy for final data, we use the data of the most recent vintage we had access to when estimating the model (data up to 2012:Q1, available in 2012:Q2). The data start in 1968:Q3.
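As an illustration of this construction, the sketch below shows how annualized quarterly growth rates can be computed from a vintage table; the file name and the exact layout of the real-time database are assumptions for the sketch, not a description of our code.

import pandas as pd

def quarterly_growth(levels: pd.Series) -> pd.Series:
    """Annualized quarter-over-quarter percent growth of a quarterly level series
    (price index for inflation, real output for output growth); simple approximation."""
    return 400 * levels.pct_change()

# Hypothetical usage with a vintage table (rows: observation quarters, columns:
# data vintages), in the spirit of the Philadelphia Fed real-time database:
# vintages     = pd.read_csv("p_deflator_vintages.csv", index_col=0)
# final_series = quarterly_growth(vintages.iloc[:, -1])   # latest vintage as "final"
# For the real-time series, one would take, for each quarter, the growth rate implied
# by the vintage available in that quarter (the "diagonal" of the vintage table).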


Table 1: Calibration

Parameter                          Value    Source
Inflation Target                   2.00%    Implied FOMC Target
Output Target (y target)           0.75%    Q/Q Sample Average
Discount Factor                    0.99     Standard
Indexation NKPC                    0.50     Pre-Sample Estimate
Habit Parameter                    0.00     Pre-Sample Estimate
Lag Length in CB Regression (n)    3
Gain Parameter (g)                 0.01

Table 2: Posterior Estimates

                  5th Percentile   Median   95th Percentile
Shocks:
  ρ_z             0.91             0.93     0.94
  ρ_g             0.70             0.73     0.76
  σ_z             0.002            0.002    0.003
  σ_g             0.011            0.012    0.014
  σ_i             0.006            0.007    0.008
Measurement:
  ρ_π             0.03             0.08     0.17
  ρ_y growth      0.41             0.48     0.56
  σ_π             0.0020           0.0022   0.0024
  σ_y growth      0.0054           0.0059   0.0064
Structural:
  κ               0.03             0.04     0.05
  λ_y             0.06             0.07     0.08
  λ_i             0.54             0.65     0.76

Figure 1: Indeterminacy through Learning: Least-Squares Estimate of Perceived Law of Motion. Panel A: Parameter Estimate; Panel B: Parameter Estimates with Measurement Error.

Figure 2: Real-Time and Final Data: Real GDP Growth and GDP Deflator. Panels: Output Growth (Annualized) and Inflation (Annualized), final vs. real-time series.

Figure 3: Prior and Posterior Parameter Density (Benchmark Specification).

Figure 4: Determinacy Indicator: Benchmark Specification.

Figure 5: Impulse Response Functions to a Monetary Policy Shock. Panels: Inflation, Output Growth, and the Federal Funds Rate; responses shown for 1975, 1979, 1990, and 2012.

Figure 6: Evolution of the Federal Funds Rate: Actual vs. Prescribed under Optimal Policy.

Figure 7: Estimated Measurement Errors. Panels: Measurement Error in Inflation vs. Determinacy Indicator; Measurement Error in Output Growth vs. Determinacy Indicator.

Figure 8: Long-Run Policy Coefficients: Benchmark Specification. Panels: Long-Run Response to Inflation; Long-Run Response to Output Growth; Response to the Lagged Nominal Interest Rate.

Figure 9: Perceived Long-Run Level of Inflation.

Figure 10: Implied Inflation Expectations: Benchmark. One-quarter-ahead inflation expectations (annualized), Federal Reserve vs. Private Sector.

Figure 11: Sensitivity of Determinacy Indicator to Parameters: Output (varying λ_y).

Figure 12: Sensitivity of Determinacy Indicator to Parameters: Interest-Rate Smoothing (varying λ_i).

Figure 13: Determinacy Indicator under Alternative Initial Beliefs.

Figure 14: Determinacy Indicator without Measurement Error.