Federal Reserve Bank of Chicago

Risk Management for Monetary Policy
Near the Zero Lower Bound
Charles Evans, Jonas Fisher, François Gourio, and
Spencer Krane

May 2015
WP 2015-03

Risk Management for Monetary Policy Near the Zero Lower Bound∗

Charles Evans    Jonas Fisher    François Gourio    Spencer Krane

May 21, 2015
Abstract
As projections have inflation heading back toward target and the labor market
continuing to improve, the Federal Reserve has begun to contemplate an increase in
the federal funds rate. There is, however, substantial uncertainty around these projections. How should this uncertainty affect monetary policy? In many standard models
uncertainty has no effect. In this paper, we demonstrate that the zero lower bound
on nominal interest rates implies that the central bank should adopt a looser policy
when there is uncertainty. In the current context this result implies that a delayed
liftoff is optimal. We demonstrate this result theoretically in two canonical macroeconomic models. Using numerical simulations of our models, calibrated to the current
environment, we find that optimal policy calls for a 2 to 3 quarter delay in liftoff relative to
a policy that does not take into account uncertainty about policy being constrained
by the ZLB. We then use a narrative study of Federal Reserve communications and
estimated policy reaction functions to show that risk management is a longstanding
practice in the conduct of monetary policy.

JEL Classification Numbers: E52, E58
Keywords: monetary policy, risk management, zero lower bound

∗ All the authors are affiliated with the Federal Reserve Bank of Chicago. We thank numerous seminar
participants, Gadi Barlevy, Jeffrey Campbell, Stefania D’Amico, Alan Greenspan, Alejandro Justiniano,
John Leahy, Sydney Ludvigson, Leonardo Melosi, Taisuke Nakata, Serena Ng, Valerie Ramey, David Reifschneider, David Romer, Glenn Rudebusch, Paolo Surico, François Velde, Johannes Wieland, and Justin
Wolfers for their help and comments, and Theodore Bogusz, David Kelley and Trevor Serrao for superb research assistance. We also thank Michael McMahon for providing us with machine-readable FOMC minutes
and transcripts and Thomas Stark for help with the Philadelphia Fed’s real-time data. Unless otherwise
noted data were accessed via Haver Analytics. The views expressed herein are those of the authors and do
not necessarily represent the views of the Federal Open Market Committee or the Federal Reserve System.

1 Introduction

To what extent should uncertainty affect monetary policy? This classic question is relevant
today as the Fed considers when to start increasing the federal funds rate. In the March
2015 Summary of Economic Projections, most Federal Open Market Committee (FOMC)
participants forecast that the unemployment rate will return to its long-run neutral level by
late 2015 and that inflation will gradually rise back to its 2 percent target. This forecast
could go wrong in two ways. One is that the FOMC may be overestimating the underlying
strength in the economy or the tendency of inflation to return to target. Guarding against
these risks calls for cautious removal of accommodation. The second is that the economy
could be poised for stronger growth and inflation than currently projected. This risk calls
for more aggressive rate hikes. How should policy manage these divergent risks?
If the FOMC misjudges the impediments to growth and inflation and reduces monetary
accommodation too soon, it could find itself in the uncomfortable position of having to
reverse course and being constrained by the zero lower bound (ZLB) again. It is true the
FOMC has access to unconventional policy tools at the ZLB, but these appear to be imperfect
substitutes for the traditional funds rate instrument. In contrast, if the Fed keeps rates too
low and inflation rises too quickly, it most likely could be brought back into check with
modest increases in interest rates. Since the unconventional tools available to counter the
first scenario may be less effective than the traditional tools to counter the second scenario,
the costs of premature liftoff may exceed those of delay. It therefore seems prudent to refrain
from raising rates until the FOMC is highly certain that growth is sustainable and inflation
is returning to target.1
In this paper we establish theoretically that uncertainty about monetary policy being
constrained by the ZLB in the future implies an optimally looser policy, which in the current
context means delaying liftoff – the risk management framework just described. We formally
define risk management as the principle that policy should be formulated taking into account
the dispersion of shocks around their means. Our main theoretical contribution is to provide
1 Evans (2014)'s speech at the Peterson Institute for International Economics discusses these issues at greater length.


a simple demonstration, using standard models of monetary policy, that the ZLB implies a
new role for such risk management through two distinct economic channels.
The first channel – which we call the expectations channel – arises because the possibility of a binding ZLB tomorrow leads to lower expected inflation and output today, and hence dictates some counteracting policy easing today. The second channel – which we call the buffer stock channel – arises because, if inflation or output are intrinsically persistent, building up output or inflation today reduces the likelihood and severity of hitting the ZLB tomorrow. Optimal policy when either of these channels is operative should be looser at
times when a return to the ZLB remains a distinct possibility. In simulations calibrated to
the current environment we find that optimal policy prescribes 2 to 3 quarters of delay in
liftoff relative to a policy that does not take this uncertainty into account. However under
the optimal policy the central bank must be prepared to raise rates quickly as the threat of
being constrained by the ZLB recedes.
Would it be unusual for the Fed to take into account uncertainty in setting its policy
rate? The second part of the paper argues that risk management has been a longstanding
practice in U.S. monetary policy. Therefore advocating it in the current policy environment
would be consistent with a well-established approach of the Federal Reserve. Of course,
because the ZLB was not until recently perceived as an important constraint, the theoretical
rationales for risk management were different in the past. It is true that in a wide class of
models that abstract from the ZLB, optimal policy involves adjusting the interest rate in
response to the mean of the distribution of shocks and information on higher moments is
irrelevant (the so-called “certainty equivalence” principle). However, there is an extensive
literature covering departures from this result based on nonlinear economic environments or
uncertain policy parameters that justify taking a risk management approach away from the
ZLB.
We explore whether policymakers have actually practiced risk management prior to the
ZLB period in two ways. First, we analyze Federal Reserve communications over the period
1987-2008 and find numerous examples when uncertainty or the desire to insure against important risks to the economy were used to help explain the setting of policy. Confirmation of
this view is found in Greenspan (2004) who states “. . . the conduct of monetary policy in the
United States has come to involve, at its core, crucial elements of risk management.” Second,
we estimate a conventional forecast-based monetary policy reaction function augmented with
a variety of measures of risk based on financial market data, Federal Reserve Board staff
forecasts, private-sector forecasts, and narrative analysis of the FOMC minutes. We find
clear evidence that when measured in this way risk has had a statistically and economically
significant impact on the interest rate choices of the FOMC. Thus, risk management appears
to be old hat for the FOMC.
If the monetary policy toolkit contained alternative instruments that were perfect substitutes for changing the policy rate, then the ZLB would not present any special economic
risk and our analysis would be moot. We do not think this is the case. Even though most
central bankers believe unconventional policies such as large scale asset purchases (LSAPs)
or more explicit and longer-term forward guidance about policy rates can provide considerable accommodation at the ZLB, few argue that these tools are on an equal footing with
traditional policy instruments.2
One reason for this is that effects of unconventional policies on the economy naturally are
much more uncertain than those of traditional tools. There are divergent empirical estimates
of the effects and uncertainty about the theoretical mechanism behind those effects. Various
studies of LSAPs, for example, provide a wide range of estimates of their ability to put
downward pressure on private borrowing rates and influence the real economy. Furthermore,
the effects of both LSAPs and forward guidance on interest rates are complicated functions
of private-sector expectations, which makes their economic effects highly uncertain as well.3
2 For example, while there is econometric evidence that changes in term premia influence activity and inflation, some studies find the effects appear to be less powerful than comparably sized movements in the short term policy rate; see D’Amico and King (2015), Kiley (2012) and Chen, Cúrdia, and Ferrero (2012).
3 Bomfim and Meyer (2010), D’Amico and King (2013) and Gagnon, Raskin, Remache, and Sack (2010) find noticeable effects of LSAPs on Treasury term premia while Chen et al. (2012) and Hamilton and Wu (2010) unearth only small effects. Krishnamurthy and Vissing-Jorgensen (2013) argue that the LSAPs have only had a substantial influence on private borrowing rates in the mortgage market. Engen, Laubach, and Reifschneider (2015) and Campbell, Evans, Fisher, and Justiniano (2012) analyze the interactions between LSAPs, forward guidance, and private sector expectations.


Uncertainty about the transmission mechanism of LSAPs is reflected in Krishnamurthy and
Vissing-Jorgensen (2013)’s discussion of the various hypotheses that have been proposed.
Unconventional tools also carry potential costs. The four most commonly cited are: the
large increases in reserves generated by LSAPs risk unleashing inflation; a large balance
sheet may make it more difficult for the Fed to raise interest rates when the time comes;
the extended period of very low interest rates and Federal Reserve intervention in the long-term Treasury and mortgage markets may induce inefficient allocation of credit and financial
fragility; and the large balance sheet puts the Federal Reserve at risk of incurring financial
losses if rates rise too quickly and such losses could undermine its support and independence.4
Costs reduce the incentive to use any policy tool. The costs of unconventional tools also are
very hard to quantify, and so naturally elevate the level of uncertainty associated with them.
A consequence of this uncertainty over the benefits and costs of unconventional tools is
that they are likely to be used more cautiously than traditional policy instruments, as suggested by the classic Brainard (1967) analysis. For example Bernanke (2012) emphasizes that
because of their uncertain costs and benefits “. . . the hurdle for using unconventional
policies should be higher than for traditional policies.” In addition, at least conceptually,
some of the benefits of unconventional policies may be decreasing, and the costs increasing,
in the size of the balance sheet or in the amount of time spent in a very low interest rate
environment.5 Accordingly, policies that had widespread support early on in a ZLB episode
might be difficult to extend or expand with an already large balance sheet.
So, while valuable, unconventional policies also appear to be less-than-perfect substitutes
for changes in short term policy rates. Accordingly, the ZLB presents a different set of risks
to policymakers than those that they face during more conventional times and thus it is
4 These costs are mitigated, however, by additional tools the Fed has introduced to exert control over interest rates when the time comes to exit the ZLB and by enhanced supervisory and regulatory efforts to monitor and address potential financial stability concerns. Furthermore, continued low rates of inflation and contained private-sector inflationary expectations have reduced concerns regarding an outbreak of inflation.
5 Krishnamurthy and Vissing-Jorgensen (2013) argue successive LSAP programs have had a diminishing influence on term premia. Surveys conducted by Blue Chip and the Federal Reserve Bank of New York also indicate that market participants are less optimistic that further asset purchases would provide much stimulus if the Fed was forced to expand their use in light of unexpected economic weakness.


worthy of consideration in its own right. We abstract from unconventional policy tools for
the remainder of our analysis.

2 Rationales for risk management near the ZLB

The canonical framework of monetary policy analysis assumes that the central bank sets
the nominal interest rate to minimize a quadratic loss function of the deviation of inflation
from its target and the output gap, and that the economy is described by a set of linear
equations. In most applications, uncertainty is incorporated as additive shocks to these
linear equations, capturing factors outside the model that lead to variation in economic
activity or inflation.6 A limitation of this approach is that, by construction, it denies that
a policymaker might choose to adjust policy in the face of changes in uncertainty about
economic fundamentals. However, the evidence discussed below in Sections 3 and 4 suggests
that in practice policymakers are sensitive to uncertainty and respond by following what
appears to be a risk management approach. Motivating why a central banker should behave
in this way requires some departure from the canonical framework. The main contribution
of this section is to consider a departure associated with the possibility of a binding ZLB in
the future.
We show that when a policymaker might be constrained by the ZLB in the future, optimal policy today should take account of uncertainty about fundamentals. We focus on two
distinct channels through which this can occur. First we use the workhorse forward-looking
New Keynesian model to illustrate the expectations channel, in which the possibility of a
binding ZLB tomorrow leads to lower expected inflation and output today,
thus necessitating policy easing today. We then use a backward-looking “Old” Keynesian
set-up to illustrate the buffer stock channel, in which it can be optimal to build up output
or inflation today in order to reduce the likelihood and severity of being constrained by the
6 This framework can be derived from a micro-founded DSGE model (see for instance Woodford (2003), Chapter 6), but it has a longer history and is used even in models that are not fully micro-founded. The Federal Reserve Board staff routinely conducts optimal policy exercises in the FRB/US model, see for example English, López-Salido, and Tetlow (2013).


ZLB tomorrow. Both of these channels operate in modern DSGE models such as Christiano,
Eichenbaum, and Evans (2005) and Smets and Wouters (2007), but they are more transparent if we consider them in separate, although related, models. After describing these two
channels we construct some numerical simulations to assess their quantitative effects.

2.1 The expectations channel

The simple New Keynesian model has well established micro-foundations based on price
stickiness. Given that there are many excellent expositions of these foundations, e.g. Woodford (2003) or Gali (2008), we just state our notation without much explanation. The model
consists of two main equations, the Phillips curve and the IS curve.
The Phillips curve is specified as
    π_t = κ x_t + β E_t π_{t+1} + u_t,    (1)

where πt and xt are both endogenous variables and denote inflation and the output gap at
date t; Et is the date t conditional expectations operator with rational expectations assumed;
ut is a mean zero exogenous cost-push shock; and 0 < β < 1, κ > 0. For simplicity we assume
the central bank has a constant inflation target equal to zero so πt is the deviation of inflation
from that target. The cost-push shock represents exogenous changes to inflation such as an
independent decline in inflation expectations, dollar appreciation or changes in oil prices.
The IS curve is specified as
    x_t = E_t x_{t+1} − (1/σ)(i_t − E_t π_{t+1} − ρ_t^n),    (2)

where σ > 0, i_t is the nominal interest rate controlled by the central bank, and ρ_t^n is the
natural rate of interest given by
    ρ_t^n = ρ̄ + σ g_t + σ E_t(z_{t+1} − z_t).    (3)

The variable gt is an exogenous mean zero demand shock, and zt is the exogenous log of
potential output. Since zt and gt are exogenous, so is the natural rate. Equation (2) indicates
that ρnt corresponds to the setting of the nominal interest rate consistent with expected
inflation at target and the output gap equal to zero.7 If potential output is constant and the
demand shock equals zero, then the natural rate equals the constant ρ̄ > 0.
Our analysis is centered around uncertainty in the natural rate.8 From (3) we see that
this uncertainty derives from uncertainty about gt and Et (zt+1 − zt ). We interpret the former
as arising due to a variety of factors, including fiscal policy, foreign economies’ growth, and
financial considerations such as de-leveraging.9 The latter source of uncertainty is over the
variety of factors that can influence the expected rate of growth in potential output, for
example as emphasized in the recent debate over “secular stagnation.”
We adopt the canonical framework in assuming the central bank acts to minimize a
quadratic loss function with the understanding that private-sector behavior is governed by
(1)–(3). The loss function is
    L = E_0 (1/2) Σ_{t=0}^{∞} β^t (π_t^2 + λ x_t^2),    (4)

where λ ≥ 0. We further assume the ZLB constraint, i.e. i_t ≥ 0, abstracting from the possibility that the effective lower bound on i_t is slightly negative. The short term interest
rate is the central bank’s only policy instrument and it is set by solving for optimal policy
under discretion. In particular, each period the central bank sets the nominal interest rate
with the understanding that private agents anticipate that it will re-optimize in the following
periods.
7 Woodford (2003, p. 248) defines the natural rate as the equilibrium real rate of return in the case of fully flexible prices. As discussed by Barsky, Justiniano, and Melosi (2014), in medium-scale DSGE models with many shocks the appropriate definition of the natural rate is less clear.
8 There is ample evidence of considerable uncertainty regarding the natural rate. See for example Barsky et al. (2014), Hamilton, Harris, Hatzius, and West (2015) and Laubach and Williams (2003).
9 Uncertainty itself could give rise to g_t shocks. A large amount of recent work, following Bloom (2009), suggests that private agents react to increases in economic uncertainty, leading to a decline in economic activity. One channel is that higher uncertainty may lead to precautionary savings which depresses demand, as emphasized by Basu and Bundick (2013), Fernández-Villaverde, Guerrón-Quintana, Kuester, and Rubio-Ramírez (2012) and Born and Pfeifer (2014).


We focus on optimal policy under discretion for two reasons. First, the case of commitment with a binding ZLB already has been studied extensively. In particular it is well
known from the contributions of Krugman (1998), Eggertsson and Woodford (2003), Woodford (2012) and Werning (2012) that commitment can reduce the severity of the ZLB problem
by creating higher expectations of inflation and the output gap. One implication of these
studies is that the central bank should commit to keeping the policy rate at zero longer than
would be prescribed by discretionary policy. By studying optimal policy under discretion we
find a different rationale for a policy of keeping rates “lower for longer” that does not rely on
the central bank having the ability to commit to a time-inconsistent policy.10 Nevertheless
below we discuss intuition for why our main result should extend to the case of commitment.
Second, this approach may better approximate the institutional environment in which the
FOMC operates.

2.1.1 A ZLB scenario

We study optimal policy when the central bank is faced with the following simple ZLB
scenario. The central bank observes the current value of the natural rate, ρ_0^n, and the cost-push shock u_0; moreover, there is no uncertainty in the natural rate after t = 2, ρ_t^n = ρ̄ > 0 for all t ≥ 2, nor in the cost-push shock after t = 1, u_t = 0 for all t ≥ 1. However, there
is uncertainty at t = 1 regarding the natural rate ρn1 .11 The variable ρn1 is assumed to be
distributed according to the probability density function fρ (·).
This very simple scenario keeps the optimal policy calculation tractable while preserving
the main insights. We also think it captures some key elements of uncertainty faced by the
FOMC today. We do not have to take a stand on whether the ZLB is binding before t = 0.
One possibility is that the natural rate ρnt was sufficiently negative for t < 0 so that the
10 Implicitly we are assuming the central bank does not have the ability to employ what Campbell et al. (2012) call “Odyssean” forward guidance. However our model is consistent with the central bank using forward guidance in the “Delphic” sense they describe because agents anticipate how the central bank reacts to evolving economic conditions.
11 It is easy to verify that if the uncertainty about the natural rate is only at t = 0 the optimal policy would be to set the interest rate to the expected value of the natural rate and the amount of uncertainty would have no effect. This is why our scenario has more than two periods.


optimal policy rate was set at zero, i_t = 0, for t < 0, but the economy has been improving
so that by t = 0 the natural rate is close to zero. The question is whether to raise the policy
rate at t = 0, t = 1 or t = 2. Our formulation allows us to consider this optimal timing of
liftoff.

2.1.2 Analysis

To find the optimal policy, we solve the model backwards from t = 2 and focus on the
policy choice at t = 0. First, for t ≥ 2, it is possible to perfectly stabilize the economy by
setting the nominal interest rate equal to the (now positive) natural rate, i_t = ρ_t^n = ρ̄. This
leads to πt = xt = 0 for t ≥ 2.12 The optimal policy at t = 1 will depend on the realized
value of the natural rate ρn1 . If ρn1 ≥ 0, then it is again possible (and optimal) to perfectly
stabilize by setting i1 = ρn1 , leading to x1 = π1 = 0. However if ρn1 < 0, the ZLB binds and
consequently x_1 = ρ_1^n/σ < 0 and π_1 = κ ρ_1^n/σ < 0. The expected output gap at t = 1 is hence E_0 x_1 = (1/σ) ∫_{−∞}^{0} ρ f_ρ(ρ) dρ ≤ 0 and expected inflation is E_0 π_1 = κ E_0 x_1 < 0.
Because agents are forward-looking, this low expected output gap and inflation feed
backward to t = 0. A low output gap tomorrow depresses output today by a wealth effect
via the IS curve. Low inflation tomorrow depresses inflation today since price setting is
forward looking in the Phillips curve and also depresses output today by raising the real
interest rate via the IS curve. The optimal policy at t = 0 must take into account these
effects. This implies that optimal policy will be looser than if there was no chance that the
ZLB binds tomorrow.
Mathematically, substituting for π0 and i0 using (1) and (2), and taking into account the
ZLB constraint, optimal policy at t = 0 solves the following problem:
    min_{x_0}  (1/2)[(κ x_0 + β E_0 π_1 + u_0)^2 + λ x_0^2]   s.t.   x_0 ≤ E_0 x_1 + (1/σ)(ρ_0^n + E_0 π_1).    (5)

12 This simple interest rate rule implements the equilibrium π_t = x_t = 0, but is also consistent with other equilibria. However there are standard ways to rule out these other equilibria. See Gali (2008, pp. 76–77) for a discussion. Henceforth we will not consider this issue.

Two cases arise, depending on whether the ZLB binds at t = 0 or not. Define the threshold
value
    ρ_0^* = −(σκ/(λ + κ^2)) u_0 − (1 + κ/σ + βκ^2/(λ + κ^2)) ∫_{−∞}^{0} ρ f_ρ(ρ) dρ.    (6)

If ρn0 > ρ∗0 , then the optimal policy is to follow the standard monetary policy response to an
inflation shock to the Phillips curve, βE0 π1 + u0 , leading to:
    x_0 = −(κ/(λ + κ^2)) (β E_0 π_1 + u_0);        π_0 = (λ/(λ + κ^2)) (β E_0 π_1 + u_0).    (7)
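For reference, (7) follows from the first-order condition of the problem in (5) when the ZLB constraint is slack, which is the familiar discretionary targeting rule

    κ (κ x_0 + β E_0 π_1 + u_0) + λ x_0 = 0   ⇔   κ π_0 + λ x_0 = 0;

solving for x_0 and substituting back into the Phillips curve (1) gives the expressions in (7).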

The corresponding interest rate is
    i_0 = ρ_0^n + E_0 π_1 + σ(E_0 x_1 − x_0)
        = ρ_0^n + (σκ/(λ + κ^2)) u_0 + (1 + κ/σ + βκ^2/(λ + κ^2)) ∫_{−∞}^{0} ρ f_ρ(ρ) dρ.    (8)

As long as ∫_{−∞}^{0} ρ f_ρ(ρ) dρ < 0, (8) implies that the optimal interest rate is lower than if there was no chance of a binding ZLB tomorrow, i.e. if f_ρ(ρ) = 0 for ρ ≤ 0. The interest rate is
lower today to offset the deflationary and recessionary effects of the possibility of a binding
ZLB tomorrow. If ρn0 < ρ∗0 , then the ZLB binds today and optimal policy is i0 = 0. In this
case
    x_0 = ρ_0^n/σ + (1 + κ/σ) E_0 x_1;        π_0 = κ ρ_0^n/σ + ((1 + β)κ + κ^2/σ) E_0 x_1.    (9)

Notice from (6) that higher uncertainty makes it more likely that the ZLB will bind
at t = 0. Specifically, even if agents were certain that the ZLB would not bind at t = 1,
E0 x1 = E0 π1 = 0 and i0 = 0 if ρn0 ≤ −σκu0 /(λ + κ2 ). So the possibility of the ZLB binding
tomorrow increases the chances of being constrained by the ZLB today.
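To make the mechanics of (6)–(8) concrete, the following sketch computes the time-0 policy by Monte Carlo under an assumed normal distribution for ρ_1^n. The normality assumption, the mean and standard deviation of that distribution, and the value of ρ_0^n are illustrative choices made only for this example; the structural parameters are the values used in the calibration of Section 2.3 (Table 1).

    import numpy as np

    # Structural parameters from Table 1; distributional assumptions below are
    # illustrative only and are not taken from the paper.
    beta, kappa, sigma, lam = 0.995, 0.025, 2.0, 0.25
    u0 = 0.0                      # cost-push shock at t = 0
    rho0 = 0.5                    # assumed current natural rate
    mu1, sd1 = 0.25, 1.0          # assumed mean and std. dev. of rho_1^n

    rng = np.random.default_rng(0)
    rho1 = rng.normal(mu1, sd1, size=1_000_000)

    # E[min(rho_1^n, 0)]: the integral over states where the ZLB binds tomorrow
    I = np.minimum(rho1, 0.0).mean()
    E_x1, E_pi1 = I / sigma, kappa * I / sigma

    coef = 1.0 + kappa / sigma + beta * kappa**2 / (lam + kappa**2)
    rho_star = -sigma * kappa / (lam + kappa**2) * u0 - coef * I   # threshold (6)

    # Optimal rate: (8) if the ZLB is slack today, zero otherwise
    i0 = rho0 + sigma * kappa / (lam + kappa**2) * u0 + coef * I if rho0 > rho_star else 0.0
    print(I, rho_star, i0)

With these illustrative numbers the integral is negative, so the prescribed i_0 lies below ρ_0^n; widening sd1 makes the integral more negative and lowers i_0 further, which is the content of Proposition 1 below.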
Since E_0 x_1 is a sufficient statistic for ∫_{−∞}^{0} ρ f_ρ(ρ) dρ in (8), the optimal policy has the
flavor of a traditional forward-looking policy reaction function that only depends on the
conditional expectations of output and inflation gaps. However E0 x1 is not independent
of a mean preserving spread or any other change in the distribution of ρn1 . Accordingly,
optimal policy here departs from the certainty equivalence principle which says that the
extent of uncertainty in the underlying fundamentals (in our case ρn1 ) does not affect the
optimal interest rate.13 Furthermore, as a practical matter the central bank must infer
private agents’ E0 x1 in order to determine optimal policy. Since E0 x1 depends on the entire
distribution of ρn1 , so must the central bank’s estimates of it, which is a much more difficult
inference problem than in the certainty equivalence case.
Turning specifically to the issue of uncertainty, we obtain the following unambiguous
comparative static result:
Proposition 1 Higher uncertainty, i.e. a mean-preserving spread, in the distribution of the
natural rate ρn1 tomorrow leads to a looser policy today.
To see this, rewrite the key quantity ∫_{−∞}^{0} ρ f_ρ(ρ) dρ = E min(ρ, 0). Since the min function is concave, higher uncertainty through a mean-preserving spread about ρ_1^n leads to lower, i.e. more negative, E_0 x_1 and E_0 π_1, and hence lower i_0.14
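For a concrete illustration (the proposition itself does not assume any particular distribution), suppose ρ_1^n is normal with mean μ and standard deviation s. Then

    E min(ρ_1^n, 0) = μ Φ(−μ/s) − s φ(μ/s),    and    ∂/∂s E min(ρ_1^n, 0) = −φ(μ/s) < 0,

where Φ and φ are the standard normal cdf and density. Holding the mean fixed, a larger s therefore makes the key integral more negative and, by (8), lowers i_0.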
The effect of higher uncertainty on i0 is unambiguous, but the effect on the output gap
and inflation is more subtle. If the ZLB does not bind at t = 0 initially, higher uncertainty
leads to lower E0 π1 and E0 x1 and consequently higher x0 and lower π0 according to equation
(7). On the other hand, if the ZLB does bind at t = 0 initially, then higher uncertainty leads
to lower x0 and π0 according to equation (9).15 Hence, overall the effect of higher uncertainty
on π0 is unambiguously negative, but the effect on x0 may be positive or negative.
Another interesting feature of the solution is that the distribution of the positive values
of ρn1 is irrelevant for policy. That is, policy today is adjusted only with respect to the states
of the world in which the ZLB might bind tomorrow. The logic is that if a very high value
of ρn1 is realized, monetary policy can adjust to it and prevent a bout of inflation. This is
a consequence of the standard principle that, outside the ZLB, natural rate shocks can and
should be perfectly offset by monetary policy.
13 Recent statements of the certainty equivalence principle in models with forward-looking variables can be found in Svensson and Woodford (2002, 2003).
14 See Mas-Colell, Whinston, and Green (1995, Proposition 6.D.2, p. 199) for the relevant result regarding the effect of a mean preserving spread on the expected value of concave functions of a random variable.
15 Finally, there is a case where the ZLB does not bind initially, but it binds if uncertainty is higher. In this case, x_0 may be larger or smaller with higher uncertainty, while π_0 is always smaller.


2.1.3 Discussion

Proposition 1 has several predecessors; perhaps the closest are Adam and Billi (2007), Nakata
(2013a,b) and Nakov (2008) who demonstrate numerically how, in a stochastic environment,
the ZLB leads the central bank to adopt a looser policy. Our contribution is to provide
a simple analytical example.16 This result has been correctly interpreted to mean that if
negative shocks to the natural rate lead the economy to be close to the ZLB, the optimal
response is to lower the interest rate aggressively to reduce the likelihood that the ZLB
becomes binding. The same logic applies to liftoff. Following an episode where the ZLB
has been a binding constraint, the central bank should not raise rates as if it were sure the
ZLB constraint would never bind again.17 Even though the best forecast may be that the
economy will recover and exit the ZLB – i.e. in the context of the model, that E0 (ρn1 ) > 0
– it can be optimal to have zero interest rates today. Note that policy is looser when the
probability of being constrained by the ZLB in the future is high or the potential severity of
the ZLB problem is large, i.e. ∫_{−∞}^{0} ρ f_ρ(ρ) dρ is a large negative number; the economy is less sensitive to interest rates (high σ); and the Phillips curve is steep (high κ).
With higher uncertainty, the increase in interest rates on average will be faster from t = 0
to t = 2. This follows since the t = 2 interest rate is unaffected by uncertainty while at
t = 0 it is lower. More generally, when uncertainty about being constrained by the ZLB in
the future dissipates, the interest rate can rise quickly because the effects holding it down
disappear along with the uncertainty.
While we have deliberately focused on a very simple example, our results hold under more
general conditions. For instance, the same results still hold if {ρnt }t≥2 follows an arbitrary
stochastic process as long as it is positive. In the appendix we consider the case of optimal
policy with uncertainty about cost-push inflation. We show that optimal policy also is looser
if there is a chance of a binding ZLB in the future due to a low cost-push shock. Furthermore,
16 See also Nakata and Schmidt (2014) for a related analytical result in a model with two-state Markov shocks.
17 Indeed, private sector forecasters attribute a significant likelihood of a return to the ZLB: respondents to the January 2015 Federal Reserve Bank of New York survey of Primary Dealers put the odds of returning to the ZLB within two years following liftoff at 20%.


the risk that inflation picks up due to a high cost-push shock does not affect policy today. If
such a shock were to occur tomorrow, it would lead to some inflation; however, there is nothing
that policy today can do about it. Finally, while the model chosen is highly stylized, the
core insights would likely continue to hold in a medium-scale model with a variety of shocks
and frictions.
Intuitively, we expect a version of Proposition 1 to still hold with commitment as well.
Optimal policy with commitment involves promising at t = 0 that should the ZLB bind at
t = 1, the central bank will keep interest rates lower for t ≥ 2 than it would otherwise. As is
well known, this policy reduces the size of the inflation and output gaps at t = 1, but it does
not eliminate them entirely. These gaps then could generate negative expected inflation and
output gaps at t = 0 that become more negative the larger the t = 1 uncertainty. So higher
uncertainty should lead to looser policy at t = 0 just as in the case of discretion.
One obvious limitation to these results is that we have assumed (and will continue to do
so when studying the backward-looking model below) that there is no cost to raising rates
quickly if needed. For example, our welfare criterion does not value interest rate smoothing.
Smoothing has been rationalized by Goodfriend (1991) and others as facilitating financial
market adjustments or as a signaling tool. It is true also that estimated reaction functions
include lagged funds rate terms to fit historical data. Nonetheless there have been instances
when the FOMC has moved quickly. Some of these occurred as recessions unfolded, but
not all: between February 1994 and February 1995 rates were tightened by 300 basis points
(bps) and between November 1988 and February 1989 by nearly 165 bps. Moreover, as Sack
(2000) and Rudebusch (2002) argue, interest rate smoothing might reflect learning about
an uncertain economy rather than a desire to avoid large changes in interest rates per se.
The policy prescriptions derived from our models are specifically aimed at addressing such
uncertainty.


2.2 The buffer stock channel

The buffer stock channel does not rely on forward-looking behavior, but rather on the view
that the economy has some inherent momentum, e.g. due to adaptive inflation expectations,
inflation indexation, habit persistence, adjustment costs or hysteresis. Suppose that output
or inflation have a tendency to persist. If there is a risk that the ZLB binds tomorrow,
building up output and inflation today creates some buffer against hitting the ZLB tomorrow.
This intuition does not guarantee that it is optimal to increase output or inflation today.
In particular, the benefit of higher inflation or output today in the event that a ZLB event
arises tomorrow must be weighed against the costs of excess output and inflation today, as
well as tomorrow’s cost to bring down the output gap or inflation if the ZLB turns out not
to bind. So it is important to verify that our intuition holds up in a model.
To isolate the buffer stock channel from the expectations channel we focus on a purely
backward-looking “Old” Keynesian model. Purely backward-looking models do not have
micro-foundations like the New Keynesian model does, but backward-looking elements appear to be important empirically.18 Backward-looking models have been studied extensively in the literature, including by Laubach and Williams (2003), Orphanides and Williams
(2002), Reifschneider and Williams (2000) and Rudebusch and Svensson (1999).
The model we study simply replaces the forward-looking terms in (1) and (2) with
backward-looking terms:
    π_t = ξ π_{t−1} + κ x_t + u_t;    (10)
    x_t = δ x_{t−1} − (1/σ)(i_t − ρ_t^n − π_{t−1}),    (11)

where 0 < ξ < 1 and 0 < δ < 1. This model is essentially the same as the simple example
Reifschneider and Williams (2000) use to motivate their analysis of monetary policy constrained by the ZLB. Unlike in the New Keynesian model it is difficult to map ρ_t^n directly to underlying fundamental shocks as we do in equation (3). For simplicity we continue to refer to this exogenous variable as the natural rate and use (3) as a guide to interpreting it, but it is perhaps better to think of it as simply a “demand” shock or “IS” shock.

18 Indeed empirical studies based on medium-scale DSGE models, such as those considered by Christiano et al. (2005) and Smets and Wouters (2007), find backward-looking elements are essential to account for the empirical dynamics. Backward-looking terms are important in single-equation estimation as well. See for example Fuhrer (2000), Gali and Gertler (1999) and Eichenbaum and Fisher (2007).
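A minimal sketch of one period of the backward-looking dynamics in (10) and (11) may help fix ideas; parameter defaults are the Table 1 values, and this is illustrative code rather than the simulation code used in Section 2.3.

    def backward_step(x_lag, pi_lag, i, rho_n, u,
                      xi=0.95, delta=0.75, kappa=0.025, sigma=2.0):
        """One-period update of equations (10)-(11)."""
        x = delta * x_lag - (i - rho_n - pi_lag) / sigma   # IS curve (11)
        pi = xi * pi_lag + kappa * x + u                   # Phillips curve (10)
        return x, pi

    # Example with the calibrated initial conditions and the policy rate at zero
    x1, pi1 = backward_step(x_lag=-1.5, pi_lag=1.3, i=0.0, rho_n=-0.5, u=0.0)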

2.2.1 Analysis

We consider the ZLB scenario described in Section 2.1.1 and again solve the model backwards
from t = 2 to determine optimal policy at t = 0 and how this is affected by uncertainty in
the natural rate at t = 1. After t = 1 the economy does not experience any more shocks,
but it inherits initial lagged inflation and output terms π1 and x1 , which may be positive
or negative. The output gap term can be easily adjusted by changing the interest rate i_t,
provided the central bank is not constrained by the ZLB at t = 2, i.e. if ρn2 = ρ̄ is large
enough, an assumption we will maintain.19 Given the quadratic loss, it is optimal to smooth
this adjustment over time, so the economy will converge back to its steady-state slowly. The
details of this adjustment after t = 2 are not very important for our analysis. What is
important is that the overall loss of starting from t = 2 with lagged inflation π1 and output
gap x_1 is a quadratic function of π_1 only; we can write it as W π_1^2/2, where W is a constant
that depends on λ, κ, ξ and β and is calculated in the appendix.
Turn now to optimal policy at t = 1. Take the realization of ρn1 and last period’s output
gap x0 and inflation π0 as given. Substituting for π1 and i1 using (10) and (11), and taking
into account the ZLB constraint, optimal policy at t = 1 solves the following problem:
    V(x_0, π_0, ρ_1^n) = min_{x_1}  (1/2)[(ξ π_0 + κ x_1)^2 + λ x_1^2] + β (W/2) π_1^2   s.t.   x_1 ≤ δ x_0 + (π_0 + ρ_1^n)/σ,

where the policymaker now anticipates the cost of having inflation π1 tomorrow, and her
choices are affected by yesterday’s values x0 and π0 .
19 Relaxing it would only strengthen our results.

Depending on the value of ρn1 , two cases can arise. Define the threshold value:
    ρ_1^*(x_0, π_0) = −(σ (1 + βW)κξ/((1 + βW)κ^2 + λ) + 1) π_0 − σ δ x_0.    (12)
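The threshold in (12) can be read as the realization of ρ_1^n at which the constraint just binds at the unconstrained optimum: setting the unconstrained choice of x_1 (given below) equal to the constraint bound and solving for ρ_1^n,

    −((1 + βW)κξ/((1 + βW)κ^2 + λ)) π_0 = δ x_0 + (π_0 + ρ_1^n)/σ   ⇒   ρ_1^n = ρ_1^*(x_0, π_0).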

For ρ_1^n ≥ ρ_1^*(x_0, π_0) the ZLB is not binding, otherwise it is. Hence the probability of hitting the ZLB is ∫_{−∞}^{ρ_1^*(x_0,π_0)} f_ρ(ρ) dρ. In contrast to the forward-looking case, the probability of being constrained by the ZLB is now endogenous at t = 1 and can be influenced by policy at t = 0. As indicated by (12), a higher output gap or inflation at t = 0 will reduce
the likelihood of hitting the ZLB at t = 1.
If ρn1 ≥ ρ∗1 (x0 , π0 ) optimal policy at t = 1 yields
    x_1 = −((1 + βW)κξ/((1 + βW)κ^2 + λ)) π_0;        π_1 = (λξ/((1 + βW)κ^2 + λ)) π_0.
This is similar to the forward-looking model’s solution that reflects the trade-off between
output and inflation, except that optimal policy now takes into account the cost of having
inflation away from target tomorrow, through W . The loss for this case is V (x0 , π0 , ρn1 ) =
W π_0^2/2 since in this case the problem is the same as the one faced at t = 2. If ρ_1^n < ρ_1^*(x_0, π_0)
the ZLB binds, in which case
    x_1 = δ x_0 + (π_0 + ρ_1^n)/σ;        π_1 = κ δ x_0 + (ξ + κ/σ) π_0 + κ ρ_1^n/σ.

The expected loss from t = 1 on as a function of the output gap and inflation at t = 0 is
then given by:
    L(x_0, π_0) = (W/2) π_0^2 ∫_{ρ_1^*(x_0,π_0)}^{+∞} f_ρ(ρ) dρ
                  + ∫_{−∞}^{ρ_1^*(x_0,π_0)} [ ((1 + βW)/2) (κ δ x_0 + (ξ + κ/σ) π_0 + κ ρ/σ)^2 + (λ/2) (δ x_0 + (π_0 + ρ)/σ)^2 ] f_ρ(ρ) dρ.
This expression reveals that the initial conditions x_0 and π_0 matter by shifting the payoff from continuation in the non-ZLB states, W π_0^2/2; the payoff in the case where the ZLB binds (the second integral); and the relative likelihood of ZLB and non-ZLB states through ρ_1^*(x_0, π_0). Since the loss function is continuous in ρ (even at ρ_1^*(x_0, π_0)), this last effect is irrelevant for welfare at the margin.
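The pieces above can be combined into a small Monte Carlo evaluation of L(x_0, π_0). The sketch below takes W as an input (it is derived in the paper's appendix and not reproduced here), uses the Table 1 parameter values as defaults, and assumes a normal distribution for ρ_1^n purely for illustration.

    import numpy as np

    def t1_outcomes(x0, pi0, rho1, W, xi=0.95, delta=0.75,
                    kappa=0.025, sigma=2.0, lam=0.25, beta=0.995):
        """Output gap and inflation at t = 1, following the two cases around (12)."""
        A = (1 + beta * W) * kappa * xi / ((1 + beta * W) * kappa**2 + lam)
        rho_star = -(sigma * A + 1) * pi0 - sigma * delta * x0   # threshold (12)
        if rho1 >= rho_star:
            x1 = -A * pi0                                        # ZLB slack
        else:
            x1 = delta * x0 + (pi0 + rho1) / sigma               # ZLB binds
        return x1, xi * pi0 + kappa * x1

    def expected_loss(x0, pi0, W, draws, lam=0.25, beta=0.995, **kw):
        """Monte Carlo average of the period-1 loss plus the continuation term."""
        total = 0.0
        for rho1 in draws:
            x1, pi1 = t1_outcomes(x0, pi0, rho1, W, lam=lam, beta=beta, **kw)
            total += 0.5 * (pi1**2 + lam * x1**2) + beta * 0.5 * W * pi1**2
        return total / len(draws)

    rng = np.random.default_rng(0)
    L = expected_loss(x0=-1.5, pi0=1.3, W=1.0,           # W = 1.0 is a placeholder
                      draws=rng.normal(0.5, 1.0, 20_000))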
The last step is to find the optimal policy at time 0, taking into account the effect on the
expected loss tomorrow:
    min_{x_0}  (1/2)[(ξ π_{−1} + κ x_0 + u_0)^2 + λ x_0^2] + β L(x_0, π_0)   s.t.   x_0 ≤ δ x_{−1} + (ρ_0^n + π_{−1})/σ.

We use this expression to prove the following, which is analogous to Proposition 1:
Proposition 2 For any initial condition, a mean-preserving spread in the distribution of
the natural rate ρ_1^n tomorrow leads to a looser optimal policy today.
From (10) and (11), higher uncertainty also leads to larger x0 and π0 . The proof of Proposition 2 is in the appendix. Note that it incorporates the case of uncertainty regarding
cost-push shocks at t = 1 and shows that a mean preserving spread in the cost-push shock
tomorrow leads to looser policy today as well.
Our model also implies that an increase in uncertainty over the initial output gap will
lead to looser policy. Specifically we have:
Proposition 3 Suppose the initial output gap x−1 is unknown at t = 0 but becomes known
at t = 1 and the central bank has a prior distribution over x_{−1}. Then a mean-preserving
spread in this prior distribution leads optimal policy to be looser at t = 0.
The proof of this proposition is similar to the one for Proposition 2. This result is particularly
germane to the current policy environment where there is uncertainty over the amount of
slack in the economy. Therefore Proposition 3 provides an additional rationale for delaying
liftoff.


2.2.2 Discussion

As far as we know Proposition 2 is a new result, but its implications are similar to those
of Proposition 1. As in the forward-looking case, liftoff from an optimal zero interest rate
should be delayed today with an increase in uncertainty about the natural rate or cost-push
shock that raises the odds of the ZLB binding tomorrow. Similarly, even if not constrained
by the ZLB today, an increase in uncertainty about the likelihood of being constrained by the
ZLB tomorrow leads to a reduction in the policy rate today. So the buffer stock channel and
the expectations channel have very similar policy implications but for very different reasons.
The expectations channel involves the possibility of being constrained by the ZLB tomorrow
feeding backward to looser policy today. The buffer stock channel has looser policy today
feeding forward to reduce the likelihood and severity of being at the ZLB tomorrow. Note
that as in the forward-looking model optimal policy prescribes that interest rates rise as the
likelihood of being constrained by the ZLB in the future falls, even if the output gap or
inflation do not change.
It is useful to compare the policy implications of the buffer stock channel to the argument
developed in Coibion, Gorodnichenko, and Wieland (2012). That paper studies the tradeoff
between the level of the inflation target and the risk of hitting the ZLB using policy reaction
functions instead of optimal policy.20 Our analysis does not require a drastic change in the monetary policy framework in order to improve outcomes: the improvement is achieved via standard interest rate policy rather than a credibility-damaging change to the inflation target.

2.3 Quantitative assessment

We now assess the quantitative significance of the expectations and buffer stock channels
using calibrated versions of the forward- and backward-looking models that we solve numerically. With parameters drawn from the literature and initial conditions calibrated to early
2015, we compare equilibrium outcomes under optimal discretion to alternative policies that do not take into account uncertainty. Our numerical methods are described in the appendix. Importantly, and in contrast to most of the literature, they allow for uncertainty to affect policy and to be reflected in welfare.

20 Another difference is that they study a medium-scale DSGE model with both forward- and backward-looking elements; because of this added complexity they use a different solution method.

2.3.1 Parameter values

The parameter values are reported in Table 1. We use the same values for parameters that
are common to both models. The time period is one quarter, with t = 1 taken to be 2015q1.
The natural rate ρnt is the sum of deterministic and random components. We assume the
deterministic component rises linearly between t = 1 and t = T > 1, after which it remains
constant at ρ̄ = 1.75%, which corresponds to the median long run funds rate in the March
2015 FOMC Summary of Economic Projections, less the FOMC’s inflation target π ∗ = 2.
The random component is AR(1) with auto-correlation coefficient ρε and innovation standard
deviation σε . We also assume there is an i.i.d. cost-push shock with standard deviation σu .
There is no uncertainty for t > T .
Obviously the degree of uncertainty we assume is central to our findings. The particular
values of ρε and σε are not as important to our results as the unconditional volatility they
imply. There is wide variation in estimates of volatility in the natural rate, corresponding
to differences in theoretical concepts, models and empirical methods used. Our calibration
implies the unconditional standard deviation of the natural rate is 2.5% at an annual rate.
This lies within the range of estimates in Barsky et al. (2014), Cúrdia, Ferrero, Ging Cee Ng,
and Tambalotti (2015) and Laubach and Williams (2003). The auto-correlation coefficient
is set midway between the values in Adam and Billi (2007) and Cúrdia et al. (2015). We
set the standard deviation of the cost-push shock σu close to the value used in Adam and
Billi (2007). Assuming serial correlation or a moderately different unconditional standard
deviation of the cost-push shock is not very important for our results. Finally, by assuming
the economy is not subject to shocks for t > T and that the long run natural rate ρ̄ is a
known constant we have been conservative in our specification of uncertainty.
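A quick check of the volatility figure just quoted, using the σ_ε and ρ_ε values from Table 1 (and assuming, as the text suggests, that the innovation is expressed at an annual rate): the unconditional standard deviation of an AR(1) process is σ_ε/√(1 − ρ_ε^2).

    import math
    sigma_eps, rho_eps = 1.32, 0.85
    print(sigma_eps / math.sqrt(1 - rho_eps**2))   # ~2.5, the unconditional std. dev.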
Table 1: Parameter values

Parameter   Description                                Value
β           Discount factor                            0.995
κ           Slope of Phillips Curve                    0.025
σ           Inverse elasticity of substitution         2
σ_ε         Std. dev. natural rate innovation          1.32
σ_u         Std. dev. of cost-push innovation          0.10
ρ_ε         Serial correlation of natural rate         0.85
ρ_u         Serial correlation of cost-push            0
λ           Weight on output stabilization             0.25
π*          Steady-state inflation (annualized)        2
ρ_1^n       Value of natural rate at time 1            -0.5
T           Quarters to reach terminal natural rate    24
ρ̄           Terminal natural rate (annualized)         1.75
δ           Backward-looking IS curve coef.            0.75
ξ           Backward-looking Phillips curve coef.      0.95
x_0         Initial condition for the output gap       -1.5
π_0         Initial condition for inflation            1.3
φ           Taylor rule coefficient on inflation       1.5
γ           Taylor rule coefficient on output gap      0.5

Note: Values of standard deviations, inflation, the output gap, and the natural rate are shown in percentage points.

The Phillips curve slope, elasticity of inter-temporal substitution and the discount factor are all set to values common in the New Keynesian literature. For the backward-looking
model we set the coefficient on lagged inflation in (10) to ξ = 0.95, reflecting the fact that
inflation has been very persistent in recent years.21 The coefficient on lagged output in (11)
is δ = 0.75, in order to generate significant persistence in the output gap. For the backward-looking model we assume an initial inflation rate of 1.3%, a recent reading for core PCE
inflation, and an initial output gap x0 = −1.5%, based on a simple calculation using the
2014q4 unemployment rate (5.7%), an estimate of the natural rate of unemployment (5.0%)
and Okun’s law. As indicated by Proposition 3, adding uncertainty about the initial output
gap would only strengthen our results.22
21 Note that it is not clear how to map estimates of the lagged inflation coefficient in the literature to our backward-looking model since these are based on Phillips curves with forward looking terms.
22 In the appendix we discuss the implications for our results of different values for the initial gaps, uncertainty, ρ_0^n, δ and ξ.


We measure the quantitative effect of uncertainty on policy by comparing equilibrium
outcomes under optimal discretion to a scenario in which we solve for optimal discretion
when the central bank observes the current natural rate and cost-push shocks but acts as if
there will be no more shocks. Private agents understand this policy but take into account
the true nature of uncertainty. Actual outcomes will be inconsistent with the central bank’s
assumptions so we call this the “naive” policy. We also compare equilibrium outcomes under
optimal discretion to those obtained assuming the central bank follows a reaction function
with weights on inflation and the output gap as in Taylor (1993), and a constant term equal
to 3.75% corresponding to ρ̄ + π*.
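For reference, the comparison rule just described can be written as a function of the current inflation and output gaps. The exact argument of the rule and the truncation at zero are assumptions of this sketch, one reading of the description above; the weights and constant are taken from Table 1.

    def taylor_rule(pi, x, pi_star=2.0, const=3.75, phi=1.5, gamma=0.5):
        """Taylor (1993)-style comparison rule: constant = rho_bar + pi_star = 3.75,
        phi = 1.5 on the inflation gap and gamma = 0.5 on the output gap (Table 1).
        The max enforces the zero lower bound (an assumption of this sketch)."""
        return max(0.0, const + phi * (pi - pi_star) + gamma * x)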

2.3.2 Results for the forward-looking model

Figure 1 displays representative paths of the nominal interest rate, inflation and the output
gap under optimal discretion (red), the naive policy (green) and the Taylor rule (blue),
calculated by setting the ex post realized shocks to zero, the modal outcome. Under the
modal outcome the interest rate under the naive policy follows the natural rate exactly. The
difference between the green and red interest rate paths indicates the substantial impact
uncertainty has on optimal policy; the naive policy is between 50 and 150 bps above the
optimal policy for 2 years. This difference in policy has little impact on the output gap,
but under optimal policy the inflation gap is closed much faster. The inflation gap is more
negative under the naive policy because the interest rate is higher both initially and in the
future since it does not take into account uncertainty about the ZLB.23 The Taylor rule
prescribes rates above both the optimal and naive policies for most of the simulation period
and because agents are forward looking this feeds backward to cause much more negative
gaps.24
23 One might be surprised that inflation is far below target under the naive policy even though the output gap is near target. This reflects that we plotted the modal outcome, rather than the mean, and that the distributions of inflation and output gap outcomes are skewed to the left.
24 For some calibrations the outcomes under the Taylor rule can be so bad that liftoff is delayed and rates are below the optimal policy throughout the simulation period.

Figure 1: Liftoff in the forward-looking model
[Three panels plot the nominal interest rate, inflation, and the output gap (in percent) over 20 quarters under optimal discretion, the naive policy, and the Taylor rule.]

Table 2 summarizes the distribution of outcomes under the three different policies based on simulating 50,000 paths drawn from the calibrated distributions of the shocks. Optimal
discretion implies 1/3 the expected loss of the naive policy and 1/8 the loss of the Taylor
rule.25 One way to interpret these losses is to calculate the per period reduction in the output
and inflation gaps that would make the central banker indifferent between the outcomes
under the optimal policy and those under the alternatives. Both gaps would have to be
43% and 65% smaller under the naive policy and the Taylor rule, respectively, to achieve
this indifference. The median liftoff (defined as the nominal interest rate exceeding 25 bps)
under optimal discretion is delayed by 2 quarters compared to the other policies; the mean
liftoff is delayed by more than 3 quarters reflecting skewness in the outcomes. At the time
of liftoff inflation and output are much closer to target under optimal discretion compared
to the two alternative policies.
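A back-of-the-envelope way to see where these percentages come from (assuming the indifference calculation simply rescales both gaps by a common factor): scaling both gaps by s scales the quadratic loss by s^2, so the required reduction relative to an alternative policy is 1 − √(L_opt/L_alt). With the rounded losses reported in Table 2,

    1 − √(0.02/0.06) ≈ 0.42    and    1 − √(0.02/0.16) ≈ 0.65,

close to the 43% and 65% figures in the text (which are presumably computed from unrounded losses).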
25 The sub-optimality of the Taylor rule does not hold by definition because it provides commitment, which may lead to more favorable outcomes.

Table 2: Forward-looking simulation

Statistic                        Optimal Discretion   Naive    Taylor Rule
Expected loss                    0.02                 0.06     0.16
Mean time at liftoff             4.11                 1.00     1.00
Median time at liftoff           3                    1        1
Median π at liftoff              1.81                 0.88     0.35
Median x at liftoff              0.08                 -1.44    -1.62
75th percentile max(π)           2.69                 2.42     2.17
25th percentile min(x)           -0.72                -1.44    -2.63
Median standard deviation ∆i     1.87                 1.88     0.97

When comparing policies it is also important to assess how well each balances the risks of bad outcomes. We do this by comparing the 75th percentile across simulations of the
maximum inflation gap and the 25th percentile of the lowest output gap over the first 6
years. Under optimal policy the bad output outcomes are much less severe than under either alternative policy. The bad inflation outcomes do not seem particularly high under any of
the policies.
The statistic in the bottom row is the median standard deviation of changes in the
nominal interest rate. By comparing interest rate volatility under the Taylor rule in our
model with that implied by the same Taylor rule in the data we can determine whether the
uncertainty underlying our results is reasonable. If the volatility were much higher in our
simulations we would conclude that it is unreasonably large. In fact, the 0.97 standard deviation in our Taylor rule simulations is only a little larger than the 0.88 standard deviation we
find in our data.26 Interest rates are more volatile under both the optimal and naive policies
because they respond to all fundamental shocks rather than just inflation and output.27

2.3.3 Results for the backward-looking model

Figure 2 is the analogue of Figure 1 for the backward-looking model. Obviously, the dynamics
of return to target are quite different from the forward-looking model, but the key qualitative
results are the same. As in the forward-looking model, optimal policy is substantially looser
26 The appendix describes how we calculate the interest rate implied by the Taylor rule with our data.
27 We thank Johannes Wieland for suggesting that we assess the volatility of the nominal interest rate.


than both the naive policy and the Taylor rule. Here the optimal policy prescribes much
more delay in lifting off from the ZLB. Delay now occurs under the naive policy because
it is optimal to stimulate output strongly in order to return inflation to target, but this
delay is less than under the optimal policy. The optimal policy also has a sharper liftoff
than the naive policy. However the increases under optimal policy are equivalent to just 25
bps a meeting, the same as the “measured pace” during the Fed tightening over 2004–2006.
Qualitatively the differences in the output and inflation outcomes across the three policies
are similar to the forward-looking model as well. Taking into account uncertainty about the
ZLB leads the optimal policy to return inflation to target faster than the naive policy and
it achieves this by allowing the output gap to overshoot more to build a buffer against the
possibility of bad shocks in the future.
Figure 2: Liftoff in the backward-looking model
[Three panels plot the nominal interest rate, inflation, and the output gap (in percent) over 20 quarters under optimal discretion, the naive policy, and the Taylor rule.]

Table 3 is constructed analogously to Table 2. It shows that optimal policy provides
only a marginal improvement over the naive policy in terms of expected losses, due to the
offsetting effects of the inflation and output gaps. The median gaps are roughly closed
at liftoff under both the optimal and naive policies, but they are quite large under the
Taylor rule. The bad outcomes are similar across the three scenarios. Finally, note that the
volatility of the interest rate under the Taylor rule is lower here compared to the data and
the forward-looking model so the underlying uncertainty is not excessive.
Table 3: Backward-looking simulation

Statistic                        Optimal Discretion   Naive    Taylor Rule
Expected loss                    0.27                 0.28     0.60
Mean time at liftoff             12.5                 10.3     1.00
Median time at liftoff           10                   7        1
Median π at liftoff              2.00                 1.81     1.21
Median x at liftoff              0.32                 0.00     -1.27
75th percentile max(π)           3.02                 2.83     2.81
25th percentile min(x)           -1.65                -1.70    -1.54
Median standard deviation ∆i     2.96                 3.10     0.54

Figure 3: Large cost-push shock in the backward-looking model
[Four panels plot the cost-push shock, the nominal interest rate, the output gap, and inflation (in percent) over the simulation horizon under optimal discretion, the naive policy, and the Taylor rule.]

We conclude by illustrating one of the risks the optimal policy is able to address, namely
the possibility that a shock will drive up inflation before the baseline liftoff. Figure 3 depicts
a particular simulation where there is a large positive cost-push shock before the liftoff under
the optimal policy shown in Figure 2. The shock triggers earlier liftoff under the optimal
policy so that the inflation response is mild. The implication is that staying at zero longer
under the optimal policy does not impair the ability of the central bank to respond to future
contingencies. However, it does have to be prepared to raise rates promptly. We obtain
similar results with the forward-looking model.

3 Historical precedents for risk management

The previous section demonstrates that the ZLB justifies a risk management approach to
monetary policy. One may question whether following such an approach would be a departure from past FOMC behavior. Clearly, concerns about the ZLB are a relatively recent
phenomenon. Nevertheless there are many reasons why a risk management approach can be
justified when away from the ZLB and we begin this section by reviewing these rationales.
We then demonstrate that the Federal Reserve has used risk management to justify its policy
decisions over the period 1987-2008.
The FOMC minutes and other Federal Reserve communications reveal a number of episodes in which the Committee used uncertainty or insurance to justify its policy decisions. Sometimes the FOMC indicated that it took a wait-and-see approach to taking further actions
or muted a funds rate move due to its uncertainty over the course of the economy or the
extent to which early policy moves had yet shown through to economic activity and inflation.
At other times the Committee said its policy stance was taken in part as insurance against
undesirable outcomes; during these times, the FOMC often noted that the potential costs of
a policy overreaction likely were modest compared to the scenario it was insuring against.
Two episodes are particularly revealing. The first is the hesitancy of the Committee to
raise rates in 1997 and 1998 to counter inflationary threats because of uncertainty generated
by the Asian financial crisis and the subsequent rate cuts following the Russian default.
The second is the loosening of policy over 2000 and 2001, when uncertainty over the degree
to which growth was slowing and the desire to insure against downside risks appeared to
influence policy. Furthermore, later in the period, the Committee’s aggressive actions also
seemed to be influenced by attention to the risks associated with the ZLB on interest rates.
While the historical record is replete with references that suggest uncertainty or insurance
motives influenced the stance of policy, it is unclear at this stage whether risk management
had a material impact on policy. Therefore we conclude this section by quantifying these
references into variables that we use in Section 4 to assess the importance of risk management
for actual policy decisions.

3.1 Rationales for risk management away from the ZLB

Policymakers have long emphasized the importance of uncertainty in their decision-making. As Greenspan (2004) put it: "(t)he Federal Reserve's experiences over the past two decades make it clear that uncertainty is not just a pervasive feature of the monetary policy landscape; it is the defining characteristic of that landscape." This sentiment seems at odds with linear-quadratic models, in which optimal policy away from the ZLB involves adjusting the interest rate in response to only the mean of the distribution of shocks. What kinds of factors cause departures from such conditions and justify the risk management approach?
Relaxing the assumption of a quadratic loss function is perhaps the simplest way to generate a rationale for risk management. The quadratic loss function is justified by Woodford
(2003) as being a local approximation to consumer welfare. However, it might not be a good
approximation when large shocks drive the economy far from the underlying trend; alternatively it might simply be an inadequate approximation of FOMC behavior. Examples of
models with asymmetric loss functions include Surico (2007), Kilian and Manganelli (2008),
and Dolado, Marı́a-Dolores, and Ruge-Murcia (2004). The latter paper shows the optimal
policy rule can involve nonlinear output gap and inflation terms if policymakers are less
averse to output running above potential than below it. The relevance of higher moments in
the distribution of shocks for optimal policy is an obvious by-product of these nonlinearities.
27

Nonlinearities in economic dynamics are another natural motivation. For example, suppose recessions are episodes in which self-reinforcing dynamics amplify the effects of downside shocks. This could be modeled with current output depending on lagged output, as in our backward-looking model, but with the dependence concave rather than linear. Intuitively,
negative shocks have a more dramatic effect on reducing future output than positive shocks
have on increasing it, and so greater uncertainty leads to looser optimal policy to guard
against the more detrimental outcomes. Alternatively, suppose the Phillips curve is convex,
perhaps owing to downward nominal wage rigidities that become more germane with low
inflation. Here, a positive shock to the output gap leads to a significant increase of inflation
above target while a negative shock leads to a much smaller decline in inflation. The larger
the spread of these shocks, the greater the odds of experiencing a bad inflation outcome.
Optimal policy guards against this, leading to a tightening bias.28

28. The fact that a convex Phillips curve can lead to a role for risk management has been discussed by Laxton, Rose, and Tambakis (1999) and Dolado, María-Dolores, and Naveira (2005).
The risk management approach also appears in the large literature on how optimal monetary policy should adjust for uncertainty about the true model of the economy. Brainard
(1967) derived the important result that uncertainty over the effects of policy should lead
to caution and smaller policy responses to deviations from target. In contrast, the robust
control analysis of Hansen and Sargent (2008) has been interpreted to mean that uncertainty over model misspecification should generate aggressive policy actions. As explained
by Barlevy (2011), both the attenuation and aggressiveness results depend on the specifics
of the underlying environment. Nonetheless, these analyses still often indicate that higher
moments of the distribution of shocks can influence the setting of optimal policy.

3.2 1997–1998

The year 1997 was a good one for the U.S. economy: real GDP increased 3-3/4 percent
(the March 1998 third estimate), the unemployment rate fell to 4.7 percent and core CPI
inflation was 2-1/4 percent. With solid growth and tight labor markets, the FOMC clearly
was concerned about a buildup in inflationary pressures. As noted in the Federal Reserve’s
February 1998 Monetary Policy Report:
The circumstances that prevailed through most of 1997 required that the Federal
Reserve remain especially attentive to the risk of a pickup in inflation. Labor
markets were already tight when the year began, and nominal wages had started
to rise faster than previously. Persistent strength in demand over the year led to
economic growth in excess of the expansion of the economy’s potential, intensifying the pressures on labor supplies.
Indeed, over much of the period between early 1997 and mid-1998, the FOMC directive
maintained a bias indicating that it was more likely to raise rates to battle inflationary
pressures than it was to lower them. Nonetheless, the FOMC left the funds rate unchanged
at 5.5 percent from March 1997 until September 1998. Why did it do so?
Certainly the inaction in large part reflected the forecast for growth to moderate to a more
sustainable pace as well as the fact that actual inflation had remained contained despite tight
labor market conditions. Based on the funds rate remaining at 5.5 percent, the August 1998
Greenbook projected GDP growth to slow from 2.9 percent in 1998 to 1.7 percent in 1999.
The unemployment rate was projected to rise to 5.1 percent by the end of 1999 and core CPI
inflation was projected to edge down to 2.1 percent. But, in addition, on several occasions
heightened uncertainty over the outlook for growth and inflation apparently reinforced the
decision to refrain from raising rates. The following quote from the July 1997 FOMC minutes
is a revealing example:
While the members assessed risks surrounding such a forecast as decidedly tilted
to the upside, the slowing of the expansion should keep resource utilization from
rising substantially further, and this outlook together with the absence of significant early signs of rising inflationary pressures suggested the desirability of
a cautious “wait and see” policy stance at this point. In the current uncertain
environment, this would afford the Committee an opportunity to gauge the momentum of the expansion and the related degree of pressure on resources and
prices.


Furthermore, the Committee did not see high costs to “waiting and seeing.” They thought
any increase in inflation would be slow, and that, if needed, a limited tightening would be sufficient to rein in any emerging price pressures. This is seen in the following quote from
the same meeting:
The risks of waiting appeared to be limited, given that the evidence at hand
did not point to a step-up in inflation despite low unemployment and that the
current stance of monetary policy did not seem to be overly accommodative
. . . In these circumstances, any tendency for price pressures to mount was likely
to emerge only gradually and to be reversible through a relatively limited policy
adjustment.
Thus, it appears that uncertainty and associated risk management considerations supported
the Committee’s decision to leave policy on hold.
Of course, the potential fallout of the Asian financial crisis on the U.S. economy was a
major factor underlying the uncertainty about the outlook. The baseline scenario was that
the associated weakening in demand from abroad and a stronger dollar would be enough to
keep inflationary pressures in check but would not be strong enough to cause inflation or
employment to fall too low. As Chairman Greenspan noted in his February 1998 Humphrey-Hawkins testimony to Congress, there were substantial risks to this outlook, with the delicate
balance dictating unchanged policy:
However, we cannot rule out two other, more worrisome possibilities. On the one
hand, should the momentum to domestic spending not be offset significantly by
Asian or other developments, the U.S. economy would be on a track along which
spending could press too strongly against available resources to be consistent
with contained inflation. On the other, we also need to be alert to the possibility that the forces from Asia might damp activity and prices by more than is
desirable by exerting a particularly forceful drag on the volume of net exports
and the prices of imports. When confronted at the beginning of this month with
these, for the moment, finely balanced, though powerful forces, the members of
the Federal Open Market Committee decided that monetary policy should most
appropriately be kept on hold.

By late in the summer of 1998, this balance had changed, as the strains following the
Russian default weakened the outlook for foreign growth and tightened financial conditions in
the U.S. The Committee was concerned about the direct implications of these developments
for U.S. financial markets, already evident in the data, as well as for the real economy, where they were still just a prediction. The staff forecast prepared for the September FOMC meeting
reduced the projection for growth in 1999 by about 1/2 percentage point to 1-1/4 percent,
predicated on a 75 bp reduction in the funds rate spread out over three quarters. Such a
forecast was not a disaster – indeed, at 5.2 percent, the unemployment rate projected for the
end of 1999 was still below the staff’s estimate of its natural rate. Nonetheless, the FOMC
moved much faster than assumed by the staff, lowering rates 25 bps at its September and
November meetings as well as at an inter-meeting cut in October. According to the FOMC
minutes, the rate cuts were made in part as insurance against a worsening of financial
conditions and weakening activity. As they noted in September:
. . . such an action was desirable to cushion the likely adverse consequences on
future domestic economic activity of the global financial turmoil that had weakened foreign economies and of the tighter conditions in financial markets in the
United States that had resulted in part from that turmoil. At a time of abnormally high volatility and very substantial uncertainty, it was impossible to predict
how financial conditions in the United States would evolve . . . In any event, an
easing policy action at this point could provide added insurance against the risk
of a further worsening in financial conditions and a related curtailment in the
availability of credit to many borrowers.
While the references to insurance are clear, the case can also be made that these policy moves were in large part intended to realign the expected paths for growth and inflation with the FOMC's policy goals. At this time the prescriptions to address the risks
to their policy goals were in conflict – risks to achieving the inflation mandate called for
higher interest rates while risks to achieving the maximum employment mandate called for
lower rates. As the above quote from Chairman Greenspan’s 1998 testimony indicated, in
early 1998 the Committee thought that a 5-1/2 percent funds rate setting kept these risks

in balance. Subsequently, as the odds of economic weakness increased, the Committee cut
rates to bring the risks to the two goals back into balance. As Chairman Greenspan said in
his February 1999 Humphrey-Hawkins testimony:
To cushion the domestic economy from the impact of the increasing weakness
in foreign economies and the less accommodative conditions in U.S. financial
markets, the FOMC, beginning in late September, undertook three policy easings
. . . These actions were taken to rebalance the risks to the outlook, and, in the
event, the markets have recovered appreciably.
So were the late 1998 rate moves a balancing of forecast probabilities, insurance against
a downside skew in possible outcomes, or some combination of both? There is no easy
answer. This motivates our econometric work in Section 4 that seeks to disentangle the
normal response of policy to expected outcomes from uncertainty and other related factors
that may have influenced the policy decision.

3.3 2000–2001

In the end, the economy weathered the fallout from the Russian default well. The strength of
the economy and underlying inflationary pressures led the FOMC to execute a series of rate
hikes that brought the funds rate up to 6.5 percent by May of 2000. At the time of the June
2000 FOMC meeting, the unemployment rate stood at 4.1 percent and core PCE inflation,
which the Committee was now using as its main measure of consumer price inflation, was
running at about 1-3/4 percent, up from 1-1/2 percent in 1999. The staff forecasted growth
would moderate to a rate near or a little below potential but that unemployment would
remain near its current level and that inflation would rise to 2.3 percent in 2001 – and
this forecast was predicated on another 75 bps tightening. Despite this outlook, the FOMC
decided to leave rates unchanged. What drove this pause? It seems likely to us that risk
management was an important consideration.
In particular, the FOMC appeared to want to see how uncertainty over the outlook
would play out. First, the incoming data and anecdotal reports from Committee members’

business contacts pointed to a slowdown in growth, but the degree of the slowing was not
clear. Second, with rates having risen substantially over the past year, and given the lags
from policy changes to economic activity, it was unlikely that the full effects of the hikes had
yet been felt. Given the relatively high level of the funds rate and the slowdown in growth
that appeared to be in train, the Committee seemed wary of over-tightening. Third, despite the
staff forecast, the FOMC apparently considered the costs of waiting in terms of inflation
risks to be small. Accordingly, they thought it better to put a rate increase on hold and
see how the economy evolved. The June 2000 minutes contain a good deal of commentary
supporting this interpretation:29

The increasing though still tentative indications of some slowing in aggregate
demand, together with the likelihood that the earlier policy tightening actions
had not yet exerted their full retarding effects on spending, were key factors
in this decision. The uncertainties surrounding the outlook for the economy,
notably the extent and duration of the recent moderation in spending and the
effects of the appreciable tightening over the past year . . . reinforced the argument
for leaving the stance of policy unchanged at this meeting and weighting incoming
data carefully. . . .Members generally saw little risk in deferring any further policy
tightening move, particularly since the possibility that underlying inflation would
worsen appreciably seemed remote under prevailing circumstances.
In the second half of 2000 it became increasingly evident that growth had slowed to a
pace somewhat below trend and inflation was moving up at a slower pace than the staff
had projected in June. The Committee’s response was to hold the funds rate at 6.5 percent
through the end of 2000. But the data around the turn of the year proved to be weaker
than anticipated. In a conference call on January 3, 2001, the FOMC cut the funds rate to
6 percent and lowered it again to 5-1/2 percent at the end-of-month FOMC meeting.30
29. The Committee had already invoked such arguments earlier in this cycle. As noted in the July 2000 Monetary Policy Report: "The FOMC considered larger policy moves at its first two meetings of 2000 but concluded that significant uncertainty about the outlook for the expansion of aggregate demand in relation to that of aggregate supply, including the timing and strength of the economy's response to earlier monetary policy tightenings, warranted a more limited policy action."

30. At that meeting the Board staff was forecasting that growth would stagnate in the first half of the year, but that the economy would avoid an outright recession even with the funds rate at 5.75 percent. Core PCE inflation was projected to rise modestly to a little under 2.0 percent.

In justifying the aggressive ease, the minutes stated:
Such a policy move in conjunction with the 50 basis point reduction in early January would represent a relatively aggressive policy adjustment in a short period
of time, but the members agreed on its desirability in light of the rapid weakening in the economic expansion in recent months and associated deterioration
in business and consumer confidence. The extent and duration of the current
economic correction remained uncertain, but the stimulus . . . would help guard
against cumulative weakness in economic activity and would support the positive factors that seemed likely to promote recovery later in the year . . . In current
circumstances, members saw little inflation risk in such a “front-loaded” easing policy, given the reduced pressures on resources stemming from the sluggish
performance of the economy and relatively subdued expectations of inflation.
According to this quote, not only was the actual weakening in activity an important consideration in the policy decision, but uncertainty over the extent of the downturn and the
possibility that it might turn into an outright recession seemed to spur the Committee to
make a large move. The “help guard against cumulative weakness” and “front-loaded” language could be read as the Committee taking out some additional insurance against the
possibility that the weakening activity would snowball into a recession. This could have
reflected a concern about the kinds of non-linear output dynamics or perhaps non-quadratic
losses associated with a large recession that we discussed in Section 3.1.
The FOMC steadily brought the funds rate down further over the course of 2001 against
the backdrop of weakening activity, and the economy seemed to be skirting a recession.
Then the tragic events of September 11 occurred. There was, of course, huge uncertainty
over how international developments, logistics disruptions, and the sentiment of households,
businesses, and financial markets would affect spending and production. By November the
staff was forecasting a modest recession: growth in the second half of 2001 was projected to
decline 1-1/2 percent at an annual rate and rise at just a 1-1/4 percent rate in the first half
of 2002. By the end of 2002 the unemployment rate was projected to rise to 6.1 percent and
core PCE inflation was projected to be 1-1/2 percent. These forecasts were predicated on
the funds rate remaining flat at 2-1/4 percent.

The FOMC, however, was worried about something more serious than the shallow recession forecast by the staff. Furthermore, a new risk came to light, namely the chance that disinflationary pressures might emerge that, once established, would be difficult to fight with the funds rate already low. In response, the Committee again acted aggressively,
cutting the funds rate 50 bps in a conference call on September 17 and again at their regular
meetings in October and November. The November 2001 FOMC meeting minutes note:
. . . members stressed the absence of evidence that the economy was beginning
to stabilize and some commented that indications of economic weakness had
in fact intensified. Moreover, it was likely in the view of these members that
core inflation, which was already modest, would decelerate further. In these
circumstances insufficient monetary policy stimulus would risk a more extended
contraction of the economy and possibly even downward pressures on prices that
could be difficult to counter with the current federal funds rate already quite
low. Should the economy display unanticipated strength in the near term, the
emerging need for a tightening action would be a highly welcome development
that could be readily accommodated in a timely manner to forestall any potential
pickup in inflation.
This passage suggests that the large rate cuts were not only aimed at preventing the
economy from falling into a serious recession with deflationary consequences, but that the
Committee was also concerned that such an outcome “could be difficult to counter with the
current funds rate already quite low.” Accordingly, the aggressive policy moves could in part
also have reflected insurance against the future possibility of being constrained by the ZLB,
precisely the policy scenario and optimal policy prescription described in Section 2.

3.4 Quantifying references to uncertainty and insurance in FOMC Minutes

We have shown that Federal Reserve communications contain many references that suggest
uncertainty or insurance motives influenced the stance of policy. But, has risk management
had a material impact on policy? We now show how we quantified these references into
variables that can be used to assess the importance of risk management for actual policy
decisions.
In the spirit of the narrative approach pioneered by Romer and Romer (1989), we built
judgmental indicators based on our reading of the FOMC minutes covering the period from
the beginning of Greenspan’s chairmanship in 1987 to 2008. We concentrated on the paragraphs that describe the Committee’s rationale for its policy decision, reading these passages
for references to when uncertainty or insurance considerations appeared closely linked to the
FOMC’s decision. Other portions of the minutes were excluded from our analysis in order
to better isolate arguments that directly influenced the policy decision from more general
discussions of unusual data or forecast uncertainty.
We constructed two separate judgmental variables, one for uncertainty (hUnc) and one
for insurance (hIns), where “h” stands for “human-coded.” The uncertainty variable was
coded to plus (minus) one if we judged that the Committee appealed to uncertainty to
position the funds rate higher (lower) than it otherwise would be based on the staff forecast
alone. If uncertainty did not appear to be an important factor influencing the policy decision,
we coded the indicator as zero. We coded the insurance variable similarly by identifying when
the minutes cited insurance against some adverse outcome as an important consideration in
the stance of policy.31

31. A value of plus (minus) one for either variable could reflect the Committee raising (lowering) rates by more (less) than they would have if they ignored uncertainty or insurance or a decision to keep the funds rate at its current level when a forecast-only call would have been to lower (raise) rates.
As an example of our coding, consider the June 2000 meeting discussed above when the
FOMC decided to wait to assess future developments before taking further policy action.
The commentary below highlights the role of uncertainty in this decision (our italics):
The increasing though still tentative indications of some slowing in aggregate
demand, together with the likelihood that the earlier policy tightening actions
had not yet exerted their full retarding effects on spending, were key factors in
this decision. The uncertainties surrounding the outlook for the economy, notably
the extent and duration of the recent moderation in spending and the effects of
the appreciable tightening over the past year, including the 1/2 percentage point
increase in the intended federal funds rate at the May meeting, reinforced the
argument for leaving the stance of policy unchanged at this meeting and weighting
incoming data carefully.
We coded this meeting as a minus one for hUnc – rates were lower because uncertainty over
the economic outlook and the effects of past policy moves appear to have been important
factors in the Committee’s decision not to raise rates. Similarly, the January and November
2001 quotes cited above led us to code hIns as a minus one for those meetings, since, as we
noted in the narrative, the Committee appeared to be making aggressive rate moves in part
to insure against downside risks to the baseline scenario.
We did not code all mentions of uncertainty or insurance as a plus or minus one. For
example, the March 1998 minutes referred to uncertainties over the economic outlook and
said that the Committee could wait for further developments before tightening to counter
potential inflation developments. However, at that time the FOMC was not obviously in
the midst of a tightening cycle; the baseline forecast seemed consistent with the funds rate
setting at the time; and the commentary over the need to tighten was in reference to an
indefinite point in the future. So, in our judgment, uncertainty did not appear to be a very
important factor holding back a rate increase at this meeting and we coded it as a zero.32
Of course, this coding of the minutes is inherently subjective and there is no definitive way
to judge the accuracy of the decisions we made. Consequently we also constructed objective
measures of how often references to uncertainty or insurance appeared in the policy paragraphs of the minutes. In particular, we constructed variables which measure the percentage
of sentences containing words related to uncertainty or insurance in conjunction with references to economic activity and/or inflation.33 The measures for uncertainty and insurance
are denoted mUnc and mIns, where “m” indicates these variables are “machine-coded.”
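The appendix spells out the actual coding algorithm; the sketch below only illustrates the flavor of a sentence-count measure of this type. The keyword lists and the sentence splitter are simplified stand-ins of our own choosing, not the terms used in the paper.

import re

# Illustrative (hypothetical) keyword lists; the paper's own lists are in its appendix.
UNCERTAINTY_TERMS = ("uncertain", "uncertainty")
ECONOMY_TERMS = ("inflation", "price", "activity", "growth", "spending", "output")

def pct_uncertainty_sentences(policy_paragraphs):
    """Percent of sentences in the minutes' policy paragraphs that mention an
    uncertainty term together with a reference to activity or inflation."""
    sentences = [s.strip() for s in re.split(r"(?<=[.?!])\s+", policy_paragraphs) if s.strip()]
    if not sentences:
        return 0.0
    hits = sum(
        1 for s in sentences
        if any(u in s.lower() for u in UNCERTAINTY_TERMS)
        and any(e in s.lower() for e in ECONOMY_TERMS)
    )
    return 100.0 * hits / len(sentences)

Applied to the policy paragraphs of a single set of minutes, such a function would return a number comparable to the mUnc series plotted in Figure 4; an analogous keyword list for insurance would yield an mIns-style series.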
Figures 4 and 5 show plots of our minutes-based uncertainty and insurance variables.
32. From the minutes: " . . . should the strength of the economic expansion and the firming of labor markets persist, policy tightening likely would be needed at some point to head off imbalances that over time would undermine the expansion in economic activity. Most saw little urgency to tighten policy at this meeting, however . . . (o)n balance, in light of the uncertainties in the outlook and given that a variety of special factors would continue to contain inflation for a time, the Committee could await further developments bearing on the strength of inflationary pressures without incurring a significant risk . . . ."

33. The appendix describes our coding algorithm in more detail.

Figure 4: Minutes-based uncertainty variables
[The figure plots, by FOMC meeting month from January 1985 to January 2010, the human-coded indicator (scaled −1 to 1) and the percent of sentences in the minutes' policy paragraphs referencing uncertainty (0 to 30 percent).]

Non-zero values of the human-coded variables are indicated by red dots and the blue bars
indicate the machine-coded sentence counts. The uncertainty indicator hUnc “turns on” in
31 out of the 128 meetings between 1993 and 2008. Indications that insurance was a factor
in shading policy are not as common, but still show up 14 times in hIns. Most of the time
– 24 for uncertainty and 11 for insurance – we judged that rates were set lower than they
otherwise would have been to account for these factors.
The hUnc and hIns codings are not always reflected in the sentence counts. There are also
meetings where the sentence counts are positive but we did not judge them to indicate that
rates were set differently than they “normally” would have been. For example, in March of
2007 hUnc is coded zero for uncertainty whereas mUnc finds uncertainty referenced in nearly
one-third of the sentences in the policy section of the minutes. Inspection of the minutes
indicates that the Committee was uncertain over both the degree to which the economy
Figure 5: Minutes-based insurance variables
[The figure plots, by FOMC meeting month from January 1985 to January 2010, the human-coded indicator (scaled −1 to 1) and the percent of sentences in the minutes' policy paragraphs referencing insurance (0 to 20 percent).]

was weakening and whether their expectation of a decline in inflation, which was running
uncomfortably high at the time, actually would materialize. In the end, they did not adjust
current policy in response to these conflicting uncertainties. Hence we coded hUnc to zero
in this case.
Note that we did not attempt to measure a variable for risk management per se. The minutes often contain discussions of policies aimed at addressing risks to attaining the Committee’s goals. However, many times this commentary appears to surround policy adjustments
aimed instead at balancing (possibly conflicting) risks to the outlook for output and inflation, not unlike the response to changes in economic conditions prescribed by the canonical
framework for studying optimal policy under discretion. Such risk-balancing was discussed
in our narrative of the 1997-1998 period.34
34. Indeed, for much of our sample period, the Committee discussed risks about the future evolution of output or inflation in order to signal a possible bias in the direction of upcoming rate actions. For example, in the July 1997 meeting described earlier, the minutes indicate: "An asymmetric directive was consistent with their view that the risks clearly were in the direction of excessive demand pressures . . . " Since the Committee delayed tightening at this meeting, this "risk" reference communicated that the risks to price stability presented by the baseline outlook would likely eventually call for rate increases. But it does not appear to be a reference that variance or skewness in the distribution of possible inflation outcomes should dictate some non-standard policy response.

4 Econometric evidence of risk management

So far we have uncovered clear evidence that risk management considerations have been
a pervasive feature of Federal Reserve communications. But, it is not clear at this stage
whether risk management has had a material impact on the FOMC’s policy decisions. If it
has, then calling for a risk management approach in the current policy environment would be
consistent with a well-established approach to monetary policy. In this section we describe
econometric evidence suggesting that risk management has had a material impact on the
FOMC’s funds rate choices in the pre-ZLB era.
We estimate monetary policy reaction functions of the kind studied in Clarida, Gali, and
Gertler (2000) and many other papers. These have the funds rate set as a linear function
of output gap and inflation forecasts; there is no role for risk management unless risk feeds
directly into the point forecasts. To quantify the role of risk beyond such a direct influence
we add variables that proxy for risk to the reaction function.35

35. There is a large literature that examines non-linearities in policy reaction functions (see Gnabo and Moccero (2014), Mumtaz and Surico (2015), and Tenreyro and Thwaites (2015) for reviews of this literature and recent estimates), but surprisingly little work that speaks directly to risk management. We discuss the related literature below.

4.1 Empirical strategy

Let Rt∗ denote the notional target for the funds rate in period t. We assume the FOMC sets
this target according to
Rt∗ = R∗ + β(Et[πt,k] − π∗) + γEt[xt,q] + µst,     (13)


where πt,k denotes the average annualized inflation rate from t to t + k, π ∗ is the FOMC’s
target for inflation, xt,q is the average output gap from t to t + q, st is a risk management
proxy, and Et denotes expectations conditional on information available to the FOMC at
date t. The coefficients β, γ and µ are fixed over time. R∗ is the desired nominal rate when
inflation is at target, the output gap is closed, and risk does not influence policy other than through the forecast (µ = 0). If the average output and inflation gaps are both zero and the
FOMC acts as if the natural rate is constant and out of its control, then R∗ = r∗ + π ∗ , where
r∗ is the real natural rate of interest.36
We make two more assumptions to arrive at our estimation equation. First, the FOMC has a preference for interest rate smoothing and so does not choose to hit its notional target instantaneously; as a practical matter it is necessary to include lags of the funds rate to fit the data. Second, the FOMC does not have perfect control over interest rates, which gives rise to an error term υt. These assumptions lead to the following specification for the actual funds rate, Rt:

Rt = (1 − A(1))Rt∗ + A(L)Rt−1 + υt,     (14)

where A(L) = ∑_{j=0}^{N−1} a_{j+1}L^j is a polynomial in the lag operator L, with N denoting the number of funds rate lags. The error term υt is assumed to be mean zero and serially independent. Combining (13) and (14) yields our estimation equation:

Rt = b0 + b1Et[πt,k] + b2Et[xt,q] + A(L)Rt−1 + b3st + υt,     (15)

where bi, i = 0, 1, 2, 3, are simple functions of A(1), β, γ, µ, r∗ and π∗.37
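The mapping follows directly from substituting (13) into (14):

Rt = (1 − A(1))(R∗ − βπ∗) + (1 − A(1))βEt[πt,k] + (1 − A(1))γEt[xt,q] + A(L)Rt−1 + (1 − A(1))µst + υt,

so that b0 = (1 − A(1))(R∗ − βπ∗), b1 = (1 − A(1))β, b2 = (1 − A(1))γ and b3 = (1 − A(1))µ. The structural coefficients β, γ and µ can therefore be recovered from OLS estimates of (15) by dividing b1, b2 and b3 by one minus the sum of the estimated lag coefficients.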
We use the publicly available Board staff forecasts of core CPI inflation (in percentage points) and the output gap (percentage point deviations of real GDP from its potential) to measure Et[πt,k] and Et[xt,q], with k = q = 3.38 These forecasts are available for every FOMC meeting.
36. There is no presumption that (13) reflects optimal policy and so assuming a constant natural rate is not inconsistent with our theoretical analysis. We explored using forecasted growth in potential output derived from Board staff forecasts to proxy for the natural rate and found this did not affect our results.

37. We make no attempt to address the possibility of hitting the ZLB in our estimation. See Chevapatrakul, Kim, and Mizen (2009) and Kiesel and Wolters (2014) for papers that do this.

38. The appendix describes our data in more detail.

We estimate (15) both meeting-by-meeting and quarter-by-quarter.39 When we estimate (15) at the quarterly frequency we use staff forecasts corresponding to FOMC meetings closest to the middle of each quarter.40 We measure Rt at the meeting frequency
with the funds rate target announced (or estimated) at the end of the day of a meeting
and at the quarterly frequency with the average target funds rate over the 30 trading days
following the meeting closest to the middle of the quarter. Provided the error term υt is
serially uncorrelated and is orthogonal to the forecasts and the risk proxies we can obtain
consistent estimates of β, γ and µ by estimating (15) by ordinary least squares. We keep N
sufficiently large to ensure that υt is serially uncorrelated.
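As a concrete illustration, the sketch below shows how a reaction function like (15) could be estimated by OLS with heteroskedasticity-robust standard errors. The data frame, column names, and sample handling are placeholders of our own, not the authors' code; the Greenbook forecasts and funds rate targets would have to be assembled as described above.

import statsmodels.api as sm

def estimate_rule(df, risk_proxy, n_lags=5):
    # df is assumed to be a pandas DataFrame with one row per FOMC meeting and
    # columns 'R' (funds rate target), 'pi_fcst' and 'gap_fcst' (staff forecasts
    # of inflation and the output gap), plus the named risk proxy.
    data = df.copy()
    # Normalize the proxy so its coefficient is the percentage point response of
    # the funds rate to a one standard deviation change (hUnc/hIns are left as is).
    if risk_proxy not in ("hUnc", "hIns"):
        data[risk_proxy] = (data[risk_proxy] - data[risk_proxy].mean()) / data[risk_proxy].std()
    # Lags of the funds rate capture the interest rate smoothing term A(L)R_{t-1}.
    lag_names = []
    for j in range(1, n_lags + 1):
        name = "R_lag%d" % j
        data[name] = data["R"].shift(j)
        lag_names.append(name)
    data = data.dropna()
    X = sm.add_constant(data[["pi_fcst", "gap_fcst"] + lag_names + [risk_proxy]])
    # Heteroskedasticity-robust standard errors, as reported in the tables below.
    fit = sm.OLS(data["R"], X).fit(cov_type="HC1")
    # Back out the structural response mu from b3 = (1 - A(1)) * mu.
    mu_hat = fit.params[risk_proxy] / (1.0 - fit.params[lag_names].sum())
    return fit, mu_hat

The number of lags would be chosen, as in the text, so that the residuals show no remaining serial correlation.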
To quantify the role of risk we study the magnitude and statistical significance of estimates of µ in (13). An insignificant estimate of µ cannot be interpreted as evidence against
a role for risk management because risk might operate by influencing point forecasts as in
our forward-looking model. We also could find no effect because risk might tilt policy in
opposite directions depending on the circumstances. With the exception of our human-coded
FOMC-based variables, none of our risk proxies accounts for the fact that perceived risks to
the forecast might have different effects on policy depending on the nature of the risk and the
state of the economy. For example, an increase in uncertainty about the inflation outlook should lead to tighter policy if it occurs during a period of heightened concern about rising inflation, but to looser policy if the concern is over unwanted disinflation. As such, estimates of the effect of any given proxy will at best reflect the type of risk, and the circumstances in which it arose, that predominated over the sample period.
Finally, we do not allow for the coefficients on the forecasts to depend on our risk proxies
as is suggested by the work of Brainard (1967) and others. However we show in the appendix
that if these forecast coefficients are linear functions of risk then the null hypothesis that a
given proxy’s coefficient is zero in our now misspecified model encompasses the null that the
forecast coefficients are invariant to risk as measured by that proxy.
39. We assume meetings are equally spaced even though this is not true in practice. We account for this discrepancy when we calculate standard errors by allowing for heteroskedasticity.

40. Gnabo and Moccero (2014) also estimate quarterly reaction functions using Board staff forecasts.

4.2 Proxies for risk management

In addition to our human- and machine-coded FOMC-based variables we consider several
proxies for risk management that do not rely on interpreting the FOMC minutes. Two of
these variables are constructed using the Board staff’s forecast seen by the FOMC at their
regular meetings and so we study them using our meeting frequency reaction functions. The
remaining ones are measured at the quarterly frequency and can be divided into two groups
based on whether they primarily reflect variance or skewness in the forecast.
The two additional FOMC-based proxies involve revisions to the Board staff’s forecasts
for the output gap (frGap) and core CPI inflation (frInf). The revisions correspond to
changes between meeting m and m − 1 in the forecasts over the same one year period that
starts in the quarter of meeting m − 1. A big change in the forecast is usually triggered
by unusual events that may be difficult to interpret and hence generate uncertainty about
the forecast. If the Committee were concerned only with the effect of these events on its point forecast, then the post-shock forecasts of the output gap or inflation would be sufficient to describe the policy setting. However, if uncertainty has a separate effect on policy then the forecast
revisions might enter significantly.
Three of the quarterly proxies exploit financial market data: VXO, SPD and JLN. VXO
is the Chicago Board Options Exchange’s measure of market participants’ expectations of
S&P 500 stock index volatility over the next 30 days. Since the S&P 500 reflects earnings
expectations VXO should, at least in part, measure market participants’ uncertainty about
the economic outlook.41 SPD is the difference between the quarterly average of daily yields
on BAA corporate bonds and 10-year Treasury bonds. Gilchrist and Zakrajšek (2012)
demonstrate that this variable measures private-sector default risk plus other factors that
may indicate downside risks to economic growth.42 JLN is Jurado, Ludvigson, and Ng
(2015)’s measure of the common variation in the one-year-ahead unforecastable components
of a large number of activity, inflation and financial indicators. Given its basis in measuring
uncertainty about macroeconomic forecasts JLN is a natural risk proxy to consider. But,
unlike VXO and SPD, it does not measure real-time uncertainty, and similar to these two
measures it confounds macroeconomic and financial uncertainty.

41. Using a VAR framework, Bekaert, Hoerova, and Lo Duca (2013) find weak evidence that positive innovations to VXO lead to looser policy. Gnabo and Moccero (2014) find that policy responds more aggressively to economic conditions and is less inertial in periods of high uncertainty as measured by VXO.

42. Alcidi, Flamini, and Fracasso (2011), Castelnuovo (2003) and Gerlach-Kristen (2004) consider reaction functions including SPD.
The remaining proxies are based on the Survey of Professional Forecasters (SPF) which
surveys forecasters about their point forecasts of GDP growth and GDP deflator inflation
and their probability distributions for these forecasts. We use both kinds of information
to construct measures of variance and skewness in the economic outlook one year ahead.43
Variance is measured using the median among forecasters of the standard deviations calculated from each individual’s probability distribution (vGDP and vInf) and the interquartile
range of point forecasts across individuals (DvGDP and DvInf).44 Skewness is measured
using the median of the individual forecasters’ mean minus mode (sGDP and sInf) and the
difference between the mean and the mode of the cross-forecaster distribution of point forecasts (DsGDP and DsInf). So a positive (negative) value for one of these proxies represents
upside (downside) risk to the modal forecast. The principal advantage of these proxies is
that they are real-time measures of perceived risks in the forecast. The main drawback of
the measures based on survey respondents’ forecast distributions is that the bins they are
asked to put probability mass on are relatively wide, so statistics based on them may contain substantial measurement error. The proxies based on the cross-section of forecasts are
properly thought of as measuring forecaster disagreement rather than variance or skewness
in the outlook per se. However there is a large literature that uses forecaster disagreement
as a proxy for perceived risk.45
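To fix ideas, the sketch below shows one way variance and skewness proxies of this type could be computed from SPF-style data. The bin handling and the histogram-based cross-forecaster mode are simplifications of our own; they are not the construction used in the paper, which relies on D'Amico and Orphanides (2014)'s procedure to map the survey bins into four-quarter-ahead distributions.

import numpy as np

def bin_moments(probs, midpoints):
    """Mean, standard deviation, and mode implied by one forecaster's
    probability distribution over outcome bins (midpoints assumed known)."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()
    midpoints = np.asarray(midpoints, dtype=float)
    mean = np.dot(probs, midpoints)
    sd = np.sqrt(np.dot(probs, (midpoints - mean) ** 2))
    mode = midpoints[np.argmax(probs)]
    return mean, sd, mode

def spf_proxies(point_forecasts, bin_probs, midpoints):
    """Variance and skewness proxies in the spirit of Section 4.2.

    point_forecasts: array of individual point forecasts, one per forecaster.
    bin_probs: list of probability vectors, one per forecaster."""
    moments = np.array([bin_moments(p, midpoints) for p in bin_probs])
    means, sds, modes = moments[:, 0], moments[:, 1], moments[:, 2]
    v = np.median(sds)                    # analogue of vGDP / vInf
    s = np.median(means - modes)          # analogue of sGDP / sInf
    q75, q25 = np.percentile(point_forecasts, [75, 25])
    Dv = q75 - q25                        # analogue of DvGDP / DvInf
    # Cross-forecaster mode approximated by the most populated histogram bin
    # (an assumption; the paper does not spell out how the mode is computed).
    counts, edges = np.histogram(point_forecasts, bins=10)
    mode_cross = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    Ds = np.mean(point_forecasts) - mode_cross   # analogue of DsGDP / DsInf
    return v, Dv, s, Ds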
All estimates are based on samples that end in 2008 to avoid the ZLB period, but begin at different dates to address idiosyncratic features of the data. The benchmark start date is determined by the onset of Alan Greenspan's tenure on the FOMC in 1987, but later dates are used in several cases.
43. The forecast distributions are for growth and inflation in the current and following year. We use D’Amico and Orphanides (2014)'s procedure to translate these into distributions of four-quarter-ahead forecasts.

44. Gnabo and Moccero (2014) find statistically insignificant effects of DvInf on monetary policy.

45. As discussed by Baker, Bloom, and Davis (2015) there is no consensus on how good a proxy it is. Note that we do not study Baker et al. (2015)'s measure of uncertainty since it confounds uncertainty about monetary policy and the economic outlook.

The sample for the FOMC-based indicators starts in 1993 because
prior to then inter-meeting changes in the target funds rate were much more common than
afterwards; the Committee often voted on a bias to future policy moves and the Chairman
subsequently acted on his discretion. We cannot use inter-meeting moves because we lack
contemporaneous staff forecasts. Furthermore the change in the frequency of inter-meeting
moves raises the spectre of instability in the reaction function.46 The pre-1993 inter-meeting
moves are less of a concern for our quarterly models because in these specifications the funds
rate is not as closely tied to any particular meeting. So we chose to include these data
points to maximize the number of observations except when considering the proxies based
on individuals’ forecast distributions from the SPF. In these cases the first observation is
1992q1 to coincide with a discrete change in SPF methodology.47
Table 4: Summary statistics for the FOMC-based risk proxies

                                                                   Correlation with forecast of
Variable               Obs.    Mean    Std. Dev.    Min     Max     Inflation    Output gap
Inflation forecast     128     2.45     0.45        1.30    3.53      1.00         0.21
Output gap forecast    128    -0.14     1.58       -4.85    3.08      0.21         1.00
hUnc                   128    -0.13     0.48       -1       1        -0.23        -0.33
hIns                   128    -0.06     0.33       -1       1         0.18         0.15
mUnc                   128     2.92     4.80        0      30.8      -0.06         0.14
mIns                   128     0.83     2.45        0      16.7      -0.10         0.08
frInf                  128    -0.01     0.18       -0.63    0.63      0.23         0.01
frGap                  128    -0.01     0.41       -2.00    0.77      0.24         0.29

Tables 4 and 5 display summary statistics for Board staff forecasts of inflation and the output gap and the various proxies for risk management at the meeting and quarterly frequencies.

46. Between 1990 and 1992 only 4 of the 18 changes in the funds rate target occurred at a meeting. In contrast, between 1993 and 2008, 54 of the 61 changes in the funds rate target occurred at FOMC meetings. Ignoring inter-meeting moves causes specification problems if interest rate smoothing is a function of not just time, but also the number of policy moves. Indeed, when we estimated our meeting frequency models starting in 1987 our point estimates are (statistically) similar, but even with 5 funds rate lags substantial serial correlation remained in the residuals.

47. In 1992 the SPF narrows the bins it uses to summarize the forecast probability distributions of individual forecasters. See D’Amico and Orphanides (2014) and Andrade, Ghysels, and Idier (2013) for attempts to address this change in bin sizes.

Table 5: Summary statistics for quarterly risk proxies

                                                                   Correlation with forecasts of
Variable               Obs.    Mean    Std. Dev.    Min     Max     Inflation    Output Gap
Inflation forecast      86     2.97     1.02        1.33    5.32      1.00        -0.04
Output gap forecast     86    -0.45     1.69       -4.4     3.08     -0.04         1.00
VXO                     86    21.0      8.48       10.6    62.1      -0.02         0.04
JLN                     86     0.96     0.05        0.89    1.22     -0.06        -0.04
vInf                    68     0.74     0.06        0.6     0.90     -0.22        -0.08
vGDP                    68     0.9      0.12        0.67    1.30     -0.22         0.22
DvInf                   86     0.6      0.18        0.24    1.10      0.25        -0.35
DvGDP                   86     0.73     0.27        0.3     1.64      0.37        -0.05
SPD                     86     2.11     0.65        1.37    5.60     -0.34        -0.34
sInf                    68     0.05     0.08       -0.12    0.30      0.23        -0.12
sGDP                    68    -0.10     0.19       -0.54    0.47     -0.10        -0.48
DsInf                   86     0.06     0.20       -0.5     0.51      0.01        -0.23
DsGDP                   86     0.3      0.27       -0.5     0.90     -0.22         0.21

The main thing to notice from these tables is that no risk proxy displays a particularly
This suggests that our proxies contain information that is not already incorporated into
these forecasts. Nevertheless there are some variables with moderately large correlations in
absolute value so the forecasts do reflect underlying risks to the outlook to some extent.
Interestingly, skewness in forecasters’ GDP forecasts (sGDP) is relatively strongly negatively
correlated with the outlook for activity.
Table 6: Cross-correlations of FOMC-based risk proxies

Variable    hUnc     hIns     mUnc     mIns     frInf
hIns       -0.05
mUnc       -0.07     0.02
mIns       -0.13    -0.09     0.04
frInf       0.10     0.05    -0.07     0.06
frGap      -0.11     0.08     0.05     0.11     0.25

Tables 6 and 7 display cross-correlations of the FOMC-based and quarterly proxies, respectively. As suggested by Figures 4 and 5, the human- and machine-coded FOMC variables
for uncertainty and insurance are essentially uncorrelated. These variables also appear unrelated to the forecast revision variables. Several correlations among the quarterly proxies are
worth noting. Forecaster variance and disagreement about the GDP growth outlook (vGDP
and DvGDP) are both positively correlated with VXO and SPD, suggesting the financial
variables do reflect to some extent uncertainty about the growth outlook. Also, the relatively
high correlation of SPD with sGDP suggests the former to some extent captures skewness
in the growth outlook. The correlations of vGDP with vInf and of DvGDP with DvInf are both fairly large, suggesting uncertainty about inflation and GDP often move together. The correlations of the corresponding forecaster uncertainty and disagreement variables (vGDP with DvGDP and vInf with DvInf) are somewhat large too. Evidently disagreement among forecasters is similar to the median amount of uncertainty they see. Finally, Jurado et al.
(2015)’s measure of macroeconomic uncertainty JLN is highly correlated with VXO and SPD
and to some extent DvGDP, but much less so with any of the other risk proxies.
Table 7: Cross-correlations of quarterly risk proxies

Variable    VXO      JLN      vInf     vGDP     DvInf    DvGDP    SPD      sInf     sGDP     DsInf
JLN         0.54
vInf        0.04     0.23
vGDP        0.40     0.29     0.40
DvInf       0.15     0.18     0.29     0.03
DvGDP       0.54     0.38     0.16     0.37     0.33
SPD         0.73     0.67     0.15     0.32     0.26     0.35
sInf       -0.27    -0.11     0.29    -0.16     0.08    -0.14    -0.18
sGDP        0.21     0.22    -0.09    -0.04     0.25     0.16     0.43    -0.15
DsInf      -0.28    -0.17     0.10    -0.24     0.13    -0.14    -0.08     0.04     0.15
DsGDP       0.07    -0.08     0.04     0.10    -0.22     0.08     0.02    -0.08    -0.17    -0.17

4.3 Policy rule findings

Table 8 shows our policy rule estimates with and without the various FOMC-based variables; Tables 9 and 10 show estimates with and without the quarterly variance and skewness proxies. Except for the human-coded variables hUnc and hIns, prior to estimation the risk proxies have been normalized to have mean zero and unit standard deviation, so their coefficients indicate percentage point responses of the funds rate to one standard deviation changes. The tables have the same layout: the first column shows the policy rule excluding any risk proxies and the other columns show the policy rules after adding the indicated risk proxy. The coefficient associated with a given risk proxy corresponds to an estimate of µ in (13). The speed of adjustment to the notional funds rate target (∑_{j=1}^{N} a_j) and the coefficients on the forecasts of inflation (β) and the output gap (γ) are similar across specifications and consistent with estimated forecast-based policy rules in the literature.
Table 8: FOMC-based risk proxies in monetary policy rules

                     (1)       (2)       (3)       (4)       (5)       (6)       (7)
∑_{j=1}^{5} aj      .81***    .80***    .81***    .80***    .81***    .84***    .81***
                    (.03)     (.03)     (.03)     (.03)     (.03)     (.03)     (.04)
β                   1.89***   1.95***   1.86***   1.90***   1.89***   1.75***   1.89***
                    (.17)     (.16)     (.17)     (.17)     (.17)     (.22)     (.17)
γ                   .85***    .88***    .85***    .83***    .85***    .80***    .85***
                    (.05)     (.05)     (.05)     (.05)     (.05)     (.06)     (.06)
hUnc                          .40**
                              (.16)
hIns                                    .48
                                        (.45)
mUnc                                              .11*
                                                  (.06)
mIns                                                        -.0006
                                                            (.05)
frGap                                                                 .47**
                                                                      (.19)
frInf                                                                           -.009
                                                                                (.14)
LM                  .31       .07       .59       .58       .31       .63       .20
Obs.                128       128       128       128       128       128       128

Note: Superscripts ***, ** and * indicate statistical significance at the 1, 5 and 10 percent levels. Standard errors are robust to heteroskedasticity. Entries in the "LM" row are p-values of Durbin's test for the null hypothesis of no serial correlation in the residuals up to fifth order.


From Table 8 we see that the coefficient on the human coding of uncertainty (hUnc) is statistically significant at the 5% level and it indicates that when uncertainty has shaded the policy decision above or below the forecast-only prescription it has moved the notional target by 40 bps. With interest rate smoothing the immediate impact is much smaller – roughly (1 − .80) × 40 ≈ 8 bps on impact – and the 95% confidence interval for this impact effect is 2-14 bps. The machine coding of uncertainty (mUnc) is significant at the 10% level but the effect is small. The insurance indicators (hIns and mIns) are not significant, but the point estimate of the hIns coefficient is similar to its uncertainty counterpart. The coefficient on the output gap forecast revision variable (frGap) is large and significant, indicating a one standard deviation positive surprise in the forecast raises the notional target by 47 bps over and above the impact this surprise has on the forecast itself.48 In contrast, revisions to the inflation outlook (frInf) do not influence policy beyond their direct effect on the forecast.
Table 9 shows clear evidence that variance in the economic outlook has shaded policy away
from the forecast-only prescription. The coefficients on VXO and JLN are both statistically
and economically significant with one standard deviation increases lowering the notional
target funds rate by 43 and 29 bps.49 Disagreement over the GDP forecast (DvGDP) has
a significant coefficient which is similar to the ones for VXO and JLN, suggesting that the
latter variables’ correlation with monetary policy reflects uncertainty in the growth outlook.
That all these coefficients are negative suggests that higher uncertainty about growth has
influenced the FOMC when it was concerned about recessionary dynamics and lowered the
funds rate more than prescribed by the forecast alone. The only other significant coefficient
in Table 9 corresponds to the measure of individual forecasters’ views about the uncertainty
in their inflation forecasts (vInf). In this case uncertainty shades the policy higher, by about
20 bps. This suggests that higher uncertainty about the inflation forecast has influenced the
FOMC when it was concerned about inflation rising above desired levels and raised rates above levels prescribed by the baseline forecast.
48. The magnitude and significance of this coefficient is largely driven by the sharp decline in the funds rate in 2008 that occurred alongside substantial downward revisions to the output gap forecast.

49. The JLN variable can be expressed as a linear combination of the three uncertainty measures constructed with the underlying activity, inflation, and financial indicators separately. We used Jurado et al. (2015)'s replication software to separate out these components, and found that the estimated effects of JLN are driven primarily by the financial indicators.

Table 9: Quarterly variance proxies in monetary policy rules

                     (1)       (2)       (3)       (4)       (5)       (6)       (7)
∑_{j=1}^{2} aj      .69***    .69***    .70***    .70***    .69***    .69***    .69***
                    (.03)     (.03)     (.03)     (.04)     (.04)     (.04)     (.03)
β                   1.73***   1.73***   1.72***   2.21***   2.13***   1.75***   1.88***
                    (.12)     (.11)     (.12)     (.17)     (.16)     (.11)     (.13)
γ                   .80***    .84***    .81***    .78***    .77***    .78***    .81***
                    (.06)     (.06)     (.06)     (.07)     (.06)     (.07)     (.06)
VXO                           -.43***
                              (.11)
JLN                                     -.29***
                                        (.09)
vInf                                              .21**
                                                  (.10)
vGDP                                                        .03
                                                            (.12)
DvInf                                                                 -.09
                                                                      (.13)
DvGDP                                                                           -.38***
                                                                                (.13)
LM                  .53       .56       .86       .71       .59       .52       .86
Obs.                86        86        86        68        68        86        86

Note: Superscripts ***, ** and * indicate statistical significance at the 1, 5 and 10 percent levels. Standard errors are robust to heteroskedasticity. Entries in the "LM" row are p-values of Durbin's test for the null hypothesis of no serial correlation in the residuals up to second order.

Similarly strong evidence that skewness has mattered for policy decisions is found in Table
10. The coefficients on the interest rate spread indicator of downside risks to activity (SPD),
skewness in the outlook for inflation measured from forecasters’ own forecast distributions
(sInf) and skewness in the inflation outlook measured across point forecasts (DsInf) are all
significant. An increase in perceived downside risks to activity lowers the funds rate, while
an increase in perceived upside risks to inflation raises it. The effects seem large; increases
in the skewness proxies change the notional target by -56, 23 and 40 bps, respectively. These
findings reinforce those for the variance proxies and similarly seem consistent with our reading


Table 10: Quarterly skewness proxies in monetary policy rules

                     (1)       (2)       (3)       (4)       (5)       (6)
∑_{j=1}^{2} aj      .69***    .68***    .71***    .70***    .72***    .69***
                    (.03)     (.03)     (.04)     (.04)     (.03)     (.03)
β                   1.73***   1.55***   2.02***   2.09***   1.74***   1.69***
                    (.12)     (.11)     (.16)     (.16)     (.10)     (.12)
γ                   .80***    .71***    .80***    .74***    .89***    .81***
                    (.06)     (.06)     (.07)     (.08)     (.08)     (.07)
SPD                           -.56***
                              (.14)
sInf                                    .23**
                                        (.10)
sGDP                                              -.15
                                                  (.11)
DsInf                                                       .40***
                                                            (.13)
DsGDP                                                                 -.16
                                                                      (.12)
LM                  .53       .90       .34       .67       .61       .62
Obs.                86        86        68        68        86        86

Note: Superscripts ***, ** and * indicate statistical significance at the 1, 5 and 10 percent levels. Standard errors are robust to heteroskedasticity. Entries in the "LM" row are p-values of Durbin's test for the null hypothesis of no serial correlation in the residuals up to second order.

of FOMC communications. The point estimates for skewness in the GDP outlook (sGDP
and DsGDP) have surprisingly negative signs. However, these coefficients are relatively small and insignificant.
Taken together, these results indicate that risk management concerns, broadly conceived,
have had a statistically and economically significant impact on policy decisions over and
above how those concerns are reflected in point forecasts. The effects we find suggest that
the Committee acted aggressively to offset concerns about declining growth or rising inflation.
We conclude from this econometric analysis that risk management does not just appear in
the words of the FOMC – it is reflected in their deeds as well.


5 Conclusion

We have focused on risk surrounding the forecast as a relevant consideration for monetary
policy near the ZLB, but other issues are relevant to the liftoff calculus as well. In particular,
policymakers may face large reputational costs of reversing a decision. Empirically, it is well
known that central banks tend to go through “tightening” and “easing” cycles, which in turn
induce substantial persistence in the short-term interest rate. Uncertainty over the outlook
may be one reason for this persistence. But another reason why policymakers might be
reluctant to reverse course is that it would damage their reputation, perhaps because the
public would lose confidence in the central bank’s ability to understand and stabilize the
economy. With high uncertainty, this reputational element would lead to more caution. In
the case of liftoff, it argues for a longer delay in raising rates to avoid the reputational costs
of having to return to the ZLB.
Another reputational concern is the signal the public might infer about the central bank’s
commitment to its stated policy goals. With regard to liftoff, suppose it occurred with
output or inflation still far below target. Large gaps on their own pose no threat to the
central bank's credibility if the public is confident that the economy is on a path to achieving
those objectives within a reasonable period of time and that the central bank is willing to
accommodate this path. However, if there is uncertainty over the strength of the economy, early liftoff might
be construed as a less-than-enthusiastic endorsement of the central bank’s ultimate policy
objectives. Motivated by the current situation, we have focused in the paper on the case of
a central bank that is undershooting its inflation target, but similar issues would arise if risk
management considerations dictated an aggressive tightening to guard against inflation and
the central bank failed to act accordingly. In a wide class of models, such losses of credibility
can have deleterious consequences for achieving the central bank’s objectives.


References
Adam, K. and R. M. Billi (2007). Discretionary monetary policy and the zero lower bound
on nominal interest rates. Journal of Monetary Economics 54 (3), 728–752.
Alcidi, C., A. Flamini, and A. Fracasso (2011). Policy regime changes, judgment and Taylor
rules in the Greenspan era. Economica 78, 89–107.
Andrade, P., E. Ghysels, and J. Idier (2013). Tails of inflation forecasts and tales of monetary
policy. UNC Kenan-Flagler Research Paper No. 2013-17.
Baker, S. R., N. Bloom, and S. J. Davis (2015). Measuring economic policy uncertainty.
Stanford University manuscript.
Barlevy, G. (2011). Robustness and macroeconomic policy. Annual Review of Economics 3,
1–24.
Barsky, R., A. Justiniano, and L. Melosi (2014). The natural rate and its usefulness for
monetary policy making. American Economic Review 104 (4), 37–43.
Basu, S. and B. Bundick (2013). Downside risk at the zero lower bound. Boston College
Manuscript.
Bekaert, G., M. Hoerova, and M. Lo Duca (2013). Risk, uncertainty and monetary policy.
Journal of Monetary Economics 60, 771–788.
Bernanke, B. S. (2012). The changing policy landscape. In Monetary Policy Since the Onset
of the Crisis, Economic Policy Symposium, pp. 1–22. Federal Reserve Bank of Kansas
City.
Bloom, N. (2009). The effect of uncertainty shocks. Econometrica 77 (3), 623–685.
Bomfim, A. and L. Meyer (2010). Quantifying the effects of Fed asset purchases on Treasury
yields. Macroeconomic Advisers, Monetary Policy Insights: Fixed Income Focus.
Born, B. and J. Pfeifer (2014). Policy risk and the business cycle. Journal of Monetary
Economics 68, 68–85.
Brainard, W. (1967). Uncertainty and the effectiveness of policy. American Economic
Review 57 (2), 411–425.
Campbell, J. R., C. L. Evans, J. D. Fisher, and A. Justiniano (2012). Macroeconomic effects
of Federal Reserve forward guidance. Brookings Papers on Economic Activity Spring, 1–54.
Castelnuovo, E. (2003). Taylor rules, omitted variables, and interest rate smoothing in the
US. Economics Letters 81, 55–59.
Chen, H., V. Cúrdia, and A. Ferrero (2012). The macroeconomic effects of large-scale asset
purchase programmes. Economic Journal 122, 289–315.

Chevapatrakul, T., T. Kim, and P. Mizen (2009). The Taylor principle and monetary policy
approaching a zero bound on nominal rates: Quantile regression results for the United
States and Japan. Journal of Money, Credit and Banking 41 (8), 1705–1723.
Christiano, L., M. Eichenbaum, and C. Evans (2005). Nominal rigidities and the dynamic
effects of a shock to monetary policy. Journal of Political Economy 113 (1), 1–45.
Clarida, R., J. Gali, and M. Gertler (2000). Monetary policy rules and macroeconomic
stability: Evidence and some theory. Quarterly Journal of Economics CXV (1), 147–180.
Coibion, O., Y. Gorodnichenko, and J. Wieland (2012). The optimal inflation rate in New
Keynesian models: Should central banks raise their inflation targets in light of the zero
lower bound? Review of Economic Studies 79 (4), 1371–1406.
Cúrdia, V., A. Ferrero, G. C. Ng, and A. Tambalotti (2015). Has U.S. monetary policy
tracked the efficient interest rate? Journal of Monetary Economics 70, 72–83.
D’Amico, S. and T. King (2013). Flow and stock effects of large-scale Treasury purchases:
Evidence on the importance of local supply. Journal of Financial Economics 108, 425–448.
D’Amico, S. and T. King (2015). Policy expectations, term premia, and macroeconomic
performance. Federal Reserve Bank of Chicago manuscript.
D’Amico, S. and A. Orphanides (2014). Inflation uncertainty and disagreement in bond risk
premia. Chicago Fed working paper 2014-24.
Dolado, J. J., P. R. María-Dolores, and M. Naveira (2005). Are monetary policy reaction
functions asymmetric?: The role of nonlinearity in the Phillips curve. European Economic
Review 49 (2), 485–503.
Dolado, J. J., P. R. María-Dolores, and F. J. Ruge-Murcia (2004). Nonlinear monetary
policy rules: Some new evidence for the U.S. Studies in Nonlinear Dynamics and Econometrics 8 (3).
Eggertsson, G. B. and M. Woodford (2003). The zero bound on interest rates and optimal
monetary policy. Brookings Papers on Economic Activity 2003 (1), 139–211.
Eichenbaum, M. and J. D. Fisher (2007). Estimating the frequency of price re-optimization
in Calvo-style models. Journal of Monetary Economics 54 (7), 2032–2047.
Engen, E. M., T. T. Laubach, and D. Reifschneider (2015). The macroeconomic effects of
the Federal Reserve’s unconventional monetary policy. Finance and Economics Discussion
Series 2012-5, Board of Governors of the Federal Reserve System.
English, W. B., D. López-Salido, and R. Tetlow (2013). The Federal Reserve’s framework for
monetary policy – recent changes and new questions. Federal Reserve Board, Finance and
Economics Discussion Series 2013-76.

Evans, C. L. (2014). Patience is a virtue when normalizing monetary policy. Text of speech
to Peterson Institute for International Economics.
Fernández-Villaverde, J., P. Guerrón-Quintana, K. Kuester, and J. Rubio-Ramírez (2012).
Fiscal volatility shocks and economic activity. Philadelphia Fed Working Paper No. 11-32/R.
Fuhrer, J. C. (2000). Habit formation in consumption and its implications for monetary-policy models. American Economic Review 90 (3), 367–390.
Gagnon, J., M. Raskin, J. Remache, and B. Sack (2010). Large-scale asset purchases by the
Federal Reserve: Did they work? Federal Reserve Bank of New York Staff Report No. 441.
Gali, J. (2008). Monetary Policy, Inflation, and the Business Cycle: An Introduction to the
New Keynesian Framework. Princeton, NJ: Princeton University Press.
Gali, J. and M. Gertler (1999). Inflation dynamics: a structural econometric analysis. Journal
of Monetary Economics 44, 192–222.
Gerlach-Kristen, P. (2004). Interest rate smoothing: Monetary policy inertia or unobserved
variables. Contributions in Macroeconomics 4 (1), 1534–6005.
Gilchrist, S. and E. Zakrajšek (2012). Credit spreads and business cycle fluctuations. American Economic Review 102 (4), 1692–1720.
Gnabo, J. and D. N. Moccero (2014). Risk management, nonlinearity and aggressiveness in
monetary policy: The case of the US Fed. Journal of Banking & Finance.
Goodfriend, M. (1991). Interest rates and the conduct of monetary policy. Carnegie-Rochester Conference Series on Public Policy 34, 7–30.
Greenspan, A. (2004). Risk and uncertainty in monetary policy. American Economic Review 94 (2), 33–40.
Hamilton, J. and J. Wu (2010). The effectiveness of alternative monetary policy tools in a
zero lower bound environment. UCSD manuscript.
Hamilton, J. D., E. S. Harris, J. Hatzius, and K. D. West (2015). The equilibrium real funds
rate: Past, present and future. UCSD manuscript.
Hansen, L. and T. Sargent (2008). Robustness. Princeton, NJ: Princeton University Press.
Jurado, K., S. Ludvigson, and S. Ng (2015). Measuring uncertainty. American Economic
Review 105 (3), 1177–1276.
Kiesel, K. and M. H. Wolters (2014). Estimating monetary policy rules when the zero lower
bound on nominal interest rates is approached. Kiel Working Paper No. 1898.

Kiley, M. T. (2012). The aggregate demand effects of short- and long-term interest rates. Finance and Economics Discussion Series 2012-54, Board of Governors of the Federal Reserve
System.
Kilian, L. and S. Manganelli (2008). The central banker as a risk manager: Estimating
the Federal Reserve’s preferences under Greenspan. Journal of Money, Credit and Banking 40 (6).
Krishnamurthy, A. and A. Vissing-Jorgensen (2013). The ins and outs of LSAPs. Kansas City
Federal Reserve Symposium on Global Dimensions of Unconventional Monetary Policy.
Krugman, P. R. (1998). It’s baaack: Japan’s slump and the return of the liquidity trap.
Brookings Papers on Economic Activity Fall, 137–187.
Laubach, T. and J. C. Williams (2003). Measuring the natural rate of interest. The Review
of Economics and Statistics 85 (4), 1063–1070.
Laxton, D., D. Rose, and D. Tambakis (1999). The U.S. Phillips curve: The case for asymmetry. Journal of Economic Dynamics & Control 23, 1459–1485.
Mas-Colell, A., M. D. Whinston, and J. R. Green (1995). Microeconomic Theory. Oxford,
United Kingdom: Oxford University Press.
Mumtaz, H. and P. Surico (2015). The transmission mechanism in good and bad times.
Forthcoming International Economic Review.
Nakata, T. (2013a). Optimal fiscal and monetary policy with occasionally binding zero
bound constraints. Finance and Economics Discussion Series 2013-40, Board of Governors
of the Federal Reserve System.
Nakata, T. (2013b). Uncertainty at the zero lower bound. Finance and Economics Discussion
Series 2013-09, Board of Governors of the Federal Reserve System.
Nakata, T. and S. Schmidt (2014). Conservatism and liquidity traps. Finance and Economics
Discussion Series 2014-105, Board of Governors of the Federal Reserve System.
Nakov, A. A. (2008). Optimal and simple monetary policy rules with zero floor on the
nominal interest rate. International Journal of Central Banking 4 (2), 73–127.
Orphanides, A. and J. C. Williams (2002). Robust monetary policy rules with unknown
natural rates. Brookings Papers on Economic Activity Fall, 63–145.
Reifschneider, D. and J. C. Williams (2000, November). Three lessons for monetary policy
in a low-inflation era. Journal of Money, Credit, and Banking 32 (4), 936–966.
Romer, C. D. and D. H. Romer (1989). Does monetary policy matter? A new test in the spirit
of Friedman and Schwartz. In O. Blanchard and S. Fischer (Eds.), NBER Macroeconomics
Annual 1989, Volume 4.

Rudebusch, G. and L. E. Svensson (1999). Policy rules for inflation targeting. In J. B. Taylor
(Ed.), Monetary Policy Rules. Chicago, IL: University of Chicago Press.
Rudebusch, G. D. (2002). Term structure evidence on interest rate smoothing and monetary
policy inertia. Journal of Monetary Economics 49, 1161–1187.
Sack, B. (2000). Does the Fed act gradually? A VAR analysis. Journal of Monetary Economics 46 (1), 229–256.
Smets, F. and R. Wouters (2007). Shocks and frictions in US business cycles: A Bayesian
DSGE approach. The American Economic Review 97 (3), 586–606.
Surico, P. (2007). The Fed’s monetary policy rule and U.S. inflation: The case of asymmetric
preferences. Journal of Economic Dynamics & Control 31, 305–324.
Svensson, L. and M. Woodford (2002). Optimal policy with partial information in a forward-looking model: Certainty-equivalence redux. Manuscript.
Svensson, L. and M. Woodford (2003). Indicator variables for optimal policy. Journal of
Monetary Economics 50, 691–720.
Taylor, J. B. (1993). Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy 39, 195–214.
Tenreyro, S. and G. Thwaites (2015). Pushing on a string: US monetary policy is less
powerful in recessions. Manuscript.
Werning, I. (2012). Managing a liquidity trap: Monetary and fiscal policy. MIT Manuscript.
Woodford, M. (2003). Interest and Prices. Princeton, NJ: Princeton University Press.
Woodford, M. (2012). Methods of policy accommodation at the interest-rate lower bound. In
The Changing Policy Landscape, Economic Policy Symposium, pp. 185–288. Federal Reserve
Bank of Kansas City.

