Conventional and Unconventional Monetary Policy
Vasco Cúrdia and Michael Woodford
The authors extend a standard New Keynesian model to incorporate heterogeneity in spending
opportunities and two sources of (potentially time-varying) credit spreads and to allow a role for
the central bank’s balance sheet in equilibrium determination. They use the model to investigate
the implications of imperfect financial intermediation for familiar monetary policy prescriptions,
and to consider additional dimensions of central bank policy—variations in the size and composition of the central bank’s balance sheet and payment of interest on reserves—alongside the traditional question of the proper choice of an operating target for an overnight policy rate. The
authors also give particular attention to the special problems that arise when the policy rate reaches
the zero lower bound. They show that it is possible within a single unified framework to identify
the criteria for policy to be optimal along each dimension. The suggested policy prescriptions apply
equally well when financial markets work efficiently as when they are substantially disrupted and
interest rate policy is constrained by the zero lower bound. (JEL E44, E52)
Federal Reserve Bank of St. Louis Review, July/August 2010, 92(4), pp. 229-64.

The recent global financial crisis has
confronted central banks with a number of questions beyond the scope of
many conventional accounts of the
theory of monetary policy. For example, do projections of the paths of inflation and of aggregate
real activity under some contemplated path for
policy provide a sufficient basis for monetary
policy decisions, or must financial conditions
be given independent weight in such deliberations? That the Fed began aggressively cutting
its target for the federal funds rate in late 2007
and early 2008, while inflation was arguably
increasing and real GDP was not yet known to be
contracting—and has nonetheless often been criticized as responding too slowly in this period—
suggests that familiar prescriptions that focus on

inflation and real GDP alone, such as the Taylor
(1993) rule or common accounts of “flexible
inflation targeting” (Svensson, 1997), may be
inadequate to circumstances of the kind recently
faced.1 As a further, more-specific question, how
should a central bank’s interest rate policy be
affected by the observation that other key interest
rates no longer co-move with the policy rate (the
federal funds rate in the case of the United States)
in the way they typically have in the past? The
dramatically different behavior of the LIBOR-OIS
spread, shown in Figure 1, since August 2007, has
drawn particular comment. Indeed, John Taylor
1. See Mishkin (2008) for discussion of some of the considerations behind the Fed’s relatively aggressive rate cuts in the early part of the crisis.

Vasco Cúrdia is an economist at the Federal Reserve Bank of New York. Michael Woodford is a professor in the department of economics at
Columbia University. The authors thank Michele Boldrin, Gauti Eggertsson, Marvin Goodfriend, Jamie McAndrews, John Taylor, and Kazuo
Ueda for helpful discussions; Neil Mehrotra, Ging Cee Ng, and Luminita Stevens for research assistance; and the National Science Foundation
for research support of the second author.

© 2010, The Federal Reserve Bank of St. Louis. The views expressed in this article are those of the author(s) and do not necessarily reflect the
views of the Federal Reserve System, the Board of Governors, the Federal Reserve Bank of New York or the regional Federal Reserve Banks.
Articles may be reprinted, reproduced, published, distributed, displayed, and transmitted in their entirety if copyright notice, author name(s),
and full citation are included. Abstracts, synopses, and other derivative works may be made only with prior written permission of the Federal
Reserve Bank of St. Louis.

Figure 1
Spread Between the U.S. Dollar LIBOR Rate and the Corresponding OIS Rate
NOTE: Series shown are the spreads between the 1-month, 3-month, and 6-month U.S. dollar LIBOR rates and the corresponding OIS rates, in basis points, over 2006-09.
SOURCE: Bloomberg.

himself (Taylor, 2008) has suggested that movements in this spread should be taken into account
in an extension of his famous rule.
In addition to such new questions about
traditional interest rate policy, the very focus on
interest rate policy as the central question about
monetary policy has been called into question.
The explosive growth of base money in the United
States since September 2008 (shown in Figure 2)
has led many commentators to suggest that the
main instrument of U.S. monetary policy has
changed from an interest rate policy to one often
described as “quantitative easing.” Does it make
sense to regard the supply of bank reserves (or
perhaps the monetary base) as an alternative or
superior operating target for monetary policy?
Does this (as some would argue) become the only
important monetary policy decision once the
overnight rate (the federal funds rate) has reached
the zero lower bound, as it effectively has in the
United States since December 2008 (Figure 3)?
And now that the Federal Reserve has legal authorization to pay interest on reserves (under the
Emergency Economic Stabilization Act of 2008),
how should this additional potential dimension
of policy be used?
The past two years have also seen dramatic
developments in the composition of the asset side
of the Fed’s balance sheet (Figure 4). Whereas the
Fed had largely held Treasury securities on its
balance sheet before the fall of 2007, other kinds
of assets—including both a variety of new “liquidity facilities” and new programs under which the
Fed has essentially become a direct lender to certain sectors of the economy—have rapidly grown
in importance. How to manage these programs has
Figure 2
Liabilities of the Federal Reserve
NOTE: Series shown (in $ billions) are Currency, Reserves, and Treasury SFA, 2007-09.
SOURCE: Federal Reserve Board.

occupied much of the attention of policymakers
recently. How should one think about the aims of
these programs and the relation of this new component of Fed policy to traditional interest rate
policy? Is Federal Reserve credit policy a substitute for interest rate policy, or should it have different goals from those of interest rate policy?
These are clearly questions that a theory of
monetary policy adequate to our present circumstances must address. Yet not only have these questions received relatively little attention until recently,
but the very models commonly used to evaluate
the effects of alternative prescriptions for monetary policy have little to say about them. Many
New Keynesian (NK) models abstract entirely
from the role of financial intermediation in the
economy (by assuming a representative household) or assume perfect risk-sharing (to facilitate
aggregation), so that the consequences of financial
disruptions cannot be addressed. Many models
include only a single interest rate (or only a single
interest rate of a given maturity, with long rates
tied to short rates through a no-arbitrage condition) and hence cannot say anything about the
proper response to changes in spreads. And many
models abstract entirely from the balance sheet
of the central bank, so that questions about the
additional dimensions of policy resulting from
the possibility of varying the size and composition of the balance sheet cannot be addressed.2
2. In a representative-household model, abstraction from the role of the central bank’s balance sheet in equilibrium determination is relatively innocuous; in particular, Eggertsson and Woodford (2003) show that introducing both a large range of possible choices about the composition of the balance sheet and transactions frictions that accord a special role to central bank liabilities need not imply any additional channels through which monetary policy can affect the economy when the zero lower bound is reached. However, we wish to reconsider this question in a framework where financial intermediation is both essential and costly.

Figure 3
FOMC Operating Target for the Federal Funds Rate
NOTE: The figure plots the FOMC’s operating target for the federal funds rate, in percent, 2007-09. Beginning with December 2008, the target rate is replaced with a target range of 0 to 25 basis points.
SOURCE: Federal Reserve Board.

The aim of the research summarized here3 is
to show how such issues can be addressed in a
dynamic stochastic general equilibrium (DSGE)
framework. We extend a basic NK model in directions that are crucial for analysis of the questions
just posed: We introduce (i) nontrivial heterogeneity in spending opportunities, so that financial
intermediation matters for the allocation of
resources; (ii) imperfections in private financial
intermediation and the possibility of disruptions
to the efficiency of intermediation for reasons
taken here as exogenous; and (iii) additional
dimensions of central bank policy, by explicitly
considering the role of the central bank’s balance
sheet in equilibrium determination and by allow-

ing central bank liabilities to supply transactions
services. Unlike some other recent approaches to
the introduction of financial intermediation into
NK DSGE models4—which arguably include
some features that allow for greater quantitative
realism—our aim has been to develop a model
that departs from a standard (representative-household) model in only the most minimal ways
necessary to address the issues raised above. In
this way, we can nest the standard (and extensively
studied) model as a special case of our model so
that the sources of our results and the precise
significance of the new model elements introduced can be more clearly understood.
3. This paper summarizes results that are explained in greater detail in Cúrdia and Woodford (2009a, 2009b, 2010).

4. This has been a very active literature of late. See, for example, Christiano, Motto, and Rostagno (2007), Faia and Monacelli (2007), Gerali et al. (2008), and Gertler and Karadi (2009).

Figure 4
Assets of the Federal Reserve
NOTE: Series shown (in $ billions) are Treasuries, Other Assets, AD, MBS, TAF, Swap Lines, CPFF, and Other Liquidity, 2007-09.
SOURCE: Federal Reserve Board.

1. A MODEL WITH MULTIPLE
DIMENSIONS OF MONETARY
POLICY
Here we sketch the key elements of our model,
which extends the model introduced in Cúrdia
and Woodford (2009a), to introduce the additional
dimensions of policy associated with the central
bank’s balance sheet. (See this earlier paper, especially its technical appendix, for more details.)
We stress the similarity between the model developed there and the basic NK model and show how
the standard model is recovered as a special case
of the extended model.
1.1 Heterogeneity and the Allocative
Consequences of Credit Spreads
Our model is a relatively simple generalization
of the basic NK model used by Goodfriend and
King (1997), Clarida, Gali, and Gertler (1999),
Woodford (2003), and others to analyze optimal
monetary policy. The model is still highly stylized
in many respects; for example, we abstract from
the distinction between the household and firm
sectors of the economy and instead treat all private
expenditure as the expenditure of infinitely lived
household-firms. Similarly, we abstract from the
consequences of investment spending for the
evolution of the economy’s productive capacity,
instead treating all private expenditure as if it
were nondurable consumer expenditure (yielding
immediate utility at a diminishing marginal rate).
We depart from the assumption of a representative household in the standard model by supposing that households differ in their preferences.
Each household i seeks to maximize a discounted
intertemporal objective of the form
$$E_0 \sum_{t=0}^{\infty} \beta^t \left[ u^{\tau_t(i)}\big(c_t(i);\xi_t\big) - \int_0^1 v^{\tau_t(i)}\big(h_t(j;i);\xi_t\big)\,dj \right], \tag{1}$$

where $\tau_t(i) \in \{b,s\}$ indicates the household’s “type” in period $t$. Here $u^b(c;\xi)$ and $u^s(c;\xi)$ are two different period utility functions, each of which may also be shifted by the vector of aggregate taste shocks, $\xi_t$, and $v^b(h;\xi)$ and $v^s(h;\xi)$ are correspondingly two different functions indicating the period disutility from working for the two types. As in the basic NK model, there is assumed to be a continuum of differentiated goods, each produced by a monopolistically competitive supplier; $c_t(i)$ is a Dixit-Stiglitz aggregator of the household’s purchases of these differentiated goods. The household similarly supplies a continuum of different types of specialized labor, indexed by $j$, that are hired by firms in different sectors of the economy; the additively separable disutility of work, $v^{\tau}(h;\xi)$, is the same for each type of labor, though it depends on the household’s type and the common taste shock.
Each agent’s type, $\tau_t(i)$, evolves as an independent two-state Markov chain. Specifically, we assume that each period, with probability $1-\delta$ (for some $0 \le \delta < 1$), an event occurs that results in a new type for the household being drawn; otherwise it remains the same as in the previous period. When a new type is drawn, it is $b$ with probability $\pi_b$ and $s$ with probability $\pi_s$, where $0 < \pi_b, \pi_s < 1$ and $\pi_b + \pi_s = 1$. (Hence the population fractions of the two types are constant at all times and equal to $\pi_\tau$ for each type $\tau$.) We assume moreover that $u^b_c(c;\xi) > u^s_c(c;\xi)$ when expenditure, $c$, falls in the range of values that occur in equilibrium. (See Figure 5, which graphs the functions $u^b_c(c)$ and $u^s_c(c)$ used in the numerical work reported here, under our calibration.) Hence a change in a household’s type changes its relative impatience to consume, given the aggregate state $\xi_t$; in addition, each household’s current impatience to consume depends on the aggregate state $\xi_t$. We also assume that the marginal utility of additional expenditure diminishes at different rates for the two household types (see Figure 5); type $b$ households (who are borrowers in equilibrium) have a marginal utility that varies less with the current level of expenditure, resulting in a greater degree of intertemporal substitution of their expenditures in response to interest rate changes. Finally, the two types are also assumed to differ in the marginal disutility of working a given number of hours; this difference is calibrated so that the two types choose to work the same number of hours in steady state, despite their differing marginal utilities of income. For simplicity, the elasticities of labor supply of the two types are not assumed to differ.
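As a point of reference, the stated assumptions imply the following transition probabilities for an individual household’s type (a direct restatement of the Markov process above, not an additional assumption):
$$\Pr\big(\tau_{t+1}(i)=b \mid \tau_t(i)=b\big) = \delta + (1-\delta)\pi_b, \qquad \Pr\big(\tau_{t+1}(i)=b \mid \tau_t(i)=s\big) = (1-\delta)\pi_b,$$
and symmetrically for type $s$. The implied stationary distribution places mass $\pi_b$ on type $b$ and $\pi_s$ on type $s$, consistent with the constant population fractions noted above.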
The coexistence of the two types with differing impatience to consume creates a social function for financial intermediation. In the present
model, as in the basic NK model, all output is
consumed either by households or by the government; hence intermediation serves an allocative
function only to the extent that there are reasons
for the intertemporal marginal rates of substitution
of households to differ in the absence of financial
flows. The present model reduces to the standard
representative-household model in the case that
one assumes that $u^b(c;\xi) = u^s(c;\xi)$ and $v^b(h;\xi) = v^s(h;\xi)$.
We assume the following: that households generally are able to spend an amount different from their current income only by depositing funds with or borrowing from financial intermediaries; that the same nominal interest rate $i^d_t$ is available to all savers; and that a (possibly) different nominal interest rate $i^b_t$ is available to all borrowers,5 independent of the quantities that a given household chooses to save or to borrow. For simplicity, we also assume that only one-period riskless nominal contracts with the intermediary are possible for either savers or borrowers. The assumption that households cannot engage in financial contracting other than through the intermediary sector represents one of the key financial frictions.

5. Here “savers” and “borrowers” identify households according to whether they choose to save or borrow and not according to their type.

Figure 5
Marginal Utilities of Consumption for Two Household Types
NOTE: The figure plots the marginal-utility schedules $u^b_c(c)$ and $u^s_c(c)$ for the two household types. The values of $\bar{c}^s$ and $\bar{c}^b$ indicate steady-state consumption levels of the two types, and $\bar{\lambda}^s$ and $\bar{\lambda}^b$ their corresponding steady-state marginal utilities.

We also allow households to hold one-period riskless nominal government debt. But because government debt and deposits with intermediaries are perfect substitutes as investments, the two assets must pay the same interest rate $i^d_t$ in equilibrium, and each household’s decision problem is the same as if it had only a single choice of how much to deposit with or borrow from intermediaries.
Aggregation is simplified by assuming that
households are able to sign state-contingent contracts with one another, through which they may
insure one another against both aggregate risk and
the idiosyncratic risk associated with a household’s random draw of its type, but also assuming
that households are only intermittently able to
receive transfers from the insurance agency;
between these infrequent occasions when a
household has access to the insurance agency,
it can only save or borrow through the financial
intermediary sector mentioned previously. The
assumption that households are eventually able
to make transfers to one another in accordance
with an insurance contract signed earlier means
that they continue to have identical expectations
regarding their marginal utilities of income far
enough in the future, regardless of their differing
type histories.
It then turns out that in equilibrium, the marginal utility of a given household at any point in time depends only on its type $\tau_t(i)$ at that time; hence the entire distribution of marginal utilities of income at any time can be summarized by two state variables, $\lambda^b_t$ and $\lambda^s_t$, indicating the marginal utilities of each of the two household types. The
expenditure level of type $\tau$ is also the same for all households of that type and can be obtained by inverting the marginal-utility functions (graphed in Figure 5) to yield an expenditure demand function $c^\tau(\lambda;\xi_t)$ for each type. Aggregate demand $Y_t$ for the Dixit-Stiglitz composite good can then be written as
$$Y_t = \pi_b\, c^b\big(\lambda^b_t;\xi_t\big) + \pi_s\, c^s\big(\lambda^s_t;\xi_t\big) + G_t + \Xi_t, \tag{2}$$
where $G_t$ indicates the (exogenous) level of government purchases, and $\Xi_t$ indicates resources consumed by intermediaries (the sum of two components, $\Xi^p_t$ representing costs of the private intermediaries and $\Xi^{cb}_t$ representing costs of central bank activities, each discussed further below).

Thus the effects of financial conditions on aggregate demand can be summarized by tracking the evolution of the two state variables $\lambda^\tau_t$. The marginal-utility ratio $\Omega_t \equiv \lambda^b_t/\lambda^s_t \ge 1$ provides an important measure of the inefficiency of the allocation of expenditure owing to imperfect financial intermediation—since, in the case of frictionless financial markets, we would have $\Omega_t = 1$ at all times.
In the presence of heterogeneity, instead of a
single Euler equation each period relating the path
of the marginal utility of income of the representative household to the model’s single interest rate,
we have two Euler equations each period, one
for each household type and each involving a
different interest rate: $i^b_t$ for type $b$ households (who choose to borrow in equilibrium) and $i^d_t$ for type $s$ households (who choose to save). If we log-linearize these Euler equations,6 and combine
them with a log-linearized version of (2), we obtain
a structural relation of the form
$$\hat{Y}_t = -\bar{\sigma}\big(\hat{\imath}^{avg}_t - E_t\pi_{t+1}\big) + E_t\hat{Y}_{t+1} - E_t\Delta g_{t+1} - E_t\Delta\hat{\Xi}_{t+1} - \bar{\sigma}\, s_\Omega\, \hat{\Omega}_t + \bar{\sigma}\big(s_\Omega + \psi_\Omega\big) E_t\hat{\Omega}_{t+1}, \tag{3}$$
generalizing the “intertemporal IS relation” of the basic NK model. Here $\hat{Y}_t \equiv \log(Y_t/\bar{Y})$ measures the percentage deviation of aggregate output from its steady-state level;
$$\hat{\imath}^{avg}_t \equiv \pi_b\, \hat{\imath}^b_t + \pi_s\, \hat{\imath}^d_t$$
is the average of the interest rates that are relevant (at the margin) for all of the savers and borrowers in the economy, where we define $\hat{\imath}^\tau_t \equiv \log\!\big[(1+i^\tau_t)/(1+\bar{\imath}^\tau)\big]$ for $\tau \in \{b,d\}$;7 $g_t$ is a composite “autonomous expenditure” disturbance as in Woodford (2003, pp. 80, 249), taking account of exogenous fluctuations in $G_t$, as well as exogenous variation in the spending opportunities facing the two types of households (reflected in the dependence of the functions $u^\tau(c;\xi_t)$ on the state vector $\xi_t$); $\hat{\Xi}_t \equiv (\Xi_t - \bar{\Xi})/\bar{Y}$ measures departures of the quantity of resources consumed by the intermediary sector from its steady-state level;8 and $\hat{\Omega}_t \equiv \log(\Omega_t/\bar{\Omega})$ measures the gap between the marginal utilities of the two household types.

Note that the first four terms on the right-hand side of (3) are exactly as in the basic NK model, except for these differences: (i) instead of “the” interest rate we have an average interest rate; (ii) $\bar{\sigma}$ is no longer the intertemporal elasticity of substitution for the representative household, but instead a weighted average of the corresponding parameters for the two types; and (iii) the composite disturbance, $g_t$, similarly averages the changes in spending opportunities for the two types. The crucial differences are the presence of the new terms involving $\hat{\Xi}_t$ and $\hat{\Omega}_t$, which exist only in the case of financial frictions. The sign of the coefficient $s_\Omega$ depends on the asymmetry of the degrees of interest sensitivity of expenditure by the two types; in the case shown in Figure 5 (which we regard as the empirically relevant case), $s_\Omega > 0$ because the intertemporal elasticity of expenditure is higher for type $b$.9 In this case, a larger value of $\hat{\Omega}_t$ reduces aggregate demand for given expectations about the forward path of average real interest rates; this can be thought of as representing “financial headwinds” of a kind sometimes discussed within the Federal Reserve system.10

6. Here and in the case of all other log-linearizations discussed below, we log-linearize around a deterministic steady state in which the inflation rate is zero and aggregate output is constant.

7. One can show that, for a log-linear approximation, the average marginal utility of income in the population depends only on the expected path of this particular average of the interest rates in the economy.

8. We adopt this notation so that $\hat{\Xi}_t$ is defined even when the model is parameterized so that $\bar{\Xi} = 0$.

9. In our calibration, $\psi_\Omega$ is a small negative quantity, but because it is small its sign is not of great importance.

10. See, for example, the reference by Alan Greenspan (1997) to the U.S. economy in the early 1990s as “trying to advance in the face of fifty-mile-an-hour headwinds,” owing to “severe credit constraint.” The point of the metaphor was that under such conditions, a given reduction in the federal funds rate stimulated less expenditure than it ordinarily would have.
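For reference, the two Euler equations referred to above can be written schematically as follows; this is a sketch consistent with the assumptions stated so far (each household faces the rate $i^b_t$ or $i^d_t$ at the margin and redraws its type with probability $1-\delta$), with $\Pi_{t+1} \equiv P_{t+1}/P_t$ denoting gross inflation (notation introduced here); the exact statements appear in Cúrdia and Woodford (2009a):
$$\lambda^b_t = \beta\, E_t\!\left\{ \frac{1+i^b_t}{\Pi_{t+1}}\Big( [\delta + (1-\delta)\pi_b]\,\lambda^b_{t+1} + (1-\delta)\pi_s\,\lambda^s_{t+1} \Big)\right\},$$
$$\lambda^s_t = \beta\, E_t\!\left\{ \frac{1+i^d_t}{\Pi_{t+1}}\Big( (1-\delta)\pi_b\,\lambda^b_{t+1} + [\delta + (1-\delta)\pi_s]\,\lambda^s_{t+1} \Big)\right\}.$$
Log-linearizing these relations and combining them with (2) is what underlies (3) above and the relation (4) presented next.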
Log-linearization of the two Euler equations also implies that
$$\hat{\Omega}_t = \hat{\omega}_t + \hat{\delta}\, E_t \hat{\Omega}_{t+1}, \tag{4}$$
where $\hat{\omega}_t \equiv \hat{\imath}^b_t - \hat{\imath}^d_t$ is the short-term credit spread and $\hat{\delta}$ is a coefficient satisfying $0 < \hat{\delta} < 1$. Thus the marginal-utility gap, $\hat{\Omega}_t$, is a forward-looking moving average of the expected path of the short-term credit spread. Alternatively, we can view $\hat{\Omega}_t$ itself as a credit spread, a positive multiple of the spread between two long-term yields,
$$r^\tau_t \equiv \big(1-\hat{\delta}\big)^{-1} \sum_{j=0}^{\infty} \hat{\delta}^{\,j}\, E_t\, \hat{\imath}^\tau_{t+j}$$
for $\tau \in \{b,d\}$. Hence the terms in (3) involving $\hat{\Omega}_t$ indicate that variations in credit spreads are relevant to aggregate demand.
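Iterating (4) forward, and assuming the expected marginal-utility gap remains bounded, makes the moving-average representation explicit:
$$\hat{\Omega}_t = \hat{\omega}_t + \hat{\delta}\,E_t\hat{\omega}_{t+1} + \hat{\delta}^2 E_t\hat{\omega}_{t+2} + \cdots = E_t \sum_{j=0}^{\infty} \hat{\delta}^{\,j}\, \hat{\omega}_{t+j},$$
so the gap moves one-for-one with a discounted sum of the current and expected future short-term credit spreads.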
Credit spreads are also relevant to the relation between the path of the policy rate11 and aggregate expenditure because of the identity
$$\hat{\imath}^{avg}_t = \hat{\imath}^d_t + \pi_b\, \hat{\omega}_t \tag{5}$$
connecting the policy rate to the interest rate that appears in (3). Under an assumption of Calvo-style staggered price adjustment, we similarly obtain an aggregate supply relation that is only slightly different from the “NK Phillips curve” of the representative-household model. Specifically, we obtain
$$\pi_t = \kappa\big(\hat{Y}_t - \hat{Y}^n_t\big) + \beta E_t \pi_{t+1} + u_t + \kappa_\Omega\, \hat{\Omega}_t - \kappa_\Xi\, \hat{\Xi}_t, \tag{6}$$
where $\hat{Y}^n_t$ (the “natural rate of output”) is a composite exogenous disturbance that depends on technology, preferences, and government purchases; $u_t$ (the “cost-push shock”) is another composite exogenous disturbance that depends on variations in distorting taxes and in the degree of market power in labor or product markets; and the coefficients satisfy $\kappa, \kappa_\Xi > 0$ and, in the case that we regard as realistic, $\kappa_\Omega > 0$ as well. Here the first three terms on the right-hand side are identical to those of the standard “NK Phillips curve,” subject to similar comments as above about the dependence of $\kappa$ on a weighted average of the intertemporal elasticities of substitution of the two types and the dependence of $\hat{Y}^n_t$ on a weighted average of the preference shocks of the two types; the final two terms appear only as a result of credit frictions. We note in particular that increases in credit spreads shift both the aggregate-supply and aggregate-demand relations in our model.

11. The identification of $i^d_t$ with the policy rate is discussed below in Section 1.3.
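To make the nesting of the standard model explicit (this is a direct specialization of (3), (5), and (6) rather than an additional result): when the credit frictions are shut off, so that $\hat{\Omega}_t = \hat{\Xi}_t = 0$ and $\hat{\imath}^{avg}_t = \hat{\imath}^d_t$, the two relations reduce to the familiar representative-household forms
$$\hat{Y}_t = -\bar{\sigma}\big(\hat{\imath}^d_t - E_t\pi_{t+1}\big) + E_t\hat{Y}_{t+1} - E_t\Delta g_{t+1}, \qquad \pi_t = \kappa\big(\hat{Y}_t - \hat{Y}^n_t\big) + \beta E_t\pi_{t+1} + u_t,$$
that is, the standard intertemporal IS relation and NK Phillips curve.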
In the presence of heterogeneity, household
behavior results in one further structural equation
that has no analog in the representative-household
model. This is a law of motion for bt , the per capita
level of private borrowing, which depends on the
fluctuations in the levels of expenditure of the
two types and, hence, on the fluctuations in both
marginal utilities λtτ. Details of this additional
relationship are provided in Cúrdia and Woodford
(2009a). We also suppose that the government
issues one-period riskless nominal debt, the real
value of which at the end of period t is denoted
b tg. We treat {b tg } as an exogenous process; this is
one of three independent fiscal disturbances that
we allow for.12 We suppose that government debt
can be held either by saving households or by the
central bank,13 and in equilibrium we suppose
that at least part of the public debt is always held
by households. Since government debt is a perfect
substitute for deposits with the intermediaries in
our model, from the standpoint of saving households, in equilibrium the yield on government
debt must always equal itd, the competitive interest
rate on deposits.
12. The other two disturbances are exogenous variations in government purchases, $G_t$, of the composite good and exogenous variations in a proportional sales tax rate.

13. We could also allow intermediaries to hold government debt, but they will choose not to as long as $i^b_t > i^d_t$, as is always true in the equilibria that we consider.

1.2 Financial Intermediaries
We assume an intermediary sector made up
of identical, perfectly competitive firms. Intermediaries take deposits on which they promise
to pay a riskless nominal return, $i^d_t$, one period
later, and make one-period loans on which they
demand a nominal interest rate itb. An intermediary also chooses a quantity of reserves, Mt, to hold
at the central bank, on which it will receive a
nominal interest yield itm. Each intermediary takes
as given all three of these interest rates. We assume
that arbitrage by intermediaries need not eliminate
the spread between itb and itd for either of two reasons: (i) Resources are used in the process of loan
origination or (ii) intermediaries may be unable
to tell the difference between good borrowers
(who will repay their loans the next period) and
bad borrowers (who will be able to disappear without having to pay) and as a consequence may have
to charge a higher interest rate to good and bad
borrowers alike.
We suppose that origination of good loans in real quantity $L_t$ requires an intermediary to also originate bad loans in quantity $\chi_t(L_t)$, where $\chi_t', \chi_t'' \ge 0$, and the function $\chi_t(L)$ may shift from period to period for exogenous reasons. (While the intermediary is assumed to be unable to discriminate between good and bad loans, it is able to predict the fraction of loans that will be bad in the case of any given scale of its lending activity.) This scale of operations also requires the intermediary to consume real resources $\Xi^p_t(L_t;m_t)$ in the period in which the loans are originated, where $m_t \equiv M_t/P_t$, and $\Xi^p_t(L;m)$ is a convex function of its two arguments, with $\Xi^p_{Lt} \ge 0$, $\Xi^p_{mt} \le 0$, $\Xi^p_{Lmt} \le 0$. We further suppose that for any scale of operations, $L$, there exists a finite satiation level of reserve balances, $\bar{m}_t(L)$, defined as the lowest value of $m$ for which $\Xi^p_{mt}(L;m) = 0$. (Our convexity and sign assumptions then imply that $\Xi^p_{mt}(L;m) = 0$ for all $m > \bar{m}_t(L)$.) We assume the existence of a finite satiation level of reserves for an equilibrium to be possible in which the policy rate is driven to zero, a situation of considerable practical relevance at present.
Given an intermediary’s choice of its scale of lending operations, $L_t$, and reserve balances, $m_t$, to hold, we assume that it acquires real deposits, $d_t$, in the maximum quantity that it can repay (with interest at the competitive rate) from the anticipated returns on its assets (taking into account the anticipated losses on bad loans). Thus it chooses $d_t$ such that
$$\big(1+i^d_t\big)d_t = \big(1+i^b_t\big)L_t + \big(1+i^m_t\big)m_t.$$
The deposits that it does not use to finance either loans or the acquisition of reserve balances,
$$d_t - m_t - L_t - \chi_t(L_t) - \Xi^p_t(L_t;m_t),$$
are distributed as earnings to its shareholders.
The intermediary chooses $L_t$ and $m_t$ each period to maximize these earnings, given $i^d_t$, $i^b_t$, and $i^m_t$. This implies that $L_t$ and $m_t$ must satisfy the first-order conditions
$$\Xi^p_{Lt}(L_t;m_t) + \chi_{Lt}(L_t) = \omega_t \equiv \frac{i^b_t - i^d_t}{1+i^d_t}, \tag{7}$$
$$-\Xi^p_{mt}(L_t;m_t) = \delta^m_t \equiv \frac{i^d_t - i^m_t}{1+i^d_t}. \tag{8}$$
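These conditions follow from the intermediary’s problem as just stated. As a sketch of the algebra, write the distributed earnings (using the deposit relation above) as
$$e_t \equiv \frac{\big(1+i^b_t\big)L_t + \big(1+i^m_t\big)m_t}{1+i^d_t} - m_t - L_t - \chi_t(L_t) - \Xi^p_t(L_t;m_t),$$
where $e_t$ is notation introduced here only for this derivation. The conditions $\partial e_t/\partial L_t = 0$ and $\partial e_t/\partial m_t = 0$ give
$$\frac{1+i^b_t}{1+i^d_t} - 1 = \Xi^p_{Lt}(L_t;m_t) + \chi_{Lt}(L_t), \qquad \frac{1+i^m_t}{1+i^d_t} - 1 = \Xi^p_{mt}(L_t;m_t),$$
which are (7) and (8), since $(1+i^b_t)/(1+i^d_t) - 1 = (i^b_t - i^d_t)/(1+i^d_t)$ and $(1+i^m_t)/(1+i^d_t) - 1 = -(i^d_t - i^m_t)/(1+i^d_t)$.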

Equation (7) can be viewed as determining the equilibrium credit spread, $\omega_t$, as a function $\omega_t(L_t;m_t)$ of the aggregate volume of private credit and the real supply of reserves.14 As indicated above, a positive credit spread exists in equilibrium to the extent that $\Xi^p_t(L;m)$, $\chi_t(L)$, or both are increasing in $L$. Equation (8) similarly indicates how the equilibrium differential $\delta^m_t$ between the interest paid on deposits and that paid on reserves at the central bank is determined by the same two aggregate quantities.
In addition to these two equilibrium conditions that determine the two interest rate spreads
in the model, the absolute level of (real) interest
rates must be such that the supply and demand
for credit are equal. Market clearing in the credit
market requires that
$$b_t = L_t + L^{cb}_t, \tag{9}$$
where $L^{cb}_t$ represents real lending to the private sector by the central bank, as discussed next.

14. Note that in terms of this definition of the credit spread, $\omega_t$, the previously defined deviation corresponds to $\hat{\omega}_t \equiv \log\!\big[(1+\omega_t)/(1+\bar{\omega})\big]$.

1.3 The Central Bank and Interest Rate
Determination
In our model, the central bank’s liabilities consist of the reserves, $M_t$ (which also constitute the monetary base), on which it pays interest at the rate $i^m_t$. These liabilities in turn fund the central bank’s holdings of government debt and any lending by the central bank to type $b$ households. We let $L^{cb}_t$ denote the real quantity of lending by the central bank to the private sector; the central bank’s holdings of government debt are then given by the residual $m_t - L^{cb}_t$. We can treat $m_t$ (or $M_t$) and $L^{cb}_t$ as the bank’s choice variables, subject to the following constraints:
$$0 \le L^{cb}_t \le m_t. \tag{10}$$
It is also necessary that the central bank’s choices of these two variables satisfy the bound
$$m_t < L^{cb}_t + b^g_t,$$
where $b^g_t$ is the total outstanding real public debt, so that a positive quantity of public debt remains in the portfolios of households. In the calculations below, however, we assume that this last constraint is never binding. (We confirm this in our numerical examples.)
We assume that central bank extension of credit other than through open-market purchases of Treasury securities consumes real resources, just as in the case of private intermediaries, and we represent this resource cost by a function $\Xi^{cb}(L^{cb}_t)$, discussed further in Section 4, which is increasing and at least weakly convex, with $\Xi^{cb\prime}(0) > 0$.
The central bank has one further independent
choice to make each period, which is the rate of
interest i tm to pay on reserves. We assume that if
the central bank lends to the private sector, it simply chooses the amount that it is willing to lend
and auctions these funds, so that in equilibrium
it charges the same interest rate itb on its lending
that private intermediaries do; this is therefore
not an additional choice variable for the central
bank. Similarly, the central bank receives the
market-determined yield itd on its holdings of
government debt.
The interest rate $i^d_t$ at which intermediaries are able to fund themselves is determined each period by the joint inequalities
$$m_t \ge m^d_t\big(L_t, \delta^m_t\big), \tag{11}$$
$$\delta^m_t \ge 0, \tag{12}$$
together with the “complementary slackness” condition that at least one of these—(11) and/or (12)—must hold with equality each period; here $m^d_t(L,\delta^m)$ is the demand for reserves defined by (8), defined to equal the satiation level $\bar{m}_t(L)$ in the case that $\delta^m = 0$. (Condition (11) may hold only as an inequality, as intermediaries will be willing to hold reserves beyond the satiation level as long as the opportunity cost, $\delta^m_t$, is zero.) We identify the rate $i^d_t$ at which intermediaries fund themselves with the central bank’s policy rate (e.g., the federal funds rate in the case of the United States).
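The complementary slackness conditions can be summarized compactly (a restatement of (8), (11), and (12), not an additional assumption):
$$\big[m_t - m^d_t(L_t,\delta^m_t)\big]\,\delta^m_t = 0, \qquad m_t \ge m^d_t(L_t,\delta^m_t), \qquad \delta^m_t \ge 0.$$
If $m_t \ge \bar{m}_t(L_t)$, the solution is $\delta^m_t = 0$, so that $i^d_t = i^m_t$; if instead $m_t < \bar{m}_t(L_t)$, then $\delta^m_t > 0$ and the condition $m_t = m^d_t(L_t,\delta^m_t)$ determines the differential, and hence $i^d_t$ for a given $i^m_t$.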
The central bank can influence the policy
rate through two channels: its control of the supply of reserves and its control of the interest rate
paid on them. By varying mt , the central bank can
change the equilibrium differential, δ tm, determined as the solution to (11) and (12). And by
varying i tm, it can change the level of the policy
rate, itd, that corresponds to a given differential.
Through appropriate adjustment on both margins,
the central bank can control itd and i tm separately
(subject to the constraint that i tm cannot exceed
itd). We also assume that for institutional reasons,
it is not possible for the central bank to pay a negative interest rate on reserves. (We may suppose
that intermediaries have the option of holding
currency, earning zero interest, as a substitute
for reserves, and that the second argument of the
resource cost function, $\Xi^p_t(L;m)$, is actually the sum of reserve balances at the central bank plus vault cash.) Hence the central bank’s choice of these variables is subject to the constraints
$$0 \le i^m_t \le i^d_t. \tag{13}$$
In our model, there are thus three independent
dimensions along which central bank policy can
be varied: the quantity of reserves, Mt , that are
supplied; the interest rate i tm paid on those
reserves; and the breakdown of central bank assets
between government debt and lending Ltcb to the
private sector. Alternatively, we can specify these
three independent dimensions as (i) interest rate
policy, the central bank’s choice of an operating
target for the policy rate, itd; (ii) reserve-supply
policy, the choice of Mt , which in turn implies
a unique rate of interest i tm that must be paid on
reserves for the reserve-supply policy to be consistent with the bank’s target for the policy rate15;
and (iii) credit policy, the central bank’s choice
of the quantity of funds Ltcb to lend to the private
sector. We prefer this last identification of the
three dimensions of policy because in this case
our first dimension (interest rate policy) corresponds to the sole dimension of policy emphasized in many conventional analyses of optimal
monetary policy; the other two are additional dimensions of policy introduced by our extension of the basic NK model.16 Changes in central bank policy along each of these dimensions have consequences for the bank’s cash flow, but we abstract
from any constraint on the joint choice of the
three variables associated with cash-flow concerns. (We assume that seignorage revenues are
simply turned over to the Treasury, where their
only effect is to change the size of lump-sum
transfers to the households.)
Given that central bank policy can be independently varied along each of these three dimensions, we can independently discuss the criteria for
policy to be optimal along each dimension. Below,
we take up each of the three dimensions in turn.
15. We might choose to call the second dimension “variation in the interest rate paid on reserves,” which would correspond to something that the Board of Governors makes an explicit decision about under current U.S. institutional arrangements, as is also true at most other central banks. But describing the second dimension of policy as “reserve-supply policy” allows us to address the question of the value of “quantitative easing” under this heading as well.

16. Goodfriend (2009) similarly describes central bank policy as involving three independent dimensions. These correspond to our three dimensions, but he calls the first dimension (the quantity of reserves, or base money) “monetary policy.” We believe that this does not correspond to standard usage of the term “monetary policy,” since the traditional focus of Federal Open Market Committee (FOMC) deliberations about monetary policy has been the choice of an operating target for the policy rate, as is generally the case for central banks. Reis (2009) also distinguishes among the three dimensions of policy in terms similar to ours.
1.4 The Welfare Objective
In considering optimal policy, we take the
objective of policy to be the maximization of
average expected utility. Thus we can express
the objective as maximization of
$$E_{t_0} \sum_{t=t_0}^{\infty} \beta^{\,t-t_0}\, U_t, \tag{14}$$
where the welfare contribution, $U_t$, each period weights the period utility of each of the two types by their respective population fractions at each point in time. As shown in Cúrdia and Woodford (2009a),17 this can be written as
$$U_t = U\big(Y_t, \Omega_t, \Xi_t, \Delta_t; \xi_t\big). \tag{15}$$
Here ∆t is an index of price dispersion in period t,
taking its minimum possible value of 1 when the
prices of all goods are identical; for any given total
quantity Yt of the composite good that must be
produced, the total disutility of working indicated
in (1) is greater the more dispersed are prices, as
this implies a correspondingly less uniform (and
hence less efficient) composition of output.
The total disutility of working is also a decreasing function of Ωt , since a larger gap between the
marginal utilities of the two types implies a less-efficient division of labor effort between the two
types. The average utility from consumption is
smaller, for given aggregate output Yt , the larger
is Ξt , since only resources Yt – Gt – Ξt are consumed by households. And the average utility
from consumption is also decreasing in Ωt , since
a larger marginal-utility gap implies a less-efficient
division of expenditure between the two types.
Thus the derived utility $U(\cdot)$ is a concave function
of Yt that reaches an interior maximum for given
values of the other arguments, and a monotonically decreasing function of Ωt , Ξt , and ∆t . The
17. Cúrdia and Woodford (2009a) analyze a special case of the present model in which central bank lending and the role of central bank liabilities in reducing the transactions costs of intermediaries are abstracted from. However, the form of the welfare measure (15) depends only on the nature of the heterogeneity in our model and the assumed existence of a credit spread and of resources consumed by the intermediary sector; the functions that determine how $\Omega_t$ and $\Xi_t$ are endogenously determined are irrelevant for this calculation, and those are the only parts of the model that are generalized in this paper. Hence the form of the welfare objective in terms of these variables remains the same.

dependence of $U(\cdot)$ on $Y_t$ and $\Delta_t$ is the same as in
the representative-household model of Benigno
and Woodford (2005), while the dependence on
Ωt and Ξt indicates new distortions resulting
from the credit frictions in our model.
As in Benigno and Woodford, the assumption
of Calvo-style price adjustment implies that the
index of price dispersion evolves according to a
law of motion of the form
$$\Delta_t = h\big(\Delta_{t-1}, \pi_t\big),$$
where for a given value of $\Delta_{t-1}$, $h(\Delta_{t-1},\cdot)$ has an
interior minimum at an inflation rate that is near
zero when ∆t –1 is near 1. Thus for given paths of
the variables {Ωt ,Ξt } welfare is maximized by trying
(to the extent possible) to simultaneously keep
aggregate output near the (time-varying) level that
maximizes U and inflation near the (always low)
level that minimizes price dispersion. Hence our
model continues to justify concerns about output
and inflation stabilization common to the NK literature. However, it also implies that welfare can
be increased by reducing credit spreads and the
real resources consumed in financial intermediation. These latter concerns make the effects of
policy on the evolution of aggregate credit and
on the supply of bank reserves also relevant to
monetary policy deliberations. We now turn to
the question of how each of the three dimensions
of central bank policy can effect these several
objectives.

2. OPTIMAL POLICY: THE SUPPLY
OF RESERVES
We shall first consider optimal policy with
regard to the supply of reserves, taking as given
(for now) the way in which the central bank
chooses its operating target for the policy rate, $i^d_t$, and the state-contingent level of central bank lending to the private sector, $L^{cb}_t$. Under fairly weak assumptions, we obtain a very simple result: Optimal policy requires that intermediaries be satiated in reserves, that is, that $M_t/P_t \ge \bar{m}_t(L_t)$ at all times.
For levels of reserves below the satiation point, an increase in the supply of reserves has two effects relevant for welfare: The resource cost of financial intermediation, $\Xi^p_t$, is reduced (for a given level of lending by the intermediary sector), and the credit spread, $\omega_t$, is reduced (again, for a given level of lending) as a consequence of (7). Each of these effects raises the value of the objective in (14); note that the reductions in credit spreads increase welfare because of their effect on the path of the marginal-utility gap, $\Omega_t$, as a consequence of (4). Hence an increase in the supply of reserves is unambiguously desirable in any period in which reserves remain below the satiation level.18 Once reserves are at or above the satiation level, however, further increases reduce neither the resource costs of intermediaries nor equilibrium credit spreads (as in this case, $\Xi^p_{mt} = \Xi^p_{Lmt} = 0$), so there would be no further improvement in welfare. Hence policy is optimal along this dimension if and only if $M_t/P_t \ge \bar{m}_t(L_t)$ at all times,19 so that
$$\Xi^p_{mt}(L_t; m_t) = 0. \tag{16}$$
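To make the second of these effects explicit, differentiate the spread relation (7) with respect to $m_t$, holding the volume of lending fixed; this uses only the sign assumptions on $\Xi^p_t$ stated in Section 1.2:
$$\frac{\partial \omega_t(L_t;m_t)}{\partial m_t} = \Xi^p_{Lmt}(L_t;m_t) \le 0,$$
with equality once $m_t \ge \bar{m}_t(L_t)$, so an increase in reserves (weakly) lowers the credit spread up to the satiation point and has no further effect beyond it.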

This is just another example in which the
familiar Friedman rule for “the optimum quantity
of money” (Friedman, 1969) applies. Note, however, that our result has no consequences for
interest rate policy. While the Friedman rule is
sometimes taken to imply a strong result about
the optimal control of short-term nominal interest
rates—namely, that the nominal interest rate
should equal zero at all times—the efficiency
condition (16), together with the equilibrium
relation (8), implies only that the interest rate
differential, δ tm, should equal zero at all times.
With zero interest on reserves, this would also
require that itd = 0 at all times; but given that the
central bank is free to set any level of interest on
reserves consistent with (13), the efficiency condition (16) actually implies no restriction on either
the average level or the degree of state-contingency
of the central bank’s target for the policy rate, itd.
18. The discussion here assumes that the upper bound in (10) is not a binding constraint. But if that constraint does bind, then an increase in the supply of reserves relaxes the constraint, and this too increases welfare, so that the conclusion in the text is unchanged.

19. To be more precise, policy is optimal if and only if (16) is satisfied and the upper bound in (10) does not bind. Both conditions will be satisfied by any quantity of reserves above some finite level.
2.1 Is a Reserve-Supply Target Needed?
Our result about the importance of ensuring
an adequate supply of reserves might suggest that
the question of the correct target level of reserves
at each point in time should receive the same
degree of attention at meetings of the FOMC as
the question of the correct operating target for
the federal funds rate. But deliberations of that
kind are not needed to ensure fulfillment of the
optimality criterion (16); the efficiency condition
can alternatively be stated (using (8)) as requiring
that itd = itm at all times. Reserves should be supplied to the point at which the policy rate falls to
the level of the interest rate paid on reserves, or,
in a formulation that is more to the point, interest
should be paid on reserves at the central bank’s
target for the policy rate.
Given a rule for setting an operating target
for itd (discussed in the next section), itm should
be chosen each period in accordance with the
simple rule
$$i^m_t = i^d_t. \tag{17}$$
When the central bank implements its target for
the policy rate through open-market operations,
it will automatically have to adjust the supply of
reserves to satisfy (16). But this does not require
a central bank’s monetary policy committee (the
FOMC in the case of the United States) to deliberate about an appropriate target for reserves at
each meeting; once the target for the policy rate is
chosen (and the interest rate to be paid on reserves
is determined by that, through condition (17)),
the quantity of reserves that must be supplied to
implement the target can be determined by the
bank staff in charge of carrying out the necessary
interventions (the Trading Desk at the New York
Fed in the case of the United States), on the basis
of a more frequent monitoring of market conditions than is possible for the monetary policy
committee.
One obvious way to ensure that the efficiency
condition (17) is satisfied is to adopt a routine
practice of automatically paying interest on
reserves at a rate that is tied to the current operating target for the policy rate. This is already the
practice of many central banks outside the United
States. At some of those banks, the fixed spread
between the target for the policy rate and the rate
paid on overnight balances at the central bank is
quite small: for example, 25 basis points in the
case of the Bank of Canada; in the case of New
Zealand, the interest rate paid on overnight balances is the policy rate itself. There are possible
arguments (relating to considerations not reflected
in our simple model) why the optimal spread
might be larger than zero, but it is likely in any
event to be desirable to maintain a constant small
spread rather than treat the question of the interest
rate to be paid on reserves as a separate, discretionary policy decision to be made at each policy
committee meeting. Apart from the efficiency
gains modeled here, such a system should also
help to facilitate the central bank’s control of the
policy rate (Goodfriend, 2002, and Woodford,
2003, Chap. 1, Sec. 3).

2.2 Is There a Role for “Quantitative
Easing”?
While our analysis implies that it is desirable to ensure that the supply of reserves never falls below a certain lower bound, $\bar{m}_t(L_t)$, it also implies that there is no benefit from supplying reserves beyond that level. There is, however, one important exception to this assertion: It can be desirable
to supply reserves beyond the satiation level if
this is necessary to make the optimal quantity of
central bank lending to the private sector, Ltcb,
consistent with (10). This qualification is important when considering the desirability of the massive expansion in the supply of reserves by the
Fed since September 2008, as shown in Figure 2.
The increase in reserves occurred only after the Fed expanded the various newly created liquidity and credit facilities (shown in Figure 4)
beyond the scale that could be financed simply
by reducing its holdings of Treasury securities
(as had been its policy over the previous year).20
Some have argued, instead, that further expansion of the supply of reserves beyond the level
20. Bernanke (2009) distinguishes between the Federal Reserve policy of “credit easing” and the type of “quantitative easing” practiced by the Bank of Japan earlier in the decade, essentially on this ground.
needed to bring the policy rate down to the level
of the rate of interest paid on reserves is an important additional policy tool in its own right—one
of particular value precisely when a central bank
is no longer able to further reduce its operating
target for the policy rate, owing to the zero lower
bound (as at present in the United States and
many other countries). It is sometimes proposed
that when the zero lower bound is reached, it is
desirable for a central bank’s policy committee
to shift its deliberations from an interest rate target to a target for the supply of bank reserves,
as under the Bank of Japan’s policy of “quantitative easing” during the period between March
2001 and March 2006.
Our model provides no support for the view
that such a policy should be effective in stimulating aggregate demand. Indeed, it is possible to
state an irrelevance proposition for quantitative
easing in the context of our model. Let the three
dimensions of central bank policy be described
by functions that specify the operating target for
the policy rate, the supply of reserves, the interest rate to be paid on reserves, and the quantity
of central bank credit as functions of macroeconomic conditions. For the sake of concreteness,
we may suppose that each of these variables is to
be determined by a Taylor-type rule,
$$i^d_t = \phi^{id}\big(\pi_t, Y_t, L_t; \xi_t\big),$$
$$M_t/P_t = \phi^{m}\big(\pi_t, Y_t, L_t; \xi_t\big),$$
$$i^m_t = \phi^{im}\big(\pi_t, Y_t, L_t; \xi_t\big),$$
$$L^{cb}_t = \phi^{L}\big(\pi_t, Y_t, L_t; \xi_t\big),$$

where the functions are such that constraints in
(10) through (13) are satisfied for all values of
the arguments. (Here the vector of exogenous
disturbances, ξt , on which the reaction functions
may depend, includes the exogenous factors that
shift the function $\Xi^p_t(L;m)$.) Then our result is that, given the three functions $\phi^{id}(\cdot)$, $\phi^{im}(\cdot)$, and $\phi^{L}(\cdot)$, the set of processes $\{\pi_t, Y_t, L_t, b_t, i^d_t, i^b_t, \Omega_t, \Delta_t\}$ that constitute possible rational expectations equilibria is the same independent of the choice of the function $\phi^m(\cdot)$, as long as the specification of $\phi^m(\cdot)$ is consistent with the other three functions (in the sense that (10) and (11) are necessarily satisfied and that (11) holds with equality in all cases
where (12) is a strict inequality).21
Of course, the stipulation that $\phi^m(\cdot)$ be consistent with the other functions uniquely determines what the function must be for all values of the arguments for which the functions $\phi^{id}(\cdot)$ and $\phi^{im}(\cdot)$ imply that $\delta^m_t > 0$. However, the class of policies considered allows for an arbitrary degree of expansion of reserves beyond the satiation level in the region where those functions imply that $\delta^m_t = 0$, and in particular, for an arbitrary degree of quantitative easing when the zero bound is reached (i.e., when $i^d_t = i^m_t = 0$). The class of policies considered includes the popular proposal
under which the quantity of excess reserves should
depend on the degree to which a standard Taylor
rule (unconstrained by the zero bound) would call
for a negative policy rate. Our result implies that
there should be no benefits from such policies.
Our result might seem to be contradicted by
the analysis of Auerbach and Obstfeld (2005), in
which an open market operation that expands the
money supply is found to stimulate real activity
even when the economy is at the zero bound at
the time of the monetary expansion. But their
thought experiment does not correspond to pure
quantitative easing of the kind contemplated in
the above proposition—because they specify
monetary policy in terms of a path for the money
supply and the policy change that they consider
is one that permanently increases the money supply, so that it remains higher after the economy
has exited from the “liquidity trap” in which the
zero bound is temporarily binding. The contemplated policy change is therefore not consistent
with an unchanged reaction function $\phi^{id}(\cdot)$ for
the policy rate, and the effects of the intervention can be understood to be the consequences
of the commitment to a different future interest
rate policy.
Our result implies only that quantitative easing should be irrelevant under two conditions:
when (i) an increase in reserves finances an
increase in central bank holdings of Treasury
securities, rather than an increase in central bank
lending to the private sector, and (ii) policy implies no change in the way that people should expect future interest rate policy to be conducted.

21. This result generalizes the irrelevance result for quantitative easing in Eggertsson and Woodford (2003) to a model with heterogeneity and credit frictions.
Our model does allow for real effects of an
increase in central bank lending, Ltcb, financed by
an increase in the supply of reserves, if private-sector financial intermediation is inefficient22;
but the real effects of the increased central bank
lending in that case are the same whether the
lending is financed by an increase in the supply
of reserves or by a reduction in central bank holdings of Treasury securities. Our model also allows
for real effects of an announcement that interest
rate policy in the future will be different, as when
a central bank commits itself not to return immediately to its usual Taylor rule as soon as the zero
bound ceases to bind, but promises instead to
maintain policy accommodation for some time
after it would become possible to comply with
the Taylor rule (as discussed in the next section).
But such a promise (if credible and correctly
understood by the private sector) should increase
output and prevent deflation to the same extent
even if it implies no change in policy during the
period when the zero lower bound binds.

22. This result differs from that obtained in Eggertsson and Woodford (2003), where changes in the composition of the assets on the central bank’s balance sheet are also shown to be irrelevant. That stronger result depends on the assumption of a representative household (as in Eggertsson and Woodford) or, alternatively, frictionless financial intermediation.
While our definition of quantitative easing
may seem narrow, the policy of the Bank of Japan
during the period 2001-06 fits our definition fairly
closely. The Bank of Japan’s policy involved the
adoption of a series of progressively higher quantitative targets for the supply of reserves. The aim
of the policy was understood to be to increase
the monetary base, rather than to allow the Bank
of Japan to acquire any particular type of assets.
The assets purchased were almost entirely
Japanese government bonds, since credit allocation to malfunctioning markets was not a goal.
There was no suggestion that the targets of policy
after the end of the zero-interest-rate period would
be any different from before. There was no commitment to maintain the increased quantity of
base money in circulation permanently; and,
indeed, once it was judged time to end the zero-interest-rate policy, the supply of reserves was rapidly contracted again (Figure 6).

22. This result differs from that obtained in Eggertsson and Woodford (2003), where changes in the composition of the assets on the central bank's balance sheet are also shown to be irrelevant. That stronger result depends on the assumption of a representative household (as in Eggertsson and Woodford) or, alternatively, frictionless financial intermediation.
Our theory suggests that expansion of the
supply of reserves under such circumstances
should have little effect on aggregate demand,
and this seems to have been the case. For example,
as is also shown in Figure 6, despite an increase
in the monetary base of 60 percent during the
first two years of the quantitative easing policy,
and an eventual increase of nearly 75 percent,
nominal GDP never increased at all (relative to
its March 2001 level) during the entire five years
of the policy.23

3. OPTIMAL POLICY: INTEREST RATE POLICY
We turn now to a second dimension of policy,
the approach taken by the central bank in determining its operating target for the policy rate
(the federal funds rate in the case of the Federal
Reserve). In this section, we take for granted that
reserve-supply policy is being conducted in the
way recommended in the previous section, that
is, that the rate of interest on reserves will satisfy
(17) at all times. In this case, we can replace the
function Ξ_t^p(L_t; m_t) with

Ξ_t^p(L_t) ≡ Ξ_t^p(L_t; m̄_t(L_t))

and the function ω_t(L_t; m_t), defined by the left-hand side of (7), with

ω_t(L_t) ≡ ω_t(L_t; m̄_t(L_t)),

since there will be satiation in reserves at all times.24
23. As indicated in Figure 6, over the first two years of the quantitative-easing policy, nominal GDP fell by more than 4 percent, despite extremely rapid growth of base money. While nominal GDP recovered thereafter, it remained below its 2001:Q1 level over the entire period until 2006:Q4, three quarters after the official end of quantitative easing, by which time the monetary base had been reduced again by more than 20 percent. Moreover, even if the growth of nominal GDP after 2003:Q1 is regarded as a delayed effect of the growth in the monetary base two years earlier, this delayed nominal GDP growth was quite modest relative to the size of the expansion in base money.

24. Even if at some times m_t exceeds m̄_t(L_t), this will not affect the values of Ξ_t^p or ω_t.

Figure 6
The Monetary Base and Nominal GDP for Japan 1990-2009 (seasonally adjusted)
[Figure: quarterly series, 1990:Q1-2008:Q4, for nominal GDP (left axis, trillion yen) and the monetary base (right axis, trillion yen).]
NOTE: The shaded region shows the period of “quantitative easing,” from March 2001 through March 2006.
SOURCE: International Monetary Fund International Financial Statistics and Bank of Japan.

Using these functions to specify the
equilibrium evolution of Ξ_t^p and ω_t as functions
of the evolution of aggregate private credit, we
can then write the equilibrium conditions of the
model without any reference to the quantity of
reserves or to the interest rate paid on reserves.
We shall also take as given the state-contingent
evolution of central bank lending {Ltcb}, and ask
how the central bank’s target for the policy rate
should be adjusted in response to shocks to the
economy. In this case the problem considered is
of the form considered in Cúrdia and Woodford
(2009a).
As in a representative-household model with
no financial frictions, a consideration of optimal
interest rate policy requires taking into account
the desired evolution of aggregate output and of
inflation (which affects the objective (14) because
of the consequences of inflation for the evolution
of the price dispersion index, ∆t ), given the tradeoff between variations in these two variables
implied by the aggregate-supply relation. While
our model implies that, in the presence of credit
frictions, interest rate policy also has consequences
for the evolution of Ξt and Ωt (which are also
arguments of (15)), owing to its effects on the
volume of lending by the intermediary sector,
the most important effects are the effects on the
paths of output and inflation. The way in which the paths of output and inflation matter for welfare is essentially the same as in a model without financial frictions, and the nature of the
aggregate-supply tradeoff indicated by (6) remains
the same as well, with credit frictions appearing
mainly as a source of additional shift terms (like
the “cost-push shocks” emphasized in treatments
such as those of Clarida, Gali, and Gertler, 1999).
Hence some of the important conclusions of the
standard literature continue to apply, at least
approximately, even in an environment where
credit frictions are nontrivial and time varying.

3.1 The Robustness of Flexible Inflation
(or Price-Level) Targeting
Benigno and Woodford (2005) show that in
the representative-household version of our
model, optimal interest rate policy can be characterized by the requirement that interest rates be
adjusted so that a certain target criterion is satisfied each period.25 To a log-linear approximation,
the optimal target criterion can be expressed as
(18)  π_t + φ(x_t − x_{t−1}) = 0,

regardless of the degree of steady-state distortions due to market power or distorting taxes, where x_t ≡ Ŷ_t − Ŷ_t*, Ŷ_t* is a function of the exogenous
disturbances to preferences, technology, fiscal
policy, and markups,26 and φ is a positive coefficient. This can be viewed as a form of “flexible
inflation targeting” in the sense of Svensson
(1997): The acceptable near-term inflation projection should be adjusted by an amount proportional to the projected change in the output gap.
(Farther in the future, there will never be continuing forecastable changes in the output gap; so
the criterion will always require that the projected
path of inflation a few years in the future will
equal an unchanging long-run target value, here
equal to zero.)
25. For further discussion of targeting regimes as an approach to the conduct of monetary policy, see Svensson (1997 and 2005), Svensson and Woodford (2005), or Woodford (2007).

26. In the case that the steady-state level of output under flexible prices (or with zero inflation) is efficient, Ŷ_t* corresponds to variations in the efficient level of output. When the steady-state level of output under flexible prices is not efficient, the two concepts differ somewhat; for the more general definition of Ŷ_t*, and discussion of its relation to the efficient level of output and to the flexible-price equilibrium level of output, see Woodford (2009, section 2).
The optimal target criterion in the
representative-household model can alternatively
be expressed in the form
(19) p t ≡ pt + φ xt = p* ,
where pt is the log of the general price index at
time t. (Note that (18) simply states that the first
difference of p̃t should be zero each period, so
that p̃t must never be allowed to change.) This is
an output gap–adjusted price-level target, and the commitment to a rule of the form (19) is an example of what Hall (1984) calls an “elastic
price standard.” If the target criterion can be fulfilled precisely each period, the two target criteria
are equivalent; but if it is not always possible for
the central bank to satisfy the target criterion (as
when the zero lower bound is reached, discussed
below), the two commitments are no longer equivalent. In this case, there are actually advantages to
the price-level formulation, as we discuss below.
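As a purely illustrative sketch (not part of the original analysis; the timing and size of the target miss below are arbitrary assumptions), the following Python fragment traces the gap-adjusted price level p̃_t under the two commitments when the target is missed once, say because of the zero lower bound. Under the growth-rate form (18) the shortfall is never made up, whereas under the level form (19) it is reversed in the following period.

# Illustrative comparison of the growth-rate criterion (18) and the level criterion (19)
# when the gap-adjusted price level undershoots its target once. All numbers are
# assumptions chosen for illustration, not values from the article.

p_star = 0.0        # target for the gap-adjusted price level p~ (log units)
miss = {3: -0.02}   # suppose the zero bound forces a 2 percent undershoot in period 3

def path_of_p_tilde(level_targeting, T=8):
    """Path of p~_t under the level form (19) or the growth-rate form (18)."""
    p_tilde, history = p_star, []
    for t in range(T):
        # Under (19) the target is always p*; under (18) only the change in p~ is
        # targeted, so the target is simply wherever p~ ended up last period.
        target = p_star if level_targeting else p_tilde
        p_tilde = target + miss.get(t, 0.0)
        history.append(round(p_tilde, 4))
    return history

print("growth-rate form (18):", path_of_p_tilde(level_targeting=False))  # shortfall is permanent
print("level form (19):      ", path_of_p_tilde(level_targeting=True))   # shortfall is made up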
Cúrdia and Woodford (2009a) show that in a
special limiting case, the target criterion (18)—or, alternatively, (19)—continues to be necessary and sufficient for the optimality of interest rate policy, even in the model with heterogeneity and credit frictions. This is the special case in which steady-state distortions (including the steady-state credit spread ω̄) are negligible, though we allow for
shocks that temporarily increase credit spreads
relative to the steady-state level. Real resources
used in financial intermediation are negligible
(so that the shocks that increase credit spreads
are purely due to an increase in the perceived
fraction of bad loans), and the time-varying fraction of bad loans is independent of intermediaries’
scale of operations. In this case, there are no variations in Ξt and the fluctuations in Ωt are essentially exogenous, so that the welfare-relevant effects
of interest rate policy relate only to its effects on
output and inflation, as in a model without credit
frictions; and the additional terms in the aggregate-supply tradeoff (6) are purely exogenous disturbance terms, so that the derivation of the optimal target criterion proceeds as in the representative-household model.
More generally, the target criterion (18) will
not correspond precisely to optimal policy; but
our numerical investigations of calibrated models
suggest that it can easily continue to provide a
reasonably good approximation to the optimal
Ramsey policy, making the prescription of “flexible
inflation targeting” still a useful practical rule of
thumb. Figures 7 through 10 illustrate this, for
one illustrative calibration of our model, now
allowing both Ξtp and ωt to vary endogenously with
the volume of private lending.27 Each figure plots
the impulse responses (under a log-linear approximation to the model dynamics) of several of the
endogenous variables to a particular type of exogenous disturbance, under each of four different
possible specifications of monetary policy: (i) a
simple “Taylor rule” using the coefficients proposed by Taylor (1993); (ii) a “strict inflation-targeting” regime, under which interest rate policy
is used to ensure that inflation never deviates
from its target level (zero) in response to any
disturbance; (iii) a “flexible inflation-targeting”
regime, under which interest rate policy ensures
that (18) holds each period; and (iv) a fully optimal policy (the solution to the Ramsey policy
problem).
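For reference, specification (i) is the prescription of Taylor (1993) with its original coefficients; a minimal coding of that rule (added here for illustration only, using the constant intercept and trend-based output gap described in footnote 28) is:

# Taylor (1993) rule with the original coefficients: a 2 percent equilibrium real rate,
# a 2 percent inflation target, and weights of 0.5 on the inflation gap and the output gap.
# This is an illustrative sketch, not the code used for the simulations in the article.

def taylor_1993(inflation, output_gap, r_star=0.02, pi_star=0.02):
    """Annualized policy-rate prescription of the Taylor (1993) rule."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Example: inflation of 1 percent and output 1 percent below trend imply a 2 percent rate.
print(taylor_1993(inflation=0.01, output_gap=-0.01))  # -> 0.02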
27. The calibration is discussed further in Cúrdia and Woodford (2009a). The model parameters that are shared with the representative-household version of the model are calibrated as in Woodford (2003, Chap. 6), on the basis of the empirical estimates of Rotemberg and Woodford (1997). The degree of heterogeneity of the consumption preferences of the two types is as shown in Figure 5, while the disutility of labor is the same for the two types, except for a multiplicative factor chosen so that in steady state the two types work the same amount. The steady-state credit spread is calibrated to equal 2 percent per annum, as in Mehra, Piguillem, and Prescott (2008), and is attributed entirely to the marginal resource cost of private financial intermediation, to make the endogeneity of Ξ_t as great as possible given the average size of the spread. A highly convex function Ξ^p(L) is also assumed in the numerical results presented here, to make the endogeneity of the credit spread as great as possible. If we assume a less convex function for Ξ^p(L) or that a smaller fraction of the steady-state credit spread is due to real resource costs, then the special case in which equation (18) is optimal is an even better approximation than in the case shown in the figures.

In each of the cases shown (as well as for a large number of other types of disturbances that we have considered), the “flexible inflation-targeting” regime remains a good approximation to the fully optimal policy, even if it is no longer precisely the optimal policy. Both types of inflation-targeting regimes are closer to the optimal policy than is the Taylor rule, which mechanically responds to observed variations in real activity without taking account of the types of disturbances
responsible for those variations.28 (The Taylor rule
tightens policy too much in response to increases
in output resulting from productivity growth or
increased government purchases, while it does
not tighten enough in the case of the wage-markup
shock, which causes output to fall even as inflation increases.) But especially in the case of the
wage-markup shock and the shock to government
purchases, the flexible inflation target provides
a better approximation to optimal policy than
would a strict inflation target.
The target criterion (18) continues to provide
a good approximation to optimal policy in the
case of a “purely financial” disturbance as well,
even though such disturbances are not allowed for
in the analysis of Benigno and Woodford (2005).
Figure 10 shows the impulse responses to an
exogenous increase in the function χ_t(L), corresponding to an increase in the fraction of loans expected to be bad loans, that then gradually shifts back to its steady-state value. Such a shock temporarily shifts up the value of ω̄_t(L) for any value of L and so represents a contraction of the loan supply for reasons internal to the financial sector. In equilibrium, such a disturbance results both in a contraction of private lending (and hence in equilibrium borrowing b_t, as shown in the bottom-left panel) and in an increase in the equilibrium
credit spread, ωt (as shown in the middle-right
panel). If the central bank follows the Taylor rule,
such a shock results in both an output contraction
and deflation, but an optimal policy would allow little of either to occur.29

28. Here we assume a rule in which the intercept term representing the equilibrium real funds rate is a constant, and the output gap is defined as output relative to a deterministic trend, as in Taylor (1993). A more sophisticated variant, in which the intercept varies with variations in the “natural rate of interest,” and the output gap is defined relative to variations in the “natural rate of output” (defined as in equation (6)), provides a better approximation to optimal policy, but still less close an approximation than that provided by the flexible inflation-targeting rule. The responses to exogenous disturbances under the more sophisticated form of Taylor rule are discussed in Cúrdia and Woodford (2009b).

29. Interestingly, the optimal policy does not involve a much larger cut in the policy rate than occurs under the Taylor rule. The difference is that under the Taylor rule, the central bank is unwilling to cut the policy rate except to the extent that this can be justified by a fall in inflation or output, and so in equilibrium those must occur; under the optimal policy, the central bank is willing to cut the policy rate without requiring inflation or output to decline, and in equilibrium they do not.

Figure 7
Impulse Responses to a 1 Percent Increase in Total-Factor Productivity, Under Four Alternative Monetary Policies
[Figure: impulse responses of Y, π, i^d, ω, and b (in percent, over 16 quarters) under the optimal policy, a strict inflation target, the Taylor rule, and a flexible inflation target.]
SOURCE: Cúrdia and Woodford (2009a).

Again, the flexible
inflation-targeting regime provides a reasonable
approximation to what would happen under an
optimal policy commitment. (We obtain a very
similar figure in the case in which the disturbance
is instead an exogenous increase in the marginal
resource cost of private financial intermediation.)
These results provide an answer to one of the
questions posed in the introduction: Does keeping track of the projected paths of inflation and
output alone provide a sufficient basis for judgments about whether monetary policy (by which
interest rate policy is here intended) remains on
track, even during times of financial turmoil?
Our results suggest that, while the target criterion
(18) involving only the projected paths of inflation
and the output gap is not complex enough to
constitute a fully optimal policy in our extended
model, ensuring that (18) holds at all times would
in fact ensure that policy is not too different from
a fully optimal policy commitment—not only in
an environment in which financial intermediation
is imperfect, but even when the main disturbances
to the economy originate in the financial sector and
imply large increases in the size of credit spreads.
Figure 8
Impulse Responses to a 1 Percent Increase in the Wage Markup, Under Four Alternative Monetary Policies
[Figure: impulse responses of Y, π, i^d, ω, and b (in percent, over 16 quarters) under the optimal policy, a strict inflation target, the Taylor rule, and a flexible inflation target.]
SOURCE: Cúrdia and Woodford (2009a).

It is important to note, however, that our results do not imply that there is no need for a
central bank to monitor or respond to financial
conditions. Under the targeting regime recommended here, it is necessary to keep track of the
various exogenous disturbances affecting the
economy, to correctly forecast the evolution of
inflation and output under alternative paths for
the policy rate—and this includes keeping track
of financial disturbances, when these are important. The simple Taylor rule, which does not
require the central bank to use information about
any variables other than inflation and real GDP,
would not be an adequate guide to policy.
3.2 A Spread-Adjusted Taylor Rule?
Might a Taylor rule instead be a sufficient
basis for setting interest rate policy if the standard
Taylor rule is augmented, as proposed by Taylor
(2008), by an adjustment for observed variations
in a credit spread, such as one of the LIBOR-OIS
spreads shown in Figure 1? For the kind of disturbance considered in Figure 10, this type of
adjustment would allow the policy rate to be cut
by more than a full percentage point even in the
absence of any decline in inflation or output—
which is exactly what is necessary to allow the kind of equilibrium responses associated with the optimal policy commitment.

Figure 9
Impulse Responses to a 1 Percent Increase in Gt Equal to 1 Percent of Steady-State Output, Under Four Alternative Monetary Policies
[Figure: impulse responses of Y, π, i^d, ω, and b (in percent, over 16 quarters) under the optimal policy, a strict inflation target, the Taylor rule, and a flexible inflation target.]
SOURCE: Cúrdia and Woodford (2009a).
Cúrdia and Woodford (2009b) consider modified Taylor rules of this kind in the context of the
same calibrated structural model used in Figures 7
through 10. While they find that the type of spread
adjustment proposed by Taylor (2008) would be
beneficial under some circumstances—such as
the type of disturbance considered in Figure 10—
the desirable degree of adjustment (and even
sometimes the sign of the adjustment) of the policy
rate in response to a change in credit spreads is
not independent of the nature of the disturbance
that causes spreads to change. Even in the case
of “purely financial” disturbances, like the kind
considered in Figure 10, the optimal degree of
response to changes in the credit spread depends
on the degree of anticipated persistence of the
disturbance.
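The general form of such a spread-adjusted rule can be sketched as follows (an illustration added here, not the article's specification; Taylor's proposal concerns the LIBOR-OIS spread, and the size of the adjustment coefficient is exactly the question raised in the preceding paragraph, so the value used below is an arbitrary assumption):

# A spread-adjusted Taylor rule of the general type proposed by Taylor (2008): the standard
# prescription is lowered when the credit spread rises above its normal level. The adjustment
# coefficient is an assumption; as noted above, the appropriate value depends on the nature
# and persistence of the disturbance that moves the spread.

def spread_adjusted_taylor(inflation, output_gap, spread, normal_spread=0.0,
                           r_star=0.02, pi_star=0.02, adjustment=1.0):
    """Policy-rate prescription with a credit-spread adjustment."""
    standard = r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap
    return standard - adjustment * (spread - normal_spread)

# Example: a purely financial shock raises the spread by 100 basis points while inflation
# and the output gap are unchanged; with a unit coefficient the rule cuts the rate one for one.
print(spread_adjusted_taylor(inflation=0.02, output_gap=0.0, spread=0.01))  # 0.04 - 0.01 = 0.03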
Figure 10
Impulse Responses to a Shift in the Function χt(L) that Triples the Size of ω̄t(L) for Each Value of L, Under Four Alternative Monetary Policies
[Figure: impulse responses of Y, π, i^d, ω, and b (in percent, over 16 quarters) under the optimal policy, a strict inflation target, the Taylor rule, and a flexible inflation target.]
SOURCE: Cúrdia and Woodford (2009a).

In fact, the targeting regime that we propose above automatically involves a spread adjustment of the general type proposed by Taylor (2008). Given that a change in the credit spread (and in the anticipated future path of credit spreads, which determines the marginal-utility gap, Ω_t, because of (4)) affects aggregate demand for any
given anticipated path of the policy rate—both
because of the difference between i_t^avg and the
policy rate indicated in (5) and because of the Ω̂t
terms in (3)—the consequences of a given path of
the policy rate for the inflation and output projections will be different when the path of credit
spreads changes, and so the path for the policy
rate required to produce projections that conform
to the target criterion will be different. Since larger
credit spreads (now and in the future) reduce
aggregate demand leading to lower inflation, the
policy rate will generally need to be reduced to
offset this effect.
Moreover, we believe that the targeting
approach represents a conceptually superior
way of introducing these considerations into
decisions about interest rate policy. The Taylor
(2008) proposal requires that one specify which
particular measure of credit spreads will be taken
into account in the modified reaction function.
(Taylor has proposed one very specific spread—
the LIBOR-OIS spread.) But in fact central banks
monitor many different credit spreads; and while
in our highly stylized model there is only a single
credit spread, a more empirically realistic model
would have to include several (as indeed the
FRB/U.S. model already does). Under our proposed targeting regime, each of these would be
relevant to setting interest rate policy: Variations
in each of the different spreads would be taken
into account to the extent that they enter the equations of the model used to project the paths of
inflation and output. Furthermore, under the
targeting approach, the adjustment of the policy
rate would not have to be a mechanical (and purely
contemporaneous) function of the change in the
credit spread; instead, one would automatically
respond differently depending on the nature of the
disturbance and would respond also to changes
in the expected future path of spreads as well as
to the current spread.

3.3 Policy When the Zero Lower Bound
Is Reached
In the discussion above (and in the simulations in Figures 7 through 10), it is assumed that
the zero lower bound on the policy rate is never
reached, and our theoretical model implies that
it should not be reached, in the case of small
enough shocks. But it is theoretically possible
for it to bind in the case of large-enough shocks
of certain types, and recent events in the United
States and elsewhere have shown that one cannot
presume that the constraint will never bind in
practice. (As a practical matter, it seems that it is
most likely to bind following severe disruptions
of the financial sector, as in the case of the Great
Depression, in Japan during the 1990s, and at
present.)
When the zero bound is a binding constraint,
it may not be possible for the central bank to use
interest rate policy to ensure fulfillment of the
target criterion (19) in all periods. Does this affect
the validity of our recommendation of this policy?
Although it may not be possible to fulfill the target
criterion at all times, that does not in itself imply
that it is not desirable to adjust interest rate policy
to fulfill the criterion when it can be satisfied.
Also, nothing here implies that, when policymakers deliberate interest rate policy, they should
forgo the question of whether there exists an
interest rate path that would satisfy the target
criterion.
But the fact that the lower bound is sometimes
a binding constraint also has consequences for
the appropriate policy target even under certain
circumstances when the zero lower bound would
not prevent one from achieving the target criterion
(18). The reason is that the severity of the distortions during the period when the lower bound is
binding should depend on the way in which
policy is expected to be conducted after the constraint ceases to bind. Hence, the policy that a
central bank should commit to follow in such a
period should be chosen with a view to the consequences of the anticipation of that policy during
the period when the zero bound binds.
Eggertsson and Woodford (2003) analyze this
issue in a model that is equivalent to a special
case of the model considered here. Let us again
consider the special case (mentioned in Section
3.1) in which there are no steady-state distortions,
no resources are used in financial intermediation,
and the fraction of bad loans is independent of
the scale of lending. Because in this case both the
credit spread and the marginal-utility gap evolve
exogenously, a second-order Taylor series approximation to the objective function (14), expanding
around the optimal (zero-inflation) steady state,
is exactly the same quadratic function of inflation
and the output gap as in the case of a representative-household model (the case considered by
Eggertsson and Woodford). The “intertemporal IS
relation” (3) and the aggregate-supply relation (6)
are also identical to those of the representative-household model, except for the presence of
additional additive disturbance terms involving
ω̂t and Ω̂t .
The optimal policy problem—which can be
stated as the choice of processes for inflation,
output, and the policy rate consistent with (3),
(6), and (13) each period to maximize the welfare
measure written in terms of output and inflation—is of the same form as the one analyzed by
Eggertsson and Woodford (2003), except with
additional possible interpretations of the exogenous disturbance terms. In particular, the extension of the model to incorporate credit frictions
provides a more empirically realistic interpretation
of the disturbance hypothesized by Eggertsson and
Woodford (2003), which makes the real policy
rate needed to maintain a constant zero output gap temporarily negative. Rather than postulate a sudden, temporary disappearance of real spending opportunities or a temporary reduction in the rate of time preference, we can instead attribute the situation to a temporary increase in credit spreads as a result of disruption of the financial sector.

Figure 11
Equilibrium Responses of the Policy Rate, Inflation, and the Output Gap, Under Two Alternative Monetary Policies
[Figure: panels A (interest rate), B (inflation), and C (output gap), in percent, plotted from quarter −5 to quarter 25, comparing the optimal policy with a strict zero-inflation target (π* = 0).]
NOTE: The figure represents equilibrium responses when the expected probability of loan default exogenously increases beginning in quarter zero and ending in quarter 15.
SOURCE: Eggertsson and Woodford (2003).
Eggertsson and Woodford (2003) show that it
can be a serious mistake for a central bank to be
expected to return immediately to the pursuit of
its normal policy target as soon as the zero bound
no longer prevents it from hitting that target. For
example, Figure 11 (reproduced from their paper)
compares the dynamic paths of the policy rate,
the inflation rate, and aggregate output under two
alternative monetary policies in the case of a real
disturbance (here interpreted as an exogenous
increase in the probability that loans are bad,
requiring intermediaries to increase the credit
spread by several percentage points) that begins
in period zero and lasts for 15 quarters, before
real fundamentals permanently return to their
original (“normal”) state.
In this case, if the financial disturbance were
never to occur, optimal policy would involve
maintaining a zero inflation rate, as this would
also imply a zero output gap in every period. After
the disturbance dissipates, one of the feasible policies is an immediate return to this zero-inflation
steady state (under the parameterization assumed
in the figure, this involves a nominal interest rate
of 4 percent), and this is optimal from the point
of view of welfare in all periods after the financial
disturbance dissipates. It is not, however, possible
to maintain the zero-inflation steady state at all
times, because during the financial disturbance
this would require the policy rate to equal –2 percent, which would violate the zero lower bound.
One of the policies considered in Figure 11
(dashed lines) is strict (forward-looking) inflation
targeting: The central bank uses interest rate policy to maintain a zero inflation rate whenever it
is not prevented by the zero lower bound on the
policy rate. When undershooting the inflation
target cannot be avoided, the policy rate is maintained at the lower bound. The other policy (solid
lines) is the optimal Ramsey policy, when the zero
lower bound is included among the constraints
on the set of possible equilibria. The forward-looking inflation-targeting policy is clearly much
worse, as it involves both a much more severe
output contraction and much more severe deflation during the period when the zero bound constrains policy.
The problem with the forward-looking inflation-targeting policy is that because the central
bank simply targets zero inflation from the time
that it again becomes possible to do so, all of the
deflation that occurs while the zero bound binds
is fully accommodated by the subsequent policy:
The central bank continues to maintain the price
level at whatever level it has fallen to. This results
in expected deflation during the entire period of
the financial disturbance, for deflation will continue as long as the financial disruption continues,
while no inflation will be allowed even if the
disturbance dissipates; this expected deflation
makes the zero bound on nominal interest rates
a higher lower bound on the real policy rate, making the contraction and deflation worse, giving
people reason to expect more deflation as long as
the disruption continues, and so on in a vicious
circle.
The outcome would be even worse if the
central bank were to seek to achieve the target
criterion (18) each period as soon as it becomes
possible to do so. This is because, once credit
spreads contract again, this policy would require
the central bank to target negative inflation and/or
a negative output gap (even though zero inflation
and a zero output gap would now be achievable),
simply because there had been a large negative
output gap in the recent past (when the zero
bound was a binding constraint); but the expectation of such policy would make the output contraction while the zero bound constrained policy
even more severe (justifying even tighter policy
immediately following the “exit” from the “liquidity trap,” and so on).
Under the optimal policy, there is instead a
commitment to maintain accommodative conditions for a brief interval after the financial disturbance has dissipated, even though the reduction in credit spreads means that this level of the policy rate is now expansionary, leading to a mild
boom and temporary inflation above the long-run
target level (of zero). The expectation that this
will occur during the “exit” from the trap results
in much less contraction of economic activity
and much less deflation, because it makes the
perceived real rate of interest lower at all times
while the policy rate is at zero (given that there
is in each period some probability that credit
spreads will shrink again in the next period, allowing mild inflation to occur). This expectation
results in less deflation and higher real activity
while the lower bound binds; and the expectation
that continuation of the financial stress will have
less drastic consequences is itself a substantial
factor in making those consequences much less
drastic—in a “virtuous circle” that exactly reverses
the logic of our analysis above.
While our analysis implies that it is desirable
for people to be able to expect that the “exit” from
the trap will involve mild inflation, it does not
follow that the possibility of occasionally hitting
the zero lower bound on the policy rate is a reason to aim for a substantial positive rate of inflation at all times (as proposed by Summers, 1991)
simply to ensure that a zero nominal interest rate
will always mean a sufficiently negative value of
the real policy rate. To the extent that a history-dependent inflation target of the kind called for
in Eggertsson and Woodford (2003) can be made
credible and understood by the public, it suffices
that the central bank be committed to bringing
about a temporarily higher rate of inflation only
on the particular occasions when the zero lower
bound has bound in the recent past, and not all
of the time.30
This analysis implies that a commitment to
maintain policy accommodation can play an
important role in mitigating the effects of the zero
lower bound on interest rates. One might reasonably ask for what length of time it is sensible to
commit to keep rates low, and in particular
whether it is really prudent to make any lengthy
commitment when it is hard for a central bank to
be certain that recovery may not come much
sooner than anticipated. The answer is that the
best way to formulate such a commitment is not
in terms of a period of time that can be identified
with certainty in advance, but rather in terms of
targets that must be met for the removal of policy
accommodation to be appropriate.
In fact, Eggertsson and Woodford (2003) show
that in the representative-household model (and
hence similarly in the special case described
above), optimal policy can be precisely described,
regardless of the nature of the exogenous disturbances,31 by a target criterion involving only
the path of the output gap–adjusted price level

defined in (19). Under the optimal rule, the central
bank has a target each period for p̃t that depends
only on the economy’s history through period t –1
and must use interest rate policy to achieve the
target, if this is possible without violation of the
zero lower bound; if the target is undershot even
with a zero policy rate, the policy rate is at any
rate reduced to zero—and the target for p̃_{t+1} is
increased in proportion to the degree of undershooting. In periods when the zero bound does
not bind, the target for the gap-adjusted price level
is not adjusted, and the target criterion is the same
as the one discussed in Section 3.1.
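The adjustment mechanism just described can be written schematically as follows (a stylized illustration added here, not the authors' algorithm; the proportionality factor is an assumption):

# Stylized sketch of the history-dependent rule described above: the central bank pursues a
# target for the gap-adjusted price level p~_t, and if the zero bound forces an undershoot,
# next period's target is raised in proportion to the shortfall. The factor alpha is assumed.

def update_price_level_target(current_target, achieved_p_tilde, alpha=1.0):
    """Raise next period's target for p~ in proportion to any undershooting this period."""
    shortfall = max(current_target - achieved_p_tilde, 0.0)
    return current_target + alpha * shortfall

# Example: the target (in log units) is 0, but with the policy rate at zero the gap-adjusted
# price level reaches only -0.02; next period's target becomes +0.02.
print(update_price_level_target(0.0, -0.02))  # -> 0.02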
Actually, the adjustments of the target are not
of great importance, even when the zero bound
does bind: Eggertsson and Woodford (2003) show
that almost all of the improvement in stabilization
achievable under the optimal policy commitment
can be obtained by simply committing to a target
criterion of the form in (19) with a constant target
p*. The crucial feature of the optimal policy is
that the target for p̃t must not be allowed to fall
as a result of having undershot the target in past
periods. Hence one of the approximate characterizations of optimal policy proposed in Section 3.1
continues to provide a good approximation to
optimal policy even when the zero lower bound
sometimes binds: It is simply important that the
commitment be to the level form of the target criterion (19) rather than to the growth rate form (18).

30. Eggertsson and Woodford (2003) compare the welfare levels associated with alternative constant inflation targets and find that the existence of an occasionally binding zero lower bound does indeed make the optimal inflation target higher than it would otherwise be, if one must choose from among this very restrictive class of policies. But they show that even the best policy in that class involves much larger average distortions than a price-level targeting policy, even though a price-level targeting policy implies a long-run average inflation rate of zero.

31. Their analysis allows for both exogenous variations in the “natural rate of interest” (which means an additive exogenous term in the intertemporal IS relation) and in the “cost-push” term (which means an additive exogenous term in the Phillips-curve tradeoff), evolving according to arbitrary stochastic processes. Since the effects of the financial frictions in (3) and (6) are to add additional terms involving Ω̂t that can be viewed as a combination of these two types of shifts, the optimal target criterion derived by Eggertsson and Woodford (2003) continues to apply—under the special assumptions stated above—even in the presence of time-varying credit frictions.

4. OPTIMAL POLICY: CREDIT POLICY

We turn now to the final of our three independent dimensions of central bank policy, namely, adjustment of the composition of the asset side of the central bank's balance sheet, taking as given the overall size of the balance sheet (determined by the reserve-supply decision discussed
in Section 2). According to the traditional doctrine
of “Treasuries only,” the central bank should not
vary the composition of its balance sheet as a
policy tool. Instead, it should avoid both balance-sheet risk and the danger of politicization by holding only (essentially riskless) Treasury securities
at all times, while varying the size of its balance
sheet to achieve its stabilization goals for the
aggregate economy.32
Apart from these prudential concerns, if private financial markets can be relied on to allocate
capital efficiently, it is hard to argue that there
would be any substantial value to allowing the
central bank this additional dimension of policy.
Eggertsson and Woodford (2003) present a formal
irrelevance proposition in the context of a representative-household general-equilibrium model.
In their model, the assets purchased by the central
bank have no consequences for the equilibrium
evolution of output, inflation, or asset prices—
and this is true regardless of whether the central
bank purchases long-term or short-term assets,
nominal or real assets, riskless or risky assets, and
so on. In addition, even in a model with heterogeneity of the kind considered here, the composition of the central bank’s balance sheet would
be irrelevant if we were to assume frictionless
private financial intermediation, since private
intermediaries would be willing to adjust their
portfolios to perfectly offset any changes in the
portfolio of the central bank.
This irrelevance result does not hold, however, in the presence of credit frictions of the kind
assumed in Section 1; so we can also consider
the optimal use of this additional dimension of
policy if we are willing to suppose that the prudential arguments against the central bank’s
involvement in the allocation of credit should
not be determinative, at least in the case of sufficiently severe financial disruptions. In our model,
an increase in Ltcb can improve welfare on two
grounds: For a given volume of private borrowing,
b_t, an increase in L_t^cb allows the volume of private
lending, Lt, to fall, which should reduce both the
resources Ξ_t^p consumed by the intermediary sector and the equilibrium credit spread, ω_t (due to the equilibrium relation (7)). Under plausible conditions, our model implies both a positive shadow value ϕ_Ξ,t of reductions in Ξ_t (the Lagrange multiplier associated with the resource constraint (2)) and a positive shadow value ϕ_ω,t of reductions in ω_t (the Lagrange multiplier associated with the constraint (4)); hence an increase in L_t^cb should be desirable on both grounds.

32. See Goodfriend (2009) for a discussion of this view and a warning about the dangers of departing from it.
In the absence of any assumed cost of central
bank credit policy, one can easily obtain the result
that it is always optimal for the central bank to
lend an amount sufficient to allow an equilibrium with Lt = 0; that is, the central bank should
substitute for private credit markets altogether.
Of course, we do not regard this as a realistic
conclusion. As a simple way of introducing into
our calculations the fact that the central bank is
unlikely to have a comparative advantage at the
activity of credit allocation under normal circumstances, we assume that central bank lending consumes real resources in a quantity Ξ^cb(L_t^cb), by analogy with our assumption that real resources, Ξ_t^p, are consumed by private intermediaries. The function Ξ^cb(L) is assumed to be increasing and at least weakly convex; in particular, we assume that Ξ^cb′(0) > 0 so that there is a positive marginal
resource cost of this activity, even when the central bank starts from a balance sheet made up
entirely of Treasury securities.

4.1 When Is Active Credit Policy Justified?
The first-order conditions for optimal choice of L_t^cb then become

(20)  ϕ_Ξ,t [Ξ_t^p′(b_t − L_t^cb) − Ξ^cb′(L_t^cb)] + ϕ_ω,t [Ξ_t^p″(b_t − L_t^cb) + χ_t″(b_t − L_t^cb)] ≤ 0,

(21)  L_t^cb ≥ 0,

together with the complementary slackness condition that at least one of conditions (20) or (21) must hold with equality in each period. (Here, the first expression in square brackets in (20) is the partial derivative of Ξ_t with respect to L_t^cb, holding constant the value of total borrowing, b_t; the second expression in square brackets is the partial derivative of ω_t with respect to L_t^cb under the same assumption.)
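The role of the complementary slackness conditions can be seen in a minimal numerical sketch (added here for illustration; the functional forms, shadow values, and numbers are entirely hypothetical and are not the article's calibration): one evaluates the left-hand side of (20) at L_t^cb = 0 and checks its sign.

# Hypothetical functional forms (linear marginal costs); every number is an assumption.

def Xi_p_prime(L):    return 0.005 + 0.020 * L   # marginal resource cost of private lending
def Xi_p_second(L):   return 0.020               # its slope
def chi_second(L):    return 0.010               # curvature of the bad-loan function
def Xi_cb_prime(Lcb): return 0.060               # marginal resource cost of central bank lending

def lhs_of_20(b, L_cb, phi_Xi=1.0, phi_omega=1.0):
    """Left-hand side of (20): marginal welfare gain from another unit of central bank lending."""
    L = b - L_cb  # private lending
    return (phi_Xi * (Xi_p_prime(L) - Xi_cb_prime(L_cb))
            + phi_omega * (Xi_p_second(L) + chi_second(L)))

# Complementary slackness: at the corner L_cb = 0, "Treasuries only" is optimal if (20) holds.
value_at_corner = lhs_of_20(b=1.0, L_cb=0.0)
print("corner (Treasuries only) optimal" if value_at_corner <= 0 else "optimal L_cb > 0")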
A “Treasuries only” policy is optimal in the
event of a corner solution, in which (20) is an
inequality, as will be the case if Ξ^cb′(0) is large
enough. In our view, it is probably most reasonable to calibrate the model so that this is true in
steady state. Then not only will the optimal policy involve “Treasuries only” in the steady state,
but (assuming that the inequality is strict at the
steady state) this will continue to be true in the
case of any stochastic disturbances that are small
enough. However, it will remain possible for the
optimal policy to require Ltcb > 0 in the case of
certain large-enough disturbances. This is especially likely to be true in the case of large-enough
disruptions of the financial sector of a type that
increase the marginal resource cost of private intermediation (the value of Ξ^p′) and/or the degree to which increases in private credit require a larger credit spread (the value of ω̄′).
However, not all “purely financial” disturbances—by which we mean exogenous shifts in the functions Ξ̄_t^p(L) or χ_t(L) of a type that increase the equilibrium credit spread ω̄_t(L) for a given volume of private credit—are equally likely to justify an active central bank credit policy on the grounds just mentioned.33 To illustrate this, let us consider four different possible purely financial disturbances, each of which will be assumed to increase the value of ω̄_t(L̄) by the same number of percentage points. Here, by an additive shock, we mean one that translates the schedule ω̄_t(L) vertically by a constant amount; a multiplicative shock will instead multiply the entire schedule ω̄_t(L) by some constant factor greater than 1. We shall also distinguish between disturbances that change the function Ξ̄_t(L) (“Ξ shocks”) and disturbances that change the function χ_t(L) (“χ shocks”). Thus a “multiplicative χ shock” is a change in the function χ_t(L) as a consequence of which the schedule ω̄_t(L) is multiplied by a factor greater than 1 for all values of L, and so on.

33. Our result here is quite different from that in Section 3, where the consequence of a “purely financial” disturbance for optimal interest rate policy, taking as given the path of central bank lending to the private sector, depends (to a first approximation) only on the size of the shift in ω̄_t(L), which is why we do not bother to show the optimal responses to more than one type of purely financial disturbance.
With the model calibrated as in the numerical
exercises in Figures 7 through 10, Figure 12 plots
the dynamic response of the sum of the three positive terms on the left-hand side of (20) to each of
these four types of purely financial disturbances.
In these simulations, both interest rate policy and
reserve-supply policy are assumed to be optimal,
as discussed in Sections 2 and 3. We assume in
each case that there is no central bank lending to
the private sector in the equilibrium being computed, but we ask (in the equilibrium computed
under this assumption) what the smallest value
of Ξ^cb′(0) is at each point in time, for which this would be consistent with the first-order condition (20). (Thus an increase in the quantity plotted
means that the marginal benefit of central bank
credit policy is increased, even if in our calculations no central bank lending actually occurs.)
We divide the sum of the three terms by the value
of ϕΞ,t , so that the quantity plotted is precisely
the threshold value of Ξ^cb′(0), expressed in terms
of an interest rate spread. (Since it is an interest
rate spread, we multiply by 4 so that the quantity
on the vertical axis of the figure is in units of percentage points per annum.) In the figure, each of
the four disturbances is of a size that increases the
value of ω̄_t(L̄) by 4 percentage points per annum
(i.e., from 2.0 percent to 6.0 percent).
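The distinction between these disturbances can be made concrete with a small sketch (added for illustration; the baseline schedule and the steady-state volume of credit below are arbitrary assumptions chosen only so that the spread starts at 2 percent): an additive and a multiplicative shock can be scaled to raise ω̄_t(L̄) by the same 4 percentage points, yet only the multiplicative shock also steepens the schedule, so the two enter the terms of condition (20) differently.

# Additive versus multiplicative shifts of the credit-spread schedule omega_bar(L), each scaled
# so that the spread at the steady-state volume of credit rises from 2 to 6 percent per annum.
# The baseline schedule and L_bar are illustrative assumptions, not the article's calibration.

L_bar = 1.0

def omega_bar(L):      return 0.01 + 0.01 * L       # baseline: 2 percent at L = L_bar
def additive(L):       return omega_bar(L) + 0.04   # shifts the whole schedule up by 4 points
def multiplicative(L): return 3.0 * omega_bar(L)    # triples the schedule (2 -> 6 percent at L_bar)

for name, schedule in [("additive", additive), ("multiplicative", multiplicative)]:
    level = schedule(L_bar)
    slope = (schedule(L_bar + 0.01) - schedule(L_bar)) / 0.01   # numerical derivative
    print(f"{name:15s} spread at L_bar = {level:.3f}, slope = {slope:.3f}")
# Both shocks raise the spread at L_bar by 4 percentage points, but only the multiplicative
# shock steepens the schedule, so they need not have the same implications for credit policy.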
Figure 12
Response of the Critical Threshold Value of Ξ^cb′(0) for a Corner Solution, Under Four “Purely Financial” Disturbances
[Figure: the threshold value (in percent per annum, plotted over 20 quarters) for each of the four disturbances: multiplicative χ, multiplicative Ξ, additive χ, and additive Ξ.]
NOTE: For each disturbance, ω̄_t(L̄) increases by 4 percentage points.

34. Note that this quantity is well above the marginal resource cost of private lending in the steady state, which we have calibrated at 2.0 percent per annum, because our baseline calibration implies a relatively inelastic private supply of credit: ω̄_t(L) is steeply increasing with L.

In the absence of any disturbances, the steady-state value of this quantity is a little less than 3.5 percentage points per annum. This means that a marginal resource cost of central bank loan origination of 3.5 percent or higher will suffice to justify our proposal above—that in the steady state the optimal quantity of central bank credit is zero.34 Let us suppose that Ξ^cb′(0) is equal to 4.0 percent. Then in the absence of shocks, a corner solution with Treasuries only is optimal. However, either a “multiplicative Ξ shock” or an “additive Ξ shock” of the size assumed would
cause condition (20) to be violated in the case of
a corner solution; hence optimal policy would
require a positive quantity of central bank lending. (In the case of the “multiplicative Ξ shock,”
this would be true even if Ξ^cb′(0) were equal to
5.0 percentage points.)
On the other hand, even in the case of the
“multiplicative Ξ shock,” the threshold required
to justify a corner solution is only above 4 percent
in the quarter of the shock and the quarter immediately following it—despite the fact that in our
numerical experiment the disturbance is assumed
to have an autocorrelation coefficient of 0.9, so
that the shift in the ω̄_t(L) schedule is still 65 percent of its initial magnitude a year later. This suggests that, even in the case of those disturbances
for which the welfare benefits of central bank
credit policy are greatest, departure from the corner solution is likely to be justified only for a relatively brief period of time.

4.2 An Example with Active Credit Policy
As an example of how optimal credit policy
can, under some circumstances, substantially
alter the economy’s response to a financial disruption, Figure 13 considers the optimal response
to a “multiplicative Ξ shock,” under a calibration
in which Ξ^cb′(0) is assumed to be low enough so
that even in the steady state a corner solution is
not optimal.35 (While this is not the case that we regard as most realistic, it simplifies the calculations reported in Figure 13, since it implies that constraint (21) never binds. We leave for future work analysis of the more interesting case, in which (21) binds in some periods and not in others.) The figure plots the impulse responses under two alternative assumptions about policy: With credit, central bank policy is optimal along all three dimensions (and L_t^cb varies over time); with no credit, L_t^cb is constrained to equal the steady-state value L̄^cb at all times,36 while interest-rate policy and reserve-supply policy are optimal.

35. This alternative calibration is chosen to imply that in the steady state, only 5 percent of total credit, b_t, is supplied by the central bank.

Figure 13
Impulse Responses to a Shift in Ξ̄t(L) that Triples the Size of ω̄t(L) for Each Value of L, Under Optimal Interest Rate Policy and Two Alternative Assumptions About Credit Policy
[Figure: impulse responses of Y, π, i^d, ω, b, and L^cb (in percent, over 16 quarters), with an active credit policy response (“Credit”) and with central bank lending held at its steady-state level (“No Credit”).]
36. We impose the constraint that L_t^cb must equal L̄^cb, rather than zero, in the no-credit case, so that the steady state is the same under both policies.
In addition to the responses of the five variables plotted in Figures 7 through 10, Figure 13 also plots the response of an additional variable, L̂_t^cb, indicating the deviation of central bank credit from its steady-state value, expressed as a fraction of total steady-state credit, b̄ (in percentage points).
Under an optimal use of credit policy, central
bank lending to the private sector increases substantially in response to the financial disturbance
(central bank lending increases from 5 percent of
total credit to a little over 9 percent). As a result,
the large increase in the credit spread that would
otherwise occur as a result of the shock is essentially prevented from occurring (so that the credit
spread remains close to its steady-state level of
2.0 percent per annum). As a further consequence,
it is not necessary under this policy to cut the
policy rate sharply, as would otherwise be
required by an optimal interest rate policy. The
substantial contraction of credit that would otherwise occur (an eventual contraction of aggregate credit by more than 2 percent a year after
the shock) is largely avoided, and the modest
effects on output and inflation that would occur
even under an optimal interest rate policy in the
absence of an active credit policy are also largely
avoided.
This example indicates that, under at least
some circumstances, our model would support
a fairly aggressive use of active credit policy for
stabilization purposes. We must caution, however, that these results are quite dependent upon
assumptions about the nature of the financial disturbance. It is equally possible to conclude that
central bank credit should be contracted (assuming that it would be positive to begin with) in
response to a disturbance that increases credit
spreads. If the only form of purely financial disturbance is an “additive χ disturbance,” and we assume that Ξ^cb(L^cb) is a linear function, then none of the functions Ξ^p′(L), Ξ^p″(L), or χ″(L) is time varying and Ξ^cb′ is a constant. In this case, the requirement that (20) hold with equality determines the volume of private credit, L_t, as a time-invariant function of ϕ_ω,t/ϕ_Ξ,t. In the case of a
disturbance that increases the credit spread, the
resulting decline in credit demand, bt, means that,
for credit supply Lt to be stabilized, Ltcb would
have to contract; so unless ϕω,t /ϕΞ,t changes to
such an extent that the value of Lt consistent with
(20) falls as much as bt does,37 it is optimal for
Ltcb to contract (as Figure 12 would also suggest).
In a case of this kind, active credit policy would
actually cause credit to contract by more (and
credit spreads to increase by more) than they
would if the supply of central bank credit did
not respond to the shock.
37. Our numerical experiments indicate that this can easily fail to be the case.
4.3 Segmented Credit Markets
In the simple model expounded above, there
is a single credit market and single borrowing
rate, i_t^b, charged for loans in this market. Our discussion of central bank credit policy has correspondingly simply referred to the optimal quantity
of central bank lending to the private sector overall, as if the allocation of this credit is not an issue.
In reality, of course, there are many distinct credit
markets and many different parties to which the
central bank might consider lending. Moreover,
since there is only a potential case to be made for
central bank credit policy when private financial
markets are severely impaired, it does not make
sense to assume efficient allocation of credit among
different classes of borrowers by the private sector,
so that only the total credit extended by the central
bank would matter. Our simple discussion here
has sought merely to clarify the connection that
exists, in principle, between decisions about credit
policy and the other dimensions of central bank policy.
An analysis of credit policy that could actually be
used as a basis for credit policy decisions would
instead need to allow for multiple credit markets,
with imperfect arbitrage between them.
We do not here attempt an extension of our
model in that direction. (A simple extension
would be to allow for multiple types of “type b”
households, each only able to borrow in a particular market with its own borrowing rate and
market-specific frictions for the intermediaries
lending in each of these markets.) We shall simply
note that in such an extension there would be a
distinct first-order condition, analogous to conditions (20) and (21), for each of the segmented
credit markets. There would be no reason to
assume that the question of whether active credit
policy is justified should have a single answer at
a given point in time: Lending might be justified
in one or two specific markets while the corner
solution remained optimal in the other markets.
The conditions that should be appealed to in
order to justify central bank lending are more
microeconomic than macroeconomic: They relate
to the severity of the distortions that have arisen
in particular markets and to the costs of intervention in those particular markets, rather than to
aggregate conditions. Thus the main determinants
of whether central bank credit policy is justified—
when it is justifiable to initiate active policy and
when it would be correct to phase out such programs—should not be questions such as whether
the zero lower bound on interest rate policy binds
or whether the central bank continues to undershoot the level of real GDP that it would like to
attain. While aggregate conditions will be one
factor that affects the shadow value of marginal
reductions in the size of credit spreads (represented by the multiplier ϕω,t in (20)), the value of
this multiplier will likely be different for different
markets and the main determinants of variations
in it are likely to be market specific. This will
apply even more to the other variables that enter
into the first-order condition (20).
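A schematic way to see why the answer is market by market rather than economy-wide is to compare, in each market separately, the marginal benefit of the first unit of central bank lending with its marginal cost there. The market names and numbers below are purely hypothetical placeholders, not objects from the model:

# Purely illustrative (hypothetical markets and numbers): lend in a market only if the
# marginal benefit of the first unit of central bank credit there exceeds its marginal cost.
markets = {
    # market: (marginal benefit at zero CB lending, marginal cost at zero CB lending)
    "commercial_paper": (0.9, 0.4),   # severely disrupted market: intervention justified
    "mortgages":        (0.3, 0.5),   # corner solution (no lending) remains optimal
    "interbank":        (0.2, 0.6),
}

for name, (mb0, mc0) in markets.items():
    decision = "lend" if mb0 > mc0 else "stay at corner"
    print(name, decision)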
Hence it would be a mistake to think of credit
policy as a substitute for interest rate policy, an
alternative tool that can be used to achieve the
same goals and that should be used to achieve the
central bank’s target criterion for inflation and
the output gap when interest rate policy alone is
unable to. Such a concept would be dangerous for
two reasons. On the one hand, it would direct
attention away from the most relevant costs and
benefits when thinking about the appropriate
scale, timing, and allocation of active credit policy.
And on the other hand, it could also allow the
central bank to avoid recognition of the extent to
which the correct target criterion for interest rate
policy needs to be modified as a result of the zero
lower bound—in particular, to avoid the challenge
of shaping expectations about interest rate policy
after the lower bound ceases to bind, on the
ground that credit policy (or “quantitative easing”)
should allow the bank’s usual target criterion to
be achieved continuously, without any need for
signaling about unconventional future interest
rate policy as compensation for past target misses.

CONCLUSIONS
We have shown that a canonical New
Keynesian model of the monetary transmission
mechanism can be extended in a fairly simple
way to allow analysis of additional dimensions
of central bank policy that have been at center
stage during the recent global financial crisis—
variations in the size and composition of the
central bank balance sheet and in the interest
rate paid on reserves—alongside the traditional
monetary policy issue of the choice of an operating
target for the federal funds rate (or some similar
overnight inter-bank rate elsewhere). We have also
considered the consequences for monetary policy
analysis both of nonzero credit spreads all of the
time and of financial disruptions that greatly
increase the size of those spreads for a period of
time, as well as the consequences of the fact that the zero lower bound for short-term
nominal interest rates is sometimes a binding
constraint on interest rate policy.
One of our most important conclusions is that
these issues can be addressed in a framework that
represents a straightforward extension of the kind
of model often used for monetary policy analysis
in the past. This allows both the considerations
emphasized in the traditional literature and the
more novel considerations brought to the fore by
recent events to be taken into account, within a
single coherent framework. This integration is
particularly important, in our view, for clear thinking about the way in which the transition from
the current emergency policy regime to a more
customary policy framework should be handled
as financial conditions normalize. Because of the
importance of expectations regarding future policy
in determining market outcomes now, we believe
that clarity about “exit strategy” is important for
the success of policy even during periods of
severe disruption of financial markets.
Another important implication of our model
is that interest rate policy should continue to be
a central focus of monetary policy deliberations,
despite the existence of the other dimensions of
policy discussed here, and despite the existence
of time-varying credit frictions that complicate
the relationship between the central bank’s policy
rate and financial conditions more broadly. While
welfare can also be affected by reserve-supply
policy, we argue that this dimension of policy
should be determined by a simple principle that
does not require any discretionary adjustments
in light of changing economic conditions: Intermediaries should be satiated in reserves at all
times, by maintaining an interest rate on reserves
at or close to the current target for the policy rate.
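A minimal sketch of this principle (hypothetical numbers, not a calibration): the opportunity cost to an intermediary of holding an extra dollar of reserves is the spread between the policy rate and the rate paid on reserves, so paying (approximately) the target rate on reserves drives that cost to (approximately) zero at any level of the policy rate.

# Sketch (hypothetical numbers): opportunity cost of reserves = policy rate - interest on reserves.
def reserve_opportunity_cost(policy_rate, interest_on_reserves):
    return policy_rate - interest_on_reserves

for policy_rate in (0.0025, 0.02, 0.05):   # hypothetical policy-rate targets
    ior = policy_rate                      # pay (about) the target rate on reserves, as discussed above
    print(policy_rate, reserve_opportunity_cost(policy_rate, ior))   # spread is zero in every case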
And while welfare can similarly be affected
by central bank credit policy, to the extent that
nontrivial credit frictions exist, we nonetheless
believe that under normal circumstances a corner
solution (“Treasuries only”) is likely to represent
the optimal composition of the central bank balance sheet. Decisions about active credit policy
then will be necessary only under relatively
unusual circumstances, and it will be desirable
to phase out special credit programs relatively
rapidly after the disturbances that have justified
their introduction. We thus do not anticipate that
it should be necessary to routinely make state-contingent adjustments of central bank policy
along multiple dimensions, even if recent events
suggest that it is desirable for central banks to have
the power to act along additional dimensions
under sufficiently exigent circumstances.
Finally, our results suggest that the traditional
emphasis in interest rate policy deliberations on
the consequences of monetary policy for the projected evolution of inflation and aggregate real
activity is not mistaken, even taking into account
the consequences for the monetary transmission
mechanism of time-varying credit frictions. At
least in the context of the simple model of credit
frictions proposed here, optimal interest rate
policy can be characterized to a reasonable degree
of approximation by a target criterion that involves
the paths of inflation and of an appropriately
defined output gap, but no other endogenous
target variables. This does not mean that central
banks should remain indifferent toward changes
in financial conditions; to the contrary, credit
spreads (and perhaps other measures of financial
market distortions as well) should be closely
monitored and taken into account in judging the
forward path of interest rate policy necessary for
conformity with the target criterion. However,
financial variables need not be taken themselves
as targets of monetary policy.
The main respect in which the appropriate
target criterion for interest rate policy should be
modified to take account of the possibility of financial disruptions is by aiming at a target path for
the price level (ideally, for an output gap–adjusted
price level), rather than for a target rate of inflation
looking forward, as a forward-looking inflation
target accommodates a permanent decline in the
price level after a period of one-sided target misses
due to a binding zero lower bound on interest
rates. Our analysis implies that a credible commitment to the right kind of “exit strategy” should
substantially improve the ability of monetary
policy to deal with the unusual challenges posed
by a binding zero lower bound during a deep financial crisis; and, to the extent that this is true, the
development of an integrated framework for policy
deliberations, suitable both for crisis periods and
for more normal times, is a matter of considerable
urgency for the world’s central banks.
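A simple numerical sketch, with hypothetical numbers rather than anything drawn from the model, illustrates the difference between the two kinds of target: suppose policy aims at 2 percent inflation but the zero bound forces two years of zero inflation. A purely forward-looking inflation target lets bygones be bygones, so the price level ends up permanently below its original trend path; a price-level target instead calls for temporarily higher inflation until that path is regained.

# Sketch (hypothetical numbers): price level after 10 years under the two targets,
# when the zero bound forces inflation to zero for the first 2 years.
target_pi = 0.02
zlb_years, horizon = 2, 10

p_target_path = (1 + target_pi) ** horizon                          # the price-level path policy would like
p_inflation_targeting = (1 + target_pi) ** (horizon - zlb_years)    # bygones are bygones: shortfall never made up
p_price_level_targeting = p_target_path                             # the original path is eventually regained

gap = p_target_path / p_inflation_targeting - 1
print("permanent shortfall under forward-looking inflation targeting: about {:.1%}".format(gap))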

REFERENCES
Auerbach, Alan J. and Obstfeld, Maurice. “The Case for Open-Market Purchases in a Liquidity Trap.” American
Economic Review, March 2005, 95(1), pp. 110-37.
Benigno, Pierpaolo and Woodford, Michael. “Inflation Stabilization and Welfare: The Case of a Distorted Steady
State.” Journal of the European Economic Association, December 2005, 3(6), pp. 1185-236.
Bernanke, Ben S. “The Crisis and the Policy Response.” Stamp Lecture, London School of Economics, January 13,
2009; www.federalreserve.gov/newsevents/speech/bernanke20090113a.htm.
Christiano, Lawrence J.; Motto, Roberto and Rostagno, Massimo. “Financial Factors in Business Cycles.”
Unpublished manuscript, Northwestern University, November 2007.
Clarida, Richard H.; Gali, Jordi and Gertler, Mark. “The Science of Monetary Policy: A New Keynesian Perspective.”
Journal of Economic Literature, December 1999, 37(4), pp. 1661-707.
Cúrdia, Vasco and Woodford, Michael. “Credit Frictions and Optimal Monetary Policy.” Unpublished manuscript,
Federal Reserve Bank of New York, July 2009a.
Cúrdia, Vasco and Woodford, Michael. “Credit Spreads and Monetary Policy.” NBER Working Paper 15289,
National Bureau of Economic Research, August 2009b; www.nber.org/papers/w15289.pdf.
Cúrdia, Vasco and Woodford, Michael. “The Central-Bank Balance Sheet as an Instrument of Monetary Policy.”
Unpublished manuscript, Federal Reserve Bank of New York, April 2010.
Eggertsson, Gauti B. and Woodford, Michael. “The Zero Bound on Interest Rates and Optimal Monetary Policy.”
Brookings Papers on Economic Activity, 2003, 34(1), pp. 139-211.
Faia, Ester and Monacelli, Tommaso. "Optimal Interest Rate Rules, Asset Prices, and Credit Frictions." Journal
of Economic Dynamics and Control, October 2007, 31(10), pp. 3228-54.
Friedman, Milton. “The Optimum Quantity of Money,” in Milton Friedman, ed., The Optimum Quantity of
Money and Other Essays. Chicago: Aldine, 1969, pp. 1-50.
Gerali, Andrea; Neri, Stefano; Sessa, Luca and Signoretti, Federico M. “Credit and Banking in a DSGE Model.”
Unpublished manuscript, Bank of Italy, June 2008.
Gertler, Mark and Karadi, Peter. “A Model of Unconventional Monetary Policy.” Unpublished manuscript,
New York University, April 2009.
Goodfriend, Marvin. “Interest on Reserves and Monetary Policy.” Federal Reserve Bank of New York Economic
Policy Review, May 2002, 8(1), pp. 77-84; www.newyorkfed.org/research/epr/02v08n1/0205good.pdf.
Goodfriend, Marvin. "Central Banking in the Credit Turmoil: An Assessment of Federal Reserve Practice."
Unpublished manuscript, Carnegie-Mellon University, May 2009.
Goodfriend, Marvin and King, Robert G. “The New Neoclassical Synthesis and the Role of Monetary Policy,”
in B. Bernanke and J. Rotemberg, eds., NBER Macroeconomics Annual 1997. Volume 12. Cambridge, MA:
MIT Press, 1997, pp. 231-96.
Greenspan, Alan. “Performance of the U.S. Economy.” Testimony before the Committee on the Budget, U.S.
Senate, January 21, 1997.
Hall, Robert E. “Monetary Strategy with an Elastic Price Standard,” in Price Stability and Public Policy.
Federal Reserve Bank of Kansas City, 1984; www.kansascityfed.org/publicat/sympos/1984/S84HALL.PDF.
Mehra, Rajnish; Piguillem, Facundo and Prescott, Edward C. “Intermediated Quantities and Returns.” Research
Department Staff Report 405, Federal Reserve Bank of Minneapolis, August 2008 version;
www.minneapolisfed.org/publications_papers/pub_display.cfm?id=1115.

Mishkin, Frederic S. “Monetary Policy Flexibility, Risk Management and Financial Disruptions.” Speech at the
Federal Reserve Bank of New York, January 11, 2008;
www.federalreserve.gov/newsevents/speech/mishkin20080111a.htm.
Reis, Ricardo. "Interpreting the Unconventional U.S. Monetary Policy of 2007-09." Unpublished manuscript,
Columbia University, August 2009.
Rotemberg, Julio J. and Woodford, Michael. “An Optimization-Based Econometric Framework for the Evaluation
of Monetary Policy,” in B. Bernanke and J. Rotemberg, eds., NBER Macroeconomics Annual 1997. Volume 12.
Cambridge, MA: MIT Press, 1997, pp. 297-346.
Summers, Lawrence. "How Should Long-Term Monetary Policy Be Determined?" Journal of Money, Credit, and
Banking, August 1991, 23(3 Part 2), pp. 625-31.
Svensson, Lars E.O. “Inflation Forecast Targeting: Implementing and Monitoring Inflation Targeting.” European
Economic Review, June 1997, 41(6), pp. 1111-46.
Svensson, Lars E.O. “Monetary Policy with Judgment: Forecast Targeting.” International Journal of Central
Banking, May 2005, 1(1), pp. 1-54.
Svensson, Lars E.O. and Woodford, Michael. “Implementing Optimal Policy through Inflation-Forecast Targeting,”
in B.S. Bernanke and M. Woodford, eds., The Inflation Targeting Debate. Chicago: University of Chicago Press,
2005, pp. 19-92.
Taylor, John B. “Discretion versus Policy Rules in Practice.” Carnegie-Rochester Conference Series on Public
Policy, December 1993, 39(1), pp. 195-214.
Taylor, John B. “Monetary Policy and the State of the Economy.” Testimony before the Committee on Financial
Services, U.S. House of Representatives, February 26, 2008.
van Rixtel, Adrian. “The Exit from Quantitative Easing (QE): The Japanese Experience.” Prepared for the
Symposium on Building the Financial System of the 21st Century: An Agenda for Japan and the United
States, Harvard Law School, October 2009.
Woodford, Michael. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton
University Press, 2003.
Woodford, Michael. “Forecast Targeting as a Monetary Policy Strategy: Policy Rules in Practice.” NBER Working
Paper 13716, National Bureau of Economic Research, December 2007; http://www.nber.org/papers/w13716.pdf.
Woodford, Michael. “Optimal Monetary Stabilization Policy.” Unpublished manuscript, Columbia University,
September 2009.

New Monetarist Economics: Methods
Stephen Williamson and Randall Wright
This essay articulates the principles and practices of New Monetarism, the authors’ label for a
recent body of work on money, banking, payments, and asset markets. They first discuss methodological issues distinguishing their approach from others: New Monetarism has something in
common with Old Monetarism, but there are also important differences; it has little in common
with Keynesianism. They describe the principles of these schools and contrast them with their
approach. To show how it works in practice, they build a benchmark New Monetarist model and
use it to study several issues, including the cost of inflation, liquidity, and asset trading. They also
develop a new model of banking. (JEL E0, E1, E4, E5)
Federal Reserve Bank of St. Louis Review, July/August 2010, 92(4), pp. 265-302.

1. INTRODUCTION

The purpose of this essay is to articulate
the principles and practices of a school
of thought we call New Monetarist
Economics. It is a companion piece to Williamson
and Wright (forthcoming), which provides more
of a survey of the models used in this literature
and focuses on technical issues to the neglect of
methodology or history of thought. Although we
do present some technical material in order to
show how the approach works in practice, here
we also want to discuss in more detail what we
think defines New Monetarism.1 Although there
is by now a large body of research in the area,
perhaps our labeling of it merits explanation. We call ourselves New Monetarists because we find much that is appealing in Old Monetarist economics, epitomized by the writings of Milton Friedman and his followers, although we also disagree with some of their ideas in important ways. We have little in common with Old or New Keynesians, in part because of the way they approach monetary economics and the microfoundations of macroeconomics and in part because of their nearly exclusive focus on nominal rigidities as the key distortion shaping policy. Below we describe in more detail what we see as the defining principles of these various schools and try to differentiate our approach.

1 The other paper is forthcoming as a chapter for the new Handbook of Monetary Economics, edited by Benjamin Friedman and Michael Woodford, and early versions included much of the discussion contained here. But to keep the chapter focused, on the advice of the editors, we separated the material into two papers. There is unavoidably some overlap in the presentations, since the same benchmark model is developed in both, but the applications are different, and there remains almost no discussion of how our approach compares to alternative schools of thought in the Handbook chapter. In this essay, we try to explain what we think our methods are, not just how our models work.

Stephen Williamson is a professor of economics at Washington University in St. Louis, a visiting scholar at the Federal Reserve Bank of
Richmond, and a research fellow at the Federal Reserve Bank of St. Louis. Randall Wright is a professor of economics at the University of
Wisconsin–Madison, and a research associate at the Federal Reserve Bank of Minneapolis. The authors thank many friends and colleagues
for useful discussions and comments, including Neil Wallace, Fernando Alvarez, Robert Lucas, Guillaume Rocheteau, and Lucy Liu. They
thank the National Science Foundation for financial support. Wright also thanks for support the Ray Zemon Chair in Liquid Assets at the
Wisconsin Business School.


One reason to do so is the following: We think
that it was a healthy state of affairs when, even in
the halcyon days of Old Keynesianism, there was
a dissenting view presented by Old Monetarists.
At the very least, this dissenting view could be
interpreted as a voice of caution to those who
thought that macro and monetary economics back
in the day were solved problems—which obviously
looks premature with the benefit of hindsight.2
The claim that people thought the problems were
solved is well documented by the sentiments of
Solow as quoted by Leijonhufvud (1968), when
he said,
I think that most economists would feel that
short-run macroeconomic theory is pretty well
in hand...The basic outlines of the dominant
theory have not changed in years. All that is left
is the trivial job of filling in the empty boxes,
and that will not take more than 50 years of
concentrated effort at a maximum.

At least prior to recent events, many people
seemed to be of the opinion that there was a New
Keynesian consensus, similarly sanguine as in
the 1960s. We feel that it would be healthier if
currently more people recognized that there is
an alternative to New Keynesianism. We dub our
alternative New Monetarism.
Evidence that people have an impression of
consensus, at least among more policy-oriented
economists, about the idea that New Keynesianism is the most useful approach to analyzing
macroeconomic phenomena and guiding central
bank policy can be found in many places (see,
e.g., Goodfriend, 2007). We find this somewhat
surprising, mainly because we encounter much
sympathy for the view that there are fundamental
flaws in the New Keynesian framework. It must
then be the case that those of us who think New
Keynesianism is not the only game in town, or
who think that the approach has some deep issues
that need to be discussed, are not speaking with
enough force and clarity. In part, this essay is an
attempt to rectify this state of affairs and foster more healthy debate.

2 Rather than go through the details, we refer to Lucas (1980a) for a discussion of how the paradigm of the 1960s was disrupted by the confluence of events and technical developments in the 1970s, leading to the rise of the rational expectations, or New Classical, approach to macroeconomics.

The interaction we envision
between New Monetarists and New Keynesians
is in some ways similar to the debates in the 1960s
and 1970s, but it is in other ways different, of
course, since much of the method and language
has changed in economics since then. To bring
the dialog to the twenty-first century, we need to
describe what New Monetarists are doing and
why we are doing it.
New Monetarism encompasses a body of
research on monetary theory and policy, banking,
financial intermediation, payments, and asset
markets, developed over the past few decades. In
monetary economics, this includes the seminal
work using overlapping generations models by
Lucas (1972) and some of the contributors to the
Kareken and Wallace (1980) volume, although
antecedents exist, including, of course, Samuelson
(1958). More recently, much monetary theory has
adopted the search and matching approach, an
early example of which is Kiyotaki and Wright
(1989), although there are also antecedents for
this, including Jones (1976) and Diamond (1984).
In the economics of banking, intermediation, and
payments, which builds on advances in information theory that occurred mainly in the 1970s, we
have in mind papers such as Diamond and Dybvig
(1983), Diamond (1984), Williamson (1986 and
1987a), Bernanke and Gertler (1989), and Freeman
(1996). On asset markets and finance we have in
mind recent work such as Duffie, Gârleanu, and
Pedersen (2005) or Lagos and Rocheteau (2009).
Much of this research is abstract and theoretical,
but attention has turned more recently to empirical and policy issues.3
3 The examples cited here are meant only to give a broad impression of the kind of research we have in mind. More examples and references are found below.

To explain what unifies this work, we begin in Section 2 by saying what New Monetarism is not, describing what we see as the defining characteristics of other schools. Then we lay out a set of principles that guide our approach. By way of preview, we think New Monetarists agree more or less with the following:

Principle 1. Microfoundations matter, and productive analyses of macro and monetary economics,
including policy discussions, require adherence to
sound and internally consistent economic theory.
Principle 2. Money matters, and in the quest to
understand monetary phenomena and monetary
policy, it is decidedly better to use models that
are explicit about the frictions that give rise to a
role for money in the first place; as Wallace (1998)
puts it, money should not be a primitive in monetary economics.
Principle 3. Financial intermediation matters—
e.g., while bank liabilities and currency sometimes
perform similar roles as media of exchange, for
many issues treating them as identical can lead
one astray.
Principle 4. In modeling frictions, like those that
give rise to a role for money or financial intermediaries, one has to have an eye for the appropriate
level of abstraction and tractability—e.g., the fact
that in some overlapping generations models people live two periods, or that in some search models
people meet purely at random, may make them
unrealistic but it does not make them irrelevant.
Principle 5. No single model should be an all-purpose vehicle for dealing with every question
in monetary economics, but it is still desirable to
have a framework, or a class of models making use
of similar assumptions and technical devices,
that can be applied to a variety of issues.
That these principles are not all universally
accepted is to us only too clear. Consider
Principle 2 (money matters). This is violated by
the many currently popular models used for monetary policy analysis that either have no money—
or banks or related institutions—or, if they do,
they slip it in by assuming cash-in-advance constraints or by putting money in utility or production functions or even putting government bonds
and commercial bank reserves in utility or production functions.4 Also, while some of these
principles may be accepted in principle by most economists, it is a matter of degree.

4 See Krishnamurthy and Vissing-Jorgensen (2009) and Curdia and Woodford (2009) for recent examples of T-Bills or bank reserves showing up in utility or production functions. We are not here arguing that taking such shortcuts isn't time-honored (see Tobin, 1958) or that it is never useful. The claim is that this is not what a New Monetarist would do on a good day.

Consider
Principle 4 (appropriate abstraction). We all learn,
or at least teach, that useful economic models are
not necessarily realistic, but one still hears rather
harsh critiques of both overlapping generations
and search models of money based primarily on
their lack of realism.5 Also, we don’t want
Principle 1 (microfoundations matter) to sound
like a platitude, even if everyone, of course, wants
sound and consistent economic theory—or at least
they pay lip service to this—as we believe New
Monetarists take it more seriously. Not to pick too
much on any one example, for now, but consider
the so-called fiscal theory of the price level. New
Keynesians seem to find this quite interesting
despite the fact that it typically relies on descriptions of what happens out of equilibrium, in models that have nothing to say except about what
happens in equilibrium. This is something that
would bother a New Monetarist a lot.6
A more obvious illustration of New
Monetarists worrying relatively more about the
soundness and consistency of economic theories
may be the reliance of the entire Keynesian edifice
on a foundation of sticky prices, which are not
what we would call microfounded, even when—
especially when—appeal is made to Calvo (1983)
pricing or Mankiw (1985) menu costs. This may
not be the place to go too far into a discussion of
the merits or demerits of imposing nominal rigidities, and given the readiness of many economists
to adopt stickiness with neither trepidation nor
apology, we can’t imagine changing anyone’s
mind easily. But in Williamson and Wright (forthcoming) we offer as food for thought two New
Monetarist models that speak to the issue. In one,
we blatantly impose price stickiness to yield a
version of our framework that looks in many ways
like what one sees in Woodford (2003) or Clarida,
Gali, and Gertler (1999). This is intended to show
that, even if one cannot live without nominal
5

See Tobin (1980) and Howitt (2005) for negative takes on overlapping generations and search models, based on the unrealistic
assumptions of two-period–lived agents and random matching,
respectively.

6

Bassetto (2002) is a notable exception because he does not use
classical equilibrium theory to discuss what happens out of
equilibrium.

rigidity, this does not mean one cannot be serious
about money, banking, and related institutions.
The other model uses search theory to get nominal
rigidities to emerge endogenously, as an outcome
rather than an assumption. This model is consistent not just with the broad observation that many
prices appear to be sticky, but also with the detailed
micro evidence discussed by Klenow and Malin
(forthcoming) and references therein. Yet it
delivers policy prescriptions very different from
those of New Keynesians: Money is neutral. We
return to some of these issues below, but the point
here is that sticky prices do not logically constitute evidence of nonneutralities or support for
Keynesian policy.7
The rest of the paper is organized as follows.
In Section 2 we go into detail concerning what we
think New Monetarism is and how it compares
with other approaches. In Section 3, in the spirit
of Principle 5 above, we lay out a very tractable
New Monetarist benchmark model based on Lagos
and Wright (2005). We try to explain what lies
behind the assumptions and we give some of its
basic properties—money is neutral but not superneutral, the Friedman rule is optimal but may not
give the first best, and so on. In Section 4 we discuss a few extensions of the baseline model that
can be found in the literature. Then we show how
these models can be used in novel ways to address
issues pertaining to asset markets, banking, and
monetary policy. In Section 5 we construct a
model with money and equity shares and discuss
its implications for asset pricing, asset trading,
and liquidity premia, including how these depend
on monetary policy. This model is extended in
Section 6 to include banking, in order to show
how financial intermediation can improve welfare
and to derive some new results concerning the
effect of monetary policy on interest rates. This
illustrates one way in which New Monetarism
departs from Old Monetarism: Friedman's proposal for 100 percent reserve requirements is a bad idea, according to this model, because it eliminates the welfare gains from intermediation, exemplifying Principle 3 above. We conclude in Section 7.

7 The model we are referring to is based on Head et al. (2010), which is related to, but also quite different from Caplin and Spulber (1987). To be clear, the New Monetarist position is not that monetary nonneutralities can never arise, and indeed we provide examples where they do (based, e.g., on incomplete information), nor is it our position that policy is irrelevant, as in some examples from New Classical macro (e.g., Sargent and Wallace, 1975, 1976). The point is rather that, despite what one hears from pundits such as Ball and Mankiw (1994), as a matter of logic, nominal rigidities in theory do not mean Keynesians are right in practice.
We think that the examples presented here
and in Williamson and Wright (forthcoming)
illustrate the usefulness of the New Monetarist
approach. As we hope readers will appreciate, the
models used in different applications all build on
a consistent set of economic principles. This is true
of the simplest setups used to formalize the role
of currency in the exchange process, and of the
extensions to incorporate banking, credit arrangements, payment systems, and asset markets. We
think this is not only interesting in terms of theory,
but there are also lessons to be learned for understanding the current economic situation and shaping future policy. To the extent that the recent crisis
has at its roots problems related to banking, to
mortgage and other credit arrangements, or to
information problems in asset markets, one cannot
hope to address the issues without theories that
take seriously the exchange process. Studying
this process is exactly what New Monetarist economics is about. Although New Keynesians have
had some admirable success, perhaps especially
in convincing policymakers to listen to them, we
are not convinced that all economic problems are
caused by nominal rigidities. And despite the
views of reactionaries such as Krugman (2009),
we cannot believe the answer to every interesting
question hangs on the Old Keynesian cross. We
think our approach provides a relevant alternative
for academics and policymakers, and what follows
is an attempt to elaborate on this position.

2. PERSPECTIVES ON MONETARY ECONOMICS
To explain the basic precepts underlying New
Monetarism, we find it helps to first summarize
some popular alternative schools of thought. This
will allow us to highlight what is different about
our approach to understanding monetary phenomena and guiding monetary policy.
2.1 Keynesianism
We begin with a discussion of Keynesian
economics, ostensibly to describe what it is, but,
we have to admit, also partially to critique it. Of
course, it all began with Keynes’s (1936) General
Theory. His ideas were soon popularized in Hicks’s
(1937) IS-LM model, which became enshrined in
the undergraduate curriculum and was integrated
into the so-called Neoclassical Synthesis of the
1960s. New Keynesian economics, as surveyed
in Clarida, Gali, and Gertler (1999) or Woodford
(2003), makes use of more sophisticated tools
than Old Keynesian economists had at their disposal, but much of the language and many of the
ideas are essentially the same. New Keynesianism
is typically marketed as a synthesis that can be
boiled down to an IS relationship, a Phillips curve,
and a policy rule determining the nominal interest
rate, the output gap, and the inflation rate. It is
possible to derive a model featuring these equations from slightly more primitive ingredients,
including preferences, but often practitioners do
not bother with these details. If one were being
pedantic one could find this problematic, since
reduced-form relations from one model need not
hold once one changes the environment, but we
don’t want to dwell on self-evident points.
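For concreteness, the three relations referred to here are usually written, after log-linearization, roughly as follows (a standard textbook rendering in notation of our choosing, not equations reproduced from this essay), where xt is the output gap, πt inflation, it the nominal interest rate, rtn the natural real rate, and σ, β, κ, φπ, φx > 0:

xt = Et xt+1 − σ(it − Et πt+1 − rtn)    (IS relationship)
πt = β Et πt+1 + κ xt                   (Phillips curve)
it = φπ πt + φx xt                      (policy rule)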
A more serious concern is that all New
Keynesian models have weak foundations for the
ingredient at the very core of the theory: Prices
(or sometimes wages) must be set in nominal terms,
even in nonmonetary versions, mind you, and
these prices are sticky in the sense that they cannot be changed, except at times specified rather
arbitrarily, or at a cost. We already discussed some
issues related to nominal rigidities in the Introduction, and rather than repeat that material here,
we mention a few other points. First, as everyone
including any card-carrying Keynesian is well
aware, the key implications of the theory would
be completely overturned if nominal prices could
be indexed to observables, say if a seller announces
“my price in dollars is p and it increases one-for-one with aggregate P.” Such a scheme does not
seem especially complicated or costly—to miss
this trick, one has to be not merely rationally
inattentive but a veritable slug. Having said that,
we are all for the idea that information processing
may be costly, consistent with the “sticky information” approach suggested by Mankiw and Reis
(2002), even if we find the label sticky information, to mix metaphors, pandering to the choir.
We are not sure, however, that when all is said
and done the most relevant manifestation of
information-processing costs will be that Keynes
turned out to be right.
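The force of the indexation point can be seen in a toy calculation (ours, purely illustrative): a seller whose dollar price moves one-for-one with the aggregate price level never has to actively reset its price, yet its relative price never moves, so the relative-price distortions on which the sticky-price results rest disappear.

# Toy illustration (hypothetical numbers): a dollar price indexed one-for-one to the aggregate
# price level P_t never has to be reset, and the implied relative (real) price never moves.
p0, P0 = 5.0, 1.0
for P in (1.00, 1.02, 1.05, 1.10):        # hypothetical path for the aggregate price level
    indexed_price = p0 * (P / P0)         # "my price in dollars increases one-for-one with aggregate P"
    real_price = indexed_price / P        # constant at p0/P0, so no relative price distortion
    print(P, indexed_price, real_price)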
Another issue is this: Economists take many
ingredients as given, including preferences,
endowments, and technology. Why not treat other
things the same way and take sticky prices, or,
more generally, incomplete contracts or incomplete markets, as given? One answer is it depends
on the question. But from this perspective, taking
nominal rigidities as given is delicate when they
are the essential component of the theory and the
main driver of its policy prescriptions. Another
answer is one that we think we heard suggested
by Neil Wallace ( he may disavow it). Economists
have others to study preferences, endowments,
and technology—including psychologists, resource
specialists, and engineers—and they can at least
potentially inform us about those elements of the
world. We might hope to get away with deferring
to others, in saying we take those things as given,
since they are not our area of expertise. But the
pricing mechanism, including nominal stickiness
and more generally incomplete markets and contracts, is exactly the thing we ought to be studying.
Almost by definition, there is no one but economists to chime in on these elements. When we
take them as given we are surely shirking.8
Another point is that we object to calling
sticky prices a “friction,” and this is only partly
semantic. We think of frictions as features of the
environment that make it difficult for agents in the
8

A different but related idea is that, if we are going to allow menu
costs to muck up the process of changing prices, we ought to take
seriously the costs of changing everything, and we don’t mean
merely tacking on ad hoc costs of adjustment in capital, inventories,
and employment. If this is not obvious, consider the following.
At some level we might all agree that search theory is a type of
adjustment-cost theory, yet one can still claim that for many purposes it is more fruitful to use explicit search-based models of the
labor market than otherwise frictionless models with some parametric cost-of-adjustment specification. As always, this will depend
on the issue at hand, and perhaps also on taste or faith, but in our
opinion what we learn from the successes (and the failures!) of
search-based models of labor markets speaks for itself.

model to achieve desirable outcomes. Examples
include private information or limited commitment, which may make it difficult to get agents
to tell the truth or keep their promises; spatial or
temporal separation, which can make it hard for
agents to get together in the first place; and problems like imperfect monitoring, incomplete record
keeping, and so on. These are, to repeat, frictions
in the environment. By contrast, price stickiness
is, if anything, a friction in the mechanism. It interferes directly with the way agents behave, as
opposed to letting them interact as they like
subject to constraints imposed by endowments,
technology, preferences, and frictions in the
environment as mentioned above. A serious, and
not just semantic, reason to distinguish between
frictions in the environment and the mechanism
is that agents, in both our world and our models,
should be allowed to be creative and resilient
when it comes to seeking out gains from trade.
What we mean is this: In some environments,
competitive equilibrium and alternative solution
concepts, like the core, generate the same outcomes, so it does not matter which we use. However, once we make prices or wages sticky, say
using Calvo (1983) pricing or Mankiw (1985) costs,
these mechanisms are generally not equivalent.
In a world where market prices are sticky, agents
may well choose endogenously to adopt an alternative trading mechanism that delivers superior
outcomes. One early version of this notion was
the suggestion by Barro (1977) that sticky wages
may be nothing more than a facade. An earlier
version is Coase’s (1937) theory of firm formation.
In all these cases, the big idea is that when the
price mechanism is doing a relatively poor job,
agents will abandon it and start interacting via
alternative institutions. An implication we find
unattractive (although we understand that this is
what some people like the most) in sticky-price
theory is that agents in the model are not doing
as well as they could: Gains from trade are being
left on the table when exchanges are forced at the
wrong relative prices. The modelers who use this
approach are only allowing agents to do the best
they can from a very narrow perspective, taking
institutions as given, as if microfoundations means
it is enough to let agents solve any old constrained
maximization problem.
This is in sharp contrast to some economic
theory, the purest of which we take to be the mechanism design approach, where, by construction,
agents do as well as they can subject to constraints
imposed by the environment and incentive conditions. There can be frictions, including private
information or limited commitment, of course,
that make doing as well as one can fairly bad. It
would be a rookie mistake to think that the practitioners of mechanism design believe we live in
a Panglossian world, as Buiter (1980) once said
of New Classical macroeconomists. The world
could be better with fewer constraints (and,
indeed, sometimes it would be better with more
constraints). We find it appealing that mechanism
design attributes creativity and resiliency to the
agents in models. We also find it interesting that
economists often proceed as though agents can
figure out how to interact optimally in the presence of moral hazard, adverse selection, and other
recalcitrant situations, and yet at other times they
proceed as if these agents can’t get their heads
around the comparatively minor inconvenience
of a tardy Calvo fairy.
We do not want to push the whole mechanism
design approach too hard here, and since we do
not have the space, anyway, we refer to Wallace
(forthcoming), which is another Handbook of
Monetary Economics chapter, and is dedicated
to the topic.9 We do want to mention Townsend
(1988), however, who put it this way:
The competitive markets hypothesis has been
viewed primarily as a postulate to help make
the mapping from environments to outcomes
more precise...In the end though it should be
emphasized that market structure should be
endogenous to the class of general equilibrium
models at hand. That is, the theory should
explain why markets sometimes exist and
sometimes do not, so that economic organisation falls out in the solution to the mechanism
design problem. (pp. 22-23)

9 We do think there are some subtle unresolved issues, like whether a given bargaining protocol should be considered part of the environment or a particular imposed mechanism (see Hu, Kennan, and Wallace, 2009).

Nominal rigidities, like incomplete markets or contracts, more generally, might conceivably
emerge endogenously out of some environments,
but we think it would be better if they were an
outcome and not an assumption, especially since
in Keynesian economics everything hinges on
such rigidities. The Introduction discussed a case
where sticky prices emerged endogenously that
did not support the Keynesian position.10
But we digress. And we are perhaps being
too negative. Despite the above concerns, which
may sound like nit picking to many people, New
Keynesianism has met with considerable success,
obviously. It is also sometimes argued that it is
consistent with, or has absorbed, the major revolutionary ideas developed in macroeconomics
over the past few decades, including the Lucas
critique and real business cycle theory, though
this is somewhat less obvious. If we take Woodford
(2003) as representing the state of the art, the main
tenets of the approach are the following:
(1) The key friction that gives rise to short-run nonneutralities of money, and the
primary concern of central bank policy,
is sticky prices. Because some prices are
not fully flexible, inflation or deflation
induces relative price distortions, and
this has consequences for welfare. There
can be other distortions, such as monopolistic as opposed to perfect competition,
or non-lump-sum taxes, in some applications, but nominal rigidities are clearly
the essence of the approach.
(2) The frictions that we encounter in relatively deep monetary economics, or even
not-so-deep monetary economics, like
cash-in-advance models, are at best of
second-order importance. In monetary
theory these frictions include explicit
descriptions of specialization that make
direct exchange difficult, and information
10

Relatedly, speaking more directly about money and banking, a position advocated in Williamson (1987b) is that what makes financial
intermediation potentially worth studying are its special functions,
such as diversification, information processing, and asset transformation. We cannot expect to generate these special activities or
derive many useful implications if our approach does not build on
the economic features that cause financial intermediaries to arise
in the first place. This is another call for making one’s assumptions
explicit and generating market structure, including everything from
intermediation to nominal contracting, endogenously.

problems that make credit difficult, giving
rise to a fundamental role for media of
exchange and to different implications
for policy.
(3) There is a short-run Phillips curve tradeoff between inflation and output (if not
inflation and unemployment, since these
theories typically do not have detailed
descriptions of the labor market, with
exceptions like Gertler and Trigari, 2009).
We can induce a short-run increase in
output with an increase in inflation.
(4) The central bank is viewed as being able
to set a short-term nominal interest rate,
and the policy problem is presented as
the choice over alternative rules for how
this should be done in response to current
economic conditions.
We also think it is fair to say that New
Keynesians tend to be supportive of current practice by central banks. Elements of the modeling
approach in Woodford (2003) are specifically
designed to match standard operating procedures,
and he appears to find little in the behavior of
central banks that he does not like. The feeling
seems to be mutual, which may be what people
envisage when they conclude that there is a consensus. Interest in New Keynesianism has been
intense in recent years, especially in policy circles,
and as we said above, some economists (again,
see Goodfriend, 2007) profess that it constitutes
the default approach to analyzing and evaluating
monetary policy.

2.2 Monetarism
Old Monetarist ideas are represented in the
writings of Friedman (1960, 1968, and 1969) and
Friedman and Schwartz (1963). In the 1960s and
1970s, the approach was viewed as an alternative
to Keynesianism, with different implications for
how policy should be conducted. Friedman put
much weight on empirical analysis and the
approach was often grounded only informally in
theory—even if some of his work, such as the
theory of the consumption function in Friedman
(1957), is concerned with what we would call
microfoundations. Although there are few professed monetarists in the profession these days,
the school has had an important role in shaping
macroeconomics and the practice of central
banking.11
The central canons of Old Monetarism include
the following:
(1) Sticky prices, while possibly important
in generating short-run nonneutralities,
are unimportant for monetary policy.
(2) Inflation, and inflation uncertainty, generate significant welfare losses.
(3) The quantity theory of money is an essential building block. There exists a demand
function for money which is an empirically
stable function of a few variables.
(4) There may exist a short-run Phillips curve
trade-off, but the central bank should not
attempt to exploit it. There is no long-run
Phillips curve trade-off (although Friedman
tempered this position between 1968 and
1977 when he seemed to perceive the possibility of an upward-sloping long-run
Phillips curve).
(5) Monetary policy is viewed as a process of
determining the supply of money in circulation, and an optimal monetary policy
involves minimizing the variability in the
growth rate of some monetary aggregate.
(6) Money is any object that is used as a
medium of exchange, and whether these
objects are private or government liabilities
is irrelevant for the analysis of monetary
theory and policy.
We think it is also apparent that Friedman
and his followers tended to be critical of contemporary central bank practices, and this tradition
11

In the early 1980s a standard textbook put it this way: “As a result
of all of this work quantity theorists and monetarists are no longer
a despised sect among economists. While they are probably a
minority, they are a powerful minority. Moreover, many of the
points made by monetarists have been accepted, at least in attenuated form, into the mainstream Keynesian model. But even so,
as will become apparent as we proceed, the quantity theory and
the Keynesian theory have quite different policy implications”
(Mayer, Duesenberry, and Aliber, 1981, emphasis added).

was carried on through such institutions as the
Federal Reserve Bank of St. Louis and the Shadow
Open Market Committee. One lasting influence
of monetarism is the notion that low inflation
should be a primary goal of policy, which is also
a principle stressed by New Keynesian economists.
However, the policy prescription in Friedman
(1968) that central banks should adhere to strict
targets for the growth of monetary aggregates is
typically regarded as a practical failure. Old
Monetarism tended to emphasize the long run
over the short run: Money can be nonneutral in
the short run, but exploitation of this by the central
bank only makes matters worse (in part due to
infamous long and variable lags). Policy should
focus on long-run inflation. We also think it is
fair to suggest that monetarists tended to favor
relatively simple models, as compared to the
Keynesian macroeconometric tradition.
Some but definitely not all of these ideas
carry over to New Monetarism. Before moving to
that, we mention that there are many other facets
to the policy prescriptions, methodological ideas,
and philosophical positions taken by Friedman
and his epigones, any one of which may or may
not fit with the thinking of any particular New
Monetarist. In some sense Friedman’s undeniable
faith in free markets, for example, resembles the
approach a mechanism design specialist might
take, but in another sense it is the polar extreme,
given the latter puts much weight on private information and other incentive problems. We do not
want to get into all of these issues, but there is one
position advocated by Friedman that we think is
noteworthy, in the current climate, concerning
fiscal rather than monetary policy. Friedman was
clear when he argued that spending and tax proposals should be evaluated based on microeconomic costs and benefits, not on their potential
impact on the macroeconomy. In stark contraposition, virtually all the popular and academic
discussion of the recent stimulus package seems
to focus on the size of multipliers, which to us
seems misguided. But let us return to monetary
economics, which is probably our (comparative)
advantage.
2.3 New Monetarism
Although dating such things precisely can be
subtle, we would suggest that the foundations for
New Monetarism can be traced to a conference
on Models of Monetary Economies at the Federal
Reserve Bank of Minneapolis in the late 1970s,
with the proceedings and some post-conference
contributions published in Kareken and Wallace
(1980). Important antecedents are Samuelson
(1958), which is a legitimate model of money in
general equilibrium, and Lucas (1972), which
sparked the rational expectations revolution and
the move toward incorporating rigorous theory
in macroeconomics. The Kareken and Wallace
volume contains a diverse body of work with a
common goal of moving the profession toward
a deeper understanding of the role of money and
the proper conduct of monetary policy, and
spurred much research using overlapping generations and related models, including the one in
Townsend (1980).12
Much of this work was conducted by Wallace
and his collaborators during the 1980s. Some
findings from that research are the following:
(1) Because Old Monetarists neglect key elements of economic theory, their prescriptions for policy can go dramatically wrong
(Sargent and Wallace, 1982).
(2) The fiscal policy regime is critical for the
effects of monetary policy (Sargent and
Wallace, 1981, and Wallace, 1981).
(3) Monetary economics can make good use
of received theory in other fields, like
finance and public economics (Bryant and
Wallace, 1979 and 1984).
A key principle, laid out first in the introduction to Kareken and Wallace (1980) and elaborated in Wallace (1998), is that progress can be
made in monetary theory and policy analysis only
by modeling monetary arrangements explicitly.
12

In addition to much impressive modeling and formal analysis, the
Kareken-Wallace volume also contains in some of the discussions
and post-conference contributions a great deal of fascinating debate
on methodology and philosophy of the sort that we would like to
see resurface, related to our comments in the Introduction about
healthy economic science.

In line with the arguments of Lucas (1976), if one is to conduct a policy experiment in an economic model, the model must be invariant to the experiment under consideration. One interpretation is the following:
If we are considering experiments involving the
operating characteristics of the economy under
different monetary policy rules, we need a model
in which economic agents hold money not because
it enters utility or production functions, in a
reduced-form fashion, but because money ameliorates some fundamental frictions in the exchange
process. This is our last, best, and only hope for
invariance, and it is why we are so interested in
trying to carefully model frictions, instead of
simply assuming some particular channel by
which money matters. Of course, the suggestion
that monetary economists need to look frictions
in the face goes way back to Hicks (1935).13
There are various ways to try to conceptualize
the notion of frictions. Just as Old Monetarists
tended to favor simplicity, so do we. One reason
for the preference for simple models is that, relative to Keynesian economics, there may be more
of a focus on long-run issues such as the cost of
steady state inflation, instead of business cycles.
This is mainly because the long run is taken to be
more important from a welfare perspective, but
as a by-product, it often allows one to employ
simpler models. It is also relevant to point out
that tractability is especially important in monetary economics, where questions of existence,
uniqueness versus multiplicity, and dynamics
are big issues that can more easily and more naturally be addressed using analytic rather than
numerical methods. With all due respect to computational economics, which has made brilliant
advances in recent years, we believe that there
are still some important questions to which the
answer is not a number.
Overlapping generations models can be simple, although one can also complicate them as
much as one likes, but much research in monetary theory following Kiyotaki and Wright (1989)
instead uses matching models, building more on
ideas in search and game theory than general equilibrium theory.14

13 Notice that, in line with the previous discussion, we are talking about frictions in the exchange process, as opposed to frictions in the price-setting process, like nominal rigidities, where money does not help (in fact, it is really the cause of the problem).

Matching models are very
tractable for many applications, although a key
insight that eventually arose from this research
program is that spatial separation per se is not
the critical friction making money essential,
where here we are using the term in a technical
sense usually attributed to Hahn (1973): Money
is essential when the set of allocations that can
be supported (satisfying resource and incentive
conditions) with money is bigger or better than
without money. As pointed out by Kocherlakota
(1998), and emphasized by Wallace (2001), with
credit due to earlier work by Ostroy (see Ostroy
and Starr, 1990) and Townsend (1987 and 1989),
money is essential when it overcomes a double
coincidence of wants problem combined with
limited commitment and imperfect record keeping. Perfect record keeping, what Kocherlakota
calls perfect memory, implies that efficient allocations can be supported through insurance and
credit arrangements, or various other arrangements, in a large class of environments including
those used by search theorists without the use of
money.
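The role of the double-coincidence problem in this definition can be illustrated with a minimal three-type example (a standard textbook construction, not a model taken from this essay): type i produces good i and wants good i+1 (mod 3), so no pair can barter directly, and without record keeping or commitment promises cannot support trade, whereas passing a fiat token around the circle can.

# Minimal "Wicksellian triangle" sketch (ours): agent i produces good i and wants good (i+1) mod 3.
produces = {i: i for i in range(3)}
wants = {i: (i + 1) % 3 for i in range(3)}

# No pair has a double coincidence of wants, so bilateral barter cannot work.
pairs = [(0, 1), (1, 2), (0, 2)]
double_coincidence = any(wants[a] == produces[b] and wants[b] == produces[a] for a, b in pairs)
print("double coincidence in some pair:", double_coincidence)   # False

# A fiat token fixes this without any record keeping: each holder buys the good she wants,
# and the seller accepts the token because she can spend it in the next meeting.
holder = 0
for _ in range(3):
    seller = next(i for i in range(3) if produces[i] == wants[holder])
    print("agent", holder, "buys good", wants[holder], "from agent", seller, "with the token")
    holder = seller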
It needs to be emphasized that random bilateral matching among a large number of agents can
be a convenient way to generate a double coincidence problem and to motivate incomplete record
keeping, but it is not otherwise important to the
approach. Corbae, Temzelides, and Wright (2003)
and Julien, Kennes, and King (2008), e.g., redo
much of the early monetary search theory using
directed rather than random matching, and
although some of the results change, in interesting
ways, the essence of the theory emerges unscathed.
Moreover, although it is good, perhaps essential,
for monetary economists to understand what
may or may not make currency essential in the
exchange process, New Monetarists are interested
in a host of other issues, institutions, and phenomena. Developments in intermediation and payment
14 Other papers in this literature will be discussed below, although a more comprehensive survey is to be found in Williamson and Wright (2010). See Ostroy and Starr (1990) for a survey of earlier attempts at building microfoundations for money using general equilibrium theory. Overlapping generations models are discussed and surveyed in various places, including Wallace (1980) and Brock (1990).

theories over the last 25 years are critical to our
understanding of credit and banking arrangements, and one significant difference between Old
and New Monetarists is how they think about the
role of financial intermediaries and their interactions with central banks, as we discuss more formally in Section 6.
The 1980s saw important developments in the
field, spurred by earlier progress in information
theory. One influential contribution is Diamond
and Dybvig (1983), which we now understand to
be a useful approach to studying banking as liquidity transformation and insurance (although
whether it can produce anything resembling a
bank run depends on auxiliary assumptions, as
discussed, e.g., by Ennis and Keister, 2009a,b).
Other work involved well-diversified intermediaries economizing on monitoring costs, including
Diamond (1984) and Williamson (1986), in models
where financial intermediation is an endogenous
phenomenon. The resulting intermediaries are
well-diversified, process information in some
manner, and transform assets in terms of liquidity,
maturity, or other characteristics. The theory has
also been useful in helping us understand the
potential for instability in the banking and financial system (Ennis and Keister, 2009a,b) and how
the structure of intermediation and financial
contracting can propagate aggregate shocks
(Williamson, 1987a, and Bernanke and Gertler,
1989).
A relatively new sub-branch of the area examines the economics of payments. This involves
the study of payment and clearing systems, particularly among financial institutions, such as
Fedwire in the United States, where central banks
can play an important role (see Freeman, 1996, for
an early contribution and Nosal and Rocheteau,
2009, for a recent survey). The key insights from
this literature are related to the role played by outside money and central bank credit in the clearing and settlement of debt, and the potential for
systemic risk as a result of intraday credit. Even
while payment systems are working well, work
in this field is important, because the cost of failure is potentially great given the amount of money
processed through such systems each day. New
Monetarist economics not only has something to
say about these issues, it is basically the only
approach that could. How can one hope to understand payments and settlement without modeling
the exchange process?
In an even newer research area, people have
recently been using models consistent with our
approach to study asset markets, including Duffie,
Gârleanu, and Pedersen (2005 and 2007), Vayanos
and Weill (2008), and Lagos and Rocheteau (2009).
This may come as a surprise to some people—it
initially did to us—who might think financial
markets are as close to a frictionless ideal as there
is, but it turns out to be one of the most natural
applications of search and bargaining theory.
As Duffie, Gârleanu, and Pedersen (2007) put it,
many assets, such as mortgage-backed securities, corporate bonds, government bonds, U.S.
federal funds, emerging-market debt, bank
loans, swaps and many other derivatives, private equity, and real estate, are traded in over-the-counter (OTC) markets. Traders in these
markets search for counterparties, incurring
opportunity or other costs. When counterparties meet, their bilateral relationship is strategic;
prices are set through a bargaining process that
reflects each investor’s alternatives to immediate trade.

This branch of finance uses formal models very
close to those presented below (see Williamson
and Wright, forthcoming, for more discussion).
In terms of how we go about it, to reiterate what
was said in the introduction, New Monetarists
more or less try to abide by the following
principles:
(1) Useful analysis in macro and monetary
economics, including policy analysis,
requires sound microeconomic theory,
which involves using what we know from
general equilibrium, search, and game
theory.
(2) Especially important is a clear and
internally consistent description of the
exchange process and the means by which
money and related institutions help facilitate that process, implying that the theory
must be built on environments with
explicit frictions.
(3) Rigorous models of financial intermediation are important for monetary theory
and policy: Credit, banking, and payment
systems matter.
(4) Other things being equal, relatively simple
models are preferred. While this is true in
most of economics, it is especially important in monetary theory, because existence,
uniqueness versus multiplicity, and
dynamics are big issues that are not easy
to study numerically. This makes it crucial
to come up with assumptions that deliver
tractability without sacrificing too much
along other dimensions.
(5) While no one theory can answer all questions, in monetary economics, there are
important characteristics that we feel any
good model should have. In addition to
tractability, this includes the right amount
of abstraction, and internal consistency
(which means there are not too many institutions, like incomplete markets, nominal
contracting, and so on, that are taken as
primitives). It would be useful to have a
benchmark model with these properties
that is also flexible enough to address a
variety of questions.
Taking the above as our desiderata, we now
present a baseline New Monetarist model and
show how it can be used to study several substantive issues. Since we go into detail concerning the
technical aspects of related models in Williamson
and Wright (forthcoming), here we provide only
a cursory discussion of those before getting to the
structure that we actually put to use.

3. A BENCHMARK FRAMEWORK
3.1 Background
The simplest setup consistent with the spirit
of New Monetarist Economics is a version of first-generation monetary search theory along the lines
of Kiyotaki and Wright (1993), which is a stripped-down version of Kiyotaki and Wright (1989 and
1991). In such a model, agents meet bilaterally
and at random, which makes barter difficult due
to a double-coincidence problem generated by
specialization. Also, these models have limited
commitment and imperfect memory, which makes
credit arrangements difficult. Money is then essential in the sense that (the set of) equilibrium outcomes can be better with money than without it.
We think this is a good starting point for monetary
economics, since money is playing a bona fide
role in facilitating exchange. Moreover, frictions
like those in the models, or at least informal
descriptions thereof, have long been thought to
be important for understanding the role of money,
by such luminaries as Jevons (1875), Menger
(1892), and Wicksell (1967), among others. The
goal of the early search-based literature is to formalize these ideas, to see which are valid under
what assumptions, and to develop new insights.
These first-generation models make some
strong assumptions, however, including the indivisibility of money and goods. This allows one
to focus on describing the pattern of trade without
having to determine the terms of trade, but does
not otherwise seem especially desirable. Even with
such assumptions in place, we think the theory
captures something salient about money. One can
look at Williamson and Wright (forthcoming) for
a summary of results from these rudimentary
models, but we can at least mention here the following. Equilibria exist where an intrinsically
useless asset, fiat currency, is valued. These equilibria can have good welfare properties relative
to pure barter, even if they typically do not achieve
first best. They are tenuous in the sense that there
coexist nonmonetary equilibria, although monetary equilibria are also robust in that they can
survive even if we endow currency with some
undesirable properties by giving it, say, a storage
or transaction cost, or if we tax it. Money encourages specialization in the models, as has been
understood since Adam Smith, but has not been
previously easy to formalize. One can also use the
model to analyze commodity money, international
currency, some issues related to banking, and so
on (see our companion paper, Williamson and
Wright (forthcoming) for references).
Beginning the next generation of papers in
this literature, Shi (1995) and Trejos and Wright
(1995) endogenize prices by retaining the assumption that money is indivisible but allowing divisible goods and having agents bargain. Results
stemming from these models illustrate additional
properties of fiat and commodity money systems,
and one can use the framework to study many
substantive issues. Compared to the previous
work, a new insight from these second-generation
models is that the equilibrium price level is typically not efficient: Under natural assumptions,
it can be shown that one does not get enough for
one’s money. Many other results and applications are available, and again, one can look at
Williamson and Wright (forthcoming) for more
discussion and references. But clearly, while this
is an improvement over models where prices are
not determined endogenously, and while research
using the framework has proved productive, the
maintained indivisibility of money makes the
model ill suited for much empirical and policy
analysis as it is usually conceived by practitioners.
When one admits divisible money, however,
one has to keep track of the distribution of money
across agents as a state variable, and this gets complicated, even using numerical methods.15 Still,
Molico (2006) computes equilibria in his divisible-money model, and uses it to discuss the effects
of inflation generated by transfers from the monetary authority. See also Chiu and Molico (2008
and 2010) and Dressler (2009 and 2010). Since
we are interested in analytic results, we do not
pursue the computational approach here. Instead
we focus on models that allow us to avoid having
to track distributions, and to this end there are
two main routes.16 The first, originating with Shi
(1997), gets a degenerate distribution from the
assumption of large households (a natural extension for random-matching models of the worker-shopper pair
discussed in the cash-in-advance literature since Lucas, 1980b). Thus, each decision-making unit
consists of many members, who search randomly, but at the end of each trading round they return
to the homestead where they share any money they bring back. Loosely speaking, by the law of
large numbers, large families start each new trading round with the same amount of money.
See Shi (2006) for a discussion and survey of this approach.

15 The problem is in dealing with the distribution of money, and wealth, more broadly defined, in multiple-asset models. Heterogeneous-agent, incomplete-markets, macro models of the sort analyzed by Huggett (1993) or Krusell and Smith (1998) also have an endogenous distribution as a state variable, but the agents in those models do not care about this distribution per se—they only care about prices. Of course prices depend on the distribution, but one can typically characterize prices accurately as functions of a small number of moments. In a search model, agents care about the distribution of money directly, since they are trading with each other and not merely against their budget equations.

16 Alternative approaches include Camera and Corbae (1999), Zhu (2003 and 2005), and a body of work emanating from the model introduced by Green and Zhou (1998), citations to which can be found in Jean, Stanislav, and Wright (2010).
We take another route, following Lagos and
Wright (2005), where alternating centralized and
decentralized markets take the place of families.
This allows us to address a variety of issues in
addition to rendering distributions tractable. And
it helps reduce the gap between monetary theory
with some claim to microfoundations and mainstream macro as, while money is essential in the
decentralized markets, having some centralized
markets allows us to add elements that are hard
to integrate into pure search models, such as
standard capital and labor markets, or fiscal policy.
For what it’s worth, we also believe the framework
provides a realistic way to think about economic
activity. In actual economies some activity is relatively centralized—it is fairly easy to trade, credit
is available, we take prices as given, etc.—which
is arguably well approximated by the apotheosis
of a competitive market. But there is also much
activity that is decentralized—it is not so easy to
find a partner, it can be hard to get credit, etc.—
as in search theory. For all these reasons we like
the approach.

3.2 The Environment
The population consists of a continuum of
infinitely lived agents with unit mass, each of
whom has discount factor β. We divide each
period in discrete time into two subperiods. In
one, agents interact in a decentralized market, or
DM, where there is pairwise random matching
with α denoting the arrival rate (the probability
of a match). Conditional on meeting someone,
due to specialization (see Williamson and Wright,
forthcoming, for more discussion), each agent has
probability σ of being able to produce something
the other agent wants to consume but not vice
versa, and the same probability σ of wanting to
consume what the other one can produce but not
vice versa. Each of these two types of meetings
involves a single coincidence of wants. Purely
for simplicity, and without loss of generality, we
assume no double-coincidence meetings, so that
with probability 1 – 2σ there is no opportunity
for trade in a meeting. Also, there is no record
keeping in the DM, in the sense that the agents
cannot observe actions in meetings other than
their own, and have no knowledge of the histories
of their would-be trading partners in any given
meeting.
In the other subperiod, agents interact in a
frictionless centralized market, or CM, as in standard general equilibrium theory. In the CM there
is also limited record keeping, in the sense that
agents only observe prices, which is all they need
to respect their budget constraints. In particular
they do not observe the actions of other individuals
directly, only market outcomes (prices), which
makes it difficult to use game-theoretic triggers
that might otherwise render money inessential
(Aliprantis, Camera, and Puzzello, 2006 and 2007,
and Araujo et al., 2010). Some applications do
allow partial record keeping, so that, for example,
bonds can be traded across two meetings of the
CM, although usually this is not crucial. Sometimes the setup is described by saying the DM
convenes during the day and the CM at night, or
vice versa, but this is not important for anything
except perhaps mnemonics, to keep track of the
timing. One can also proceed differently, without
changing basic results, say as in Williamson
(2007), where both markets are always open and
agents randomly transit between them.17
There is one consumption good x in the DM
and another X in the CM, although it is easy to
have x come in many varieties, or to interpret X
17 For some issues it is also interesting to have more than one round of trade in the DM between meetings of the CM, as in Berentsen, Camera, and Waller (2005) or Ennis (2009), or more than one period of CM trade between meetings of the DM, as in Telyukova and Wright (2008). Chiu and Molico (2006) allow agents to transit between markets whenever they like, at a cost, embedding what looks like the model of Baumol (1952) and Tobin (1956) into general equilibrium, where money is essential, but that requires numerical methods.

as a vector. For now x and X are produced one-for-one using labor h and H, so the real wage in the
CM is w = 1. Preferences in any period encompassing one DM and CM are described by a standard
utility function U(x, h, X, H). What is important for tractability, if not for the theory in general, is
quasilinearity: U should be linear in either X or H.18 For now, we assume U is linear in H, and in
fact we also make it separable,
\[
U = u(x) - c(h) + U(X) - H.
\]
Assume u′ > 0, u′′ < 0, u′(0) = ∞, c′ > 0, c′′ ≥ 0, c′(0) = u(0) = c(0) = 0, U′ > 0, and U′′ ≤ 0. Also,
denote the efficient quantities by x* and X*, where u′(x*) = c′(x*) and U′(X*) = 1 (we leave it
as an exercise to verify these are efficient).
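To spell out the verification left as an exercise (a sketch we add here, not part of the original argument): the planner's problem separates across the two subperiods, and since goods are produced one-for-one from labor, the relevant first-order conditions are
\[
\max_{x} \; \alpha\sigma \left[ u(x) - c(x) \right] \;\Rightarrow\; u'(x^*) = c'(x^*),
\qquad
\max_{X,H} \; U(X) - H \;\; \text{s.t.} \;\; X = H \;\Rightarrow\; U'(X^*) = 1.
\]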
If we shut down the CM then this environment, including the random matching specification, technology, and preferences, is identical
to that used by Molico (2006) in the model discussed above. And since the Molico (2006) model
collapses to the one in Shi (1995) or Trejos and
Wright (1995) when we make money indivisible,
and to the one in Kiyotaki and Wright (1993) when
we additionally make goods indivisible, these
ostensibly different environments are actually
special cases of one framework. As we discuss in
Williamson and Wright (forthcoming), this is good
not because we want one all-purpose vehicle for
every issue in monetary economics, but because
we want to avoid the impression that New
Monetarist economics consists of a huge set of
mutually inconsistent models. The same fundamental building blocks are used in the models discussed above, in the extensions presented below,
in our companion paper, and in many other places
in the literature, even if some applications sometimes make certain special assumptions.
Let Vt(m) and Wt(m) denote, respectively, the value functions at date t for an agent holding
money balances m at the beginning of the DM and the CM. Then we have
18 To be clear, one can proceed with general preferences, but this requires numerical methods; with quasilinearity, we can derive many results analytically. Actually, one can use general utility and still achieve tractability if we assume indivisible labor, since then agents act as if utility is quasilinear (see Rocheteau et al., 2008).

\[
W_t(m) = \max_{X,H,\hat{m}} \left\{ U(X) - H + \beta V_{t+1}(\hat{m}) \right\}
\quad \text{s.t.} \quad X = H + \phi_t (m - \hat{m}) + T,
\]
where φt is the CM value of money, or the inverse
of the nominal price level pt = 1/φt, and T is a
lump-sum transfer, as discussed below. Assuming
an interior solution (see Lagos and Wright, 2005),
we can eliminate H and write
\[
W_t(m) = \phi_t m + T + \max_{X} \left\{ U(X) - X \right\}
+ \max_{\hat{m}} \left\{ -\phi_t \hat{m} + \beta V_{t+1}(\hat{m}) \right\}. \tag{1}
\]

From this it is immediate that Wt(m) is linear
with slope φt; X = X*; and m̂ is independent of
wealth φtm + T. This last result implies a degenerate distribution across agents leaving the CM:
They all choose m̂ = M regardless of the m they
brought in.19
In a sense, one can think of the CM as a settlement subperiod, where agents reset their liquidity
positions. Quasilinearity implies they all rebalance to the same m̂, leading to a representative
agent in the DM. Without this feature the analysis
is more complicated. It can also be more interesting, for some applications, but we want a tractable
benchmark. By analogy, while models with heterogeneous agents and incomplete markets in macro
generally are interesting, it is nice to have the
basic neoclassical growth theory, with complete
markets and homogeneous agents, as the textbook
case. Since serious monetary theory with complete markets and homogeneity is a non-starter,
we present this model as our benchmark, but one
is free to relax our assumptions and use computational methods (analogous, perhaps, to the way
some people compute large-scale overlapping
generations models while others prove theorems
in simpler versions).
To see one manifestation of this tractability,
compared to many other models, consider an
individual contemplating bringing m dollars into
the DM. Since we just established everyone else
in the DM has M, it does not matter who the agent
19 This is obvious at least if Vt is strictly concave, which is the case under some conditions (given below), but as shown in Wright (2010), it is true generically even if Vt is not strictly concave.

under consideration meets, except insofar as it
can determine whether he is a buyer or seller (all
sellers look the same to a buyer and vice versa).
Hence,
\[
V_t(m) = W_t(m) + \alpha\sigma \left\{ u[x_t(m,M)] - \phi_t d_t(m,M) \right\}
+ \alpha\sigma \left\{ -c[x_t(M,m)] + \phi_t d_t(M,m) \right\}, \tag{2}
\]
where xt(m,M) is the quantity of goods and dt(m,M) the dollars traded at t in a single-coincidence
meeting where the buyer has m and the seller has M (which, if you are following along, explains
why the arguments are reversed in the second and third terms). Note that we used the earlier
result Wt′(m) = φt to simplify this.
The next step is to determine xt(·) and dt(·), and for now we use the generalized Nash bargaining
solution (but see Section 4.1). Letting the bargaining power of the buyer be given by θ and the
threat points by continuation values, xt(m,M) and dt(m,M) solve
\[
\max_{x,d} \left[ u(x) - \phi_t d \right]^{\theta} \left[ -c(x) + \phi_t d \right]^{1-\theta}
\quad \text{s.t.} \quad d \le m.
\]
Again we used Wt′ = φt, which makes this bargaining problem nice and easy. First note that in any
equilibrium the constraint d ≤ m must bind (see Lagos and Wright, 2005). Then inserting d = m,
taking the first-order condition (FOC) with respect to x, and rearranging, we get φtm = g(x) where
\[
g(x) \equiv \frac{\theta c(x) u'(x) + (1-\theta) u(x) c'(x)}{\theta u'(x) + (1-\theta) c'(x)}. \tag{3}
\]
This expression may look nasty, but g(·) is quite well behaved, and it simplifies a lot in some
special cases; for example, θ = 1 implies g(x) = c(x), in which case real balances paid to the
producer φtm exactly compensate him for his cost. In any case, ∂x/∂m = φt/g′(x) > 0.
We have shown that for any m, m̃, in equilibrium dt(m, m̃) = m and xt(m, m̃) depend on m but
not m̃. We can now differentiate (2) to obtain
\[
V_t'(m) = (1 - \alpha\sigma)\phi_t + \alpha\sigma\,\phi_t\,\frac{u'(x_t)}{g'(x_t)}, \tag{4}
\]
where on the right-hand side xt = xt(m). The marginal benefit of money in the DM is the marginal
value of carrying it into the CM, which is φt, with probability 1 − ασ, plus the marginal value of
spending it, which is u′(x)∂x/∂m, with probability ασ. Updating this one period and combining
it with the FOC from the CM, φt = βV′t+1(m̂), we arrive at
\[
\phi_t = \beta \phi_{t+1} \left[ 1 + \ell(x_{t+1}) \right], \tag{5}
\]
where we define
\[
\ell(x) \equiv \alpha\sigma \left[ \frac{u'(x)}{g'(x)} - 1 \right]. \tag{6}
\]
The expression in (6) is the liquidity premium,
giving the marginal value of spending a dollar,
as opposed to carrying it forward, times the probability ασ one spends it.
Assume for now that the lump-sum transfer
T is financed by printing currency, or, if negative,
by retiring currency. Then the amount of currency
in the CM at t is the amount brought in by private
agents Mt , plus the transfer µt Mt , where µt is the
rate of increase in the money stock. Market clearing implies m̂t = (1 + µt)Mt = Mt+1 is brought out
of the CM and into the DM at t + 1. Thus, the bargaining solution tells us φt+1Mt+1 = g(xt+1) for all t,
and inserting this into (5) we arrive at
\[
\frac{g(x_t)}{M_t} = \beta\, \frac{g(x_{t+1})}{M_{t+1}} \left[ 1 + \ell(x_{t+1}) \right]. \tag{7}
\]

For a given path of Mt, equilibrium can be defined
as a list including paths for Vt(·), Wt(·), xt(·), and
so on, satisfying the relevant conditions. But (7)
reduces all of this to a simple difference equation
determining a path for xt. Here we focus on stationary equilibria, where xt and hence φtMt are
constant, which makes sense as long as µt is constant (nonstationary equilibria, including sunspot,
cyclic, and chaotic equilibria, are discussed in
Lagos and Wright, 2003). In a stationary equilibrium, (7) simplifies nicely to 1 + µ = β[1 + ℓ(x)].20
20 One has to also consider the consolidated government budget constraint, say G + T = (µ − 1)φM, where G is government CM consumption. But notice that it does not actually matter for (7) whether changes in M are offset by changing T or G—individuals would prefer lower taxes, other things equal, but this does not affect their decisions about real balances or consumption in the model. Therefore, we do not have to give new money away as a transfer, but can instead have the government spend it, for the purpose of describing the most interesting variables in equilibrium.


3.3 Result
Having defined monetary equilibrium, we
proceed to discuss some of its properties. To facilitate comparison to the literature, imagine that we
can use standard methods to price real and nominal bonds between any two meetings of the CM,
assuming these bonds are illiquid—they cannot
be traded in the DM.21 Then the real and nominal interest rates r and i satisfy 1 + r = 1/β and
1 + i = (1 + µ)(1 + r), the latter being of course the
standard Fisher equation. Then we can rewrite
the condition 1 + µ = β[1 + ℓ(x)] for stationary
equilibrium derived above as
\[
\ell(x) = i. \tag{8}
\]
Intuitively, (8) equates the marginal benefit of
liquidity to its cost, given by the nominal interest
rate i. In what follows we assume i > 0, although
we do consider the limit i → 0 (it is not possible
to have i < 0 in equilibrium).
For simplicity let us assume ℓ′(x) < 0, in which
case there is a unique stationary monetary equilibrium and it is given by the x > 0 that solves (8).
It is not true that we can show ℓ′(x) < 0 under the
usual concavity and monotonicity assumptions,
but there are conditions that work. One such condition is θ ≈ 1; another is that c(x) is linear and
u(x) displays decreasing absolute risk aversion.
Note also that the same conditions that make
ℓ′(x) < 0 make V(m) strictly concave. In any case,
this is not especially important, since the argument
in Wright (2010) shows that there generically exists
a unique stationary monetary equilibrium even
if ℓ(x) is not monotone.
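To make the fixed point in (8) concrete, here is a minimal numerical sketch (our addition, not part of the original argument). It assumes u(x) = x^{1−a}/(1−a), c(x) = x, and Nash bargaining with buyer power θ, so g(x) is given by (3); the parameter values are placeholders rather than a calibration, and g′(x) is computed with a numerical derivative.

# Minimal sketch (not from the original article): solve the stationary
# condition l(x) = i in (8) by bisection, under illustrative functional
# forms u(x) = x**(1-a)/(1-a) and c(x) = x, with g(x) from equation (3).
# All parameter values are placeholders, not a calibration.

alpha_sigma = 0.5    # arrival rate alpha times single-coincidence prob. sigma
theta = 0.9          # buyer's bargaining power
a = 0.5              # curvature of u
i = 0.04             # nominal interest rate

def u(x):  return x ** (1 - a) / (1 - a)
def up(x): return x ** (-a)            # u'(x)
def c(x):  return x
def cp(x): return 1.0                  # c'(x)

def g(x):
    # Equation (3): real balances handed over under Nash bargaining
    return (theta * c(x) * up(x) + (1 - theta) * u(x) * cp(x)) / \
           (theta * up(x) + (1 - theta) * cp(x))

def gp(x, h=1e-6):
    # numerical derivative of g
    return (g(x + h) - g(x - h)) / (2 * h)

def ell(x):
    # Equation (6): liquidity premium l(x) = alpha*sigma*[u'(x)/g'(x) - 1]
    return alpha_sigma * (up(x) / gp(x) - 1.0)

def solve_x(i, lo=1e-4, hi=10.0):
    # Bisection on l(x) - i, using that l is decreasing (as assumed in the text)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ell(mid) - i > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"DM quantity at i = {i:.2%}: x = {solve_x(i):.4f}")
print(f"DM quantity near the Friedman rule: x = {solve_x(1e-9):.4f}")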
In terms of welfare and policy implications,
the first observation is that it is equivalent for
policymakers to target either the money growth
or inflation rate, since both equal µ – 1; or they
can target the nominal rate i, which is tied to µ
through the Fisher equation. Second, it is clear
that the initial stock of money M0 is irrelevant
for the real allocation (money is neutral), but the
same is not true for the growth rate µ (money is
21 Do not get confused: We are not introducing tangible objects called bonds here; we are considering a thought experiment where we ask agents what return they would require to move one unit of either X or m from the CM at t to the CM at t + 1.


not superneutral). These properties are shared by
many theories, of course. Next, it is easy to see
that ∂x/∂i < 0, intuitively, because i is a tax on DM
activity. Since CM output X = X * is independent
of i in this basic setup, total output is also decreasing in i. However, it is important to point out that
X is not generally independent of i if we allow
nonseparable utility (see Williamson and Wright,
forthcoming).
One can also show that x is increasing in bargaining power θ, that x < x* for all i > 0, and in
fact, x = x* iff i = 0 and θ = 1. The condition i = 0
is the Friedman rule, which is standard, while
θ = 1 is a version of the Hosios (1990) condition
describing how to split the surplus in a socially
efficient fashion in bilateral trade, which does
not show up in reduced-form monetary theory.
To understand it, note that in general there is a
holdup problem in money demand analogous to
the usual problem with ex ante investments and
ex post negotiations. Thus, agents make an investment when they acquire cash in the CM, which
pays off in single-coincidence meetings since it
allows them to trade. But if θ < 1, producers capture some of the gains from trade, leading agents
to initially underinvest in m̂. The Hosios condition tells us that investment is efficient when the
payoff to the investor is commensurate with his
contribution to the total surplus, which in this
case means θ = 1, since it is the money of the
buyer (and not that of the seller) that allows the
pair to trade.
There is reason to think that this is important in terms of quantitative and policy analysis,
and not merely a technical detail. To make the
case, first consider the typical quantitative exercise using something like a cash-in-advance
model, without other explicit frictions, where
one asks about the welfare cost of fully anticipated
inflation. If as usual we measure this cost by asking agents what fraction of consumption they
would be willing to give up to go from, say, 10 percent inflation to the Friedman rule, the answer is
generally very low. There are many such studies,
but we can summarize the typical result by saying that consumers would be willing to give up
around 1/2 of 1 percent, or perhaps slightly more,
but not above 1 percent, of their consumption
(see Cooley and Hansen, 1989, or Lucas, 2000, for
representative examples, or Craig and Rocheteau,
2008, for a survey). This has led many economists
to conclude that the inflation tax distortion is not
large, and may be one reason that New Keynesians
focus virtually all their attention on sticky-price
distortions.
Given the apparent aversion to inflation of
many politicians, as well as regular people, one
may wonder, why are the numbers generated by
those models so small? The answer is straightforward. In standard cash-in-advance and other
reduced-form models, at the Friedman rule we
get the first best. Hence, by the envelope theorem,
the derivative of welfare with respect to i is 0 at
the Friedman rule, and a small inflation matters
little. This is consistent with what one finds in
our benchmark model when we set θ = 1. But if
θ < 1, then the envelope theorem does not apply,
since while i = 0 is still optimal it is a corner solution, given i < 0 is not feasible. Hence, the derivative of welfare is not 0 at i = 0, and a small
deviation from i = 0 has a first-order effect. The
exact magnitude of the effect of course depends
on parameter values, but in calibrated versions
of the model it can be considerably bigger than
what one finds in the reduced-form literature.
These results lead New Monetarists to rethink
the previously conventional wisdom that anticipated inflation does not matter much.
One should look at the individual studies for
details, but we can sketch the method. Assume
U(X) = log(X), u(x) = Ax^{1−a}/(1−a), and c(x) = x.
Then calibrate the parameters as follows. First set
β = 1/(1 + r), where r is some average real interest
rate in the data. In terms of arrival rates, we can
at best identify ασ, so normalize α = 1. In fact, it
is not that easy to identify ασ, so for simplicity
set σ to its maximum value of σ = 1/2, although
this is actually not very important for the results.
We need to set bargaining power θ, as discussed
below. Then, as in virtually all other quantitative
monetary models, we set the remaining parameters
A and a to match the so-called money demand
observations. By these observations we mean the
empirical relationship between i and the inverse
of velocity, M/PY, which is traditionally interpreted as money demand by imagining agents
setting real balances proportional to income, with
a factor of proportionality that depends on the
opportunity cost i.
Here, with U(X) = log(X), real CM output is
X* = 1 (a normalization), and so nominal CM output is PX = 1/φ. Nominal DM output is ασM, since
in every single-coincidence meeting M dollars
change hands. Hence, total nominal output is
PY = 1/φ + ασM. Using φM = g(x), we get
\[
\frac{M}{PY} = \frac{g(x)}{1 + \alpha\sigma\, g(x)}, \tag{9}
\]

and since x is decreasing in i, so is M/PY. This is
the money-demand curve implied by theory.
Given θ, g(x) depends on preferences, and we can
pick the parameters a and A of u(x), by various
methods, to fit (9) to the data (assuming, for simplicity, say, that each observation corresponds to
a stationary equilibrium of the model, although
one can also do something more sophisticated).
To implement this one has to choose an empirical measure of M, which is typically M1.22
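As an illustration of the calibration strategy just described (ours, not the cited studies'), the following sketch fits A and a to the money-demand relation (9). It sets θ = 1 so that g(x) = c(x) = x and (8) can be inverted in closed form, and it fits synthetic placeholder observations by a crude grid search; the actual exercise uses real data on i and M/PY.

# Minimal sketch (not from the original article): pick (A, a) so that the
# model's money-demand curve (9) fits observations on (i, M/PY). For
# transparency we set theta = 1, so g(x) = c(x) = x and (8) has a closed
# form. The "observations" below are synthetic placeholders, not real data.

alpha_sigma = 0.5

def x_of_i(i, A, a):
    # (8) with theta = 1: alpha*sigma*(A*x**(-a) - 1) = i  =>  closed form for x
    return (A * alpha_sigma / (alpha_sigma + i)) ** (1.0 / a)

def money_demand(i, A, a):
    # Equation (9) with g(x) = x: M/PY = x / (1 + alpha*sigma*x)
    x = x_of_i(i, A, a)
    return x / (1.0 + alpha_sigma * x)

# Synthetic placeholder pairs (nominal rate, M/PY)
observations = [(0.02, 0.29), (0.04, 0.26), (0.06, 0.24), (0.08, 0.22), (0.10, 0.21)]

def sse(A, a):
    return sum((money_demand(i, A, a) - m) ** 2 for i, m in observations)

# Crude grid search; in practice one would use a proper optimizer.
best = min(((sse(A, a), A, a)
            for A in [0.2 + 0.02 * k for k in range(140)]
            for a in [0.05 + 0.01 * k for k in range(95)]),
           key=lambda t: t[0])
print(f"Best-fitting placeholders: A = {best[1]:.2f}, a = {best[2]:.2f}")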
This is all fairly straightforward, the only
nonstandard parameter in quantifying the model
being θ, which does not show up in theories
with price taking. A natural target for calibrating
θ is the markup, price over marginal cost, since
it seems intuitive that this should convey information about bargaining power. One can compute the average markup implied by the model
and set θ so that this matches the data. In terms
of which data, we think the evidence discussed
by Faig and Jerez (2005) from the Annual Retail
Trade Survey, describing markups across various
types of retailers, is most relevant. According to
these data, at the low end, in warehouse clubs,
superstores, automotive dealers, and gas stations,
markups range between 1.17 and 1.21; and at the
high end, in specialty foods, clothing, footwear,
and furniture, they range between 1.42 and 1.44.
Aruoba, Waller, and Wright (2009) target 1.3, at
the midpoint of these data. Lagos and Wright
22 Which measure of M one uses does make a difference (as it would in any model of money, with or without microfoundations). One might think a more natural measure would be M0 based on a narrow interpretation of the theory, but this is probably taking the model too literally for empirical work (see, e.g., Lucas, 2000). More research is needed to better match theory and data on this dimension.

(2005) earlier used 1.1, consistent with other
macro applications (e.g., Basu and Fernald, 1997).
However, in this range, the exact value of θ does
not matter too much.
It is now routine to compute the cost of inflation. It is hard to summarize the final answer with
one number, since the results can depend on
factors such as the sample period, frequency
(monthly, quarterly, or annual), whether one
includes complications like capital or fiscal policy,
and so on. However, it is safe to say that Lagos
and Wright (2005) can get agents to willingly give
up 5 percent of consumption to eliminate a 10
percent inflation, which is an order of magnitude
bigger than previous findings. In a model with
capital and taxation, Aruoba, Waller, and Wright
(2009) get closer to 3 percent when they target a
markup of 1.3, which is still quite large. There
are many recent studies using variants of New
Monetarist models that have come up with similar
numbers (again, see Craig and Rocheteau, 2008).
Two points to take away from this are the following: First, the intertemporal distortion induced
by inflation may be more costly than many economists used to think. Second, getting into the
details of monetary theory is not only a matter of
striving for logical consistency or elegance; it can
also make a big difference for quantitative and
policy analysis.
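To give a flavor of the computation behind such numbers, here is a rough sketch (ours, not a replication of the cited studies). It uses θ = 1, U(X) = log X, u(x) = Ax^{1−a}/(1−a), c(x) = x, and placeholder parameters, and it defines the welfare cost as the fraction of consumption in both markets that agents at the Friedman rule would give up to be as well off as under 10 percent inflation. Because θ = 1 shuts down the holdup channel discussed above, the number produced here is naturally at the low end.

# Rough sketch (not from the original article) of a welfare-cost exercise:
# theta = 1, U(X) = log X, u(x) = A*x**(1-a)/(1-a), c(x) = x, with
# placeholder parameters rather than the calibrations cited in the text.
import math

alpha_sigma, A, a, beta = 0.5, 0.7, 0.4, 0.97

def u(x):      return A * x ** (1 - a) / (1 - a)
def x_of_i(i): return (A * alpha_sigma / (alpha_sigma + i)) ** (1.0 / a)  # from (8)

def welfare(x, scale=1.0):
    # Per-period utility with consumption in both markets scaled by `scale`;
    # with U(X) = log X we have X* = 1, so the CM part is log(scale) - 1.
    return alpha_sigma * (u(scale * x) - x) + math.log(scale) - 1.0

def welfare_cost(inflation):
    i = (1 + inflation) / beta - 1     # Fisher equation with 1 + r = 1/beta
    target = welfare(x_of_i(i))        # welfare under the given inflation rate
    x_fr = x_of_i(1e-9)                # Friedman rule, i -> 0
    lo, hi = 0.0, 0.9                  # bisect on the consumption fraction Delta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if welfare(x_fr, scale=1 - mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"Consumption-equivalent cost of 10% inflation: {welfare_cost(0.10):.2%}")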
Which distortions are most important?
Although there is more work to be done on this
question, state-of-the-art research by Aruoba and
Schorfheide (2010) attempts to answer it by estimating a model integrating New Keynesian and
New Monetarist features (and they provide references to related work). They compare the importance of the sticky-price friction, which implies
0 inflation is optimal, and the intertemporal inflation distortion on which we have been focusing,
which recommends the Friedman rule. They consider four scenarios, having to do with whether
they try to fit the short- or long-run money-demand
elasticity, and on whether the terms of trade are
determined in the DM according to Nash bargaining or Walrasian pricing (see Section 4.1). In the
version with bargaining designed to match the
short-run elasticity, despite a reasonably-sized
sticky-price distortion, the Friedman rule turns
out to be optimal after all. The other three versions
yield optimal inflation rates of –1.5 percent, –1
percent, and –0.75 percent. Even considering
parameter uncertainty, they never find optimal
inflation close to 0. They conclude that the two
distortions are about equally important. Again,
more work needs to be done, but in light of these
findings, we see no compelling evidence supporting the New Keynesian assertion that one may
with impunity ignore intertemporal inflation
distortions, or monetary distortions, or money,
more generally.

4. EXTENSIONS
In this section, we discuss some extensions of the benchmark New Monetarist model that have appeared in the literature, before moving to new results.

4.1 Alternative Mechanisms
In the previous section we determined the
terms of trade between buyers and sellers in the
DM using the Nash bargaining solution. This
seems reasonable in a bilateral matching context
and is actually fairly general, at least in the sense
that as we vary bargaining power θ between 0
and 1, we trace out the pairwise core (the set of
bilaterally efficient trades). But alternative solution concepts can and have been used. Rocheteau
and Wright (2005), among many others since,
consider Walrasian price taking, as well as price
posting with directed search, in the benchmark
model. Aruoba, Rocheteau, and Waller (2007)
consider bargaining solutions other than Nash.
Galenianos and Kircher (2008) and Dutu, Julien,
and King (2009), in versions with some multilateral meetings, use auctions. Ennis (2008), Dong
and Jiang (2009), and Sanches and Williamson
(2010a) study pricing with private information.
Hu, Kennan, and Wallace (2009) use pure mechanism design. Head et al. (2010) use price posting
with random search.
While these may all be appropriate for particular applications, in the interests of space, here
we present just one: Walrasian pricing. This can
be motivated by interpreting agents as meeting
in large groups in the DM, rather than bilaterally,
and assuming that whether one is a buyer or
F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Williamson and Wright

seller is determined by preference and technology
shocks rather than random matching. It might
help to think about labor search models, like
Mortensen and Pissarides (1994), which uses
bargaining, and Lucas and Prescott (1974), which
uses price taking. A standard interpretation of the
latter is that workers and firms meet on islands
representing local labor markets, but on each
island there are enough workers and firms that it
makes sense to take wages parametrically. The
same is true in monetary models: Specialization
and anonymity can lead to an essential role for
money independent of whether agents meet in
small or large groups.
Let γ be the probability of being a buyer in
any given DM subperiod, and also the probability
of being a seller, so that we have the same measure
of each, although this is easy to relax.23 Assume
for now that whether an agent ends up a buyer or
seller in the DM is realized after the CM closes.
Hence, agents are homogeneous ex ante, and they
all choose the same m̂ (we consider ex ante heterogeneity below). Leaving off t subscripts when there
is little risk of confusion, the CM problem is the
same as above, but in the DM
\[
V(m) = \gamma V^b(m) + \gamma V^s(m) + (1 - 2\gamma) W(m),
\]
where V^b(·) and V^s(·) are the payoffs to ending
up a buyer or a seller ex post. These payoffs are
given by
\[
V^b(m) = \max_{x} \left\{ u(x) + W(m - \tilde{p}x) \right\}
\quad \text{s.t.} \quad \tilde{p}x \le m,
\]
\[
V^s(m) = \max_{x} \left\{ -c(x) + W(m + \tilde{p}x) \right\},
\]

where p̃ is the DM nominal price of x (which in
general differs from the CM price p = 1/φ ). The
buyer’s constraint always binds, p̃x = m, exactly
as in the bargaining model. Then, market clearing
in the DM and optimization imply that, to use
Walrasian pricing, simply replace g(x) with c(x)
and ασ with γ. In particular, the same simple
condition ℓ(x) = i in (8) determines the unique
stationary monetary equilibrium, as long as in
the formula for ℓ(x) = ασ[u′(x)/g′(x) − 1] we replace
23 We assume here that one can never be both a buyer and seller in the same subperiod, but this is also easy to relax, just like it is easy to allow some double-coincidence meetings in the matching model.

ασ with γ and g′(x) with c′(x). The results are
otherwise qualitatively the same. However, there
can be very interesting quantitative differences
between the Nash and Walrasian versions of the
model (see Aruoba, Waller, and Wright, 2009, or
Aruoba and Schorfheide, 2010, for a case in point).
Also, notice that here we made two changes to the
baseline model: We generate the double coincidence problem via preference and technology
shocks, instead of random bilateral matching;
and we swapped Nash bargaining for Walrasian
pricing. One could of course use preference and
technology shocks instead of matching and stick
with bargaining, or one could impose price taking
with bilateral matching, although this seems less
reasonable.
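Written out (a restatement we add for clarity), the substitution of γ for ασ and c′(x) for g′(x) turns the stationary condition (8) into
\[
\gamma \left[ \frac{u'(x)}{c'(x)} - 1 \right] = i,
\]
so that with c(x) = x the DM quantity is pinned down directly by u′(x) = 1 + i/γ.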

4.2 Ex Ante Heterogeneity
Here we present a simple extension of the
benchmark model to illustrate another application
and to make some methodological points. As
above, preference and technology shocks rather
than matching generate the DM double coincidence problem, but now agents know the realization of these shocks before they choose m̂ in the
CM. In fact, in our quasilinear specification, it is
equivalent to assume there are two permanently
distinct types: buyers, who may consume but
never produce in the DM; and sellers, who may
produce but never consume in the DM.24 We can
allow buyers and sellers to have different CM utility functions, say U^b(X) − H and U^s(X) − νH.25
Denote the measures of buyers and sellers by nb
and ns . If we normalize ns = 1, then by varying
24 In case it is not obvious that it is equivalent to have permanently different types or types determined every period, it follows from the fact that agents exit each CM with a clean slate, rebalancing their money balances appropriately to wipe out previous histories. Notice also that it makes sense to have some agents who are permanently sellers in the DM only when the CM is operative—otherwise, say in Molico’s model, what would they do with their money? Similarly it makes sense to have some agents who are permanently buyers in the DM only when the CM is operative—otherwise, where would they get their money?

25 A case used in some applications is U^b(X) = 0, U^s(X) = X, and ν = 0, which means buyers consume only in the DM and produce only in the CM, while sellers do just the opposite. Notice that ν = 0 implies we need U^s(X) to be linear if we want quasilinearity. In some applications, sellers are interpreted as firms operating in the DM, paying dividends to their owner in the CM (e.g., Berentsen, Menzio, and Wright, 2010).

nb we allow variation in market tightness in the
DM, given by τ = nb/ns .
We now have to write separate value functions
for buyers and sellers in the CM. Again, leaving
off the t subscripts, after eliminating H, these can
be written

\[
W^b(m) = \phi m + T + U^b(X^b) - X^b
+ \max_{\hat{m}} \left\{ -\phi \hat{m} + \beta V^b(\hat{m}) \right\} \tag{10}
\]
\[
W^s(m) = \phi m + T + U^s(X^s) - \nu X^s + \beta V^s(0), \tag{11}
\]
where we use two results that should be obvious:
Buyers and sellers respectively choose X^b and X^s,
where U^j′(X^j) ≤ 1 with equality if X^j > 0; and only
buyers ever choose m̂ > 0, so that m̂ = 0 for sellers.
Hence we no longer have a degenerate distribution of money balances in the DM, but this does
not complicate the analysis. Indeed, it is perhaps
worth emphasizing that what makes the framework easy to use is not degeneracy, per se, but
history independence. It is the fact that the distribution of money in the DM is degenerate conditional on agent type that begets tractability.
In the DM,

\[
V^b(\hat{m}) = -k_b + W^b(\hat{m}) + \alpha_b \sigma \left\{ u[x(\hat{m})] - \phi \hat{m} \right\} \tag{12}
\]
\[
V^s(0) = -k_s + W^s(0) + \alpha_s \sigma \left\{ -c[x(\bar{m})] + \phi \bar{m} \right\}, \tag{13}
\]
where we use Nash bargaining, implying the
result d = m̂ and x = x(m̂), with m̂ being the money
the buyer chooses, while sellers take it as given
that buyers have m̄ (they are equal in equilibrium).
Additionally, for buyers and sellers, respectively,
we add flow search costs kb and ks and distinguish
the arrival rates as αb and αs , which can now be
endogenous. Notice that even though we use the
same notation, V^b(·) and V^s(·) are different here
than in Section 4.1, where agents were homogeneous ex ante (when they choose m̂). Manipulating the buyer’s FOC φ = βV^b′(m̂), following the
same steps as in the benchmark model, we get
the analogous equilibrium condition

\[
i = \ell(x) \equiv \alpha_b \sigma \left[ \frac{u'(x)}{g'(x)} - 1 \right]. \tag{14}
\]


This extension of the benchmark model is
often adopted in applications, where it may be
more natural, or easier. Here we can use it to
expound on a venerable issue: the effect of inflation on the time it takes people to spend their
money. Conventional wisdom has it that higher
inflation makes people spend money faster—like
a hot potato they want to get rid of sooner rather
than later —and this is one channel via which
inflation increases velocity.26 Search-based theory
seems ideal for studying this phenomenon. Li
(1994 and 1995) introduced endogenous search
effort into a first-generation model, and proxied
for inflation with taxation, since it is hard to have
inflation with indivisible money. He shows that
increasing his inflation-like tax makes buyers
search harder and spend money faster, increasing
velocity. Moreover, some inflation is good for
welfare, because there is too little search under
laissez faire, because agents do not internalize
the effect of their search effort on others’ expected
payoffs.
Lagos and Rocheteau (2005) show, however,
that the main result is an artifact of indivisibilities.
They introduce search intensity into the standard
New Monetarist framework, which allows them
to model inflation directly, and more importantly
to determine prices endogenously. They then
prove that inflation reduces buyers’ search effort,
the opposite of Li’s (1994, 1995) finding.

26 Of Keynes’s many beautiful passages, we like this one: “The public discover that it is the holders of notes who suffer taxation [from inflation]...and they begin to change their habits and to economize in their holding of notes. They can do this in various ways...[T]hey can reduce the amount of till-money and pocket-money that they keep and the average length of time for which they keep it, even at the cost of great personal inconvenience...By these means they can get along and do their business with an amount of notes having an aggregate real value substantially less than before. In Moscow the unwillingness to hold money except for the shortest possible time reached at one period a fantastic intensity. If a grocer sold a pound of cheese, he ran off with the rubles as fast as his legs could carry him to the Central Market to replenish his stocks by changing them into cheese again, lest they lost their value before he got there; thus justifying the prevision of economists in naming the phenomenon velocity of circulation! In Vienna, during the period of collapse...[it] became a seasonable witticism to allege that a prudent man at a cafe ordering a bock of beer should order a second bock at the same time, even at the expense of drinking it tepid, lest the price should rise meanwhile” (Keynes, 1924, p. 51). We like it not only because it involves beer and cheese, consistent with our Wisconsin connections, but also because Keynes was able to anticipate the usefulness of our benchmark specification where agents periodically visit the Central(ized) Market.

Intuitively, people cannot avoid the inflation tax by
spending money more quickly; buyers can only
pass it on to sellers, who are not inclined to absorb
it for free. When prices can adjust, inflation
reduces x and hence the trading surplus, which
reduces the return to DM activity. Thus, agents
invest less in this activity, which means search
effort goes down, and they end up spending
money more slowly. Li’s ostensibly plausible
finding fails when prices are endogenous—somewhat reminiscent of Gresham’s law, that bad
money drives out good money, which also holds
when prices are fixed but not necessarily when
they are flexible (see Friedman and Schwartz,
1963, for a discussion and Burdett, Trejos, and
Wright, 2001, for a theoretical analysis). We would
not claim this is a puzzle in any serious sense,
but several people have worked on trying to resurrect the result that inflation makes people spend
money faster in various extensions of the benchmark model, including Ennis (2009) and Nosal
(2010).
One resolution is proposed by Lagos and
Rocheteau (2005) themselves, who can get search
effort to increase with inflation when they replace
bargaining by price posting, although their result
is not very robust—it only holds for some parameter values, and in particular for low inflation
rates. Here we take a different tack, following Liu,
Wang, and Wright (2010). We start with a very
simple matching technology, which assumes that,
as in Li (1995), sellers wait passively, while buyers actively search by directly choosing αb at flow
cost kb = k(αb). Simplicity comes from the fact
that with this technology search effort by other
buyers does not affect the arrival rate of an individual buyer, although it does affect the arrival
rate of sellers (see Liu, Wang, and Wright, 2010,
for details, but note that this is only used to ease
the presentation). Taking the FOC with respect
to αb in (12) and using the bargaining solution
φm = g(x), we have
\[
k'(\alpha_b) = \sigma \left[ u(x) - g(x) \right]. \tag{15}
\]
Equilibrium is a quantity x and an arrival rate αb
solving (14) – (15). It is not hard to show, as in
Liu, Wang, and Wright (2010), that in equilibrium
x and αb both fall with i.
This is our simplified version of the Lagos
and Rocheteau (2005) result that inflation makes
buyers spend their money less quickly, because
it reduces the expected gain from a meeting,
σ[u(x) − g(x)]. As we said, one can try to overturn
this by changing the pricing mechanism, but
instead we change the notion of search intensity:
Rather than the intensive margin (effort), we consider the extensive margin (participation). That
is, we introduce a free entry decision by buyers,
similar to the decision of firms in the textbook
labor search model in Pissarides (2000) (for other
applications, one may alternatively consider entry
by sellers or allowing agents to choose whether
to be buyers or sellers in the DM). For this demonstration, we use a general constant returns to
scale matching technology. Thus, the number of
DM meetings n = n(nb, ns) depends on the measures of buyers nb and sellers ns in the market, and
αb = n(nb, ns)/nb = n(τ, 1)/τ, where τ = nb/ns. We
make the usual assumptions on n(·).27
We now set kb = 0, but assume buyers must
pay a fixed cost k to enter the DM, while sellers
get in for free. Hence, all sellers participate and
ns = 1, while nb is endogenous. Assuming some
but not all buyers participate, they must be indifferent about going to the DM, which as a matter
of algebra can be shown to imply
\[
k + i\, g(x) = \alpha_b \sigma \left[ u(x) - g(x) \right]. \tag{16}
\]
This equates the total cost of participating in the
DM, the entry cost k plus the real cost of carrying
cash iφm̂ = ig(x), to the expected benefit. A monetary equilibrium in this model is a non-zero solution (x, αb) to (14) and (16), from which we can
easily get the rest of the endogenous variables,
including the measure of participating buyers nb ,
which is a decreasing function of αb. One can
verify, as in Liu, Wang, and Wright (2010), that
there is a unique equilibrium, with x decreasing
and αb increasing with i.
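For concreteness, here is a minimal numerical sketch of this entry equilibrium (our addition, not from Liu, Wang, and Wright, 2010). It assumes θ = 1 so that g(x) = c(x) = x, uses u(x) = Ax^{1−a}/(1−a) with placeholder parameters, and solves (14) and (16) jointly by bisection, printing x and αb for a few values of i to illustrate the comparative statics just stated.

# Minimal sketch (not from the original article) of the free-entry version:
# solve (14) and (16) jointly for (x, alpha_b), assuming theta = 1 so that
# g(x) = c(x) = x, with u(x) = A*x**(1-a)/(1-a). Placeholder parameters only.

A, a, sigma, k = 0.7, 0.4, 0.5, 0.01

def u(x):  return A * x ** (1 - a) / (1 - a)
def up(x): return A * x ** (-a)

def entry_gap(x, i):
    # Excess of the expected DM benefit over the participation cost, with
    # alpha_b eliminated using (14): alpha_b*sigma = i/(u'(x) - 1).
    return i * (u(x) - x) / (up(x) - 1.0) - k - i * x

def solve(i):
    x_upper = A ** (1.0 / a)                 # u'(x) = 1 here; x stays below this
    lo, hi = 1e-8, x_upper * (1 - 1e-8)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if entry_gap(mid, i) < 0:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    alpha_b = i / (sigma * (up(x) - 1.0))    # back out alpha_b from (14)
    return x, alpha_b

for i in (0.02, 0.05, 0.10):
    x, ab = solve(i)
    print(f"i = {i:.2f}:  x = {x:.4f},  alpha_b = {ab:.4f}")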
Thus we unambiguously get the hot potato
effect ∂αb/∂i > 0 that was elusive, at least with
27 It is twice continuously differentiable, strictly increasing, and strictly concave. Also, n(nb, ns) ≤ min(nb, ns), n(0, ns) = n(nb, 0) = 0, lim τ→∞ αb = 0, and lim τ→0 αb = 1.

bargaining, when search intensity was modeled
on the intensive margin. The intuition is crystal
clear: An increase in inflation has to lead to buyers
spending their money faster, because this is the
only way to keep them indifferent about participating! It works by having nb go down, naturally,
when i increases. Moreover, this implies velocity
unambiguously increases with i. In terms of welfare, it can be shown that (as in the benchmark
model) the Friedman rule i = 0 plus the Hosios
condition θ = 1 are necessary and sufficient for
x = x*. But this does not in general imply efficiency
in terms of entry, because of so-called search
externalities: With a general matching function,
participation by buyers increases the arrival rate
for sellers and decreases it for other buyers. There
is a separate Hosios condition for efficient participation, which as in a standard Pissarides (2000)
model equates θ to the elasticity of the matching
function with respect to nb. But this conflicts in
general with the condition θ = 1 required for
x = x*. Further analyzing efficiency and policy
interventions in this class of models is an important area of investigation (see, e.g., Berentsen and
Waller, 2009).
There are at least two reasons to be interested
in these issues. One is normative: Ongoing
research is studying whether there is, apropos the
previous paragraph, too little or too much search
or entry under laissez faire, and what policy can
do about it. The other is positive: The effect of
inflation on the speed with which people spend
money is one channel through which it affects
velocity, which is related to money demand. This
is interesting for many reasons, including, as we
saw in Section 3, the fact that it helps calibrate
the model and measure the cost of inflation. We
also think this subsection makes the following
methodological point. We are arguing generally
for better foundations for monetary economics.
Although it is not the only possible way to proceed, it is sometimes convenient and informative
to use search-and-bargaining theory. We have
often heard it said that everything that can be
done with search and bargaining can also be done
using a money-in-the-utility-function or cash-in-advance model. Therefore, as the argument goes,
286

J U LY / A U G U S T

2010

we do not need search and bargaining. This application is a manifest counterexample: The interesting issues are all about search and bargaining.28

4.3 Other Extensions
Williamson and Wright (forthcoming) provide
more details and references, but it would not hurt
here to briefly summarize a few existing applications and generalizations of the benchmark model.
As already mentioned, various alternative pricing
mechanisms have been considered. People have
included neoclassical capital and labor markets,
and versions that nest standard real business cycle
theory as a special case. Others have studied labor
markets and the Phillips curve, using either
Rogerson (1988) or Mortensen and Pissarides
(1994) models of unemployment. People have
included unanticipated inflation and signal extraction problems to quantify the importance of monetary uncertainty, while others have introduced
private information to study recognizability and
the counterfeiting of money or other assets. Others
have analyzed optimal fiscal and monetary policy.
Some people have introduced banking in various
ways, while others have studied technology transfer and economic growth. Still others have studied
the interaction between money and bonds, details
of monetary policy implementation, the use of
credit cards, and various issues in finance. There
are many other applications and extensions of the
benchmark model, both theoretical and empirical.
In the rest of this essay we will present some examples related to asset markets and to intermediation.

5. ASSET PRICING AND LIQUIDITY
New Monetarist models provide insights into
the exchange process and allow us to be explicit
about the frictions that provide a role for money.
Another advantage is that they allow us to consider a rich array of assets, credit arrangements,
and intermediary structures. In this section we
construct a version with two assets: money and equity shares.29 We use the setup with ex ante
heterogeneity developed in Section 4.2, with no
entry costs, so that all buyers and sellers participate in the DM, and here we normalize nb = 1.
Again, in the DM, buyers always want to consume
but cannot produce, while sellers are always able
to produce but do not want to consume. As before,
we can give buyers and sellers different CM utility, U^b(X) − H and U^s(X) − A^s H. Also, to reduce notation we set c(x) = x, and buyers in the DM now make take-it-or-leave-it offers (θ = 1). Also, to
make the discussion of welfare below more interesting, we assume it can be costly to maintain
the stock of currency: It uses up ωφ M units of the
CM good X to maintain a real currency supply of
φ M where M is the stock of currency before the
transfer from the government occurs in the CM.
This can be interpreted as the cost of replacing
worn-out notes, or thwarting counterfeiters, perhaps, and is financed through lump-sum taxes in
the CM.
As is standard, following Lucas (1978), there
is a productive asset in this economy that one
can think of as a tree in fixed supply, normalized
to 1, that yields a dividend y in fruit in units of
the numeraire each period in the CM. Agents can
trade equity shares in the tree in the CM at price
ψ. Ownership of a shares entitles a shareholder
to receive ay units of X in the CM. In the DM, for
simplicity, each buyer is matched with a seller
with probability 1. As in the benchmark model,
there is no record keeping, so credit is unavailable.
Also, because we want to have both money and
equity used in transactions, even when money is
dominated in rate of return, we give shares a disadvantage in terms of recognizability. Thus buyers in the DM can costlessly produce fake shares,
which are illegitimate claims to dividends in the
CM, perhaps because they are counterfeit—bad
claims to good trees—or because they are lemons—
good claims to bad trees (see Lester, Postlewaite,
and Wright, 2009 and 2010; Rocheteau, 2009; and
Li and Rocheteau, 2010, for more on this).
29 The presentation here has some features in common with the multiple-asset models of Geromichalos, Licari, and Suarez-Lledo (2007), Lagos (2008), Lagos and Rocheteau (2008), and Lester, Postlewaite, and Wright (2010), as well as models of money and credit, such as Sanches and Williamson (2010b).
To capture the extent of the recognizability
problem, following Williamson and Wright (1994),
in a fraction η of DM meetings the seller has no
technology for discriminating between phony
and genuine shares, so they do not accept them
(if they did they would only receive fakes). We
call these meetings nonmonitored. In these meetings, money, which can always be recognized, is
the only object accepted in trade. In the remaining
fraction 1 – η of DM meetings, sellers can differentiate between genuine and phony shares, so
equity as well as currency are potentially acceptable. We call these meetings monitored, with one
idea being that the seller can keep a record of
who gave him any particular asset, so that when
he gets to the next frictionless CM, where phony
and genuine shares can always be distinguished,
he could report and we could punish severely
anyone who passed a fake. This is not the only
interpretation, however, another one being that
the seller in a monitored meeting has a technology
to verify an asset’s authenticity.
The timing is such that buyers do not know
whether they will be in a monitored or nonmonitored meeting in the DM until after the CM
closes. Therefore, the problem for a buyer coming
into the CM with a portfolio m,a of currency
and shares is given, after eliminating H, by

( )

W b ( m, a ) = U b X b − X b + ϕ m + (ψ + y )a
(17)

{

}

ˆ − ψ aˆ + βV b ( m
ˆ , aˆ ) ,
+T + max −ϕ m
ˆ , aˆ
m

where X b satisfies ∂U bX b/∂X b ≤ 1 with equality
if X b > 0.30 In any case, ∂W b/∂m = φ and ∂W b/∂a
= ψ + y. We do not actually need to consider the
seller’s problem beyond noting that, as long as
we assume sellers’ preferences are quasilinear,
their CM value function will also satisfy ∂W s/∂m
= φ and ∂W s/∂a = ψ + y. Given this, in nonmonitored and monitored DM meetings the bargaining solutions with θ = 1 and cx = x are xN = φ dN
and xM = φ dM + eψ + y, where now dN ≤ m̂ and
dM ≤ m̂ are dollars that change hands in nonmonitored and monitored trades, and e ≤ â is
30

In the special case mentioned above, where UbX ≡ 0 and buyers
consume only in the DM, Xb = 0, but again this does not really
matter for the interesting results.

J U LY / A U G U S T

2010

287

the amount of equity handed over in a monitored trade (as we said above, no equity changes
hands in non-monitored trades).
We can anticipate d^N = d^M = m̂, without loss of generality, but we cannot be sure of e = â, because buyers never want to buy more than x*. Let a* be the amount of equity required to buy x* in a monitored meeting, given that the buyer also spends m̂; it is defined by x* = φ m̂ + a*(ψ + y). Then x^M = φ m̂ + â(ψ + y) if â < a* and x^M = x* otherwise, while e = â if â < a* and e = a* otherwise. The DM value function for buyers can now be written

(18)   V^b(m̂, â) = η [ u(x^N) + W^b(0, â) ] + (1 − η) [ u(x^M) + W^b(0, â − e) ].

Differentiating, we have

(19)   ∂V^b/∂m̂ = η u′(x^N) ∂x^N/∂m̂ + (1 − η) u′(x^M) ∂x^M/∂m̂ − (1 − η)(ψ + y) ∂e/∂m̂

(20)   ∂V^b/∂â = η (ψ + y) + (1 − η) u′(x^M) ∂x^M/∂â + (1 − η)(ψ + y) [ 1 − ∂e/∂â ],

where from the bargaining solution we know the following:31

(21)   â < a*  ⇒  ∂x^N/∂m̂ = φ;  ∂x^M/∂m̂ = φ;  ∂e/∂m̂ = 0;  ∂x^M/∂â = ψ + y;  ∂e/∂â = 1

(22)   â > a*  ⇒  ∂x^N/∂m̂ = φ;  ∂x^M/∂m̂ = 0;  ∂e/∂m̂ = −φ/(ψ + y);  ∂x^M/∂â = 0;  ∂e/∂â = 0.

31 Notice in particular that when â > a*, if we gave a buyer a little more m̂ in a monitored meeting, he would not buy more x^M but would reduce e to keep x^M = x*.

In stationary equilibrium ψ_{t+1} = ψ_t and φ_{t+1} = φ_t/(1 + µ), where again µ is both the rate of growth of M_t and the inflation rate. Market clearing requires â = 1. There are then two possibilities for equilibrium: (i) liquidity is plentiful, 1 > a*, which means that in monitored meetings agents have sufficient cash plus equity to buy x* while handing over only a fraction of their shares, e < 1; and (ii) liquidity is scarce, 1 < a*, which means equity is in short enough supply that in monitored meetings buyers settle for x^M < x* while handing over all of their shares, e = 1. In case (i) we insert (19)–(20) into the FOC from the CM problem using (22) to get the relevant derivatives; and in case (ii) we do the same using (21). We now consider each case in turn.32

32 We ignore nongeneric cases throughout this section, where, say, buyers have just exactly enough liquidity to get x^M = x*.

5.1 Case (i)

When a* < 1 and x^M = x*, one could say liquidity is plentiful. Then the above procedure—inserting (19)–(20) into the FOC from equation (17) using equation (22)—yields

(23)   1 + µ = β [ η u′(x^N) + 1 − η ]

(24)   ψ = β (ψ + y).

Defining the interest rate on a nominal bond that is illiquid (cannot be traded in the DM) by 1 + i = (1 + µ)/β, (23) can be written i = ℓ(x^N), where ℓ(x) = η[u′(x) − 1] is the formula for the liquidity premium when θ = 1, c(x) = x, and the relevant version of the single-coincidence probability is η. As in the model with money and no other assets, there is a unique x^N > 0 solving this condition, and it would be correct to say that cash bears a liquidity premium.

By contrast, (24) tells us that equity is priced according to its fundamental value, the present value of its dividend stream, ψ = ψ^F ≡ βy/(1 − β). In this equilibrium, therefore, equity bears no liquidity premium, and its real return is invariant to inflation, as Irving Fisher would have it. To see when this equilibrium exists, the requirement a* < 1 is easily seen to hold iff x* < x^N + y/(1 − β). Hence, if y > (1 − β)x* this equilibrium always exists. And if y < (1 − β)x* it exists iff µ < µ̄, where µ̄ solves

(25)   1 + µ̄ = β [ η u′( x* − y/(1 − β) ) + 1 − η ],

since x^N → x* as µ → β − 1. An important conclusion is that even if equity is scarce, in the sense that y < (1 − β)x*, liquidity will not be scarce as long as inflation is low enough. Liquidity is always plentiful at the Friedman rule.
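As a concrete illustration of case (i), the following minimal numerical sketch—ours, not part of the original analysis—assumes CRRA utility u(x) = x^(1−σ)/(1−σ) (so x* = 1) and made-up parameter values, solves (23) for x^N in closed form, and reports ψ^F from (24), the liquidity premium, and the threshold µ̄ from (25):

# Minimal sketch of case (i) (our illustration; parameters are made up).
# CRRA utility: u(x) = x**(1-sigma)/(1-sigma), so u'(x) = x**(-sigma) and x* = 1.
beta, sigma, eta = 0.96, 2.0, 0.5    # discount factor, curvature, fraction of nonmonitored meetings
y, mu = 0.02, 0.10                   # dividend and money growth (inflation) rate

u_prime = lambda x: x ** (-sigma)
x_star = 1.0

# Equation (23): 1 + mu = beta*(eta*u'(x^N) + 1 - eta)  =>  closed form for x^N
xN = (((1 + mu) / beta - (1 - eta)) / eta) ** (-1 / sigma)

psi_F = beta * y / (1 - beta)        # equation (24): fundamental equity price
i = eta * (u_prime(xN) - 1)          # liquidity premium on cash: i = l(x^N)

# Threshold mu_bar from (25), relevant only when equity is scarce, y < (1 - beta)*x*
mu_bar = beta * (eta * u_prime(x_star - y / (1 - beta)) + 1 - eta) - 1 if y < (1 - beta) * x_star else None

print(f"x^N = {xN:.4f}, psi^F = {psi_F:.4f}, i = {i:.4f}, mu_bar = {mu_bar}")

Raising µ lowers x^N, and once µ exceeds µ̄ (when equity is scarce) the economy moves into case (ii).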

5.2 Case (ii)
When 1 < a* and x^M < x*, one could say liquidity is scarce. Then the procedure described above yields

(26)   1 + µ = β [ η u′(x^N) + (1 − η) u′(x^M) ]

(27)   ψ = β (ψ + y) [ η + (1 − η) u′(x^M) ].

Immediately, (27) tells us that equity trades in the CM for more than its fundamental price, ψ > ψ^F, as it now bears a liquidity premium. Using the bargaining solution x^M = φ m̂ + â(ψ + y) to eliminate ψ from (27), we are left with two equations in (x^N, x^M), which are easy to analyze. It is easy to check that in this equilibrium Fisher's theory does not apply to equity: An increase in inflation reduces the real rate of return on shares. The reason is that an increase in µ causes agents, at the margin, to shift their portfolio from cash into equity, driving up the share price ψ and driving down the real return y/ψ.33 This equilibrium exists iff x^M < x*, which is the case if equity is scarce, y < (1 − β)x*, and additionally µ > µ̄, where µ̄ is given in (25).

33 For an illiquid bond, however, that cannot circulate in the DM, the Fisher equation still holds, of course.
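For reference, (27) can be solved explicitly for the share price (our rearrangement; it contains nothing beyond (27) itself):

ψ = βy [ η + (1 − η) u′(x^M) ] / { 1 − β [ η + (1 − η) u′(x^M) ] },

which equals ψ^F when u′(x^M) = 1 and is increasing in u′(x^M). So any change that raises u′(x^M)—as an increase in µ does here, by the portfolio-shift argument just given—raises ψ and lowers the gross real return (ψ + y)/ψ.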

5.3 Discussion
To discuss optimality, for the sake of argument,
let us add utilities across agents to construct a
welfare measure

(28)   𝒲 = η [ u(x^N) − x^N ] + (1 − η) [ u(x^M) − x^M ] − ω x^N,

where we take into account the cost of maintaining real money balances, ωφM = ωx^N. If ω = 0, then 𝒲 is decreasing in µ and the optimal policy
is the Friedman rule µ = β − 1. Given µ = β − 1, we achieve the first best x^M = x^N = x*, shares trade at their fundamental price in the CM, ψ = ψ^F, the real return on equity is y/ψ = r, and the nominal
return is 0. Indeed, in a Friedman rule equilibrium, shares do not have to circulate in the DM,
since outside money satiates agents in liquidity.
We are not sure what to think of this result, however, since in practice private liquidity appears
to be important for many transactions, and it is
not clear that currency would replace it entirely
even if monetary policy were optimal.
To get at this, we allow outside money to be costly by considering ω > 0, for reasons mentioned above concerning maintenance of the currency, protection against counterfeiting, and so on. Now at the Friedman rule µ = β − 1 we have ∂𝒲/∂µ = −ω ∂x^N/∂µ > 0, so inflating above the Friedman rule is optimal. Suppose equity is plentiful at the optimum. Then the first-order condition is

∂𝒲/∂µ = [ η u′(x^N) − η − ω ] ∂x^N/∂µ = 0,

and the optimal policy is

(29)   µ* = β(1 + ω) − 1.

This is indeed the optimal policy as long as equity is plentiful at µ*, which means y > (1 − β)x*, or y < (1 − β)x* and µ* < µ̄; this will be the case iff ω < ω̄ for some threshold ω̄. If, however, y < (1 − β)x* and µ* > µ̄, which is the case iff ω > ω̄, then equity is scarce at the optimum. In this case we cannot derive a closed-form solution for the optimal policy, but µ is still increasing in ω.34

34 Effectively, the inflation tax falls on the users of currency, but at least for the case where shares are not scarce at the optimum, the inflation tax is not sufficient to finance currency maintenance.
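To spell out the step from the first-order condition to (29), which the text compresses (our restatement): since ∂x^N/∂µ < 0, the condition requires ηu′(x^N) = η + ω, and substituting this into (23) gives

1 + µ* = β [ ηu′(x^N) + 1 − η ] = β (1 + ω),

which is (29); at ω = 0 it collapses to the Friedman rule µ* = β − 1.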
For those who have not kept up with New
Monetarist research, this example illustrates how
it has moved beyond studying purely cash transactions. Related models, including Duffie, Gârleanu, and Pederson (2005 and 2007); Vayanos and Weill (2008); Lagos (2008); Lagos and Rocheteau (2009); Lagos, Rocheteau, and Weill (2009); Rocheteau (2009); Ravikumar and Shao (2006); and Lester, Postlewaite, and Wright (2010),
begin to address issues related to liquidity in asset
markets, asset price volatility, the roles of public
and private liquidity, and how informational
frictions might matter. These models capture,
in a simple way, optimal deviations from the
Friedman rule. It is not common for monetary
models, including reduced-form models, to produce an optimal deviation from the Friedman rule,
yet central bankers typically target a short-term
nominal interest rate of 0 only temporarily—if at
all. At some level this is no different than policymakers using positive capital taxes or tariffs, binding minimum wage laws, rent control, agricultural
price supports, and so on, which are all suboptimal according to textbook economics. Yet one
might at least entertain the hypothesis that i = 0
may be suboptimal.
New Keynesian sticky price models typically
yield a deviation from the Friedman rule, with a
zero inflation rate being the default option. We do
not take those results very seriously, however,
since those models leave out all the frictions that
we think are relevant. For us, elements that are
important in generating optimal departures from
the Friedman rule might well include costs of
operating currency systems, as captured in a
simple way in the above example. He, Huang,
and Wright (2008) and Sanches and Williamson
(2010b) go into more detail analyzing explicit
models of theft and show how this leads to the
use of currency substitutes at the optimum.
Similarly, Nosal and Wallace (2007) and Li and
Rocheteau (2010) provide interesting analyses of
counterfeiting. While currency maintenance, theft,
counterfeiting, and so on are not usually considered first-order issues in mainstream monetary
policy analysis, we think they are potentially
important enough to take seriously. More work
remains to be done on these issues.

6. INTERMEDIATION

While the model in Section 5 has some interesting features—for example, assets other than currency are used in transactions and can bear a liquidity premium—in practice, financial intermediation plays an important role in asset markets, and alternatives to currency in retail transactions are essentially always the liabilities of some private intermediary. Research from the 1980s on financial intermediation provides some alternative approaches to modeling intermediary structures in the class of models under consideration, including the framework of Diamond and Dybvig (1983), and costly-state-verification models like Diamond (1984) or Williamson (1986). Here we show how to integrate Diamond and Dybvig (1983) banking into our benchmark model, where banks provide insurance against the need for liquidity. Moreover, as in earlier attempts by Freeman (1988) or Champ, Smith, and Williamson (1996), in this model money and monetary policy play a role, while the original Diamond and Dybvig (1983) specification has neither currency nor anything that could be interpreted as the use of third-party liabilities facilitating transactions.35

35 The model in this section is related to the model of banking in Berentsen, Camera, and Waller (2007) and Chiu and Meh (2010), although it also goes beyond that work, in ways that we discuss below. A related analysis, using mechanism design, that also takes seriously the role of bank liabilities (deposits) in the exchange process is developed in Mattesini, Monnet, and Wright (2010).

The only alteration to the environment in Section 5 concerns the timing. Let's call buyers in a nonmonitored DM meeting type N buyers and those in a monitored meeting type M buyers. Then assume that buyers' types for the next DM are realized at the end of the current CM, after production and consumption decisions have been made but before they part company, and that this is publicly observable. This allows buyers to enter into relationships that resemble banking. What is a bank? Any agent can offer the following deposit contract: "Make a deposit with me while the CM is still open, either in goods or money or other assets; it does not matter, since I can adjust my portfolio frictionlessly in the CM. Upon seeing your type, if it is N you can withdraw m^N dollars before going to the DM and retain claims to a^N in the next CM; and if it is M you withdraw nothing, but in the DM you can trade claims against your deposits backed by m^M dollars and a^M equity shares."
The fact that deposit claims are transferable allows them potentially to be traded in the DM, but to make things interesting here we treat them symmetrically with actual shares as in Section 5—they can be phony, and only sellers in monitored meetings can verify this, and therefore only sellers in monitored meetings accept these claims. Banks are competitive, so the equilibrium contract maximizes the welfare of a representative depositor, subject to non-negative profit, and a bank can diversify perfectly against its customers ending up type N or M as long as it attracts a strictly positive mass (although it would also be interesting to add aggregate uncertainty). Suppose the representative buyer acquires and then deposits m̂ and â, where we can restrict attention to the case where buyers bank all their assets. Also, without loss of generality, we can restrict attention to contracts with m^N > 0 and a^N = 0, since buyers have no use for equity in nonmonitored meetings, and therefore to contracts where a^M = â/(1 − η), but we have to sort out below whether m^M > 0 or m^M = 0; all we know so far is that η m^N + (1 − η) m^M = m̂. We maintain the assumptions that buyers make take-it-or-leave-it offers in the DM and c(x) = x, so that x^N = φ m^N and x^M = φ m^M + e(ψ + y), as before, except now type N buyers go to the DM with m^N dollars while type M buyers go with transferable deposits backed by m^M dollars plus â/(1 − η) shares. Still, it should be clear that we can again take the following for granted: d^N = m^N; d^M = m^M; e = â/(1 − η) if â/(1 − η) < a* and e = a* otherwise; x^N < x*; and, finally, x^M = φ m^M + (ψ + y) â/(1 − η) if â/(1 − η) < a* and x^M = x* otherwise.
The objective function for a buyer, and hence for a competitive banker, is exactly W^b as written in (17), except now

(30)   V^b(m̂, â) = η [ u(x^N) + W^b(0, 0) ] + (1 − η) [ u(x^M) + W^b(0, â/(1 − η) − e) ],

where x^N = φ m^N and x^M = φ m^M + e(ψ + y). The same procedure used in Section 5 applies: Insert into the FOC the derivatives of V^b from (30), taking care of whether â/(1 − η) > a* or vice versa, and also whether m^M = 0 or m^M > 0. When â/(1 − η) > a* it should be clear that m^M = 0, since type M buyers are already satiated in liquidity without cash. Also, market clearing implies â = 1 and m̂ = (1 + µ)M. Hence, in this model, there are three possibilities for equilibrium: (i) 1 > a*(1 − η), which implies m^M = 0 and x^M = x*; (ii) 1 < a*(1 − η) and m^M = 0, which implies x^M < x*; and (iii) 1 < a*(1 − η) and m^M > 0, which also implies x^M < x*. Again, we study each case in turn.
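Before turning to the cases, it may help to collect the scattered restrictions into one compact statement of the banker's problem (our restatement, not a separate result): the equilibrium contract chooses (m̂, â, m^N, m^M, a^M) to maximize

−φ m̂ − ψ â + β V^b(m̂, â),

with V^b given by (30), subject to

η m^N + (1 − η) m^M = m̂,   (1 − η) a^M = â,   x^N = φ m^N,   x^M = φ m^M + e(ψ + y),   e = min{ a^M, a* }.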

6.1 Case (i)
In this case the supply of equity is plentiful enough that type M buyers are satiated in liquidity, 1 > a*(1 − η), which implies x^M = x* and m^M = 0, and therefore m^N = M/η. The procedure described above immediately yields

(31)   1 + µ = β u′(x^N)

(32)   ψ = β (ψ + y).

Thus, equity trades at its fundamental value in the CM, ψ = ψ^F, and x^N satisfies the usual condition, which as above could also be written i = ℓ(x^N). For this equilibrium to exist, we require 1 > a*(1 − η), which holds in this case iff

(33)   y > (1 − η)(1 − β) x*.

Also, in this case, the real rate of return on shares is 1/β − 1, independent of µ, and there is a standard Fisher effect.

6.2 Case (ii)

The by now standard procedure tells us that x^N solves (31), the same as in the previous case. However, (32) becomes

(34)   ψ = β (ψ + y) u′(x^M),

where x^M < x* implies ψ > ψ^F. Equity now bears a liquidity premium because it is scarce—even though type M buyers are able to offer 1/(1 − η) shares, it is not enough to get x*. Using the bargaining solution, which in this case entails x^M = (ψ + y)/(1 − η), to eliminate ψ in (34) yields a simple equation in x^M. Notice, interestingly enough, that x^M and hence ψ are independent of µ in this case. One can show that for this equilibrium to exist we require that the inequality in (33) goes the other way, and in addition, we must verify that m^M = 0 is part of the equilibrium deposit contract. It is straightforward to show this is the case iff µ ≥ µ̃, where µ̃ ∈ (β − 1, 0) solves

u′( −y / [ (1 − η) µ̃ ] ) = (1 + µ̃)/β.
Notice the real return on shares is below 1/β but above the real return on money in this equilibrium. The gross nominal interest rate on shares is

(1 + µ) (ψ + y)/ψ,

where µ > µ̃, and

(1 + µ)/β > (1 + µ)(ψ + y)/ψ > 1.

Hence, the nominal interest rate on shares is positive when µ > µ̃, although when µ = µ̃ it goes to zero. Letting r^j denote the real rate of return faced by a type j buyer, from (31) and (32) we have

r^j = x^j / [ η (1 + µ) x^N + (1 − η) (ψ/(ψ + y)) x^M ].

As well, the gross nominal interest rate on deposits is

(1 + µ) r^M = x^M / [ η x^N + (1 − η) (ψ/[(1 + µ)(ψ + y)]) x^M ].

Thus, the nominal interest rate on deposits is positive when x^N < x^M and (ψ + y)/ψ > 1/(1 + µ), and zero when x^N = x^M and (ψ + y)/ψ = 1/(1 + µ).

6.3 Case (iii)

In this case the deposit contract sets m^M > 0 as well as m^N > 0 and â > 0. It is easily shown that the equilibrium contract equates DM consumption for type M and type N buyers, x^M = x^N, and we call the common value x < x*. Also, we have

(35)   1 + µ = β u′(x)

(36)   ψ = β (ψ + y) u′(x).
By (35), x is given by the usual condition in monetary trades, and (36) determines ψ > ψ^F. One can show this equilibrium exists iff the inequality in (33) again goes the other way and µ ∈ [β − 1, µ̃].
Note that in this equilibrium the gross return
on shares is below 1/β, but since the real returns
on shares and money are identical, the nominal
interest rate on shares is 0, as is the nominal
interest rate on deposits. Another interesting feature of this case is that an increase in the money
growth rate increases the price of shares, has no
effect on the nominal interest rate, and reduces
the real interest rate. Further, banks hold reserves
in equilibrium. Simplistic intuition might tell us
that, given the zero nominal rate, monetary policy
would encounter some kind of liquidity trap. But
changes in the money growth rate µ will change
the real allocation, despite the fact that it brings
about a change in the quantity of reserves and no
change in the nominal rate. So much for simplistic
intuition.
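To see this "no liquidity trap" point in numbers, the following minimal sketch—ours, with CRRA utility u(x) = x^(1−σ)/(1−σ) and made-up parameters chosen (with η = 0.5 in mind) so that (33) fails and the selected µ values lie below µ̃, putting the economy in case (iii)—varies µ and confirms that the nominal rate on shares stays at zero while x, ψ, and the real rate all move:

# Case (iii) illustration (our sketch; parameters are made up and chosen so case (iii) applies).
beta, sigma, y = 0.96, 2.0, 0.015

for mu in [-0.038, -0.035, -0.032]:            # deflation rates above beta - 1 = -0.04
    x = ((1 + mu) / beta) ** (-1 / sigma)      # from (35): u'(x) = (1 + mu)/beta, CRRA closed form
    psi = -(1 + mu) * y / mu                   # from (35)-(36): psi = -(1 + mu)*y/mu > 0 since mu < 0
    real_net = (psi + y) / psi - 1             # net real return on shares
    nominal_net = (1 + mu) * (psi + y) / psi - 1   # net nominal return on shares (identically zero)
    print(f"mu = {mu:+.3f}: x = {x:.4f}, psi = {psi:.3f}, real = {real_net:+.4f}, nominal = {nominal_net:+.4f}")

Changing µ moves x, ψ, and the real return even though the nominal rate never leaves zero.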

6.4 Discussion
The principal role of a bank here is to allocate
public and private liquidity to its most efficient
uses in transactions. Without banking, some
buyers show up in non-monitored DM meetings
with shares that are not accepted, while others
show up in monitored DM meetings with money
that is dominated in rate of return by shares that
are equally acceptable. Buyers would be better off
if they knew in advance their type (monitored or
nonmonitored) in the next DM. If they knew this,
they would typically take only cash to nonmonitored meetings and only equity to monitored meetings. Essentially, with banking it is as if buyers
knew in advance their type, which in this case
corresponds to their need for currency. Banking
allows shares to be concentrated in monitored
meetings, so more private liquidity can be allocated to where it is useful, and currency to be
allocated to nonmonitored meetings where it has
an advantage in terms of acceptability, except in
the case where public liquidity is useful at the
margin for sharing risk between type N and type M buyers (the case where the bank holds reserves).
This is related to, but also goes beyond, the New
Monetarist banking model of Berentsen, Camera,
and Waller (2007), where the only role of banks
is to allocate currency between buyers and sellers.
One advantage of including alternative assets is
that we can provide a link between liquidity provision and media of exchange, on the one hand,
and investment, on the other; see Williamson
(2009) for more on this topic.
One can use this simple banking model to
shed new light on several issues. In terms of optimal policy, since the cost of maintaining the currency is now ηωx^N, our welfare measure becomes

𝒲 = η [ u(x^N) − x^N ] + (1 − η) [ u(x^M) − x^M ] − ηω x^N.
Notice that outside money held as reserves costs
nothing to maintain, as this can be interpreted as
electronic account balances with the central bank.
If ω = 0, then the Friedman rule µ = β – 1 is optimal, we get the first best using currency, and
banks become irrelevant (buyers can do just as
well trading on their own). However, if ω > 0,
then µ > β – 1 is optimal. There are three cases to
consider.
If (33) holds, so deposits are not scarce for any µ, the optimal policy entails µ = β(1 + ω) − 1 and the nominal interest rates on shares and deposits are strictly positive. If inequality (33) goes the other way, β(1 + ηω) < 1, and

u′( y / [ (1 − η)(1 − β(1 + ηω)) ] ) ≥ 1 + ηω,

then the optimal policy is µ = β(1 + ηω) − 1. In this case, at the optimum, µ ≤ µ̃, shares are scarce, the nominal interest rate is zero, and the real interest rate is below the rate of time preference. This is novel, in that the usual Friedman rule prescription is to equate real rates of return on all assets, so that the nominal interest rate should be 0. But if ω = 0, we would reduce the money growth rate to µ = β − 1, which would increase the real rate of interest to the rate of time preference. Finally, if (33) goes the other way and either β(1 + ηω) ≥ 1 or

u′( y / [ (1 − η)(1 − β(1 + ηω)) ] ) < 1 + ηω,

then µ = β(1 + ω) − 1 at the optimum.
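Collecting the three cases (our compact restatement; Δ is our shorthand for the u′ term displayed above, Δ ≡ u′( y/[(1 − η)(1 − β(1 + ηω))] )):

µ* = β(1 + ω) − 1    if (33) holds, or if (33) fails and either β(1 + ηω) ≥ 1 or Δ < 1 + ηω;
µ* = β(1 + ηω) − 1   if (33) fails, β(1 + ηω) < 1, and Δ ≥ 1 + ηω.

In the first line the optimum has m^M = 0 and a positive nominal rate; in the second it lies in the reserve-holding region of case (iii), with a zero nominal rate.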
In summary, in this model, as long as ω > 0
banks perform a socially useful function.36 We
now use the model to discuss Friedman’s (1960)
proposal for 100 percent reserve requirements on
all transactions deposits, a scheme sometimes
referred to as narrow banking. His reasoning was
that variability in real economic activity and in
the price level arises, perhaps primarily, from variability in the money stock measured by currency
in circulation plus transactions deposits. The
central bank cannot control inside money, the
quantity of transactions deposits, directly, but
only the quantity of outside money. However, if
all transactions deposits are backed 100 percent
by outside money, then the central bank can control the total stock of money perfectly, and can
thus cure monetary instability. According to the
model presented above, however, this is wrong.
36 Adding theft or counterfeiting to the model makes banks even more useful. Indeed, stories about the need for the safekeeping of liquid assets are often used to help students understand how banks developed as institutions that link the provision of transactions services with portfolio management. See He, Huang, and Wright (2008) for an explicit New Monetarist model of theft and the safekeeping role of banks.

We start with Friedman's premise, which is informed by the quantity theory, that the behavior of some monetary aggregate like M1 is important. In the model, M1 in the DM of period t + 1 is

(37)   M1_{t+1} = M_t + (ψ + y)/φ_{t+1} = M_t [ η x^N_{t+1} + ψ + y ] / ( η x^N_{t+1} )

in equilibria where no bank reserves are held, and

(38)   M1_{t+1} = M_t x_{t+1} / ( x_{t+1} − ψ − y )

in equilibria where bank reserves are positive. Here, x^N_{t+1} denotes the consumption of type N buyers in the DM when no bank reserves are held, and x_{t+1} is the consumption of each buyer in the DM when bank reserves are positive. In (37) and (38), the term multiplying M_t in each equation is the money multiplier, which plays an important role, for example, in the interpretation of historical data by Friedman and Schwartz (1963).
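To make concrete why this multiplier is not a structural constant, the following minimal sketch—ours, using the Section 6.1 configuration in which (33) holds (so no reserves are held and ψ = ψ^F), CRRA utility, and made-up parameters—computes the bracketed term in (37) for different money growth rates:

# Money multiplier in the no-reserve regime of (37) (our illustration; parameters are made up).
beta, sigma, eta, y = 0.96, 2.0, 0.5, 0.05   # (33) holds here: y > (1-eta)*(1-beta)*x* = 0.02

psi_F = beta * y / (1 - beta)                # fundamental equity price, as in (32)

for mu in [0.0, 0.05, 0.10]:                 # alternative money growth rates
    xN = ((1 + mu) / beta) ** (-1 / sigma)   # from (31): u'(x^N) = (1 + mu)/beta, CRRA closed form
    multiplier = (eta * xN + psi_F + y) / (eta * xN)   # the term multiplying M_t in (37)
    print(f"mu = {mu:.2f}: x^N = {xN:.4f}, money multiplier = {multiplier:.3f}")

Because x^N falls as µ rises while ψ and y do not, the multiplier itself changes with the policy experiment, which is the point developed in the next paragraph.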
It is hard to think of an interesting question that the money multiplier would help us answer. The reason is that the money multiplier is not invariant to most policy experiments, except for simple one-time increases in the stock of outside money. Since money is neutral, the multiplier does not depend on the level of the money supply, so the multiplier tells us how much M1 increases per unit increase in the stock of base money. Beyond that, we know that x^N_{t+1} depends on µ in (37), and ψ and x_{t+1} depend
on µ in (38). The model tells us the details of how
a change in µ affects prices and quantities. However, the quantity theory of money does not help
us organize our thinking about banks, liquidity,
or exchange in this context. Similar ideas apply
for other types of monetary policy experiments.
If we want to understand the effects of central
bank lending and open market operations, as in
Williamson (2009), for example, money multiplier
analysis does not seem to help.
Note as well that theory provides no particular
rationale for adding up certain public and private
liabilities (in this case currency and bank deposits),
calling the sum money, and attaching some special
significance to it. Indeed, there are equilibria in
the model where currency and bank deposits are
both used in some of the same transactions, both
bear the same rate of return, and the stocks of
both turn over once each period. Thus, Friedman,
if he were alive, might think he had good reason
to call the sum of currency and bank deposits
money and proceed from there. But what the
model tells us is that public and private liquidity
play quite different roles. In reality, many assets
are used in transactions, broadly defined, including Treasury bills, mortgage-backed securities,
and mutual fund shares. We see no real purpose
in drawing some boundary between one set of
assets and another, and calling members of one
set money.37
37 Related discussions can be found in Wallace (1980) and Sargent and Wallace (1982); in a sense we are just restating their ideas in the context of our New Monetarist model.


Suppose the government were to, misguidedly
as it turns out, impose 100 percent reserve requirements. At best, this would be a requirement that
outside money be held one-for-one against bank
deposits. We are now effectively back to the world
of the model without banks in the previous section, as holding bank deposits becomes equivalent
to holding currency. Agents receive no liquidity
insurance, and are worse off than with unfettered
banking, since the efficiency gains from the reallocation of liquidity are lost. At worst, suppose
the 100 percent reserve requirement is imposed
by constraining every transaction to be a trade of
outside money for something else, so that shares
cannot be used at all in transactions. Then shares
will be held from one CM until the next, never
trading in the DM, and any benefits from private
liquidity are forgone. This obviously reduces welfare. A flaw in Old Monetarism was that it neglected the role of intermediation in allocating
resources efficiently. In other related environments (e.g., Williamson, 1999 and 2009, and some
examples presented in Williamson and Wright,
forthcoming), banks can also be important in reallocating investment and capital efficiently, with
the transactions role of bank liabilities being
critical in attracting savings to financial intermediaries that can be channeled into investment. In
spite of the weaknesses in the quantity theory of
money, the reasoning behind the Friedman rule
is impeccable, and we take that to be the important
legacy of Old Monetarism.

7. CONCLUSION
New Monetarist economists are committed
to modeling approaches that are explicit about
the frictions that make monetary exchange and
related arrangements socially useful and that
capture the relationships among credit, banking,
and currency transactions. Ideally, economic
theories designed for analyzing and evaluating
monetary policy should be able to answer basic
questions concerning the necessity and role of
central banking, the superiority of one type of
central bank operating procedure over another,
and the differences in the effects of central bank
lending and open market operations. New
Monetarist economists have made progress in
understanding the basic frictions that make
monetary exchange an equilibrium or an efficient
arrangement, and in understanding the mechanisms by which policy can affect allocations and
welfare. However, much remains to be learned
about many issues, including the sources of short-run nonneutralities and their quantitative significance, as well as the role of central banking.
With the examples in this paper, and some
other examples in our companion paper
(Williamson and Wright, forthcoming) concerning
payments systems, labor markets, investment,
and several other substantive applications, we

hope to give some of the flavor of frontier work
in the New Monetarist research program. Our principles and our modeling approaches developed
thus far have great potential in explaining asset
pricing anomalies; the role of public and private
liquidity in transactions, both at the retail level
and among financial institutions; the functions
of collateral; and the relationship between money
and credit. Recent events in financial markets and
in the broader economy make it clear how important it is to model basic frictions in the financial
system. We look forward to developments in this
research and are excited about the future prospects
for New Monetarism.

REFERENCES
Aliprantis, Charalambos D.; Camera, Gabrielle and Puzzello, Daniela. “Matching and Anonymity.” Economic
Theory, October 2006, 29(2), pp. 415-32.
Aliprantis, Charalambos D.; Camera, Gabrielle and Puzzello, Daniela. “Anonymous Markets and Monetary
Trading.” Journal of Monetary Economics, October 2007, 54(7), pp. 1905-28.
Araujo, Luis; Camargo, Braz; Minetti, Raoul and Puzzello, Daniela. “The Informational Role of Prices and the
Essentiality of Money in the Lagos-Wright Model.” Unpublished manuscript, Michigan State University, 2010.
Aruoba, S. Borağan; Rocheteau, Guillaume and Waller, Christopher. “Bargaining and the Value of Money.”
Journal of Monetary Economics, November 2007, 54(8), pp. 2636-55.
Aruoba, S. Borağan and Schorfheide, Frank. “Sticky Prices versus Monetary Frictions: An Estimation of Policy
Tradeoffs.” American Economic Journal: Macroeconomics, 2010 (forthcoming).
Aruoba, S. Borağan; Waller, Christopher and Wright, Randall. “Money and Capital: A Quantitative Analysis.”
Working paper, University of Maryland, 2009.
Ball, Laurence and Mankiw, N. Gregory. “A Sticky Price Manifesto.” Unpublished manuscript, 1994.
Barro, Robert. “Long-Term Contracting, Sticky Prices, and Monetary Policy.” Journal of Monetary Economics,
July 1977, 3(3), pp. 305-16.
Bassetto, Marco. “A Game-Theoretic View of the Fiscal Theory of the Price Level.” Econometrica, November
2002, 70(6), pp. 2167-95.
Basu, Susanto and Fernald, John G. “Returns to Scale in U.S. Production: Estimates and Implications.” Journal
of Political Economy, April 1997, 105(2), pp. 249-83.
Baumol, William. “The Transactions Demand for Cash: An Inventory Theoretic Approach.” Quarterly Journal
of Economics, November 1952, 66(4), pp. 545-56.
Berentsen, Aleksander; Camera, Gabriele and Waller, Christopher. “The Distribution of Money Balances and
the Nonneutrality of Money.” International Economic Review, May 2005, 46(2), pp. 465-87.
Berentsen, Aleksander; Camera, Gabriele and Waller, Christopher. “Money, Credit, and Banking.” Journal of
Economic Theory, July 2007, 135(1), pp. 171-95.


Berentsen, Aleksander; Menzio, Guido and Wright, Randall. “Inflation and Unemployment in the Long Run.”
American Economic Review, 2010 (forthcoming).
Berentsen, Aleksander and Waller, Christopher. “Optimal Stabilization Policy with Endogenous Firm Entry.”
Working Paper 2009-032, Federal Reserve Bank of St. Louis, August 2009;
http://research.stlouisfed.org/wp/2009/2009-032.pdf.
Bernanke, Ben S. and Gertler, Mark. “Agency Costs, Net Worth, and Business Fluctuations.” American Economic
Review, March 1989, 79(1), pp. 14-31.
Brock, William A. “Overlapping Generations Models with Money and Transactions Costs,” in Benjamin M.
Friedman and Frank H. Hahn, eds., Handbook of Monetary Economics. Chap. 7. Amsterdam: North-Holland,
1990.
Buiter, Willem H. “The Macroeconomics of Dr. Pangloss: A Critical Survey of the New Classical Macroeconomics.”
Economic Journal, March 1980, 90(357), pp. 34-50.
Burdett, Kenneth; Trejos, Alberto and Wright, Randall. “Cigarette Money.” Journal of Economic Theory, July 2001,
99(1-2), pp. 117-42.
Bryant, John and Wallace, Neil. “The Inefficiency of Interest-Bearing Government Debt.” Journal of Political
Economy, April 1979, 87(2), pp. 365-81.
Bryant, John and Wallace, Neil. “A Price Discrimination Analysis of Monetary Policy.” Review of Economic
Studies, April 1984, 51(165), pp. 279-88.
Calvo, Guillermo. “Staggered Prices in a Utility-Maximizing Framework.” Journal of Monetary Economics,
September 1983, 12(3), pp. 383-98.
Camera, Gabriele and Corbae, P. Dean. “Money and Price Dispersion.” International Economic Review, November
1999, 40(4), pp. 985-1008.
Caplin, Andrew and Spulber, Daniel. “Menu Costs and the Neutrality of Money.” Quarterly Journal of Economics,
November 1987, 102(4), pp. 703-25.
Champ, Bruce; Smith, Bruce D. and Williamson, Stephen. “Currency Elasticity and Banking Panics: Theory
and Evidence.” Canadian Journal of Economics, November 1996, 29(4), pp. 828-64.
Chiu, Jonathan and Meh, Césaire. “Banking, Liquidity, and Inflation.” Macroeconomic Dynamics, 2010
(forthcoming).
Chiu, Jonathan and Molico, Miguel. “Liquidity, Redistribution, and the Welfare Cost of Inflation.” Journal of
Monetary Economics, May 2010, 57(4), pp. 428-38.
Chiu, Jonathan and Molico, Miguel. “Uncertainty, Inflation, and Welfare.” Working paper, Bank of Canada, 2008.
Clarida, Richard; Gali, Jordi and Gertler, Mark. “The Science of Monetary Policy: A New Keynesian Perspective.”
Journal of Economic Literature, December 1999, 37(4), pp. 1661-707.
Coase, Ronald. “The Nature of the Firm.” Economica, November 1937, 4(16), pp. 386-405.
Cooley, Thomas and Hansen, Gary. “The Inflation Tax in a Real Business Cycle Model.” American Economic
Review, September 1989, 79(4), pp. 733-48.
Corbae, P. Dean; Temzelides, Ted and Wright, Randall. “Directed Matching and Monetary Exchange.”
Econometrica, May 2003, 71(3), pp. 731-56.
Craig, Ben and Rocheteau, Guillaume. “State-Dependent Pricing, Inflation, and Welfare in Search Economies.”
European Economic Review, April 2008, 52(3), pp. 441-68.


Curdia, Vasco and Woodford, Michael. “Conventional and Unconventional Monetary Policy.” CEPR Discussion
Paper 7514, Centre for Economic Policy Research, October 2009.
Diamond, Douglas. “Financial Intermediation and Delegated Monitoring.” Review of Economic Studies, July
1984, 51(166), pp. 393-414.
Diamond, Douglas and Dybvig, Philip. “Bank Runs, Deposit Insurance, and Liquidity.” Journal of Political
Economy, June 1983, 91(3), pp. 401-19.
Diamond, Peter. “Money in Search Equilibrium.” Econometrica, January 1984, 52(1), pp. 1-20.
Dong, Mei and Jiang, Jing. “Money and Price Posting under Private Information.” Unpublished manuscript,
Bank of Canada, 2009.
Dressler, Scott J. “Money Holdings, Inflation, and Welfare in a Competitive Market.” International Economic
Review, 2010 (forthcoming).
Dressler, Scott J. “Inflation and Welfare Dynamics in a Competitive Market.” Unpublished manuscript,
Villanova University, 2010.
Duffie, Darrell; Gârleanu, Nicolae and Pederson, Lasse H. “Over-the-Counter Markets.” Econometrica, November
2005, 73(6), pp. 1815-47.
Duffie, Darrell; Gârleanu, Nicolae and Pederson, Lasse H. “Valuation in Over-the-Counter Markets.” Review of
Financial Studies, November 2007, 20(6), pp. 1865-900.
Dutu, Richard; Julien, Benoit and King, Ian. “Liquidity Constrained Competing Auctions.” Working Papers 1068,
University of Melbourne Department of Economics, April 2009.
Ennis, Huberto. “Search, Money, and Inflation under Private Information.” Journal of Economic Theory,
January 2008, 138(1), pp. 101-31.
Ennis, Huberto. “Avoiding the Inflation Tax.” International Economic Review, May 2009, 50(2), pp. 607-25.
Ennis, Huberto and Keister, Todd. “Bank Runs and Institutions: The Perils of Intervention.” American Economic
Review, September 2009a, 99(4), pp. 1588-607.
Ennis, Huberto and Keister, Todd. “Run Equilibria in the Green-Lin Model of Financial Intermediation.”
Journal of Economic Theory, September 2009b, 144(5), pp. 1996-2020.
Faig, Miguel and Jerez, Belén. “A Theory of Commerce.” Journal of Economic Theory, May 2005, 122(1), pp. 60-99.
Freeman, Scott J. “Banking as the Provision of Liquidity.” Journal of Business, January 1988, 61(1), pp. 45-64.
Freeman, Scott J. “The Payments System, Liquidity, and Rediscounting.” American Economic Review, December
1996, 86(5), pp. 1126-38.
Friedman, Milton. A Theory of the Consumption Function. Princeton: Princeton University Press, 1957.
Friedman, Milton. A Program for Monetary Stability. New York: Fordham University Press, 1960.
Friedman, Milton. “The Role of Monetary Policy.” American Economic Review, March 1968, 58(1), pp. 1-17.
Friedman, Milton. The Optimum Quantity of Money and Other Essays. New York: Aldine Publishing Company,
1969.
Friedman, Milton and Schwartz, Anna J. A Monetary History of the United States, 1867-1960. Cambridge, MA:
National Bureau of Economic Research, 1963.
Galenianos, Manolis and Kircher, Philipp. “A Model of Money with Multilateral Matching.” Journal of Monetary
Economics, September 2008, 55(6), pp. 1054-66.


Geromichalos, Athanasios; Licari, Juan M. and Suarez-Lledo, José. “Monetary Policy and Asset Prices.” Review
of Economic Dynamics, October 2007, 10(4), pp. 761-79.
Gertler, Mark and Trigari, Antonella. “Unemployment Fluctuations with Staggered Nash Wage Bargaining.”
Journal of Political Economy, February 2009, 117(1), pp. 38-86.
Goodfriend, Marvin. “How the World Achieved Consensus on Monetary Policy.” Journal of Economic
Perspectives, Fall 2007, 21(4), pp. 47-68.
Green, Edward and Zhou, Ruilin. “A Rudimentary Random-Matching Model with Divisible Money and Prices.”
Journal of Economic Theory, August 1998, 81(2), pp. 252-71.
Hahn, Frank H. “On the Foundations of Monetary Theory,” in Michael Parkin and A. Robert Nobay, eds., Essays
in Modern Economics. New York: Barnes & Noble, 1973.
He, Ping; Huang, Lixin and Wright, Randall. “Money, Banking, and Monetary Policy.” Journal of Monetary
Economics, September 2008, 55(6), pp. 1013-24.
Head, Allen; Liu, Lucy Q.; Menzio, Guido and Wright, Randall. “Sticky Prices?” Unpublished manuscript,
Queen’s University, 2010.
Hicks, John R. “A Suggestion for Simplifying the Theory of Money.” Economica, February 1935, 2(5), pp. 1-19.
Hicks, John R. “Mr. Keynes and the ‘Classics’: A Suggested Interpretation.” Econometrica, April 1937, 5(2),
pp. 147-59.
Hosios, Arthur J. “On the Efficiency of Matching and Related Models of Search and Unemployment.” Review
of Economic Studies, April 1990, 57(2), pp. 279-98.
Howitt, Peter. “Beyond Search: Fiat Money in Organized Exchange.” International Economic Review, May 2005,
46(2), pp. 405-29.
Hu, Tai-wei; Kennan, John and Wallace, Neil. “Coalition-Proof Trade and the Friedman Rule in the Lagos-Wright
Model.” Journal of Political Economy, February 2009, 117(1), pp. 116-37.
Huggett, Mark. “The Risk-Free Rate in Heterogeneous-Agent Incomplete-Insurance Economies.” Journal of
Economic Dynamics and Control, September/November 1993, 17(5/6), pp. 953-69.
Jean, Kasie; Rabinovich, Stanislav and Wright, Randall. "On the Multiplicity of Monetary Equilibria: Green-Zhou Meets Lagos-Wright." Journal of Economic Theory, January 2010, 145(1), pp. 392-401.
Jevons, William S. Money and the Mechanism of Exchange. London: Appleton, 1875.
Jones, Robert A. “The Origin and Development of Media of Exchange.” Journal of Political Economy, August
1976, 84(4), pp. 757-75.
Julien, Benoit; Kennes, John and King, Ian. “Bidding For Money.” Journal of Economic Theory, September 2008,
142(1), pp. 196-217.
Kareken, John H. and Wallace, Neil. Models of Monetary Economies. Minneapolis: Federal Reserve Bank of
Minneapolis, 1980.
Kiyotaki, Nobuhiro and Wright, Randall. “On Money as a Medium of Exchange.” Journal of Political Economy,
August 1989, 97(4), pp. 927-54.
Kiyotaki, Nobuhiro and Wright, Randall. “A Contribution to the Pure Theory of Money.” Journal of Economic
Theory, April 1991, 53(2), pp. 215-35.
Kiyotaki, Nobuhiro and Wright, Randall. “A Search-Theoretic Approach to Monetary Economics.” American
Economic Review, March 1993, 83(1), pp. 63-77.


Keynes, John M. A Tract on Monetary Reform. Amherst, NY: Prometheus Books, 1924.
Keynes, John M. The General Theory of Employment, Interest and Money. Cambridge, MA: Macmillan Cambridge
University Press, 1936.
Klenow, Peter and Malin, Benjamin A. “Microeconomic Evidence on Price-Setting,” in Handbook of Monetary
Economics. Volume 3A. (forthcoming).
Kocherlakota, Narayana R. “Money is Memory.” Journal of Economic Theory, August 1998, 81(2), pp. 232-51.
Krugman, Paul. “How Did Economists Get It So Wrong?” New York Times Magazine, September 2, 2009.
Krusell, Per and Smith, Anthony A. “Income and Wealth Heterogeneity in the Macroeconomy.” Journal of
Political Economy, October 1998, 106(5), pp. 867-96.
Krishnamurthy, Arvind and Vissing-Jorgensen, Annette. “The Aggregate Demand for Treasury Debt.” Unpublished
manuscript, Northwestern University, 2009.
Lagos, Ricardo. “Asset Prices and Liquidity in an Exchange Economy.” Working paper, New York University,
May 2008.
Lagos, Ricardo and Rocheteau, Guillaume. “Inflation, Output, and Welfare.” International Economic Review,
May 2005, 46(2), pp. 495-522.
Lagos, Ricardo and Rocheteau, Guillaume. “Money and Capital as Competing Media of Exchange.” Journal of
Economic Theory, September 2008, 142(1), pp. 247-58.
Lagos, Ricardo and Rocheteau, Guillaume. “Liquidity in Asset Markets with Search Frictions.” Econometrica,
March 2009, 77(2), pp. 403-26.
Lagos, Ricardo; Rocheteau, Guillaume and Weill, Pierre-Olivier. “Crises and Liquidity in Over-the-Counter
Markets.” Unpublished manuscript, New York University, 2009.
Lagos, Ricardo and Wright, Randall. “Dynamics, Cycles and Sunspot Equilibria in ‘Genuinely Dynamic,
Fundamentally Disaggregative’ Models of Money.” Journal of Economic Theory, April 2003, 109(2), pp. 156-71.
Lagos, Ricardo and Wright, Randall. “A Unified Framework for Monetary Theory and Policy Analysis.” Journal
of Political Economy, June 2005, 113(3), pp. 463-84.
Leijonhufvud, Axel. On Keynesian Economics and the Economics of Keynes: A Study in Monetary Theory.
London: Oxford University Press, 1968.
Lester, Benjamin; Postlewaite, Andrew and Wright, Randall. “Information and Liquidity.” Unpublished
manuscript, 2009 (forthcoming in Journal of Money, Credit, and Banking).
Lester, Benjamin; Postlewaite, Andrew and Wright, Randall. “Liquidity, Information, Asset Prices and Monetary
Policy.” Working paper, University of Pennsylvania, 2010.
Li, Victor E. “Inventory Accumulation in a Search-Based Monetary Economy.” Journal of Monetary Economics,
December 1994, 34(3), pp. 511-36.
Li, Victor E. “The Optimal Taxation of Fiat Money in Search Equilibrium.” International Economic Review,
November 1995, 36(4), pp. 927-42.
Li, Yiting and Rocheteau, Guillaume. “The Threat of Counterfeiting.” Macroeconomic Dynamics, 2010
(forthcoming).
Liu, Lucy; Wang, Liang and Wright, Randall. “On the ‘Hot Potato’ Effect of Inflation: Intensive versus Extensive
Margins.” Macroeconomic Dynamics, 2010 (forthcoming).
Lucas, Robert E. “Expectations and the Neutrality of Money.” Journal of Economic Theory, April 1972, 4(2),
pp. 103-24.


Lucas, Robert E. “Econometric Policy Evaluation: A Critique.” Carnegie-Rochester Conference Series on Public
Policy, January 1976, 1(1), pp. 19-46.
Lucas, Robert E. “Asset Prices in an Exchange Economy.” Econometrica, November 1978, 46(6), pp. 1429-45.
Lucas, Robert E. “Methods and Problems in Business Cycle Theory.” Journal of Money, Credit, and Banking,
November 1980a, 12(4), pp. 696-715.
Lucas, Robert E. “Equilibrium in a Pure Currency Economy,” in John Kareken and Neil Wallace, eds., Models
of Monetary Economies. Minneapolis: Federal Reserve Bank of Minneapolis, 1980b, pp. 131-45.
Lucas, Robert E. “Inflation and Welfare.” Econometrica, March 2000, 68(2), pp. 247-74.
Lucas, Robert E. and Prescott, Edward C. “Equilibrium Search and Unemployment.” Journal of Economic Theory,
February 1974, 7(2), pp. 188-209.
Mayer, Thomas; Duesenberry, James S. and Aliber, Robert Z. Money, Banking, and the Economy. New York:
Norton, 1981.
Mankiw, N. Gregory. “Small Menu Costs and Large Business Cycles: A Macroeconomic Model.” Quarterly
Journal of Economics, May 1985, 100(2), pp. 529-38.
Mankiw, N. Gregory and Reis, Ricardo. “Sticky Information Versus Sticky Prices: A Proposal to Replace the
New Keynesian Phillips Curve.” Quarterly Journal of Economics, November 2002 , 117(4), pp. 1295-328.
Mattesini, Fabrizio; Monnet, Cyril and Wright, Randall. “Banking: A Mechanism Design Approach.” Unpublished
manuscript, Federal Reserve Bank of Philadelphia, 2010.
Menger, Carl. “On the Origin of Money.” Economic Journal, 1892, 2(6), pp. 239-55.
Molico, Miguel. “The Distribution of Money and Prices in Search Equilibrium.” International Economic Review,
August 2006, 47(3), pp. 701-22.
Mortensen, Dale T. and Pissarides, Christopher A. “Job Creation and Job Destruction in the Theory of
Unemployment.” Review of Economic Studies, July 1994, 61(3), pp. 397-416.
Nosal, Ed. “Search, Welfare and the ‘Hot Potato’ Effect of Inflation.” Macroeconomic Dynamics, 2010
(forthcoming).
Nosal, Ed and Rocheteau, Guillaume. “Money, Payments, and Liquidity.” Unpublished manuscript, Federal
Reserve Bank of Chicago, 2009.
Nosal, Ed and Wallace, Neil. “A Model of (the Threat of) Counterfeiting.” Journal of Monetary Economics, May
2007, 54(4), pp. 994-1001.
Ostroy, Joseph M. and Starr, Ross M. “The Transactions Role of Money,” in Benjamin M. Friedman and Frank H.
Hahn, eds., Handbook of Monetary Economics. Chap. 1. Amsterdam: North-Holland, 1990.
Pissarides, Christopher A. Equilibrium Unemployment Theory. Cambridge, MA: MIT Press, 2000.
Ravikumar, B. and Shao, Enchuan. “Search Frictions and Asset Price Volatility.” Working paper, University of
Iowa, 2006.
Rocheteau, Guillaume. "A Monetary Approach to Asset Liquidity." Working paper, University of California-Irvine, 2009.
Rocheteau, Guillaume and Wright, Randall. “Money in Search Equilibrium, in Competitive Equilibrium, and in
Competitive Search Equilibrium.” Econometrica, January 2005, 73(1), pp. 175-202.
Rocheteau, Guillaume; Rupert, Peter; Shell, Karl and Wright, Randall. “General Equilibrium with Nonconvexities
and Money.” Journal of Economic Theory, September 2008, 142(1), pp. 294-317.


Rogerson, Richard. “Indivisible Labor, Lotteries, and Equilibrium.” Journal of Monetary Economics, January
1988, 21(1), pp. 3-16.
Samuelson, Paul A. “An Exact Consumption-Loan Model With or Without the Social Contrivance of Money.”
Journal of Political Economy, 1958, 66(6), pp. 467-82.
Sanches, Daniel and Williamson, Stephen. “Adverse Selection, Segmented Markets, and the Role of Monetary
Policy.” Macroeconomic Dynamics, 2010a (forthcoming).
Sanches, Daniel and Williamson, Stephen. “Money and Credit with Limited Commitment and Theft.” Journal
of Economic Theory, 2010b (forthcoming).
Sargent, Thomas and Wallace, Neil. “‘Rational’ Expectations, the Optimal Monetary Instrument, and the Optimal
Money Supply Rule.” Journal of Political Economy, April 1975, 83(2), pp. 241-54.
Sargent, Thomas and Wallace, Neil. “Rational Expectations and the Theory of Economic Policy.” Journal of
Monetary Economics, April 1976, 2(2), pp. 169-83.
Sargent, Thomas and Wallace, Neil. “Some Unpleasant Monetarist Arithmetic.” Federal Reserve Bank of
Minneapolis Quarterly Review, Fall 1981, 5(3).
Sargent, Thomas and Wallace, Neil. “The Real Bills Doctrine versus the Quantity Theory: A Reconsideration.”
Journal of Political Economy, December 1982, 90(6), pp. 1212-36.
Shi, Shouyong. “Money and Prices: A Model of Search and Bargaining.” Journal of Economic Theory, December
1995, 67(2), pp. 467-96.
Shi, Shouyong. “A Divisible Search Model of Fiat Money.” Econometrica, January 1997, 65(1), pp. 75-102.
Shi, Shouyong. “Viewpoint: A Microfoundation of Monetary Economics.” Canadian Journal of Economics,
August 2006, 39(3), pp. 643-88.
Telyukova, Irina A. and Wright, Randall. “A Model of Money and Credit, with Application to the Credit Card
Debt Puzzle.” Review of Economic Studies, April 2008, 75(2), pp. 629-47.
Tobin, James. “The Interest-Elasticity of Transactions Demand for Cash.” Review of Economics and Statistics,
August 1956, 38(3), pp. 241-7.
Tobin, James. “Liquidity Preference as Behavior Towards Risk.” Review of Economic Studies, February 1958,
25(2), pp. 65-86.
Tobin, James. “Discussion,” in John Kareken and Neil Wallace, eds., Models of Monetary Economies.
Minneapolis: Federal Reserve Bank of Minneapolis, 1980.
Townsend, Robert M. “Economic Organization with Limited Communication.” American Economic Review,
December 1987, 77(5), pp. 954-70.
Townsend, Robert M. “Currency and Credit in a Private Information Economy.” Journal of Political Economy,
December 1989, 97(6), pp. 1323-45.
Trejos, Alberto and Wright, Randall. “Search, Bargaining, Money, and Prices.” Journal of Political Economy,
February 1995, 103(1), pp. 118-41.
Vayanos, Dimitri and Weill, Pierre-Olivier. “A Search-Based Theory of the On-the-Run Phenomenon.” Journal
of Finance, June 2008, 63(3), pp. 1351-89.
Wallace, Neil. “The Overlapping Generations Model of Fiat Money,” in John Kareken and Neil Wallace, eds.,
Models of Monetary Economies. Minneapolis: Federal Reserve Bank of Minneapolis, 1980.
Wallace, Neil. “A Modigliani-Miller Theorem for Open Market Operations.” American Economic Review, June
1981, 71(3), pp. 267-74.


Wallace, Neil. “A Dictum for Monetary Theory.” Federal Reserve Bank of Minneapolis Quarterly Review, Winter
1998, 22(1), pp. 20-26.
Wallace, Neil. “Whither Monetary Economics?” International Economic Review, November 2001, 42(4),
pp. 847-69.
Wallace, Neil. “The Mechanism Design Approach to Monetary Theory,” in Benjamin Friedman and Michael
Woodford, eds., Handbook of Monetary Economics. Volume 3A. (forthcoming).
Wicksell, Knut. Lectures on Political Economy. Volume 2: Money. Translation by E. Classen. Second Edition.
New York: Kelley, 1967.
Williamson, Stephen. “Costly Monitoring, Financial Intermediation, and Equilibrium Credit Rationing.”
Journal of Monetary Economics, September 1986, 18(2), pp. 159-79.
Williamson, Stephen. “Financial Intermediation, Business Failures, and Real Business Cycles.” Journal of
Political Economy, December 1987a, 95(6), pp. 1196-216.
Williamson, Stephen. “Recent Developments in Modeling Financial Intermediation.” Federal Reserve Bank of
Minneapolis Quarterly Review, Summer 1987b, 11(3), pp. 19-29.
Williamson, Stephen. “Private Money.” Journal of Money, Credit, and Banking, August 1999, 31(3), pp. 469-91.
Williamson, Stephen. “Search, Limited Participation, and Monetary Policy.” International Economic Review, February 2007, 47(1), pp. 107-28.
Williamson, Stephen. “Liquidity, Financial Intermediation, and Monetary Policy in a New Monetarist Model.”
Working paper, Washington University in St. Louis, 2009.
Williamson, Stephen and Wright, Randall. “Barter and Monetary Exchange Under Private Information.” American
Economic Review, March 1994, 84(1), pp. 104-23.
Williamson, Stephen and Wright, Randall. “New Monetarist Economics: Models,” in Benjamin Friedman and Michael Woodford, eds., Handbook of Monetary Economics. Volume 3A (forthcoming).
Woodford, Michael. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton
University Press, 2003.
Wright, Randall. “A Uniqueness Proof for Monetary Steady State.” Journal of Economic Theory, January 2010,
145(1), pp. 382-91.
Zhu, Tao. “Existence of a Monetary Steady State in a Matching Model: Indivisible Money.” Journal of Economic
Theory, October 2003, 112(2), pp. 307-24.
Zhu, Tao. “Existence of a Monetary Steady State in a Matching Model: Divisible Money.” Journal of Economic
Theory, August 2005, 123(2), pp. 130-60.


Asset Prices, Liquidity, and Monetary Policy
in the Search Theory of Money
Ricardo Lagos
The author presents a search-based model in which money coexists with equity shares on a risky
aggregate endowment. Agents can use equity as a means of payment, so shocks to equity prices
translate into aggregate liquidity shocks that disrupt the mechanism of exchange. The author characterizes a family of optimal monetary policies and finds that the resulting equity prices are independent of monetary considerations. The author also studies monetary policies that target a constant, but nonzero, nominal interest rate and finds that to the extent that a financial asset is valued
as a means to facilitate transactions, the asset’s real rate of return will include a liquidity return
that depends on monetary considerations. Through this liquidity channel, persistent deviations
from an optimal monetary policy can cause the real prices of assets that can be used to relax trading constraints to exhibit persistent deviations from their fundamental values. (JEL E31, E52, G12)
Federal Reserve Bank of St. Louis Review, July/August 2010, 92(4), pp. 303-09.

Ricardo Lagos is an associate professor of economics at New York University. Financial support from the C.V. Starr Center for Applied Economics at New York University is gratefully acknowledged.

Many financial assets are held not only for the intrinsic value of the stream of consumption that they yield, but also for their usefulness
in facilitating exchange. Consider a buyer who
cannot commit to or be forced to honor debts and
who wishes to make a purchase from a seller.
This buyer would find any asset that is valuable
to the seller (e.g., an equity share, a bond, money)
helpful to carry out the transaction. For example,
the buyer could settle the transaction on the spot
by using the asset directly as a means of payment.
In some modern transactions, the buyer often uses a financial asset to enter a repurchase agreement with the seller or as collateral to borrow the funds needed to pay the seller. Once stripped of the subsidiary contractual complexities,
the essence of these transactions is that the asset
helps the untrustworthy buyer to obtain what he
wants from the seller. In this sense, many financial assets are routinely used in the exchange

process and play a role akin to a medium of
exchange—that is, they provide liquidity—the
term that monetary theorists use to refer to the
usefulness of an asset in facilitating transactions.
Financial assets are subject to price fluctuations resulting from aggregate shocks; therefore,
to the extent that these assets serve as a source of
liquidity, shocks to their prices translate into aggregate liquidity shocks that disrupt the mechanism
of exchange and the ensuing allocations. Recent
developments in financial markets have renewed
economists’ interest in the idea that fluctuations
in asset prices can disrupt the exchange process
in some key markets and, through this channel,
propagate to the macroeconomy.
Much of the policy advice offered to central
banks is framed in terms of simple interest rate
feedback rules loosely motivated by a particular
class of models in which the preeminent friction
is a specific type of reduced-form nominal rigidity. Such policy recommendations are based on
the premise that the primary goal of monetary
policy is to mitigate the effects of these rigidities.
With no room or role for a notion of liquidity (and
typically even no meaningful role for money),
this conventional view that dominates policy
circles has failed to offer relevant policy guidance
in the midst of the recent financial crisis. I interpret this failure as an indication that the consensus
stance toward monetary policy, with its theoretical focus on sticky price frictions and its implementation emphasis on ad hoc feedback interest
rate rules, is too narrow in that it neglects the
fundamental frictions that give rise to a demand
for liquidity.
In this article, I present a dynamic equilibrium,
microfounded monetary asset–pricing framework
with multiple assets and aggregate uncertainty
regarding liquidity needs, and discuss the main
normative and positive policy implications of
the theory. The broad view that emerges from
explicitly modeling the role of money and other
liquid assets in the exchange process is that of a
monetary authority that seeks to provide the private sector with the liquidity needed to conduct
market transactions. More precisely, I state and
explain three propositions that answer the following questions: How should monetary policy be
conducted to mitigate the adverse effects of shocks
to the valuations of the financial assets that provide liquidity to the private sector? What are the
implications for asset prices of deviating from the
optimal monetary policy? Are such deviations
capable of inflating real asset prices above their
fundamental values for extended periods of time?

MODEL
In this section I outline a bare-bones model that encompasses the key economic mechanisms. (The analysis that follows is based on Lagos, 2006, 2009, and 2010.)
The model combines elements of the asset-pricing
model of Lucas (1978) with elements of the model
of monetary exchange of Lagos and Wright (2005).
Time is discrete and the horizon infinite. There
is a [0,1] continuum of infinitely lived agents.
Each time period is divided into two subperiods during which different activities take place. There
are three nonstorable and perfectly divisible consumption goods at each date: fruit, general goods,
and special goods. Fruit and general goods are
homogeneous goods, whereas special goods come
in many varieties. The only durable commodity
in the economy is a set of “Lucas trees.” The number of trees is fixed and equal to the number of
agents. Trees yield (the same amount of) a random
quantity xt of fruit in the second subperiod of every
period t. The realization of xt becomes known to
all at the beginning of period t (when agents enter
the first subperiod). Production of fruit is entirely
exogenous: No resources are used and it is not
possible to affect the output at any time. The
motion of xt is assumed to follow a Markov process,
defined by its transition function

$$F(x', x) = \Pr\left(x_{t+1} \le x' \mid x_t = x\right).$$

For each fixed $x$, $F(\cdot, x)$ is a distribution function with support $\Xi \subseteq (0, \infty)$.
In each subperiod, every agent is endowed with $\bar{n}$ units of time that can be used as labor
services. In the second subperiod, each agent has
access to a linear production technology that transforms labor services into general goods. In the first
subperiod, each agent has access to a linear production technology that transforms his own labor
input into a particular variety of the special good
that the agent does not consume. This specialization is modeled as follows: Given two agents i and
j drawn at random, there are three possible events.
The probability that i consumes the variety of
special good that j produces but not vice versa (a
single coincidence) is denoted α. Symmetrically,
the probability that j consumes the special good
that i produces but not vice versa is also α. In a
single-coincidence meeting, the agent who wishes
to consume is the buyer, and the agent who produces is the seller. The probability that neither
wants what the other can produce is 1 – 2α, with
α ≤ 1/2. Fruit and general goods are homogeneous
and hence consumed (and in the case of general
goods, also produced) by all agents.
In the first subperiod, agents participate in a
decentralized market where trade is bilateral
(each meeting is a random draw from the set of
pairwise meetings), and the terms of trade are
F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Lagos

determined by bargaining (a take-it-or-leave-it
offer by the buyer, for simplicity). The specialization of agents over consumption and production
of the special good, combined with bilateral trade,
creates a double-coincidence-of-wants problem
in the first subperiod. In the second subperiod,
agents trade in a centralized market. Agents
cannot make binding commitments, and trading
histories are private in a way that precludes any
borrowing and lending between people, so all
trade—both in the centralized and decentralized
markets—must be quid pro quo.
Each tree has outstanding one durable and
perfectly divisible equity share that represents
the bearer’s ownership and confers to the owner
the right to collect the fruit dividends. A second
financial asset, money, is intrinsically useless (it
is not an argument of any utility or production
function), and unlike equity, ownership of money
does not constitute a right to collect any resources.
Money is issued by a “government” that at t = 0
commits to a monetary policy represented by a
sequence of positive real-valued functions, $\{\mu_t\}_{t=0}^{\infty}$. Given an initial stock of money, $M_0 > 0$, a monetary policy induces a money supply process, $\{M_t\}_{t=0}^{\infty}$, by means of $M_{t+1} = \mu_t(x^t)M_t$, where $x^t$ denotes a history of realizations of fruit dividends through period t—that is, $x^t = (x_t, x_{t-1}, \ldots, x_0)$. The government injects or withdraws money through lump-sum transfers or taxes in the second subperiod of every period; thus, along every sample path, $M_{t+1} = M_t + T_t$, where $T_t$ is the lump-sum transfer (or tax, if negative). All assets are perfectly recognizable, cannot be forged, and can be traded among agents in both the centralized and decentralized markets. At t = 0, each agent is endowed with $a_0^s$ equity shares and $a_0^m$ units of fiat money.
Let the utility function for special goods,
$u: \mathbb{R}_+ \to \mathbb{R}_+$, and the utility function for fruit, $U: \mathbb{R}_+ \to \mathbb{R}_+$, be continuously differentiable, bounded by $B$ on $\Xi$, increasing, and strictly concave, with $u(0) = U(0)$. Let $-n$ be the utility from working $n$ hours in the first subperiod. Also, suppose there exists $q^* \in (0,\infty)$ defined by $u'(q^*) = 1$, with $q^* \le \bar{n}$. Let both the utility for general goods and the disutility from working in the second subperiod be linear. The agent prefers a sequence $\{q_t, n_t, c_t, y_t, h_t\}_{t=0}^{\infty}$ over another sequence $\{\tilde{q}_t, \tilde{n}_t, \tilde{c}_t, \tilde{y}_t, \tilde{h}_t\}_{t=0}^{\infty}$ if
$$\liminf_{T \to \infty} E_0 \sum_{t=0}^{T} \beta^t \Big\{ \big[u(q_t) - n_t + U(c_t) + y_t - h_t\big] - \big[u(\tilde{q}_t) - \tilde{n}_t + U(\tilde{c}_t) + \tilde{y}_t - \tilde{h}_t\big] \Big\} \ge 0,$$
where $\beta \in (0,1)$, $q_t$ and $n_t$ are the quantities of
special goods consumed and produced in the
decentralized market, ct denotes consumption of
fruit, yt consumption of general goods, ht the hours
worked in the second subperiod, and Et is an
expectations operator conditional on the information available to the agent at time t, defined
with respect to the matching probabilities and
the probability measure induced by F.
Next, consider the individual optimization
problems. Let $a_t = (a_t^s, a_t^m)$ denote the portfolio of an agent who holds $a_t^s$ shares and $a_t^m$ units of money. Let $W_t(a_t)$ and $V_t(a_t)$ be the maximum attainable expected discounted utility of an agent who enters the centralized and decentralized market, respectively, at time t with portfolio $a_t$.
Then,
$$W_t(a_t) = \max_{c_t,\, y_t,\, h_t,\, a_{t+1}} \Big\{ U(c_t) + y_t - h_t + \beta E_t V_{t+1}(a_{t+1}) \Big\}$$

$$\text{(1)} \qquad \text{s.t.} \quad c_t + w_t y_t + \phi_t^s a_{t+1}^s + \phi_t^m a_{t+1}^m = \left(\phi_t^s + x_t\right) a_t^s + \phi_t^m a_t^m + T_t + w_t h_t$$

$$0 \le c_t, \quad 0 \le h_t \le \bar{n}, \quad 0 \le a_{t+1}.$$
The agent chooses consumption of fruit (ct ), consumption of general goods (yt ), labor supply (ht ),
and an end-of-period portfolio (at +1). Fruit is used
as numéraire: wt is the relative price of the general
good, φ ts is the (ex-dividend) price of a share, and
1/φ tm is the dollar price of fruit.
Let $[q_t(a, \tilde{a}), p_t(a, \tilde{a})]$ denote the terms at which a buyer who owns portfolio $a$ trades with a seller who owns portfolio $\tilde{a}$, where $q_t(a, \tilde{a}) \in \mathbb{R}_+$ is the quantity of a special good traded, and $p_t(a, \tilde{a}) = [p_t^s(a, \tilde{a}), p_t^m(a, \tilde{a})] \in \mathbb{R}_+ \times \mathbb{R}_+$ is the transfer of assets from the buyer to the seller (the first argument is the transfer of equity). Consider a meeting in the decentralized market of period t between a buyer with portfolio $a_t$ and a seller with portfolio $\tilde{a}_t$. The terms of trade, $(q_t, p_t)$, are determined by Nash bargaining where the buyer has all the bargaining power:
$$\max_{q_t,\, p_t \le a_t} \big[ u(q_t) + W_t(a_t - p_t) - W_t(a_t) \big]$$

$$\text{s.t.} \quad W_t(\tilde{a}_t + p_t) - q_t \ge W_t(\tilde{a}_t).$$
The constraint pt ≤ at indicates that the buyer in
a bilateral meeting cannot spend more assets than
he owns. Let $\lambda_t = (\lambda_t^s, \lambda_t^m)$, with $\lambda_t^s \equiv (\phi_t^s + x_t)/w_t$ and $\lambda_t^m \equiv \phi_t^m/w_t$. The bargaining outcome is as follows: If $\lambda_t a_t \ge q^*$, the buyer buys $q_t = q^*$ in exchange for a vector $p_t$ of assets with real value $\lambda_t p_t = q^* \le \lambda_t a_t$. Otherwise, he pays the seller $p_t = a_t$ in exchange for $q_t = \lambda_t a_t$. Hence, the quantity of the special good exchanged is $\min(\lambda_t a_t, q^*) \equiv q(\lambda_t a_t)$, and the real value of the portfolio used as payment is $\lambda_t p_t(a_t, \tilde{a}_t) = q(\lambda_t a_t)$.
Given the bargaining solution, the value of
search to an agent who enters the decentralized
market of period t with portfolio at can be written as
$$V_t(a_t) = S(\lambda_t a_t) + W_t(a_t),$$

where $S(x) \equiv \alpha\{u[q(x)] - q(x)\}$ is the expected gain
from trade in the decentralized market. Substitute
the budget constraint (equation (1)) and $V_t(a_t)$ into the right-hand side of $W_t(a_t)$ to arrive at

$$W_t(a_t) = \lambda_t a_t + \tau_t + \max_{c_t \ge 0}\left[ U(c_t) - \frac{c_t}{w_t} \right] + \max_{a_{t+1} \ge 0}\left\{ -\frac{\phi_t a_{t+1}}{w_t} + \beta E_t\big[ S(\lambda_{t+1} a_{t+1}) + W_{t+1}(a_{t+1}) \big] \right\},$$
where $\tau_t = \lambda_t^m T_t$ and $\lambda_t = (\lambda_t^s, \lambda_t^m)$.
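To make the bargaining outcome concrete, here is a minimal Python sketch of the traded quantity $q(\lambda_t a_t) = \min(\lambda_t a_t, q^*)$ and the surplus function $S$; the functional form of $u$ and all parameter values are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Illustrative utility for special goods (an assumption, not from the article):
# u(q) = 2*sqrt(q), so u'(q*) = 1 pins down q* = 1.
alpha = 0.3            # probability of a single-coincidence meeting
q_star = 1.0

def u(q):
    return 2.0 * np.sqrt(q)

def q_traded(real_wealth):
    """Quantity the buyer obtains: min(lambda_t * a_t, q*)."""
    return min(real_wealth, q_star)

def S(x):
    """Expected decentralized-market gain from trade, alpha * {u[q(x)] - q(x)}."""
    q = q_traded(x)
    return alpha * (u(q) - q)

for wealth in (0.25, 1.0, 3.0):
    print(f"lambda*a = {wealth:.2f}  ->  q = {q_traded(wealth):.2f}, S = {S(wealth):.4f}")
```

The printout illustrates the two regimes of the bargaining solution: once the real value of the buyer's portfolio covers $q^*$, additional wealth no longer raises decentralized consumption or the trade surplus.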
Given a process $\{M_t\}_{t=0}^{\infty}$, an equilibrium is a plan $\{c_t, a_{t+1}\}_{t=0}^{\infty}$, pricing functions $\{w_t, \phi_t\}_{t=0}^{\infty}$, and bilateral terms of trade $\{q_t, p_t\}_{t=0}^{\infty}$ such that (i) given prices and the bargaining protocol, $\{c_t, a_{t+1}\}_{t=0}^{\infty}$ solves the agent’s optimization problem; (ii) the terms of trade are determined by Nash bargaining—that is, $q_t = \min(\lambda_t a_t, q^*)$ and $\lambda_t p_t = q_t$; and (iii) the centralized market clears—that is, $c_t = x_t$ and $a_{t+1}^s = 1$. The equilibrium is monetary if $\phi_t^m > 0$ for all t, and in this case the money-market clearing condition is $a_{t+1}^m = M_{t+1}$. The market-clearing conditions imply $\{c_t, a_{t+1}^s, a_{t+1}^m\}_{t=0}^{\infty} = \{x_t, 1, M_{t+1}\}_{t=0}^{\infty}$, $w_t = 1/U'(x_t)$, and once $\{\phi_t\}_{t=0}^{\infty}$ has been found, $\{q_t\}_{t=0}^{\infty} = \{\lambda_t p_t\}_{t=0}^{\infty} = \{\min(\Lambda_{t+1}, q^*)\}_{t=0}^{\infty}$, where $\Lambda_{t+1} \equiv \lambda_{t+1}^s + \lambda_{t+1}^m M_{t+1}$. Therefore, given a money supply process $\{M_t\}_{t=0}^{\infty}$ and letting $L(\Lambda_{t+1}) \equiv 1 + S'(\Lambda_{t+1})$, a monetary equilibrium can be summarized by a sequence $\{\phi_t\}_{t=0}^{\infty}$ that satisfies the following necessary and sufficient conditions for individual optimization:
$$U'(x_t)\phi_t^s = \beta E_t\Big[ L(\Lambda_{t+1})\, U'(x_{t+1})\big(\phi_{t+1}^s + x_{t+1}\big) \Big]$$

$$U'(x_t)\phi_t^m = \beta E_t\Big[ L(\Lambda_{t+1})\, U'(x_{t+1})\, \phi_{t+1}^m \Big]$$

$$\lim_{t \to \infty} E_0\big[ \beta^t U'(x_t)\phi_t^s \big] = 0$$

$$\lim_{t \to \infty} E_0\big[ \beta^t U'(x_{t+1})\phi_{t+1}^m M_{t+1} \big] = 0.$$
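For readability, it is worth writing out the liquidity factor $L$ explicitly (a small step added here; it follows directly from $S(x) = \alpha\{u[q(x)] - q(x)\}$ with $q(x) = \min(x, q^*)$):

$$L(\Lambda) = 1 + S'(\Lambda) = \begin{cases} 1 - \alpha + \alpha u'(\Lambda), & \Lambda < q^*, \\ 1, & \Lambda \ge q^*, \end{cases}$$

so $L(\Lambda) > 1$ exactly when the liquidity constraint binds; this is the wedge in the asset-pricing conditions above, and it reappears below as $l(\delta) = 1 - \alpha + \alpha u'(\delta q^*)$.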

NORMATIVE RESULTS: OPTIMAL
POLICY AND IMPLEMENTATION
The Pareto optimal allocation in this environment can be found by solving the problem of a
social planner who wishes to maximize average
(equally weighted across agents) expected utility.
The planner chooses a plan $\{c_t, q_t, n_t, y_t, h_t\}_{t=0}^{\infty}$ subject to the feasibility constraints—that is, $0 \le c_t \le x_t$, $y_t \le h_t$, and $0 \le q_t \le n_t$ for those agents who are matched in the first subperiod of period t and $q_t = n_t = 0$ for those agents who are not. Under these constraints, the planner’s problem consists of finding a feasible plan $\{c_t, q_t\}_{t=0}^{\infty}$ such that
$$\liminf_{T \to \infty} E_0 \sum_{t=0}^{T} \beta^t \Big\{ \big\{ \alpha\big[u(q_t) - q_t\big] + U(c_t) \big\} - \big\{ \alpha\big[u(\tilde{q}_t) - \tilde{q}_t\big] + U(\tilde{c}_t) \big\} \Big\} \ge 0$$

for all feasible plans $\{\tilde{c}_t, \tilde{q}_t\}_{t=0}^{\infty}$. Here, $E_0$ denotes the expectation with respect to the probability measure over sequences of dividend realizations induced by F. The solution is $\{c_t, q_t\}_{t=0}^{\infty} = \{x_t, q^*\}_{t=0}^{\infty}$.
In equilibrium, ct = xt —that is, the equilibrium
consumption of fruit is at the efficient level.
However, the equilibrium allocation has qt ≤ q*,
which may hold with strict inequality in some
states. That is, in a monetary equilibrium, consumption and production in the decentralized
market may be below their efficient levels.
It is convenient to introduce the following
notion of a nominal interest rate before stating
the results. Consider an illiquid nominal bond—
a one-period, risk-free government bond that
pays a unit of money in the centralized market
and cannot be used in decentralized exchange. Let $\phi_t^n$ denote the price of this asset. In equilibrium, this price must satisfy $U'(x_t)\phi_t^n = \beta E_t[U'(x_{t+1})\phi_{t+1}^m]$. Since $\phi_t^n/\phi_t^m$ is the money price of a nominal bond, the (net) nominal interest rate in a monetary equilibrium is $i_t = \phi_t^m/\phi_t^n - 1$ or,
equivalently,

$$\text{(2)} \qquad i_t = \frac{E_t\big[ L(\Lambda_{t+1})\, \lambda_{t+1}^m \big]}{E_t\big[ \lambda_{t+1}^m \big]} - 1.$$

Proposition 1 Equilibrium quantities in a monetary equilibrium are Pareto optimal if and only if $i_t = 0$ with probability 1, for all t.

Proposition 1 establishes the optimality of the Friedman rule—Milton Friedman’s (1969) prescription that monetary policy should induce a zero nominal interest rate to lead to an optimal allocation of resources. The proof is as follows: The equilibrium allocation is efficient if and only if $q_t(\Lambda_t) = q^*$, and this equality holds if and only if $\Lambda_t \ge q^*$—that is, if and only if the real value of the equilibrium portfolio, $\Lambda_t$, is at least as large as the real liquidity needs, represented by $q^*$. The nominal interest rate, $i_t$, is zero if and only if $L(\Lambda_{t+1}) = 1$, and this equality holds if and only if $\Lambda_t \ge q^*$. Hence, $q_t(\Lambda_t) = q^*$ if and only if $i_t = 0$. Intuitively, the cost of producing real balances is zero to the government, so the optimum quantity of real balances should be such that the marginal benefit—which in equilibrium equals the marginal cost, $i_t$—is zero to the economic agents.

I next turn to the question of implementation: Which monetary policies are consistent with a monetary equilibrium in which the nominal interest rate is at its optimal target level of zero? The following result addresses the issue of (weak) implementation by characterizing a family of monetary policies that are consistent with an equilibrium with $i_t = 0$ for all t.

Proposition 2 Let

$$\phi_t^{s*} = E_t \sum_{j=1}^{\infty} \beta^j\, \frac{U'(x_{t+j})}{U'(x_t)}\, x_{t+j},$$

$\lambda_t^{s*} = U'(x_t)(\phi_t^{s*} + x_t)$, and $T$ be the set of dates for which $q^* - \lambda_t^{s*} > 0$ holds with probability $\pi_t > 0$. Assume that $\inf_{t \in T} \pi_t > 0$. A monetary equilibrium with $i_t = 0$ with probability 1 for all t exists under a deterministic money supply process $\{M_t\}_{t=0}^{\infty}$ if and only if the following two conditions hold:

$$\text{(3)} \qquad \lim_{t \to \infty} M_t = 0$$

$$\text{(4)} \qquad \inf_{t \in T} M_t \beta^{-t} > 0 \quad \text{if } T \ne \emptyset.$$
Conditions (3) and (4) are rather unrestrictive
asymptotic conditions. Condition (3) requires
that the money supply converges to zero. Condition (4) requires that asymptotically, on average over the set of dates T when fiat money plays an essential role, the money supply must shrink no faster than the rate of time preference. Versions of this result have been proven by
Wilson (1979) and Cole and Kocherlakota (1998)
for deterministic competitive economies with
cash-in-advance constraints that are imposed on
agents every period with probability 1.
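As a simple illustration (my example, not one given in the article), the deterministic path $M_t = M_0\beta^t$ satisfies both conditions:

$$\lim_{t\to\infty} M_0\beta^t = 0 \qquad \text{and} \qquad \inf_{t} M_0\beta^t\beta^{-t} = M_0 > 0,$$

so contracting the money supply at exactly the rate of time preference is one member of the family of policies characterized by Proposition 2.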
Proposition 2 has several implications. First,
even though liquidity needs are stochastic in this
environment (because equity, whose value is
stochastic, can be used alongside money as a
means of payment), a deterministic money-supply
sequence can suffice to implement a zero nominal rate in every state of the world. Second, even
within the class of deterministic monetary policies, there is a large family of policies that can
implement the Pareto optimal equilibrium. Finally,
it would be impossible for someone with access
to a finite time-series for the path of the money
supply to determine whether an optimal monetary policy is being followed. On the other hand,
a single observation of a positive nominal rate
constitutes definitive evidence of a deviation
from an optimal monetary policy.

POSITIVE RESULTS: INTEREST
RATE TARGETS AND ASSET PRICES
In this section, I consider perturbations of the
optimal monetary policy that consist of targeting
a constant positive nominal interest rate, and then
discuss some positive implications of changes in
the nominal interest rate target for the inflation
J U LY / A U G U S T

2010

307

Lagos

rate, equity prices, and equity returns. To this end,
it is convenient to focus on a recursive formulation in which prices are invariant functions of
the aggregate state $s_t = (x_t, M_t)$—that is, $\phi_t^s = \phi^s(s_t)$, $\phi_t^m = \phi^m(s_t)$, and $\lambda_t = (\lambda^s(s_t), \lambda^m(s_t))$, where $\lambda^s(s_t) = U'(x_t)[\phi^s(s_t) + x_t]$, $\lambda^m(s_t) = U'(x_t)\phi^m(s_t)$, and $\Lambda_t = \lambda^s(s_t) + \lambda^m(s_t)M_t$. Also, I restrict attention to stationary monetary policies—that is, $\mu: \Xi \to \mathbb{R}_+$, so that $M_{t+1} = \mu(x_t)M_t$. To illustrate the main ideas as simply as possible, the following proposition focuses
the analysis on the case of i.i.d. dividends and
liquidity constraints that would bind with probability 1 at every date in the absence of money.
Proposition 3 Assume $dF(x', x) = dF(x')$. Let $l(\delta) = 1 - \alpha + \alpha u'(\delta q^*)$ and $\underline{\delta}$ be defined by $l(\underline{\delta}) = 1/\beta$. Let $\delta_0 \in (\underline{\delta}, 1)$ be given, and suppose that $B \le [1 - \beta l(\delta_0)]\delta_0 q^*$. Then for any $\delta \in [\delta_0, 1]$, there exists a recursive monetary equilibrium under the monetary policy

$$\text{(5)} \qquad \mu(x;\delta) = \beta l(\delta)\, \frac{\delta q^* - \frac{1}{1-\beta l(\delta)} \int x' U'(x')\, dF(x')}{\delta q^* - \frac{\beta l(\delta)}{1-\beta l(\delta)} \int x' U'(x')\, dF(x') - x U'(x)}.$$

The equilibrium prices of equity and money are

$$\text{(6)} \qquad \phi^s(x;\delta) = \frac{\beta l(\delta)}{1-\beta l(\delta)}\, \frac{\int x' U'(x')\, dF(x')}{U'(x)}$$

and

$$\text{(7)} \qquad \phi^m(s;\delta) = \frac{\delta q^* - \frac{\beta l(\delta)}{1-\beta l(\delta)} \int x' U'(x')\, dF(x') - x U'(x)}{U'(x)\, M}.$$

Together with equation (2), the asset prices in
equations (6) and (7) imply that the monetary
policy (equation (5)) induces an equilibrium gross
nominal interest rate that is constant (independent
of s) and equal to $l(\delta) \ge 1$ (with equality only if δ = 1). The function $\mu(\cdot;\delta)$ defines a class of monetary policies indexed by the parameter δ, which
effectively determines the level of the constant
nominal interest rate implemented by the policy.
According to equation (7), real money balances
and the value of money are decreasing in the
nominal interest rate target (increasing in δ ).
308

J U LY / A U G U S T

2010

According to equation (6), under the proposed
policy, the real price of equity is increasing in
the nominal interest rate target (decreasing in δ ).
As $\delta \to 1$, $l(\delta) \to 1$, and therefore (according to Proposition 1) the policy $\mu(x;\delta)$ approaches an
optimal policy under which the recursive monetary equilibrium decentralizes the Pareto optimal
allocation.
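As a concrete illustration of Proposition 3, the following Python sketch evaluates $l(\delta)$ and the prices in equations (6) and (7) under assumed utility functions and an assumed small i.i.d. dividend; none of these functional forms or numbers come from the article.

```python
import numpy as np

# A minimal numerical illustration of Proposition 3 (illustrative assumptions only).
rng = np.random.default_rng(0)

alpha, beta = 0.3, 0.9          # matching probability, discount factor
u_prime = lambda q: q ** -0.5   # u(q) = 2*sqrt(q), so u'(q*) = 1 gives q* = 1
q_star = 1.0
U_prime = lambda c: c ** -0.5   # U(c) = 2*sqrt(c)

# i.i.d. dividend draws; kept small so that equity alone cannot cover
# liquidity needs and money is valued (the case Proposition 3 focuses on).
x_draws = rng.uniform(0.001, 0.005, size=100_000)
integral = np.mean(x_draws * U_prime(x_draws))   # approximates the integral of x'U'(x')dF(x')

def l(delta):
    """Gross nominal interest rate implemented by the policy mu(.; delta)."""
    return 1 - alpha + alpha * u_prime(delta * q_star)

def prices(x, delta):
    """Equity price (6), money price (7) with M = 1, and real balances in state x."""
    A = beta * l(delta) / (1 - beta * l(delta)) * integral
    phi_s = A / U_prime(x)                                   # equation (6)
    real_balances = delta * q_star - A - x * U_prime(x)      # lambda^m * M
    phi_m = real_balances / U_prime(x)                       # equation (7) with M = 1
    return phi_s, phi_m, real_balances

for delta in (0.90, 0.95, 1.00):
    phi_s, phi_m, z = prices(x=0.003, delta=delta)
    print(f"delta={delta:.2f}  i={l(delta) - 1:.4f}  phi_s={phi_s:.4f}  "
          f"phi_m={phi_m:.4f}  money valued: {phi_m > 0}")
```

With these assumed numbers, the printout reproduces the comparative statics discussed above: a lower δ (a higher nominal rate) raises the equity price and lowers the value of money.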
Notice that $\phi^s(x;1)$ is the “fundamental” equilibrium equity price that would result in a Lucas (1978) economy with no liquidity needs. Therefore, the fact that $\phi^s(x;1) < \phi^s(x;\delta)$ for all x and any $\delta \in [\delta_0, 1)$ implies that deviations from the optimal policy “inflate” real asset prices above the value that a financial analyst would calculate based on the expected stream of dividends discounted by the Lucas stochastic discount factor, $\beta U'(x_{t+1})/U'(x_t)$.
On average, liquidity considerations generate
a negative relationship between the nominal interest rate (and the inflation rate) and equity returns:
If the target nominal rate, $l(\delta) - 1$, is higher,
average inflation rate is higher, real money balances are lower, and the liquidity return on equity
rises, which causes its price to rise and its measured real rate of return to fall. Intuitively, a higher
nominal interest rate target implies that buyers are
on average short of liquidity, so equity becomes
more valuable as it is used by buyers to relax their
trading constraints. This additional liquidity value
causes the real financial return on equity to be
lower, on average, at a higher interest rate.
Proposition 3 also shows explicitly how
monetary policy must be conducted to support a
recursive monetary equilibrium with a constant
nominal interest rate (with the Pareto optimal
equilibrium in which the nominal rate is zero
as a special case): The growth rate of the money
supply must be relatively low following states
in which the real value of the equilibrium equity
holdings is below average. Equivalently, the
implied inflation rate will be relatively low
between state x and a next-period state x ′, if the
realized real value of the equilibrium equity holdings in state x is below the state-x conditional
expectation of its value next period.
CONCLUSION
I have presented a simple version of a prototypical search-based monetary model in which
money coexists with a financial asset that yields
a risky real return. In this formulation, money is
not assumed to be the only asset that must, nor
the only asset that can, play the role of a medium
of exchange: Nothing in the environment prevents
agents from using equity along with money, or
instead of money, as a means of payment. Since
the equity share is a claim to a risky aggregate
endowment, the fact that agents can use equity
to finance purchases implies that they face aggregate liquidity risk, in the sense that in some states
of the world, the value of equity holdings may
ultimately be too low relative to what would be
needed to carry out the transactions that require
a medium of exchange. This seems like a natural
starting point to study the role of money and monetary policy in providing liquidity to lubricate the
mechanism of exchange in modern economies.

In this context, I characterized a large family
of optimal monetary policies. Every policy in this
family implements Friedman’s prescription of
zero nominal interest rates. Under an optimal
policy, equity prices and returns are independent
of monetary considerations. I have also studied a
class of monetary policies that target a constant,
but nonzero, nominal interest rate. For this perturbation of the family of optimal policies, I found
that the model articulates the idea that, to the
extent that a financial asset is valued as a means
to facilitate transactions, the asset’s real rate of
return will include a liquidity return that depends
on monetary considerations. As a result of this
liquidity channel, persistent deviations from the
optimal monetary policy will cause the real prices
of assets that can be used to relax borrowing or
other trading constraints to exhibit persistent
deviations from their fundamental values.

REFERENCES
Cole, Harold L. and Kocherlakota, Narayana. “Zero Nominal Interest Rates: Why They’re Good and How to Get
Them.” Federal Reserve Bank of Minneapolis Quarterly Review, Spring 1998, 22(2), pp. 2-10.
Friedman, Milton. “The Optimum Quantity of Money,” in The Optimum Quantity of Money and Other Essays.
Chap. 1. Chicago: Aldine, 1969, pp. 1-50.
Lagos, Ricardo. “Asset Prices and Liquidity in an Exchange Economy.” Staff Report 373, Federal Reserve Bank
of Minneapolis, May 2006; www.minneapolisfed.org/research/SR/SR373.pdf.
Lagos, Ricardo. “Asset Prices, Liquidity, and Monetary Policy in an Exchange Economy.” Working paper,
New York University, April 2009.
Lagos, Ricardo. “Some Results on the Optimality of the Friedman Rule in the Search Theory of Money.” Journal
of Economic Theory, 2010 (forthcoming).
Lagos, Ricardo and Wright, Randall. “A Unified Framework for Monetary Theory and Policy Analysis.” Journal
of Political Economy, June 2005, 113(3), pp. 463-84.
Lucas, Robert E. Jr. “Asset Prices in an Exchange Economy.” Econometrica, November 1978, 46(6), pp. 1429-45.
Wilson, Charles. “An Infinite Horizon Model with Money,” in Jerry R. Green and José A. Scheinkman, eds.,
General Equilibrium, Growth, and Trade. New York: Academic Press, 1979, pp. 79-104.


Reading the Recent Monetary History
of the United States, 1959-2007
Jesús Fernández-Villaverde, Pablo Guerrón-Quintana, and Juan F. Rubio-Ramírez
In this paper the authors report the results of the estimation of a rich dynamic stochastic general
equilibrium (DSGE) model of the U.S. economy with both stochastic volatility and parameter drifting
in the Taylor rule. They use the results of this estimation to examine the recent monetary history
of the United States and to interpret, through this lens, the sources of the rise and fall of the Great
Inflation from the late 1960s to the early 1980s and of the Great Moderation of business cycle fluctuations between 1984 and 2007. Their main findings are that, while there is strong evidence of
changes in monetary policy during Chairman Paul Volcker’s tenure at the Federal Reserve, those
changes contributed little to the Great Moderation. Instead, changes in the volatility of structural
shocks account for most of it. Also, although the authors find that monetary policy was different
under Volcker, they do not find much evidence of a big difference in monetary policy among the
tenures of Chairmen Arthur Burns, G. William Miller, and Alan Greenspan. The difference in aggregate outcomes across these periods is attributed to the time-varying volatility of shocks. The history for inflation is more nuanced, as a more vigorous stand against it would have reduced inflation
in the 1970s, but not completely eliminated it. In addition, they find that volatile shocks (especially those related to aggregate demand) were important contributors to the Great Inflation.
(JEL E10, E30, C11)
Federal Reserve Bank of St. Louis Review, July/August 2010, 92(4), pp. 311-38.

Jesús Fernández-Villaverde is an associate professor of economics at the University of Pennsylvania, a research associate for the National Bureau of Economic Research, a research affiliate for the Centre for Economic Policy Research, and a research associate chair for FEDEA (Fundación de Estudios de Economía Aplicada). Pablo Guerrón-Quintana is an economist at the Federal Reserve Bank of Philadelphia. Juan F. Rubio-Ramírez is an associate professor of economics at Duke University and a visiting scholar at the Federal Reserve Bank of Atlanta and FEDEA. The authors thank André Kurmann, Jim Nason, Frank Schorfheide, Tao Zha, and participants at several seminars for useful comments and Béla Személy for invaluable research assistance. The authors also thank the National Science Foundation for financial support.

1. INTRODUCTION

Uncovering the rationales behind monetary policy is hard. While the instruments of policy, such as the federal funds rate or
reserve requirements, are directly observable, the
process that led to their choice is not. Instead,
we have the documentary record of the minutes
of different meetings, the memoirs of participants
in the process, and the internal memos circulated
within the Federal Reserve System.
Although this paper trail is valuable, it is not
and cannot be a complete record of the policy

process. First and foremost, documents are not a
perfect photograph of reality. For example, participants at Federal Open Market Committee (FOMC)
meetings do not necessarily say or vote what they
really would like to say or vote, but what they
think is appropriate at the moment given their
objectives and their assessment of the strategic
interactions among the members of the committee.
(The literature on cheap talk and strategic voting
is precisely based on those insights.) Also, memoirs are often incomplete or faulty and staff memos
are the product of negotiations and compromises
among several actors. Second, even the most complete documentary evidence cannot capture the
full richness of a policy decision process in a
modern society. Even if it could, it would probably be impossible for any economist or historian
to digest the whole archival record.1 Third, even
if we could forget for a minute about the limitations of the documents, we would face the fact
that actual decisions tell us only about what was
done, but say little about what would have been
done in other circumstances. And while the
absence of an explicit counterfactual may be a
minor problem for historians, it is a deep flaw
for economists who are interested in evaluating
policy rules and making recommendations regarding the response to future events that may be very
different from past experiences.
Therefore, in this paper we investigate the
history of monetary policy in the United States
from 1959 to 2007 from a different perspective.
We build and estimate a rich dynamic stochastic
general equilibrium (DSGE) model of the U.S.
economy with both stochastic volatility and
parameter drifting in the Taylor rule that determines monetary policy. Then, we use the results
of our estimation to examine, through the lens
of the model, the recent monetary policy history
of the United States. Our attention is focused
primarily on understanding two fundamental
observations: (i) the rise and fall of the Great
Inflation from the late 1960s to the early 1980s,
the only significant peacetime inflation in U.S.
history, and (ii) the Great Moderation of business
cycle fluctuations that the U.S. economy experienced between 1984 and 2007, as documented
by Kim and Nelson (1998), McConnell and Pérez-Quirós (2000), and Stock and Watson (2003).
1 For instance, Allan Meltzer (2010), in his monumental A History of the Federal Reserve, uses the summaries of the minutes of FOMC meetings compiled by nine research assistants (volume 2, book 1, page X). This shows how even a several-decades-long commitment to getting acquainted with the archives is not enough to process all the relevant information. Instead, it is necessary to rely on summaries, with all the potential biases and distortions that they might bring. This is, of course, not a criticism of Meltzer: He just proceeded, as many other great historians do, by standing on the shoulders of others. Otherwise, modern archival research would be plainly impossible.

All the different elements in our exercise are necessary. We need a DSGE model because we are interested in counterfactuals. Thus, we require
a model that is structural in the sense of Hurwicz
(1962)—that is, invariant to interventions such
as the ones that we consider. We need a model
with stochastic volatility because, otherwise, any
changes in the variance of aggregate variables
would be interpreted as the consequence of variations in monetary policy. The evidence in Sims
and Zha (2006), Fernández-Villaverde and Rubio-Ramírez (2007), and Justiniano and Primiceri
(2008) points out that these changes in volatility
are first-order considerations when we explore
the data. We need a model with parameter drifting in the monetary policy rule because we want
to introduce changes in policy that obey a fully
specified probability distribution, and not a once-and-for-all change around 1979-80, as is often
postulated in the literature (for example, in
Clarida, Galí, and Gertler, 2000, and Lubik and
Schorfheide, 2004).
In addition to using our estimation to interpret the recent monetary policy history of the
United States, we follow Sims and Zha’s (2006)
call to connect estimated changes to historical
events. (We are also inspired by Cogley and
Sargent, 2002 and 2005.) In particular, we discuss
how our estimation results relate to both the observations about the economy—for instance, how
our model interprets the effects of oil shocks—
and the written record.
Our main findings are that, although there is
strong evidence of changes in monetary policy
during Chairman Paul Volcker’s tenure at the Fed,
those changes contributed little to the Great
Moderation. Instead, changes in the volatility
of structural shocks account for most of it. Also,
although we find that monetary policy was different under Volcker, we do not find much evidence
of a difference in monetary policy among the
tenures of Chairmen Arthur Burns, G. William
Miller, and Alan Greenspan. The reduction in the
volatility of aggregate variables after 1984 is attributed to the time-varying volatility of shocks. The
history for inflation is more subtle. According to
our estimated model, a more aggressive stance of
monetary policy would have reduced inflation
in the 1970s, but not completely eliminated it. In
addition, we find that volatile shocks (especially
F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Fernández-Villaverde, Guerrón-Quintana, Rubio-Ramírez

those related to aggregate demand) were important
contributors to the Great Inflation.
Most of the material in this paper is based
on a much more extensive and detailed work by
Fernández-Villaverde, Guerrón-Quintana, and
Rubio-Ramírez (2010), in which we (i) present the
DSGE model in all of its detail, (ii) characterize
the decision rules of the agents, (iii) build the
likelihood function, and (iv) estimate the model.
Here, we concentrate instead on understanding
recent U.S. monetary history through the lens of
our theory.

2. A DSGE MODEL OF THE U.S.
ECONOMY WITH STOCHASTIC
VOLATILITY AND PARAMETER
DRIFTING
As we argued in the introduction, we need a
structural equilibrium model of the economy to
evaluate the importance of each of the different
mechanisms behind the evolution of inflation and
aggregate volatility in the United States over the
past several decades. However, while the previous
statement is transparent, it is much less clear how
to decide which particular elements of the model
to include. On the one hand, we want a model
that is sufficiently detailed to account for the
dynamics of the data reasonably well. But this goal
conflicts with the objective of having a parsimonious and soundly microfounded description of
the aggregate economy.
Given our investigation, a default choice for
a model is a standard DSGE economy with nominal rigidities, such as the ones in Christiano,
Eichenbaum, and Evans (2005) or Smets and
Wouters (2003). This class of models is currently
being used to inform policy in many central banks
and is a framework that has proven to be successful at capturing the dynamics of the data. However, we do not limit ourselves to a standard DSGE
model. Instead, we extend it in what we think
are important and promising directions by incorporating stochastic volatility into the structural
shocks and parameter drifting in the Taylor rule
that governs monetary policy.
F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Unfortunately, for our purposes, the model
has two weak points that we must acknowledge
before proceeding further: money and Calvo pricing. Most DSGE models introduce a demand for
money through money in the utility function (MIU)
or cash in advance (CIA). By doing so, we endow
money with a special function without sound
justification. This hides inconsistencies that are
difficult to reconcile with standard economic
theory (Wallace, 2001). Moreover, the relation
between structures wherein money is essential
and the reduced forms embodied by MIA or CIA
is not clear. This means that we do not know
whether that relation is invariant to changes in
monetary policy or to the stochastic properties
of the shocks that hit the economy, such as the
ones we study. This is nothing more than the
Lucas critique dressed in a different way.
The second weakness of our DSGE model is
the use of Calvo pricing. Probably the best way
to think about Calvo pricing is as a convenient
reduced form of a more-complicated pricing mechanism that is easier to handle, thanks to its memoryless properties. However, if we are entertaining
the idea that monetary policy or the volatility of
shocks has changed over time, it is exceedingly
difficult to believe that the parameters that control
Calvo pricing have been invariant over the same
period (see the empirical evidence that supports
this argument in Fernández-Villaverde and Rubio-Ramírez, 2008).
However, getting around these two limitations
seems, at the moment, infeasible. Microfounded
models of money are either too difficult to work
with (Kiyotaki and Wright, 1989) or rest in assumptions nearly as implausible as MIU (Lagos and
Wright, 2005) or that the data find too stringent
(Aruoba and Schorfheide, forthcoming). State-dependent models of pricing are too cumbersome
computationally for estimation (Dotsey, King, and
Wolman, 1999).
So, with a certain reluctance, we use a mainstream DSGE model with households; firms (a
labor packer, a final-good producer, and a continuum of intermediate-good producers); a monetary
authority, the Federal Reserve, which implements
monetary policy through open market operations
following a Taylor rule; and nominal rigidities in
the form of Calvo pricing with partial indexation.

2.1 Households
We begin our discussion of the model with
households. We work with a continuum of them,
indexed by j. Households are different because
each supplies a specific type of labor in the market: Some households are carpenters and some
households are economists. If, in addition, each
household has some market power over its own
wage and stands ready to supply any amount of
labor at posted prices, it is relatively easy to introduce nominal rigidities in wages. Some households are able to change their wages and some
are not, and the relative demand for each type of
labor adjusts to compensate for these differences
in input prices.
At the same time, we do not want a complicated model with heterogeneous agents that is
daunting to compute. We resort to two tricks to
get around that problem. First, we have a utility
function that is separable among consumption,
cjt, real money balances, mjt /pt, and hours worked,
ljt . Second, we have complete markets in Arrow
securities. Complete markets allow us to equate
the marginal utilities of consumption across all
households in all states of nature. And, since by
separability this marginal utility depends only on
consumption, all households will consume the
same amount of the final good. The result makes
aggregation trivial. Of course, it also has the
unpleasant feature that those households that do
not update their wages will work different numbers of hours than those that do. If, for example,
we have an increase in the average wage, those
households stuck with the old, lower wages will
work longer hours and have lower total utility.
This is the price we need to pay for tractability.
Given our previous choice of a separable
utility function and our desire to have a balanced
growth path for the economy (which requires a
marginal rate of substitution between labor and
consumption that is linear in consumption), we
postulate a utility function of the form

$$\text{(1)} \qquad E_0 \sum_{t=0}^{\infty} \beta^t d_t \left\{ \log\left(c_{jt} - h c_{jt-1}\right) + \upsilon \log\left(\frac{m_{jt}}{p_t}\right) - \varphi_t \psi \frac{l_{jt}^{1+\vartheta}}{1+\vartheta} \right\},$$

where $E_0$ is the conditional expectation operator,
β is the discount factor for one quarter (the time
period for our model), h controls habit persistence,
and ϑ is the inverse of the Frisch labor supply
elasticity. In addition, we introduce two shifters
to preferences, common to all households: The
first is a shifter to intertemporal preference, dt,
that makes utility today more or less desirable.
This is a simple device to capture shocks to aggregate demand. A prototypical example could be
increases in aggregate demand caused by fiscal
policy, an aspect of reality ignored in our model.
Another possibility is to think about dt as the consequence of demographic shocks that propagate
over time. The second is a shifter to labor supply,
ϕt . As emphasized by Hall (1997), this shock is
crucial for capturing the fluctuation of hours in
the data.
A simple way to parameterize the evolution
of the two shifters is to assume AR(1) processes:
$$\log d_t = \rho_d \log d_{t-1} + \sigma_{dt}\varepsilon_{dt},$$

where $\varepsilon_{dt} \sim N(0,1)$, and

$$\log \varphi_t = \rho_\varphi \log \varphi_{t-1} + \sigma_{\varphi t}\varepsilon_{\varphi t},$$

where $\varepsilon_{\varphi t} \sim N(0,1)$. The most interesting feature
of these processes is that the standard deviations
(SDs), σdt and σϕ t , of the innovations, εdt and εϕ t ,
evolve over time. This is the first place where we
introduce time-varying volatility in the model:
Sometimes the preference shifters are highly
volatile; sometimes they are less so. This changing volatility may reflect, for instance, the different regimes of fiscal policy or the consequences
of demographic forces (Jaimovich and Siu, 2009).
We can specify many different processes for
σdt and σϕ t . A simple procedure is to assume that
σdt and σϕ t follow a Markov chain and take a finite
number of values. While this specification seems
straightforward, it is actually quite involved. The
distribution that it implies for σdt and σϕ t is discrete and, therefore, perturbation methods (such
as the ones that we use later) are ill designed to
deal with it. Such conditions would force us to
rely on global solution methods that are too slow
for estimation.

Instead, we can postulate simple AR(1) processes in logs (to ensure the positivity of the SDs):

$$\log\sigma_{dt} = \left(1 - \rho_{\sigma_d}\right)\log\sigma_d + \rho_{\sigma_d}\log\sigma_{dt-1} + \eta_d u_{dt},$$

where $u_{dt} \sim N(0,1)$, and

$$\log\sigma_{\varphi t} = \left(1 - \rho_{\sigma_\varphi}\right)\log\sigma_\varphi + \rho_{\sigma_\varphi}\log\sigma_{\varphi t-1} + \eta_\varphi u_{\varphi t},$$

where $u_{\varphi t} \sim N(0,1)$. This specification is both parsimonious (with only four new parameters, $\rho_{\sigma_d}$, $\rho_{\sigma_\varphi}$, $\eta_d$, and $\eta_\varphi$) and rather flexible. Because
of these advantages, we impose the same specification for the other three time-varying SDs in the
model that appear below (the ones affecting an
investment-specific technological shock, a neutral
technology shock, and a monetary policy shock).
Hereafter, agents perfectly observe the structural
shocks and the level and innovation to the SDs
and have rational expectations about their stochastic properties.
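For readers who want to see what such a stochastic-volatility process looks like, here is a minimal Python simulation sketch of one preference shifter; the parameter values are illustrative assumptions, not the estimates reported in the paper.

```python
import numpy as np

# Minimal simulation of a shifter with stochastic volatility:
#   log d_t        = rho_d * log d_{t-1} + sigma_dt * eps_dt
#   log sigma_dt   = (1 - rho_sd) * log sigma_bar + rho_sd * log sigma_{dt-1} + eta_d * u_dt
# All numbers below are illustrative, not the paper's estimates.
rng = np.random.default_rng(0)

T = 200
rho_d, rho_sd = 0.9, 0.95        # persistence of the shifter and of its volatility
log_sigma_bar = np.log(0.01)     # mean log volatility
eta_d = 0.2                      # SD of the volatility innovation

log_d = np.zeros(T)
log_sigma = np.full(T, log_sigma_bar)

for t in range(1, T):
    log_sigma[t] = (1 - rho_sd) * log_sigma_bar + rho_sd * log_sigma[t - 1] \
                   + eta_d * rng.standard_normal()
    log_d[t] = rho_d * log_d[t - 1] + np.exp(log_sigma[t]) * rng.standard_normal()

print("average innovation SD:", np.exp(log_sigma).mean())
print("range of log d_t     :", log_d.min(), log_d.max())
```

The point of the exercise is visible in the output: periods in which the volatility state drifts up produce noticeably wider swings in the shifter, even though the shifter's persistence never changes.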
Households keep a rich portfolio: They own
(physical) capital, kjt ; nominal government bonds,
bjt , that pay a gross return Rt –1; Arrow securities,
ajt+1, which pay one unit of consumption in event
ω jt+1,t traded at time t at unitary price qjt+1,t ; and
cash.
The evolution of capital deserves some
description. Given a depreciation rate δ, the
amount of capital owned by household j at the
end of period t is
$$k_{jt} = (1-\delta)k_{jt-1} + \mu_t\left(1 - V\left[\frac{x_{jt}}{x_{jt-1}}\right]\right)x_{jt}.$$
Investment, xjt , is multiplied by a term that
depends on a quadratic adjustment cost function,
$$V\left[\frac{x_t}{x_{t-1}}\right] = \frac{\kappa}{2}\left(\frac{x_t}{x_{t-1}} - \Lambda_x\right)^2,$$
written in deviations with respect to the balanced
growth rate of investment, Λx , with adjustment
parameter κ and an investment-specific technology level µt . This technology level evolves as a
random walk in logs:
$$\log\mu_t = \Lambda_\mu + \log\mu_{t-1} + \sigma_{\mu t}\varepsilon_{\mu t},$$

where $\varepsilon_{\mu t} \sim N(0,1)$ with drift $\Lambda_\mu$ and innovation $\varepsilon_{\mu t}$, whose SD $\sigma_{\mu t}$ evolves according to our favorite autoregressive process:

$$\log\sigma_{\mu t} = \left(1 - \rho_{\sigma_\mu}\right)\log\sigma_\mu + \rho_{\sigma_\mu}\log\sigma_{\mu t-1} + \eta_\mu u_{\mu t},$$

where $u_{\mu t} \sim N(0,1)$.
We introduce this shock convinced by the
evidence in Greenwood, Hercowitz, and Krusell
(1997) that this is a key mechanism to understanding aggregate fluctuations in the United States
over the past 50 years.
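As a small worked example of the two preceding equations (all numbers are illustrative assumptions, not the paper's calibration), the capital accumulation and adjustment-cost terms can be coded directly:

```python
# Capital accumulation with the quadratic investment adjustment cost V.
# Parameter values are illustrative assumptions, not the paper's calibration.
delta, kappa, Lambda_x = 0.025, 9.5, 1.005   # depreciation, adjustment cost, trend investment growth

def V(growth_x):
    """Quadratic adjustment cost in deviations from balanced-growth investment growth."""
    return 0.5 * kappa * (growth_x - Lambda_x) ** 2

def next_capital(k_prev, x_now, x_prev, mu_now):
    """k_jt = (1 - delta) * k_{jt-1} + mu_t * (1 - V[x_jt / x_{jt-1}]) * x_jt."""
    return (1 - delta) * k_prev + mu_now * (1 - V(x_now / x_prev)) * x_now

print(next_capital(k_prev=10.0, x_now=0.26, x_prev=0.25, mu_now=1.0))
```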
Thus, the jth household’s budget constraint is
$$c_{jt} + x_{jt} + \frac{m_{jt}}{p_t} + \frac{b_{jt+1}}{p_t} + \int q_{jt+1,t}\, a_{jt+1}\, d\omega_{jt+1,t} = w_{jt} l_{jt} + \left(r_t u_{jt} - \mu_t^{-1}\Phi[u_{jt}]\right)k_{jt-1} + \frac{m_{jt-1}}{p_t} + R_{t-1}\frac{b_{jt}}{p_t} + a_{jt} + T_t + F_t,$$

where wjt is the real wage, rt is the real rental
price of capital, ujt > 0 is the rate of use of capital,
$\mu_t^{-1}\Phi[u_{jt}]$ is the cost of using capital at rate $u_{jt}$ in
terms of the final good, µt is an investment-specific
technology level, Tt is a lump-sum transfer, and
Ft is the profits of the firms in the economy. We
postulate a simple quadratic form for Φ[.],
$$\Phi[u] = \Phi_1(u-1) + \frac{\Phi_2}{2}(u-1)^2,$$

and normalize u, the utilization rate in the balanced growth path of the economy, to 1. This
imposes the restriction that the parameter Φ1 must
satisfy Φ1 = Φ′[1] = r̃, where r̃ is the balanced
growth path rental price of capital (rescaled by
technological progress, as we explain later).
Of all the choice variables of the households,
the only one that requires special attention is
hours. As we explained previously, each household j supplies its own specific type of labor.
This labor is aggregated by a labor packer into
homogeneous labor, l td, according to a constant
elasticity of substitution technology,
η

 1 η −1  η − 1
d
lt =  ∫ 0 l jtη dj  .



J U LY / A U G U S T

2010

315

Fernández-Villaverde, Guerrón-Quintana, Rubio-Ramírez

The labor packer is perfectly competitive and
takes all the individual wages, wjt , and the wage
wt for l td as given.
The household decides, given the demand
function for its type of labor generated by the
labor packer,
 w jt 
l jt = 
 w 

ltd

∀j ,

2.2 Firms
In addition to the labor packer, we have two
other types of firms in this economy. The first, the
final-good producer, is a perfectly competitive
firm that aggregates a continuum of intermediate
goods with the technology:
=

(

( ∫ y di )
1

ε −1
ε

0

it

ε
ε −1

.

where uAt ~ N共0,1兲.
The quantity sold of the good is determined
by the demand function (equation (3)). Given
equation (3), the intermediate-good producers
set prices to maximize profits. As was the case
for households, intermediate-good producers are
subject to a nominal rigidity in the form of Calvo
pricing. In each quarter, a proportion of them,
1 – θp , can reoptimize their prices. The remaining
fraction θp indexes their prices by a fraction
χ 僆 [0,1] of past inflation.

2.3 The Policy Rule of the Federal
Reserve
In our model, the Federal Reserve implements
monetary policy through open market operations
(that generate lump-sum transfers, Tt, to maintain
a balanced budget). In doing so, the Fed follows
a modified Taylor rule that targets the ratio of
nominal gross return, Rt, of government bonds
over the balanced growth path gross return, R:

This firm takes as given all intermediate-good
prices, pti , and the final-good price, pt , and generates a demand function for each intermediate
good:
p 
(3) y it =  it 
 pt 

−ε

y td

∀i .

Second, we have the intermediate-good producers, each of which has access to a CobbDouglas production function,
1−α

( )

y it = At k itα−1 l itd

,

where kit –1 is the capital, l itd is the packed labor
rented by the firm, and At (our fourth structural
shock) is the neutral productivity level, which
evolves as a random walk in logs:
316

)

logσ At = 1 − ρσ A logσ A + ρσ A logσ At −1 + ηAuAt ,

which wage maximizes its utility and stands ready
to supply any amount of labor at that wage. However, when it chooses the wage, the household is
subject to a nominal rigidity: a Calvo pricing
mechanism with partial indexation. At the start
of every quarter, a fraction 1 – θw of households
are randomly selected and allowed to reoptimize
their wages. All other households can only index
their wages to past inflation with an indexation
parameter χw 僆 [0,1].

(2)

where εAt ~ N共0,1兲 with drift ΛA and innovation εAt .
We keep the same specification for the SD of this
innovation as we did for all previous volatilities:

−η

t

y td

log At = Λ A + log At −1 + σ At ε At ,

J U LY / A U G U S T

2010

Rt  Rt −1 
=

R  R 

γR


yt
γ Π, t 
y t −1
  Πt 

  Π 
 exp Λ y


( )





γ y,t






1−γ R

ξt .

This rule depends on (i) the past Rt –1, which
smooths changes over time; (ii) the “inflation gap,”
Πt /Π, where Π is the balanced growth path of
inflation2; (iii) the “growth gap,” which is the ratio between the growth rate of the economy, $y_t/y_{t-1}$, and $\Lambda_y$, the balanced path gross growth rate of $y_t$, dictated by the drifts of neutral and investment-specific technological change; and (iv) a monetary policy shock, $\xi_t = \exp\left(\sigma_{m,t}\varepsilon_{mt}\right)$, with an innovation $\varepsilon_{mt} \sim N(0,1)$ and SD of the innovation, $\sigma_{m,t}$, that evolves as

$$\log\sigma_{mt} = \left(1 - \rho_{\sigma_m}\right)\log\sigma_m + \rho_{\sigma_m}\log\sigma_{mt-1} + \eta_m u_{m,t}.$$

2 Here we are being careful with our words: Π is inflation in the balanced growth path, not the target of inflation in the stochastic steady state. As we will see later, we solve the model using a second-order approximation. The second-order terms move the mean of the ergodic distribution of inflation, which corresponds in our view to the usual view of the inflation target, away from the balanced growth path level. We could have expressed the policy rule in terms of this mean of the ergodic distribution, but it would amount to solving a complicated fixed-point problem (for every inflation level, we would need to solve the model and check that indeed this is the mean of the ergodic distribution), which is too complicated a task for the potential benefits we can derive from it.
Note that, since we are dealing with a general
equilibrium model, once the Fed has chosen a
value of Π, R is not a free target, as it is determined
by technology, preferences, and Π.
We introduce monetary policy changes
through a parameter drift over the responses of
Rt to inflation, γ Π,t , and growth gaps, γ y,t :

$$\log\gamma_{\Pi t} = \left(1 - \rho_{\gamma_\Pi}\right)\log\gamma_\Pi + \rho_{\gamma_\Pi}\log\gamma_{\Pi t-1} + \eta_\pi \varepsilon_{\pi t},$$

where $\varepsilon_{\pi t} \sim N(0,1)$, and

$$\log\gamma_{yt} = \left(1 - \rho_{\gamma_y}\right)\log\gamma_y + \rho_{\gamma_y}\log\gamma_{yt-1} + \eta_y \varepsilon_{yt},$$

where $\varepsilon_{yt} \sim N(0,1)$.
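The following Python sketch puts the policy block together—the rule in equation (2) with drifting responses—for one simulated quarter; all parameter values are illustrative assumptions, not the estimates reported in the paper.

```python
import numpy as np

# One-quarter update of the modified Taylor rule (equation (2)) with drifting
# coefficients. All parameter values are illustrative, not the paper's estimates.
rng = np.random.default_rng(1)

gamma_R = 0.75                      # interest rate smoothing
Pi_bar, R_bar = 1.005, 1.012        # balanced-growth-path inflation and gross return
Lambda_y = 0.004                    # balanced-growth-path (log) output growth
gamma_Pi_bar, gamma_y_bar = 1.5, 0.25
rho_gPi, rho_gy, eta_pi, eta_y = 0.95, 0.95, 0.05, 0.05
sigma_m = 0.002                     # SD of the monetary policy shock

def drift(log_gamma_prev, log_gamma_bar, rho, eta):
    """AR(1) drift in logs for a policy response coefficient."""
    return (1 - rho) * log_gamma_bar + rho * log_gamma_prev + eta * rng.standard_normal()

def taylor_rule(R_prev, Pi_t, growth_t, gamma_Pi_t, gamma_y_t):
    """Equation (2): gross policy rate implied by the inflation and growth gaps."""
    xi_t = np.exp(sigma_m * rng.standard_normal())   # monetary policy shock
    gap = (Pi_t / Pi_bar) ** gamma_Pi_t * (growth_t / np.exp(Lambda_y)) ** gamma_y_t
    return R_bar * (R_prev / R_bar) ** gamma_R * gap ** (1 - gamma_R) * xi_t

# Example quarter: inflation a bit above target, growth a bit below trend.
log_gamma_Pi = drift(np.log(gamma_Pi_bar), np.log(gamma_Pi_bar), rho_gPi, eta_pi)
log_gamma_y = drift(np.log(gamma_y_bar), np.log(gamma_y_bar), rho_gy, eta_y)
R_t = taylor_rule(R_prev=1.010, Pi_t=1.008, growth_t=1.002,
                  gamma_Pi_t=np.exp(log_gamma_Pi), gamma_y_t=np.exp(log_gamma_y))
print(f"gross policy rate this quarter: {R_t:.4f}")
```

The design choice the drifting coefficients capture is that the same inflation and growth gaps can elicit different policy responses at different dates, which is exactly the channel the estimation exploits below.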
In preliminary estimations, we discovered
that, while other parameters such as γ R could also
be changing, the likelihood function of the model
did not react much to that possibility and, thus,
we eliminated those channels.
Our parameter-drifting specification tries to
capture mainly two different phenomena. First,
changes in the composition of the voting members
of the FOMC (through changes in governors and
in the rotating votes of presidents of regional
Reserve Banks) may affect how strongly the FOMC
responds to inflation and output growth because
of variations in the political-economic equilibrium
in the committee.3 Similarly, changes in staff may
have effects as long as their views have an impact
on the voting members through briefings and
other, less-structured interactions. This may have been particularly true in the late 1960s, when a majority of staff economists embraced Keynesian policies and the MIT-Penn-Federal Reserve System (MPS) model was built.4

3 According to Walter Heller, President Kennedy clearly stated, "About the only power I have over the Federal Reserve is the power of appointment, and I want to use it" (cited by Bremner, 2004, p. 160). The slowly changing composition of the Board of Governors may lead to situations, such as the one in February 1986 (discussed later in the text), when Volcker was outvoted by Ronald Reagan's appointees on the Board.

The second phenomenon
is the observation that, even if we keep constant
the members of the FOMC, their reading of the
priorities and capabilities of monetary policy may
evolve (or be more or less influenced by the general political climate of the nation). We argue
below that this is a good description of Martin,
who changed his beliefs about how strongly the
Fed could fight inflation in the late 1960s, or of
Greenspan’s growing conviction in the mid-1990s
that the long-run growth rate of the U.S. economy
had risen.
While this second channel seems well
described by a continuous drift in the parameters
(beliefs plausibly evolving slowly), changes in
the voting members, in particular the Chairman,
might potentially be better understood as discrete
jumps in γ Π,t and γ y,t . In fact, our smoothed path
of γ Π,t , which we estimate from the data, gives
some support to this view. But in addition to our
pragmatic consideration that computing models
with discrete jumps is hard, we argue in Section 6
that, historically, changes have occurred more
slowly and even new Chairmen have required
some time before taking a decisive lead on the
FOMC (Goodfriend and King, 2005).
In Section 7, we discuss other objections to our form of parameter drifting: in particular, the assumption that agents observe the changes in parameters without error, the exogeneity of the drift, and its neglect of open economy considerations.

2.4 Aggregation and Equilibrium
The model is closed by finding an expression for aggregate demand,

y_t^d = c_t + x_t + \mu_t^{-1} \Phi[u_t] k_{t-1},

and another for aggregate supply,

y_t^s = \frac{A_t (u_t k_{t-1})^{\alpha} (l_t^d)^{1-\alpha}}{v_t^p},

where

l_t^d = \frac{1}{v_t^w} \int_0^1 l_{jt} \, dj

is demanded labor,

v_t^w = \int_0^1 (w_{jt}/w_t)^{-\eta} \, dj

is the aggregate loss of labor input induced by wage dispersion, and

v_t^p = \int_0^1 (p_{it}/p_t)^{-\epsilon} \, di

is the aggregate loss of efficiency induced by price dispersion of the intermediate goods. By market clearing, yt = ytd = yts.

4 The MPS model is the high-water mark of traditional Keynesian macroeconometric models in the Cowles tradition. The MPS model was used operationally by staff economists at the Fed from the early 1970s to the mid-1990s (see Brayton et al., 1997).
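The following minimal Python fragment is purely illustrative (the relative prices and input levels are made-up numbers, not model output). It evaluates the aggregate supply expression above, with the dispersion term v_t^p approximated by an average over a finite set of firms.

    import numpy as np

    def price_dispersion(relative_prices, epsilon):
        # v_t^p = integral over i of (p_it / p_t)^(-epsilon) di,
        # approximated by an average over a finite set of firms.
        return np.mean(np.asarray(relative_prices) ** (-epsilon))

    def aggregate_supply(A_t, u_t, k_lag, l_d, alpha, v_p):
        # y_t^s = A_t (u_t k_{t-1})^alpha (l_t^d)^(1-alpha) / v_t^p
        return A_t * (u_t * k_lag) ** alpha * l_d ** (1 - alpha) / v_p

    v_p = price_dispersion([0.98, 1.00, 1.03], epsilon=10.0)
    y_s = aggregate_supply(A_t=1.0, u_t=1.0, k_lag=10.0, l_d=0.33, alpha=0.3, v_p=v_p)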
The definition of "equilibrium" for this model is rather standard: It is just the path of aggregate quantities and prices such that households and firms solve their optimization problems, the government follows its Taylor rule, and markets clear. But while the definition of equilibrium is straightforward, its computation is not.

3. SOLUTION AND LIKELIHOOD
EVALUATION
The solution of our model is challenging. We
have 19 state variables, 5 innovations to the structural shocks (εdt, εϕt, εAt, εµt, εmt ), 2 innovations to
the parameter drifts (επt, εyt ), and 5 innovations to
the volatility shocks (udt, uϕt, uµt, uAt, umt ), for a
total of 31 variables that we must consider.
A vector of 19 states makes it impossible to
use value-function iteration or projection methods
(finite elements or Chebyshev polynomials). The
curse of dimensionality is too acute even for the
most powerful of existing computers. Standard
linearization techniques do not work either:
Stochastic volatility is inherently a nonlinear
process. If we solved the model by linearization,
all terms associated with stochastic volatility
would disappear because of certainty equivalence

and our investigation would be essentially
worthless.
Nearly by default, then, using perturbation
to obtain a higher-order approximation to the
equilibrium dynamics of our model is the only
option. A second-order approximation includes
terms that depend on the level of volatility. Thus,
these terms capture the responses of agents (households and firms) to changes in volatility. At the
same time, a second-order approximation can be
found sufficiently fast, which is of the utmost
importance since we want to estimate the model
and that forces us to solve it repeatedly for many
different parameter values. Thus, a second-order
approximation is an interesting compromise
between accuracy and speed.
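To see schematically where volatility enters, consider the generic second-order decision rule of perturbation theory (this is the textbook form of Schmitt-Grohé and Uribe, 2006, written in our own illustrative shorthand rather than the article's notation), with ŝ_t the deviation of the state vector from the steady state (the vector stacks predetermined variables and current innovations) and σ the perturbation parameter scaling all innovations:

x_t \approx \bar{x} + g_s \, \hat{s}_t + \tfrac{1}{2} \, \hat{s}_t' \, g_{ss} \, \hat{s}_t + \tfrac{1}{2} \, g_{\sigma\sigma} \, \sigma^2.

A first-order (certainty-equivalent) solution keeps only the term linear in ŝ_t; the constant g_{σσ}σ²/2, together with the quadratic terms that involve the volatility states, is what allows the level of volatility to affect decisions.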
The idea of perturbation is simple. Instead of
the exact decision rule of the agents in the model,
we use a second-order Taylor expansion to the
rule around the steady state. That Taylor expansion depends on the state variables and on the
innovations. However, we do not know the coefficients multiplying each term of the expansion.
Fortunately, we can find them by an application
of the implicit function theorem as follows (see
also Judd, 1998, and Schmitt-Grohé and Uribe,
2006).
First, we write all the equations describing the
equilibrium of the model (optimality conditions
for the agents, budget and resource constraints,
the Taylor rule, and the laws of motion for the
different stochastic processes). Second, we rescale
all the variables to remove the balanced growth
path induced by the presence of the drifts in the
evolution of neutral and investment-specific technology. Third, we find the steady state implied by
the rescaled variables. Fourth, we linearize the
equilibrium conditions around the steady state
found in the previous step. Then, we solve for the
unknown coefficients in this linearization, which
happen to be, by the implicit function theorem,
the coefficients of the first-order terms of the decision rules in the rescaled variables that we are
looking for (which can be easily rearranged to
deliver the decision rules in the original variables).
The next step is to take a second-order approximation of the equilibrium conditions, plugging
in the terms found before, and solve for the coefficients of the second-order terms of the decision rules.
While we could keep iterating on this procedure for as long as we want, Aruoba, Fernández-Villaverde, and Rubio-Ramírez (2006) show that, for the basic stochastic neoclassical growth model (the backbone of our model) calibrated to U.S. data, a second-order approximation delivers excellent accuracy at great computational speed. In our actual computation, we take the symbolic derivatives of the equilibrium conditions using Mathematica 6.0. The code generates all of the relevant expressions and exports them automatically into Fortran files. Then, in each step of the estimation, Fortran plugs in particular parameter values, evaluates those expressions, and determines the terms of the Taylor expansions that we need.
Once we have the approximated solution to
the model, given some parameter values, we use
it to build a state-space representation of the
dynamics of states and observables. This representation is, as we argued before, nonlinear and
hence standard techniques such as the Kalman
filter cannot be applied to evaluate the associated
likelihood function. Instead, we resort to a simulation method known as the particle filter, as
applied to DSGE models by Fernández-Villaverde
and Rubio-Ramírez (2007). The particle filter generates a simulation of different states of the model
and evaluates the probability of the innovations
that make these simulated states explain the
observables. These probabilities are also called
weights. A simple application of a law of large
numbers tells us that the mean of the weights is
an evaluation of the likelihood. The secret of the
success of the procedure is that, instead of performing the simulation over the whole sample,
we perform it only period by period, resampling
from the set of simulated state variables according to the weights we just found. This sequential
structure, which makes the particle filter a case of
a more general class of algorithms called sequential Monte Carlo methods, ensures that the simulation of the state variables remains centered on
the true but unknown value of the state variables.
This dramatically limits the numerical variance
of the procedure.
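The following is a minimal, generic bootstrap particle filter in Python. The propagate and log_density functions are placeholders standing in for the model's (much larger) nonlinear state-space representation; the fragment only illustrates the simulate, weight, and resample loop and the likelihood evaluation via the mean of the weights.

    import numpy as np

    def particle_filter_loglik(y, propagate, log_density, N=1000, seed=0):
        """Generic bootstrap particle filter.
        y: sequence of observables.
        propagate(particles, rng): draws states at t given states at t-1.
        log_density(y_t, particles): log p(y_t | state) for each particle.
        Returns an estimate of the log likelihood."""
        rng = np.random.default_rng(seed)
        particles = propagate(np.zeros((N, 1)), rng)      # initialize near the steady state
        loglik = 0.0
        for t in range(len(y)):
            particles = propagate(particles, rng)         # simulate states forward
            logw = log_density(y[t], particles)           # weight = prob. of the innovations
            maxw = logw.max()
            w = np.exp(logw - maxw)
            loglik += maxw + np.log(w.mean())             # mean of the weights -> p(y_t | y_{1:t-1})
            idx = rng.choice(N, size=N, p=w / w.sum())    # resample according to the weights
            particles = particles[idx]
        return loglik

    # Toy example: a scalar AR(1) state observed with noise.
    rho, q, r = 0.9, 0.5, 0.3
    propagate = lambda s, rng: rho * s + q * rng.standard_normal(s.shape)
    log_density = lambda y_t, s: -0.5 * (np.log(2 * np.pi * r**2) + ((y_t - s[:, 0]) / r) ** 2)
    y = np.cumsum(np.random.default_rng(1).standard_normal(50)) * 0.1
    print(particle_filter_loglik(y, propagate, log_density))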

Now that we have an evaluation of the likelihood of the model given observables, we only
need to search over different parameter values
according to our favorite estimation algorithm.
This can be done in two ways. One is with a regular maximum likelihood algorithm: We look for
a global maximum of the likelihood. This procedure is complicated by the fact that the evaluation
of the likelihood function that we obtain from the
particle filter is nondifferentiable with respect to
the parameters because of the inherent discreteness of the resampling step. An easier alternative,
and one that allows the introduction of presample
information, is to follow a Bayesian approach. In
this route, we specify a prior over the parameters,
multiply the likelihood by it, and sample from the
resulting posterior by means of a random-walk
Metropolis-Hastings algorithm. In this paper, we
choose this second route. In our estimation, however, we do not take full advantage of presample
information since we impose flat priors to facilitate the communication of the results to other
researchers: The shape of our posterior distributions will be proportional to the likelihood. We
must note, however, that relying on flat priors
forces us to calibrate some parameters to values
typically used in the literature (see Fernández-Villaverde, Guerrón-Quintana, and Rubio-Ramírez, 2010 [FGR hereafter], for the calibrated values and their justification).
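A compact sketch of the random-walk Metropolis-Hastings loop described above, again in Python and purely schematic: with flat priors on a permissible region, log_posterior is essentially the particle-filter log likelihood plus a bounds check; the toy target below is only there to make the fragment runnable.

    import numpy as np

    def rw_metropolis(log_posterior, theta0, step_cov, n_draws, seed=0):
        """Random-walk Metropolis-Hastings: propose theta' = theta + N(0, step_cov),
        accept with probability min(1, posterior ratio)."""
        rng = np.random.default_rng(seed)
        chol = np.linalg.cholesky(step_cov)
        theta = np.asarray(theta0, dtype=float)
        logp = log_posterior(theta)
        draws = np.empty((n_draws, theta.size))
        accepted = 0
        for i in range(n_draws):
            proposal = theta + chol @ rng.standard_normal(theta.size)
            logp_prop = log_posterior(proposal)
            if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject step
                theta, logp = proposal, logp_prop
                accepted += 1
            draws[i] = theta
        return draws, accepted / n_draws

    # Toy target: a bivariate normal posterior standing in for likelihood x flat prior.
    log_post = lambda th: -0.5 * np.sum((th - np.array([1.0, -0.5])) ** 2)
    draws, acc_rate = rw_metropolis(log_post, theta0=[0.0, 0.0],
                                    step_cov=0.5 * np.eye(2), n_draws=5000)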
While our description of the solution and
estimation method has been necessarily brief,
the reader is invited to check FGR for additional
details. In particular, FGR characterize the structure of the higher-order approximations, showing
that many of the relevant terms are zero and
exploiting this result to quickly solve for the innovations that explain the observables given some
states. This result, proved for a general class of
DSGE models with stochastic volatility, is bound to
have wide application in all cases where stochastic volatility is an important aspect of the problem.

4. ESTIMATION
To estimate our model, we use five time series
for the U.S. economy: (i) the relative price of
investment goods with respect to the price of consumption goods, (ii) the federal funds rate, (iii) real per capita output growth, (iv) the consumer price index, and (v) real wages per capita. Our sample covers 1959:Q1 to 2007:Q1.

Table 1
Posterior Distributions: Parameters of the Stochastic Processes for Volatility Shocks

logσd = –1.9834 (0.0726)   logσϕ = –2.4983 (0.0917)   logσµ = –6.0283 (0.1278)   logσA = –3.9013 (0.0745)   logσm = –6.000 (0.1471)
ρσd = 0.9506 (0.0298)      ρσϕ = 0.1275 (0.0032)      ρσµ = 0.7508 (0.035)       ρσA = 0.2411 (0.005)       ρσm = 0.8550 (0.0231)
ηd = 0.3246 (0.0083)       ηϕ = 2.8549 (0.0669)       ηµ = 0.4716 (0.006)        ηA = 0.7955 (0.013)        ηm = 1.1034 (0.0185)

NOTE: Numbers in parentheses indicate standard deviations.
Figure 1 plots three of the five series: inflation,
(per capita) output growth, and the federal funds
rate—the three series most commonly discussed
when commentators talk about monetary policy.
By refreshing our memory about their evolution
in the sample, we can frame the rest of our discussion. For ease of reading, each vertical bar
corresponds to the tenure of one Fed Chairman:
Martin, Burns-Miller (we merge these two because
of Miller’s short tenure), Volcker, Greenspan, and
Bernanke.
The top panel shows the history of the Great
Inflation: From the late 1960s to the mid-1980s,
the U.S. experienced its only significant inflation
in peacetime, with peaks of around 12 to 14 percent during the 1973 and 1979 oil shocks. The
middle panel shows the Great Moderation: A simple inspection of the series after 1984 reveals a
much smaller amplitude of fluctuations (especially between 1993 and 2000) than before that
date. The Great Inflation and the Great Moderation
are the two main empirical facts to keep in mind
for the rest of the paper. The bottom panel shows
the federal funds rate, which follows a pattern

similar to inflation: It rises in the 1970s (although
less than inflation during the earlier years of the
decade and more during the last years) and stays
much lower in the 1990s, reaching historical
minima by the end of the sample.
The point estimates we get from our posterior
distribution agree with other estimates in the literature. For example, we document a fair amount
of nominal rigidities in the economy. In any case,
we refer the reader to FGR to avoid a lengthy discussion. Here, we report only the modes and SDs
of the posterior distributions associated with the
parameters governing stochastic volatility (Table 1)
and policy (Table 2). In our view, those parameters are the most relevant for our reading of the
recent history of monetary policy in the United
States.
The main lesson from Table 1 is that the scale
parameters, ηi , are clearly positive and bounded
away from zero, confirming the presence of time-variant volatility in the data. Shocks to the volatility of the intertemporal preference shifter, σd, are
the most persistent (also, the SDs are tight enough
to suggest that we are not suffering from serious
identification problems). The innovations to the
volatility of the intratemporal labor shock, ηϕ ,
are large in magnitude, which suggests that labor
supply shocks may have played an important role
during the Great Inflation by moving the marginal
cost of intermediate-good producers. Finally, the estimates for the volatility process governing investment-specific productivity suggest that such shocks are important in accounting for business cycle fluctuations in the United States (Fisher, 2006).

Figure 1
Time Series for Inflation, Output Growth, and the Federal Funds Rate
[Three panels plot annualized inflation, (per capita) output growth, and the federal funds rate from 1960 to 2005; vertical bars mark the tenures of Martin, Burns-Miller, Volcker, Greenspan, and Bernanke.]

Table 2
Posterior Distribution: Policy Parameters

γR = 0.7855 (0.0162)   logγy = –1.4034 (0.0498)   Π = 1.0005 (0.0043)   logγΠ = 0.0441 (0.0005)   ηπ = 0.1479 (0.002)

NOTE: Numbers in parentheses indicate standard deviations.
The results from Table 2 indicate that the
central bank smooths interest rates (γR > 0). The
parameter γ Π is the average magnitude of the
response to inflation in the Taylor rule. Its estimated value (1.045 in levels) is just enough to
guarantee determinacy in the model (Woodford,
2003).5 The size of the innovations to the drifting inflation parameter, ηπ , reaffirms our view of
a time-dependent response to inflation in monetary policy. The estimates for γy,t (the response
to output deviations in the Taylor rule) are not
reported because preliminary attempts at estimation convinced us that ηy was nil. Hence, in our
next exercises, we set ργ y and ηy to zero.
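For reference, the level response quoted above follows directly from the mode of logγΠ reported in Table 2:

\gamma_{\Pi} = \exp(\log \gamma_{\Pi}) = \exp(0.0441) \approx 1.045.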

5. TWO FIGURES
In this section, we present two figures that
show us much about the evolution and effects of
monetary policy: (i) the estimated smoothed path
of γ Πt over our sample and (ii) the evolution during the same years of a measure of the real interest
rate. In the next section, we map these figures into
the historical record.
Figure 2, perhaps the most important figure
in this paper, plots the smoothed estimate of the
evolution of the response of monetary policy to

inflation plus or minus a 2-SD interval given our
point estimates of the structural parameters. The
message of Figure 2 is straightforward. According
to our model, at the arrival of the Kennedy administration, the response of monetary policy to
inflation was around its estimated mean, slightly
over 1.6 It grew more or less steadily during the
1960s, until reaching a peak at the end of 1967
and beginning of 1968. Subsequently, γ Πt fell so
quickly that it was below 1 by 1971. For nearly
all of the 1970s, γ Πt stayed below 1 and picked up
only with the arrival of Volcker. Interestingly, the
two oil shocks did not have an impact on the estimated γ Πt. The parameter stayed high throughout
the Volcker years and fell after a few quarters into
Greenspan’s tenure, when it returned to levels
even lower than during the Burns and Miller years.
The likelihood function favors an evolving monetary policy even after introducing stochastic
volatility in the model. In FGR, we assess this
statement more carefully with several measures
of model fit, including the construction of Bayes
factors and the computation of Bayesian information criteria between different specifications
of the model.
The reader could argue, with some justification, that we have estimated a large DSGE model
and that it is not clear what is driving the results
and what variation in the data is identifying the
movements in monetary policy. While a fully
worked-out identification analysis is beyond the
scope of this paper, as a simple reality check, we
plot in Figure 3 a measure of the (short-term)
real interest rate defined as the federal funds rate minus current inflation.7

5 In this model, local determinacy depends only on the mean of γΠ.

6 This number nearly coincides with the estimate of Romer and Romer (2002a) of the coefficient using data from the 1950s.

Figure 2
Smoothed Path for the Taylor Rule Parameter on Inflation ±2 SDs
[The smoothed level of the parameter is plotted from 1960 to 2005, with vertical bars marking the tenures of Martin, Burns-Miller, Volcker, Greenspan, and Bernanke.]

Figure 3
Real Interest Rate (Federal Funds Rate Minus Inflation)
[The annualized real rate is plotted from 1960 to 2005, with vertical bars marking the tenures of Martin, Burns-Miller, Volcker, Greenspan, and Bernanke.]
This figure shows that Martin kept the real
interest rate at positive values around 2 percent
during the 1960s (with a peak by the end, which
corresponds with the peak of our estimated γ Πt ).
However, during the 1970s, the real interest rate
was often negative and only rarely above 2 percent,
a rather conservative lower bound on the balanced
growth real interest rate given our point estimates.
The likelihood function can interpret those observations only as a very low γ Πt (remember that the
Taylor principle calls for increases in the real
interest rate when inflation rises; that is, nominal
interest rates must grow more than inflation). Real
interest rates skyrocketed with the arrival of
Volcker, reaching a historic record of 13 percent
by 1981:Q2. After that date, they were never even
close to zero, and only in two quarters were they
below 3 percent. Again, the likelihood function
can interpret that observation only as a high γ Πt .
The Greenspan era is more complicated because
real interest rates were not particularly low in the
1990s. However, output growth was strong, which pushed up the interest rate implied by the Taylor rule. Since the federal funds rate was not as high
as the policy rule would have predicted with a
high γ Πt , the smoothed estimate of the parameter
is lowered. During the 2000s, real interest rates
close to zero were enough, by themselves, to keep
γ Πt low.

6. READING MONETARY HISTORY
THROUGH THE LENS OF OUR
MODEL
Now that we have our model and our estimates of the structural parameters, we smooth
the structural and volatility shocks implied by
the data and use them to read the recent monetary
history of the United States. Somewhat conventionally, we organize our discussion around the
different Chairmen of the Fed from Martin to
Greenspan—except for Miller, whom we group with Burns because of his short tenure.

7 Since inflation is nearly a random walk (Stock and Watson, 2007), its current value is an excellent proxy for its expected value. In any case, our argument is fully robust to slightly different definitions of the real interest rate.
One fundamental lesson from this exercise is
that Figure 2 can successfully guide our interpretation of policy from 1959 to 2007. We document
how both Martin and Volcker believed that inflation was dangerous and that the Fed had both the
responsibility and the power to fight it, although
growing doubts about that power overcame Martin
during his last term as Chairman. Burns, on the
other hand, thought the costs of inflation were
lower than the cost of a recession triggered by
disinflation. In any case, he was rather skeptical
about the Fed’s ability to successfully disinflate.
Greenspan, despite his constant warnings about
inflation, had in practice a much more nuanced
attitude. According to our estimated model, good
positive shocks to the economy gave him the
privilege of skipping a daunting test of his resolve.
Because our DSGE model delivers a complete set of structural and volatility shocks, in FGR we complete this analysis with counterfactual exercises. In those exercises, we build artificial histories of economies
in which some source of variation has been eliminated or modified in an illustrative manner. For
example, we can evaluate how the economy
would have behaved in the absence of changes
in the volatility of the structural shocks or if the
average monetary policy of one period had been
applied in another. By interpreting those counterfactual histories, we attribute most of the defeat
of the Great Inflation to monetary policy under
Volcker and most of the Great Moderation after
1984 to good shocks. We incorporate information
from those counterfactuals as we move along.
Our exercise in this section is closely related
to the work of Christina and David Romer (1989;
2002a,b; 2004), except that we attack the problem
from exactly the opposite perspective. While they
let their narrative approach guide their empirical
specification and like to keep a flexible relation
with equilibrium models, we start from a tightly
parameterized DSGE model of the U.S. economy
and use the results of our estimation to read the
narrative told by the documents. We see both
strategies as complementary since each can teach
us much of interest. Quite remarkably, given the

different research designs, many of our conclusions are similar to the views expressed by Romer
and Romer.

6.1 The Martin Era: Resistance and
Surrender
William McChesney Martin, the Chairman of
the Fed between April 2, 1951, and January 31,
1970, knew how to say no. On December 3, 1965,
he dared to raise the discount rate for the first time
in more than five years, despite warnings from
the Treasury secretary, Henry Fowler, and the
chairman of the Council of Economic Advisors,
Gardner Ackley, that President Lyndon Johnson
disapproved of such a move. Johnson, a man not
used to seeing his orders ignored, was angered
by Martin’s unwelcome display of independence
and summoned him to a meeting at his Texas
ranch. There, for over an hour, he tried to corner
the Chairman of the Fed with the infamous bullying tactics that had made him a master of the
Senate in years past. Martin, however, held his
ground and carried the day: The raise would stand.
Robert Bremner starts his biography of Martin
with this story.8 The choice is most appropriate.
This confrontation illustrates our econometric results better than any other event.
The early 1960s were the high years of
Martin’s tenure. The era of the “New Economics”
combined robust economic growth, in excess of
5 percent, and low inflation, below 3 percent.
According to our estimated model, this moderate
inflation was, in part, a reflection of Martin’s views
about economic policy. Bremner (2004, p. 122)
summarizes Martin’s guiding principles this way:
Stable prices were crucial for the correct working
of a market economy and the Fed’s main task was
to maintain that stability. In Martin’s own words,
“the Fed has a responsibility to use the powers
it possesses over economic events to dampen
excesses in economic activity [by] keeping the use of credit in line with resources available for production of goods and services."9

8 Bremner (2004, pp. 1-2). This was not the only clash of Martin with a president of the United States. In late 1952, Martin bumped into Harry Truman leaving the Waldorf Astoria Hotel in New York City. To Martin's "Good afternoon," Truman wryly replied, "Traitor!" Truman was deeply displeased by how the Fed had implemented the accord of March 3, 1951, between the Fed and the Treasury that ended the interest rate peg in place since 1942 (Bremner, 2004, p. 91).

Martin was
also opposed to the idea (popular at the time)
that the U.S. economy had a built-in bias toward
inflation, a bias the Fed had to accommodate
through monetary policy. Sumner Slichter, an
influential professor of economics at Harvard,
was perhaps the most vocal proponent of the
built-in bias hypothesis. In Martin’s own words,
“I refuse to raise the flag of defeatism in the battle
of inflation” and “there is no validity whatever
in the idea that any inflation, once accepted, can
be confined to moderate proportions.”10 As we
will see in the next subsection, this opposition
stands in stark contrast to Burns’s pessimistic
view of inflation, which had many points of contact with Slichter’s.
Our estimates of γ Π,t , above 1 and growing
during the period, clearly tell us that Martin was
doing precisely that: working to keep inflation low.
Our result also agrees with Romer and Romer’s
(2002a) narrative and statistical evidence regarding the behavior of the Fed during the late 1950s.
We must not forget, however, that our estimates
in FGR suggest as well that the good performance
of the economy from 1961 to 1965 was also the
consequence of good positive shocks.
The stand against inflation started to be tested
around 1966. Intellectually, more and more voices
had been raised since the late 1950s defending
the notion that an excessive concern with inflation was keeping the economy from working at
full capacity. Bremner (2004, p. 138) cites Walter
Heller and Paul Samuelson’s statements before
the Joint Economic Committee in February 1959
as examples of an attitude that would soon gain
strength. The following year, Samuelson and
Robert Solow’s (1960) classic paper about the
Phillips curve was taken by many as providing
an apparently sound empirical justification for a much more sanguine position with respect to inflation: "In order to achieve the nonperfectionist's goal of high enough output to give us no more than 3 percent unemployment, the price index might have to rise by as much as 4 to 5 percent per year. That much price rise would seem to be the necessary cost of high employment and production in the years immediately ahead" (Samuelson and Solow, 1960, p. 192).11

9 Martin's testimony to the Joint Economic Committee, February 5, 1957 (cited by Bremner 2004, p. 123).

10 The first quotation is from the New York Times, March 16, 1957, where Martin was expressing dismay for having reached a 2 percent rate of inflation. The second quotation is from the Wall Street Journal, August 19, 1957. Martin also thought that Keynes himself had changed his views on inflation after the war (they had talked privately on several occasions) and that, consequently, Keynesian economists were overemphasizing the benefits of inflation. See Bremner (2004, pp. 128 and 229).

Heller's
and Tobin’s arrival on the Council of Economic
Advisors transformed the critics into the insiders.
The pressures on monetary policy were contained during Kennedy’s administration, in good
part because C. Douglas Dillon, the secretary of
the Treasury and a Rockefeller Republican, sided
on many occasions with Martin against Heller.12
But the changing composition of the Board of
Governors and the arrival of Johnson, with his
expansionary fiscal programs, the escalation of
the Vietnam War, and the departure of Dillon from
the Treasury Department, shifted the balance of power.
While the effects of the expansion of federal
spending in the second half of the 1960s often
play a central role in the narrative of the start of
the Great Inflation, the evolution of the Board of
Governors has received less attention. Heller
realized that, by carefully selecting the governors,
he could shape monetary policy without the
need to ease Martin out. This was an inspired
observation, since up to that moment, the governors who served under the Chairman had played
an extremely small role in monetary policy and
the previous administrations had, consequently,
shown little interest in their selection.

11 The message of the paper is, however, much more subtle than laying down a simple textbook Phillips curve. As Samuelson and Solow (1960) also say on the next page of their article (p. 193), "All of our discussion has been phrased in short-run terms, dealing with what might happen in the next few years. It would be wrong, though, to think that our Figure 2 menu that relates obtainable price and unemployment behavior will maintain its shape in the longer run. What we do in a policy way during the next few years might cause it to shift in a definite way."

12 In particular, Dillon's support for Martin's reappointment for a new term in 1963 was pivotal. Hetzel (2008, p. 69) suggests that President Kennedy often sided with Dillon and Martin over Heller to avoid a gold crisis on top of the problems with the Soviet Union over Cuba and Berlin.

The strategy worked. Heller's first choice, George W.
Mitchell, would become a leader of those preferring a more expansionary monetary policy on
the FOMC.
By 1964, Martin was considerably worried
about inflation. He told Johnson: “I think we’re
heading toward an inflationary mess that we won’t
be able to pull ourselves out of.”13 In 1965, he
ran into serious problems with the president, as
discussed at the beginning of this section. The
problems appeared again in 1966 with the appointment of Brimmer as a governor against Martin’s
recommendation. During all this time, Martin
stuck to his guns, trying to control inflation even
if it meant erring on the side of overtightening the
economy. Our estimated γ Π,t captures this attitude
with an increase from around 1965 to around 1968.
But by the summer of 1968, Martin gave in
to an easing of monetary policy after the tax surcharge was passed by Congress. As reported by
Hetzel (2008), at the time, the FOMC was divided
into two camps: members more concerned about
inflation (such as Al Hayes, the president of the
Federal Reserve Bank of New York) and members
more concerned about output growth (Brimmer,14
Maisel,15 and Mitchell, all three appointees of
Kennedy and Johnson). Martin, always a seeker
of consensus, was increasingly incapable of carrying the day.16 Perhaps Martin felt that the political climate had moved away from a commitment
13

Oral history interview with Martin, Lyndon B. Johnson Library
(quoted by Bremner, 2004, p. 191).

14

Brimmer is also the first African American to have served as a
governor and was, for a while, a faculty member at the University
of Pennsylvania.

15

Sherman Maisel was a member of the Board of Governors between
1965 and 1972. Maisel, a professor at the Haas School of Business
at the University of California–Berkeley, has the honor of being the
first academic economist appointed as a governor after Adolph
Miller, one of the original governors in 1914. As he explained in
his book, Managing the Dollar (one of the first inside looks at the
Fed and still a fascinating read today), Maisel was also a strong
believer in the Phillips curve: “There is a trade-off between idle
men and a more stable value of the dollar. A conscious decision
must be made as to how much unemployment and loss of output
is acceptable in order to get smaller price rises” (Maisel, 1973,
p. 285). Maisel’s academic and Keynesian background merged in
his sponsoring of the MPS model mentioned in Section 2.

16

On one occasion, Maisel felt strongly enough to call a press conference to explain his dissenting vote in favor of more expansion.

F E D E R A L R E S E R V E B A N K O F S T. LO U I S R E V I E W

Fernández-Villaverde, Guerrón-Quintana, Rubio-Ramírez

to fight inflation.17 Or perhaps he was just
exhausted after many years running the Fed (at
the last meeting of the FOMC in which he participated, he expressed feelings of failure for not
having controlled inflation). No matter what the
exact reason was, monetary policy eased drastically in comparison with what was being called
for by the Taylor rule with a γ Π,t above 1. Thus,
our estimated γ Π,t starts to plunge in the spring
of 1968, reflecting that the increases in the federal
funds rate passed at the end of 1968 and in 1969
were, according to our estimated Taylor rule, not
aggressive enough given the state of the economy.
The genie of the Great Inflation was out of the
bottle.

6.2 The Burns-Miller Era: Monetary
Policy in the Time of Turbulence
Arthur F. Burns started his term as Chairman
of the Fed on February 1, 1970. A professor of
economics at Columbia University and the president of the National Bureau of Economic Research
between 1957 and 1967, Burns was the first academic economist to hold the chairmanship. All
the previous nine Chairmen had been bankers or
lawyers. However, any hope that his economics
education would make him take an aggressive
stand against the inflation brewing during the
last years of Martin’s tenure quickly disappeared.
The federal funds rate fell from an average of
8.02 percent during 1970:Q1 to 4.12 percent by
1970:Q4. The justification for those reductions
was the need to jump-start the economy, which was stuck in the middle of the first recession in nearly a decade, underway since December 1969. But
since inflation stayed at 4.55 percent by the end
of 1970, the reduction in the nominal rate meant
that real interest rates sank into the negative
region.
17

Meltzer (2010, p. 549) points out that Martin and the other Board
members might have been worried by Johnson’s appointment, at
the suggestion of Arthur Okun (the chairman of the Council of
Economic Advisors at the time), of a task force to review changes
in the Federal Reserve System. That message only became reinforced with the arrival of a new administration in 1969, given
Richard Nixon’s obsession with keeping unemployment as low as
possible. (Nixon was convinced that he had lost the 1960 presidential election to a combination of vote fraud and tight monetary
policy.)


Our smoothed estimate of γ Π,t in Figure 2
responds to this behavior of the Fed by quickly
dropping during the same period. This indicates
that the actual reduction in the federal funds rate
was much more aggressive than the reduction
suggested by the (important) fall in output growth
and the (moderate) fall in inflation. Furthermore,
the likelihood function accounts for the persistent
fall in the real interest rate with a persistent fall
in γ Π,t .
Burns did little over the next few years to
return γ Π,t to higher values. Even though the federal
funds rate had started to grow by the end of 1971
(after the 90-day price controls announced on
August 15 of that year as part of Nixon’s New
Economic Policy) and reached new highs in 1973
and 1974, it barely kept up with inflation. The
real interest rate was not above our benchmark
value of 2 percent until the second quarter of 1976.
Later, in 1977, the federal funds rate was raised only cautiously, despite evidence of strong output growth after the 1973-75 recession and of inflation that remained relatively high.
Our econometric results come about because the Taylor rule does not care about the level of the interest rate in itself, but about how much inflation deviates from Π. If γΠ,t > 1, the increases in
the federal funds rate are bigger than the
increases in inflation. This is not what happened
during Burns’s tenure: The real interest rate was
above the cutoff of 2 percent that we proposed
before only in three quarters: his first two quarters as Chairman (1970:Q2 and 1970:Q3) and in
1976:Q2. This observation, by itself, should be sufficient proof of the stance of monetary policy during the period.18
Burns’s successor, William Miller, did not
have time to retract these policies in the brief
interlude of his tenure, from March 8, 1978, to
August 6, 1979. But neither did he have the capability, since his only experience in the conduct of monetary policy was serving as a director of the Federal Reserve Bank of Boston, nor the desire, since he had little faith in restrictive monetary policy's ability to lower inflation.19

18 A memorandum prepared at the end of December 1977 by two of Carter's advisers reveals the climate of the time, proposing not to reappoint Chairman Burns for a third term because he was more concerned with inflation than unemployment (memo for the president on the role of the Federal Reserve, Box 16, R.K. Lipshitz Files, Carter Library, December 10, 1977, pp. 1-2; cited by Meltzer, 2010, p. 922).
Thus, our estimated γ Π,t remains low during that
time.20
Burns was subject to strong pressure from
Nixon.21 His margin of maneuver was also limited by the views of many leading economists, who overestimated the costs of disinflation and were in any case skeptical of monetary policy.22 But his own convictions leaned in the same
direction. According to the recollections of
Stephen H. Axilrod, a senior staff member at the
19

Miller stated, “Our attempts to restrain inflation by using conventional stabilization techniques have been less than satisfactory.
Three years of high unemployment and underutilized capital stock
have been costly in terms both of lost production and of the denial
to many of the dignity that comes from holding a productive job.
Yet, despite this period of substantial slack in the economy, we
still have a serious inflation problem” (Board of Governors, 1978,
p. 193; quoted by Romer and Romer, 2004, p. 140).

20

The situation with Miller reached the surrealistic point when, as
narrated by Kettl (1986), Charles Schultze, the chairman of the
Council of Economic Advisors, and Michael Blumenthal, the
Treasury secretary, were leaking information to the press to pressure
Miller to tighten monetary policy.

21

Perhaps the clearest documented moment is the meeting between
Nixon and Burns on October 23, 1969, right after Burns’s nomination, as narrated by John Ehrlichman (1982, pp. 248-49): “I know
there’s the myth of the autonomous Fed...Nixon barked a quick
laugh…and when you go up for confirmation some Senator may
ask you about your friendship with the President. Appearances
are going to be important, so you can call Ehrlichman to get messages to me, and he’ll call you.” The White House continued its
pressure on Burns by many different methods, from constant conversations to leaks to the press (falsely) accusing Burns of requesting a large wage increase. These, and many other histories, are
collected in a fascinating article by Abrams (2006).

22

Three examples. First, Franco Modigliani testified before the U.S.
Congress on July 20, 1971: “[Y]ou have to recognize that prices are
presently rising, and no measure we can take short of creating massive unemployment is going to make the rate of change of prices
substantially below 4 percent.” Second, Otto Eckstein, the builder
of one of the large macroeconometric models at the time, the DRI
U.S. model, argued that it was not the Fed’s job to solve structural
inflation. Third, James Tobin (1974): “For the rest of us, the tormenting difficulty is that the economy shows inflationary bias even
when there is significant involuntary unemployment. The bias is
in some sense a structural defect of the economy and society…
Chronic and accelerating inflation is then a symptom of a deeper
social disorder, of which involuntary unemployment is an alternative symptom. Political economists may differ about whether
it is better to face the social conflicts squarely or to let inflation
obscure them and muddle through. I can understand why anyone
who prefers the first alternative would be working for structural
reform, for a new social contract. I cannot understand why he
would believe that the job can be done by monetary policy. Within
limits, the Federal Reserve can shift from one symptom to the other.
But it cannot cure the disease.” The examples are quoted by Hetzel
(2008, pp. 86, 89, and 128).


Board back then, Burns did not believe any theory
of the economy—whether Keynesian or monetarist—could account for the business cycle; he
dismissed the relation between the stock of money
and the price level; and he was unwilling or
unable to make a persuasive case against inflation
to the nation and to the FOMC.23
In addition, Burns had a sympathetic attitude
toward price and wage controls. For instance, he
testified to Congress on February 7, 1973:
[T]here is a need for legislation permitting some
direct controls over wages and prices...The
structure of our economy—in particular, the
power of many corporations and trade unions
to exact rewards that exceed what could be
achieved under conditions of active competition—does expose us to upward pressure on
costs and prices that may be cumulative and
self-reinforcing (cited by Hetzel, 2008, p. 79).

He reiterated that view in a letter to the president on June 1, 1973, in which he proposed to
reintroduce mandatory price controls for large
firms.24 In his view, controls could break the cost-push spiral of the economy and the inflationary
pressures triggered by the social unrest of the late
1960s and be a more effective instrument than
open market operations, which could be quite
costly in terms of employment and financial disturbances.25 In fact, many members of the FOMC
believed that the introduction of price and wage
controls in different phases between 1971 and
1973 had not only eased the need for monetary
tightening, but also positively suggested that monetary policy should not impose further restraint
on the economy.26 More interestingly, if price and
wage controls were an argument for loose monetary policy, their easing was also an argument
for expansionary policy, or as Governor Charles
Partee put it during the FOMC meeting of
January 11, 1973, the lifting of controls “might
necessitate a somewhat faster rate of monetary
growth to finance the desired growth in real output under conditions of greater cost-push inflation than would have prevailed with tighter controls" (cited by Meltzer, 2010, p. 815).

24 Burns papers, B_N1, June 1, 1973, as cited by Meltzer (2010, p. 787).

25 At the time, many financial institutions were subject to ceiling rates on deposits, which could have made them bankrupt in the case of a fast tightening of monetary policy.

26 Maisel's diary entry for August 25, 1971; cited by Meltzer, 2010, p. 790.
Burns’s 1979 Per Jacobsson lecture is a revealing summary of Burns’s own views on the origins
and development of inflation. He blamed the
growing demands of different social groups during
the late 1960s and early 1970s and the federal
government’s willingness to concede to them as
the real culprit behind inflation. Moreover, he
felt that the Fed could not really stop the inflationary wave: If the Federal Reserve then sought to
create a monetary environment that fell seriously
short of accommodating the upward pressures
on prices that were being released or reinforced
by governmental action, severe difficulties could
be quickly produced in the economy. Not only
that, the Federal Reserve would be frustrating
the will of Congress to which it was responsible.
But beyond Burns’s own defeatist attitude
toward inflation, he was a most unfortunate Chairman. He was in charge during a period of high
turbulence and negative shocks, not only the 1973
oil shock, but also poor crops in the United States
and the Soviet Union. Our model estimates large
and volatile intertemporal shocks, dt , and labor
supply shocks, ϕt , during his tenure (see FGR for
a plot of these shocks). Examples of intertemporal
shocks include the final breakdown of the Bretton
Woods Agreement, fiscal policy during the 1973-75 recession (with a temporary tax cut signed in
March 1975 and increases in discretionary spending), and Nixon’s price and wage controls (which
most likely distorted intratemporal allocations).
Examples of labor supply shocks include the historically high level of strikes in American industry during the early 1970s. (A major issue in the
Republican primary of 1976 between Ford and
Reagan was picketing rules for striking workers,
a policy issue most unlikely to grab many voters’
attention nowadays.)
Both types of shocks complicated monetary
policy. Large positive intertemporal shocks
increase aggregate demand. In our model, this
translates partly into higher output and partly
into higher inflation. Positive labor supply shocks

increase wages, which pushes up the marginal
cost and, therefore, inflation. Moreover, FGR show
that, if volatility had stayed at historical levels,
even with negative innovations, inflation would
have been much lower and the big peak of 1973
avoided.
However, these negative shocks should not
make us forget that, according to our model, if
monetary policy had engineered higher real
interest rates during those years, the history of
inflation could have been different. In FGR we
calculate that, had monetary policy behaved
under Burns and Miller as it did under Volcker,
inflation would have been 4.36 percent on average,
instead of the observed 6.23 percent. The experience of Germany or Switzerland, which had much
lower inflation than the United States during the
same time, suggests that this was possible. After
all, the peak of inflation in Germany was in 1971,
well before any of the oil shocks. And in neither
of these two European countries do we observe
statements such as that by Governor Sheehan at the January 22, 1974, FOMC meeting: "[T]he
Committee had no choice but to validate the rise
in prices if it wished to avoid compounding the
recession” (Hetzel, 2008, p. 93).
Thus, our reading of monetary policy during
the Burns years through the lens of our model
emphasizes the confluence of two phenomena:
an accommodating position with respect to inflation and large and volatile shocks that complicated the implementation of policy. There is ample
evidence in the historical record to support this
view. This was, indeed, monetary policy in the
time of turbulence.

6.3 The Volcker Era: High Noon
In his 1979 Per Jacobsson lecture cited earlier,
Burns had concluded: “It is illusory to expect
central banks to put an end to the inflation that
now afflicts the industrial democracies.” Paul
Volcker begged to differ. He had been president
of the Federal Reserve Bank of New York since
August 1975 and, from that position, a vocal foe
of inflation. In particular, during his years as a
member of the FOMC, Volcker expressed concern
that the Fed was consistently underpredicting

inflation and that, therefore, monetary policy
was more expansionary than conventionally
understood (Meltzer, 2010, p. 942).27
In the summer of 1979, Jimmy Carter moved
Miller to the Treasury Department. Then, he
offered Volcker the chairmanship of the Board of
Governors. Volcker did not hesitate to take it, but
not before warning the president “of the need
for tighter money—tighter than Bill Miller had
wanted" (Volcker and Gyohten, 1992, p. 164)
and the Senate in his confirmation hearings that
“the only sound foundation for the continuing
growth and prosperity of the American economy
is much greater price stability” (U.S. Senate, 1979,
p. 16; quoted by Romer and Romer, 2004, p. 156).
Deep changes were coming and the main decisionmakers were aware of them.
We should be careful not to attribute all of
the sharp break in monetary policy to Volcker’s
appointment. In 1975, the House passed Concurrent Resolution 133, the brainchild of Karl Brunner
(Weintraub, 1977). This resolution, which asked
the Fed to report to the House Banking Committee
on objectives and plans with respect to the ranges
of growth or diminution of monetary and credit
aggregates in the upcoming twelve months, was
a first victory for monetarism. Although the resolution probably did little by itself, it was a sign
that times were changing. Congress acted again
with the Full Employment and Balanced Growth
Act of 1978, which required the Fed to report
monetary aggregates in its reports to Congress. In
April 1978, the federal funds rate started growing
quickly, from a monthly average of 6.9 percent to
10 percent by the end of the year. This reflected
a growing consensus on the FOMC (still with
many dissenting voices) regarding the need for
lower inflation. Figure 2 shows the start of an
increase in γ Π,t around that time. At the same time,
the new procedures for monetary policy that targeted money growth rates and reserves instead
of the federal funds rate were not announced until October 6, 1979.

27 This position links to an important point made by Orphanides (2002): Monetary policy decisions are implemented using real-time data, a point that our model blissfully ignores. In turbulent times such as the 1970s, this makes steering the ship of policy toward its targets exceedingly difficult.

Additionally, Goodfriend
and King (2005) have argued that Volcker required
some time before asserting his control over the
FOMC. For instance, in the Board meeting of
September 18, 1979, Volcker did obtain a rise in
the discount rate, but only with three dissenting
votes. As we argued in Section 2, all of these
observations suggest that modeling the evolution
of monetary policy as a smooth change may be
more appropriate than assuming a pure break.
Regardless of the exact timing of changes in
monetary policy, the evidence in Figure 2 is overwhelming: On or about August 1979, the character
of monetary policy changed. The federal funds
rate jumped to new levels, with the first significant long-lasting increase in the real interest rate
in many years. Real interest rates would remain
high for the remainder of the decade of the 1980s,
partly reflecting high federal funds rates and partly
reflecting the deeply rooted expectations of inflation among the agents. In any case, the response
of monetary policy to inflation, γ Π,t , was consistently high during the whole of Volcker’s years.
An important question is the extent to which
the formalism of the Taylor rule can capture the
way in which monetary policy was conducted
at the time, when money growth targeting and
reserve management were explicitly tried (what
Volcker called “practical monetarism”). We are
not overly concerned about this aspect of the data
because, in our DSGE model, there is a mapping
between money targeting and the Taylor rule
(Woodford, 2003). Thus, as long as we are careful
to interpret the monetary policy shocks during the
period (which we estimate were, indeed, larger
than in other parts of the sample), our exercise
should be relatively robust to this consideration.28
28 This begets the question of why Volcker spent so much effort on switching the operating procedure of the Fed between 1979 and 1982. Volcker himself ventures that it was easier to sell a restrictive monetary policy in terms of money growth rates than in terms of interest rates: "More focus on the money supply also would be a way of telling the public that we meant business. People don't need an advanced course in economics to understand that inflation has something to do with too much money" (Volcker and Gyohten, 1992, pp. 167-68).

A much more challenging task could be to build a DSGE model with a richer set of monetary policy rules and switches between them. However,
at the moment, this goal seems infeasible.29
The impressions of participants in the monetary policy process reinforced the message of
Figure 2. For instance, Axilrod (2009, p. 91) states:
During Paul Volcker’s eight-year tenure as
chairman of the Fed...policy changed dramatically. He was responsible for a major transformation—akin to a paradigm shift—that was
intended to greatly reduce inflation, keep it
under control, and thereby restore the Fed’s
badly damaged reputation. Furthermore, it was
almost solely because of Volcker that this particular innovation was put in place—one of the
few instances in my opinion where a dramatic
shift in policy approach could be attributed
to a particular person’s presence rather than
mainly or just to circumstances.

Volcker himself was very explicit about his
views30:
[M]y basic philosophy is over time we have no
choice but to deal with the inflationary situations because over time inflation and unemployment go together...Isn’t that the lesson of
the 1970s? We sat around [for] years thinking
we could play off a choice between one or the
other...It had some reality when everybody
thought processes were going to be stable...So
in a very fundamental sense, I don’t think we
have the choice.

In fact, Volcker’s views put him in the rather
unusual position of being outvoted on February 24,
1986. In that meeting, a majority of four members
of the Board voted against Volcker and two other
dissenting members to lower the discount rate
50 basis points.
At the same time, and according to our model,
Volcker was also an unlucky Chairman. The economy still suffered from large and negative shocks
during his tenure, since the level and volatility

of the intratemporal preference shifter did not
fall until later in his term. In FGR, we build a
counterfactual in which Volcker is faced with
the same structural shocks he faced in real life,
but with the historical average volatility. In this
counterfactual history, inflation falls to negative
values by the end of 1983, instead of still hovering around 3 to 4 percent. It was a tough policy
in a difficult time. However, despite these misfortunes and a heavy inheritance from the past, our
model tells us that monetary policy conquered
the Great Inflation. The Great Moderation would
have to wait for better shocks.
We started this subsection with Burns’s own
words in the 1979 Per Jacobsson lecture. In 1989,
Volcker was invited to give the same lecture. What
a difference a decade can make! While Burns
was sad and pessimistic (his lecture was entitled
“The Anguish of Central Banking”), Volcker was
happy and confident (his lecture was entitled
“The Triumph of Central Banking?”). Inflation
had been defeated and he warned that “our collective experience strongly emphasizes the importance of dealing with inflation at an early stage”
(Volcker, 1990, p. 14).

6.4 The Greenspan Era: Speaking Like
a Hawk and Walking Like a Dove
These are the colorful words with which
Laurence Meyer (2004, p. 83) summarizes
Greenspan’s behavior during Meyer’s time as a
governor (June 1996 to January 2002). Once and again,

[Greenspan] seemed to fall into a pattern: The Chairman would ask for no change in the funds rate, suggest that the time was approaching for action, and indicate that there was a high probability of a move at the next meeting. Then at the next meeting, he would explain that the data did not yet provide a credible basis for tightening, and in any case, that the markets didn’t expect a move. However, he would conclude that he expected the Committee would be forced to move at the next meeting.

29
The impact of the credit controls imposed by the Carter administration starting on March 14, 1980, is more difficult to gauge. Interestingly, we estimate a large negative innovation to the intratemporal preference shifter at that point in time, a likely reflection of the distortions of the controls in the intertemporal choices of households (see the historical description in Schreft, 1990).

30
Volcker papers, Federal Reserve Bank of New York, speech at the National Press Club, Box 97657, January 2, 1980; quoted by Meltzer, 2010, p. 1034.

Meyer means these words in a positive way. In
his opinion, Greenspan discovered before he did

that the economy was being hit during the second
half of the 1990s by an unusual sequence of positive shocks and directed monetary policy to take
advantage of them.
We quote Meyer because his account illustrates that Greenspan showed from the start that he knew how to respond to changing circumstances. He was appointed on August 11, 1987. In his confirmation hearings, he clearly reaffirmed the need to fight inflation.31 But after just a couple of months, on October 19, 1987, he reacted to the stock market crash by declaring the Fed’s readiness to serve as a source of liquidity, even if, in the short run, this could complicate the control of inflation.
Later, in early 1989, the federal funds rate
started to fall, despite the fact that inflation
remained at around 6 percent until the end of
1990. As shown in Figure 2, our estimate of γ Π,t picks up this fall by dropping as well, and dropping fast. We estimate that γ Π,t was soon below
1, back to the levels of Burns-Miller (although, for
a while, there is quite a bit of uncertainty in our
estimate). The parameter stayed there for the rest
of Greenspan’s tenure. The reason for this estimated low level of γ Π,t is that the real interest rate
also started to fall rather quickly. At the same time,
a remarkable sequence of good shocks delivered
rapid output growth and low inflation.
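The logic behind this estimate can be seen from a schematic version of a drifting-coefficient rule (an illustrative reduced form only; the exact specification we estimate, with its additional terms, is laid out in FGR):

\[
R_t \approx \bar{R} + \gamma_{\Pi,t}\left(\Pi_t - \bar{\Pi}\right) + \gamma_{y,t}\,\Delta y_t + \varepsilon_{m,t}.
\]

With inflation still elevated while the nominal rate was coming down, the likelihood can reconcile the two only through a low γ Π,t, unless the entire decline is attributed to monetary policy shocks.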
In fact, in FGR we find that all of the shocks
went right for monetary policy during the 1990s.
A large string of positive and stable investment-specific technological shocks delivered fast productivity growth, a falling intertemporal shifter
lowered demand pressures, and labor supply
shocks pressured wages downward—and, with
them, marginal costs. This fantastic concatenation
of shocks accounted for the bulk of the Great
Moderation. In FGR, we calculate that without
changes in volatility, the Great Moderation would
have been much smaller. The SD of inflation
would have fallen by only 13 percent (instead of 60 percent in the data), the SD of output growth would have fallen by 16 percent (instead of 46 percent in the data), and the SD of the federal funds rate would have fallen by 35 percent (instead of 39 percent in the data). That is, the moderation in inflation fluctuations would have been only one-fifth as large as in the data (and the counterfactual mean would have actually been higher than in the data) and the moderation in the SD of output growth only one-third.

31
Greenspan stated in his confirmation hearings: “[W]e allowed our system to take on inflationary biases which threw us into such a structural imbalance that, in order to preserve the integrity of the system, the Federal Reserve had to do what it did. Had it not acted in the way which it did at that time, the consequences would have been far worse than what subsequently happened” (U.S. Senate, 1987, p. 35; quoted by Romer and Romer, 2004, p. 158).
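The one-fifth and one-third figures follow directly from the standard deviations reported in the text; a quick back-of-the-envelope check:

```python
# Quick check of the moderation ratios quoted in the text (numbers from FGR's counterfactual).
moderation = {                      # (counterfactual fall in SD, observed fall in SD), in percent
    "inflation":      (13, 60),
    "output growth":  (16, 46),
    "fed funds rate": (35, 39),
}
for series, (cf, data) in moderation.items():
    print(f"{series}: counterfactual moderation is {cf / data:.2f} of the observed moderation")
# roughly 0.22 (about one-fifth), 0.35 (about one-third), and 0.90, respectively
```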
We can push the argument even further. In
FGR we build the counterfactual in which the
average γ Π,t during the Greenspan years is plugged
into the model at the time of Burns’s appointment.
Then, we keep γ Π,t at that level and hit the model
with exactly the same shocks that we backed out
from our estimation. This exercise is logically
coherent, since we are working with a DSGE
model and, therefore, the structural and volatility
shocks are invariant to this class of interventions.
We compute that the average monetary policy
during Greenspan’s years would not have made
much of a difference in the 1970s. If anything,
inflation would have been even slightly higher
(6.83 percent in the counterfactual instead of 6.23
percent in the data). This finding contrasts with
our counterfactual in which Volcker is moved to
the Burns-Miller era. In this counterfactual, inflation would have been only 4.36 percent. To summarize, our reading of monetary policy during the
Greenspan years is that it was not too different
from the policy in the Burns-Miller era; it just
faced much better shocks.
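In the same hypothetical notation as the earlier sketch, this second counterfactual amounts to fixing the inflation response at its Greenspan-era average, re-solving the model, and feeding back the estimated shocks; again, the function and variable names below are placeholders rather than our actual code.

```python
# Minimal sketch of the "average Greenspan policy in the 1970s" counterfactual (illustrative only).
import numpy as np

def counterfactual_policy(solve_model, shocks_hist, gamma_pi_path, t0, t1):
    """Fix gamma_pi at its average over periods [t0, t1) and replay the historical shocks."""
    gamma_pi_fixed = float(np.mean(gamma_pi_path[t0:t1]))  # e.g., the Greenspan-era average
    model_cf = solve_model(gamma_pi=gamma_pi_fixed)        # re-solve with the constant coefficient
    return model_cf.simulate(shocks_hist)                  # counterfactual inflation, output, rates
```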
Is this result credible? First, it is clear that it
is not a pure artifact of our model. A similar
result is found in Sims and Zha (2006). These
authors, using structural vector autoregressions
with Markov switching, which imposes many
fewer cross-equation restrictions than our analysis, do not find much evidence of differences in
monetary policy across time (actually, Sims and
Zha’s position is even stronger than ours, since they do not find that monetary policy was different even under Volcker). Second, there are hints in
the data that lead us to believe that the results
make sense. At the start of the 1994 inflation scare,
when there were no signs of the “new economy”
anywhere to be seen, Greenspan argued32:
You know, I rarely feel strongly about an issue,
and I very rarely sort of press this Committee.
But let me tell you something about what’s
gnawing at me here. I am very sympathetic
with the view that we’ve got to move and that
we’re going to have an extended period of
moves, assuming the changes that are going on
now continue in the direction of strength. It is
very unlikely that the recent rate of economic
growth will not simmer down largely because
some developments involved in this particular
period are clearly one-shot factors—namely,
the very dramatic increase in residential construction and the big increase in motor vehicle
sales. Essentially the two of those have added
one-shot elements to growth. In the context of
a saving rate that is not high, the probability is
in the direction of this expansion slowing from
its recent pace, which at the moment is well
over 4 percent and, adjusting for weather
effects, may be running over 5 percent. This is
not sustainable growth, and it has nothing to
do with monetary policy. In other words, it will
come down. And the way a 3 percent growth
feels, if I may put it that way, is a lot different
from the way the expansion feels now.
I would be very concerned if this Committee
went 50 basis points now because I don’t think
the markets expect it...I’ve been in the economic
forecasting business since 1948, and I’ve been
on Wall Street since 1948, and I am telling you
I have a pain in the pit of my stomach, which
in the past I’ve been very successful in alluding
to. I am telling you—and I’ve seen these markets—this is not the time to do this. I think there
will be a time; and if the staff’s forecast is right,
we can get to 150 basis points pretty easily.
We can do it with a couple of 1/2 point jumps
later when the markets are in the position to
know what we’re doing and there’s continuity.
I really request that we not do this. I do request
that we be willing to move again fairly soon,
and maybe in larger increments; that depends
on how things are evolving.

We construe this statement as revealing a low
γ Π,t. We could present similar evidence regarding the behavior of policy in the aftermath of the Long-Term Capital Management fiasco or in the exit from the 2001 recession. But we feel the point has been made. We believe that our estimates are right: Monetary policy in the Greenspan years was similar to monetary policy under Burns and Miller. Instead, time-varying structural shocks were the mechanism that played a key role in the Great Moderation and the low inflation of 1987-2007.

32
Board of Governors FOMC Transcripts, February 3-4, 1994, p. 55.

7. WHAT ARE WE MISSING?
What is our model missing that is really important? The answer will tell us much about where
we want to go in terms of research and where we
need to be careful in our reading of monetary history. Of all the potential problems of our specification, we are particularly concerned about the
following.
First, households and firms in the model observe the changes in the coefficients γ Π,t and γ y,t when they occur. A more plausible scenario would involve filtering in real time by agents who need to learn the stance of the monetary authority from observed decisions.33 A similar argument can be made for the values of the SDs of all the other shocks in the economy. Unfortunately, introducing learning suffers from two practical difficulties: it is not obvious how best to model learning about monetary policy, especially in a nonlinear environment such as ours, where least-squares rules may not work properly; and it would make the computation of the model nearly infeasible.
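One natural benchmark for such real-time filtering, offered purely as an illustration rather than the scheme we would necessarily adopt, is constant-gain least-squares learning about the rule coefficients from observed funds-rate decisions; the gain, the regressors, and all names in the sketch are assumptions.

```python
# Constant-gain recursive least-squares learning about a linearized policy rule R_t ~ x_t'b,
# where x_t stacks, say, a constant, inflation, and output growth. Illustrative benchmark only;
# in our model agents observe the coefficients directly, so this is not the paper's mechanism.
import numpy as np

def constant_gain_rls(R, X, gain=0.02):
    """R: (T,) observed policy rate; X: (T, k) regressors. Returns the path of beliefs b_t."""
    T, k = X.shape
    b = np.zeros(k)                  # initial belief about the rule coefficients
    P = np.eye(k)                    # estimate of the regressors' second-moment matrix
    beliefs = np.empty((T, k))
    for t in range(T):
        x = X[t]
        P = P + gain * (np.outer(x, x) - P)                    # update second moments
        b = b + gain * np.linalg.solve(P, x) * (R[t] - x @ b)  # move beliefs toward the surprise
        beliefs[t] = b
    return beliefs
```

Even this simple scheme hints at the difficulty: with drifting coefficients and stochastic volatility the appropriate gain is unclear, and in a nonlinear model the linear regression is at best an approximation.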
Second, we assume that monetary policy
changes are independent of the events in the
economy. However, many channels make this
assumption untenable. For instance, each administration searches for governors of the Board who
conform with its views on the economy (after all,
this is what a democracy is supposed to be about). We saw how Heller discovered that an administration could select governors to twist the FOMC toward its policy priorities. This is a tradition that has continued. Meyer (2004, p. 17) describes the process for his own appointment as one clearly guided by the desire of the Clinton administration to make monetary policy more accommodative and growth-oriented. As long as the party in power is a function of the state of the economy, the composition of the FOMC will clearly be endogenous. Similarly, changes in public perception of the dangers of inflation certainly weighed heavily on Carter when he appointed Volcker to lead the Fed in 1979.

33
The difficulties in observing monetary policy changes can be illustrated by Axilrod’s description of a lunch with Arthur Burns shortly after the announcement of Volcker’s new policy. According to Axilrod (2009, p. 100), Burns stated: “You are not really going to be doing anything different from what we were doing.” If an insider like Burns had difficulties in filtering Volcker’s behavior, it is hard to conclude anything but that the average agents in the economy had difficulties as well.
Third, and related to our two previous points,
evolving beliefs about monetary policy might be
endogenous to the development of events and
lead to self-confirming equilibria. This is a point
emphasized by Cho, Williams, and Sargent (2002)
and Sargent (2008).
Fourth, our technological drifts are constant
over time. The literature on long-run risk has highlighted the importance of slow-moving components in growth trends (Bansal and Yaron, 2004).
It may be relevant, when judging monetary policy, to estimate a model that includes these slow-moving components, since the productivity slowdown of the 1970s and the productivity acceleration of the late 1990s are bound to be reflected in our assessment of the stance of monetary policy during those years. This links us back to some of the concerns expressed by Orphanides (2002). At the same time, and nearly by definition, there is very little information in the data about this component.
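Concretely, the long-run risk literature typically appends a small, highly persistent component to trend growth, along the lines of the following (illustrative notation only, not a specification we estimate):

\[
\Delta \log A_t = \mu + x_t + \sigma_a\,\varepsilon_{a,t}, \qquad
x_t = \rho\,x_{t-1} + \sigma_x\,\varepsilon_{x,t},
\]

with ρ close to one and σ_x small, so that x_t drifts slowly enough to capture episodes such as the 1970s slowdown yet is hard to pin down in samples of our length.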
Fifth, our model is a closed economy. However, considerations regarding exchange rates
have often played an important role in monetary
policymaking. For instance, during the late 1960s,
the United States fought an increasingly desperate
battle to keep the Bretton Woods Agreement
alive, which included the Fed administering a
program to voluntarily reduce the amount of
funds that American banks could lend abroad
(Meltzer, 2010, p. 695) and purchasing long-term
Treasury bonds to help the British pound stabilize
after its 1967 devaluation. The end of Bretton
Woods also deeply influenced policymakers in
the early 1970s. Later, Volcker’s last years at the
Fed were colored by the Plaza and Louvre Accords
and the attempts to manage the exchange rate
between the U.S. dollar and the Japanese yen.
Finally, our model ignores fiscal policy. The
experience of the 1960s, in which there was an
explicit attempt at coordinating fiscal and monetary policies, and the changes in long-run interest
rates possibly triggered by the fiscal consolidations of the 1990s indicate that the interaction
between fiscal and monetary policies deserves
much more attention, a point repeatedly made
by Chris Sims (for example in Sims, 2009).

8. CONCLUDING REMARKS
The title of this paper is not only a tribute to
Friedman and Schwartz’s (1971) opus magnum,
but also a statement of the limitations of our investigation. Neither the space allocated to us34 nor
our own abilities allow us to get even close to
Friedman and Schwartz’s achievements. We have
tried to demonstrate only that the use of modern
equilibrium theory and econometric methods
allows us to read the monetary policy history of
the United States since 1959 in ways that we find
fruitful. We proposed and estimated a DSGE model
with stochastic volatility and parameter drifting.
The model gave us a clear punch line: First, there
is ample evidence of both strong changes in the
volatility of the structural shocks that hit the economy and changes in monetary policy. The changes
in volatility accounted for most of the Great
Moderation. The changes in monetary policy
mattered for the rise and conquest of the Great
Inflation. Inflation stayed low during the next
decades in large part due to good shocks. When
we go to the historical record and use the results
of our estimation to read and assess the documentary evidence, we find ample confirmation, in our
opinion, that the model, despite all its limitations,
is teaching us important lessons.
34

For an only slightly longer period than ours, Meltzer (2010) requires
1,300 pages to cover the details of the history of monetary policy
in the United States, including the evolution of operational procedures that we have not even mentioned.

As we argued in the previous section, we
leave much unsaid. Hopefully, the results in this
paper will be enticing enough for other researchers
to continue a close exploration of recent monetary
policy history with the tools of modern dynamic
macroeconomics.

REFERENCES
Abrams, Burton A. “How Richard Nixon Pressured Arthur Burns: Evidence from the Nixon Tapes.” Journal of
Economic Perspectives, Fall 2006, 20(4), pp. 177-88.
Aruoba, S. Boragan and Schorfheide, Frank. “Sticky Prices versus Monetary Frictions: An Estimation of Policy
Trade-offs.” American Economic Journal: Macroeconomics (forthcoming).
Aruoba, S. Boragan; Fernández-Villaverde, Jesús and Rubio-Ramírez, Juan F. “Comparing Solution Methods for
Dynamic Equilibrium Economies.” Journal of Economic Dynamics and Control, December 2006, 30(12),
pp. 2477-508.
Axilrod, Stephen H. Inside the Fed: Monetary Policy and Its Management, Martin through Greenspan to Bernanke.
Cambridge, MA: MIT Press, 2009.
Bansal, Ravi and Yaron, Amir. “Risks for the Long Run: A Potential Resolution of Asset Pricing Puzzles.”
Journal of Finance, August 2004, 59(4), pp. 1481-509.
Board of Governors of the Federal Reserve System. “Statement by G. William Miller, Chairman, Board of
Governors of the Federal Reserve System, before the Committee on the Budget, U.S. Senate, March 15, 1978,”
in Federal Reserve Bulletin, March 1978, 64(3), pp. 190-94.
http://fraser.stlouisfed.org/publications/frb/1978/download/60182/frb_031978.pdf.
Board of Governors of the Federal Reserve System. Meeting of the Federal Open Market Committee, February 3-4,
1994 (FOMC Transcripts); www.federalreserve.gov/monetarypolicy/files/FOMC19940204meeting.pdf.
Brayton, Flint; Levin, Andrew; Tryon, Ralph and Williams, John C. “The Evolution of Macro Models at the Federal
Reserve Board.” Carnegie-Rochester Conference Series on Public Policy, December 1997, 47(1), pp. 43-81.
Bremner, Robert P. Chairman of the Fed: William McChesney Martin Jr. and the Creation of the American
Financial System. New Haven, CT: Yale University Press, 2004.
Burns, Arthur F. “The Anguish of Central Banking.” The 1979 Per Jacobsson Lecture, Belgrade, Yugoslavia,
September 30, 1979; www.perjacobsson.org/lectures/1979.pdf.
Cho, In-Koo; Williams, Noah and Sargent, Thomas J. “Escaping Nash Inflation.” Review of Economic Studies, January 2002, 69(1), pp. 1-40.
Christiano, Lawrence; Eichenbaum, Martin and Evans, Charles L. “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy.” Journal of Political Economy, February 2005, 113(1), pp. 1-45.
Clarida, Richard; Galí, Jordi and Gertler, Mark. “Monetary Policy Rules and Macroeconomic Stability: Evidence
and Some Theory.” Quarterly Journal of Economics, February 2000, 115(1), pp. 147-80.
Cogley, Timothy and Sargent, Thomas J. “Evolving Post-World War II U.S. Inflation Dynamics,” in Ben S.
Bernanke and Kenneth Rogoff, eds., NBER Macroeconomics Annual 2001. Volume 16. Cambridge, MA: MIT
Press, 2002, pp. 331-88.

Cogley, Timothy and Sargent, Thomas J. “Drifts and Volatilities: Monetary Policies and Outcomes in the Post
WWII U.S.” Review of Economic Dynamics, April 2005, 8(2), pp. 262-302.
Dotsey, Michael; King, Robert G. and Wolman, Alexander L. “State-Dependent Pricing and the General Equilibrium
Dynamics of Money and Output.” Quarterly Journal of Economics, May 1999, 114(2), pp. 655-90.
Ehrlichman, John. Witness to Power: The Nixon Years. New York: Simon and Schuster, 1982.
Fernández-Villaverde, Jesús and Rubio-Ramírez, Juan F. “Estimating Macroeconomic Models: A Likelihood
Approach.” Review of Economic Studies, October 2007, 74(4), pp. 1059-87.
Fernández-Villaverde, Jesús and Rubio-Ramírez, Juan F. “How Structural Are Structural Parameters?” in Daron
Acemoglu; Kenneth Rogoff and Michael Woodford, eds., NBER Macroeconomics Annual 2007. Volume 22.
Chicago: University of Chicago Press, 2008, pp. 83-137.
Fernández-Villaverde, Jesús; Guerrón-Quintana, Pablo and Rubio-Ramírez, Juan F. “Fortune or Virtue: Time-Variant Volatilities Versus Parameter Drifting in U.S. Data.” NBER Working Paper No. 15928, National Bureau
of Economic Research, April 2010; www.nber.org/papers/w15928.
Fisher, Jonas. “The Dynamic Effects of Neutral and Investment-Specific Technology Shocks.” Journal of Political
Economy, June 2006, 114(3), pp. 413-52.
Friedman, Milton and Schwartz, Anna J. A Monetary History of the United States, 1867-1960. Princeton, NJ:
Princeton University Press, 1971.
Goodfriend, Marvin and King, Robert G. “The Incredible Volcker Disinflation.” Journal of Monetary Economics, July
2005, 52(5), pp. 981-1015.
Greenwood, Jeremy; Hercowitz, Zvi and Krusell, Per. “Long-Run Implications of Investment-Specific Technological
Change.” American Economic Review, June 1997, 87(3), pp. 342-62.
Hall, Robert E. “Macroeconomic Fluctuations and the Allocation of Time.” Journal of Labor Economics, January
1997, 15(1 Part 1), pp. S223-S250.
Hetzel, Robert L. The Monetary Policy of the Federal Reserve: A History. New York: Cambridge University Press,
2008.
Hurwicz, Leonid. “On the Structural Form of Interdependent Systems,” in Ernest Nagel, Patrick Suppes, and
Alfred Tarski, eds., Logic, Methodology and Philosophy of Science: Proceedings of the 1960 International
Congress. Stanford, CA: Stanford University Press, 1962, pp. 232-39.
Jaimovich, Nir and Siu, Henry E. “The Young, the Old, and the Restless: Demographics and Business Cycle
Volatility.” American Economic Review, June 2009, 99(3), pp. 804-26.
Judd, Kenneth L. Numerical Methods in Economics. Cambridge, MA: MIT Press, 1998.
Justiniano, Alejandro and Primiceri, Giorgio. “The Time Varying Volatility of Macroeconomic Fluctuations.”
American Economic Review, June 2008, 98(3), pp. 604-41.
Kettl, Donald. Leadership at the Fed. New Haven, CT: Yale University Press, 1986.
Kim, Chang-Jin and Nelson, Charles R. “Has the U.S. Economy Become More Stable? A Bayesian Approach
Based on a Markov-Switching Model of the Business Cycle.” Review of Economics and Statistics, November
1998, 81(4), pp. 608-16.
Kiyotaki, Nobuhiro and Wright, Randall. “On Money as a Medium of Exchange.” Journal of Political Economy,
August 1989, 97(4), pp. 927-54.
Lagos, Ricardo and Wright, Randall. “A Unified Framework for Monetary Theory and Policy Analysis.”
Journal of Political Economy, June 2005, 113(3), pp. 463-84.

Lubik, Thomas A. and Schorfheide, Frank. “Testing for Indeterminacy: An Application to U.S. Monetary Policy.”
American Economic Review, March 2004, 94(1), pp. 190-217.
Maisel, Sherman J. Managing the Dollar. New York: W.W. Norton, 1973.
McConnell, Margaret M. and Pérez-Quirós, Gabriel. “Output Fluctuations in the United States: What Has
Changed Since the Early 1980’s?” American Economic Review, December 2000, 90(5), pp. 1464-76.
Meltzer, Allan H. A History of the Federal Reserve. Volume 2, Books 1 and 2. Chicago: University of Chicago
Press, 2010.
Meyer, Laurence H. A Term at the Fed: An Insider’s View. New York: Harper Collins, 2004.
Orphanides, Athanasios. “Monetary Policy Rules and the Great Inflation.” American Economic Review, May
2002, 92(2), pp. 115-20.
Romer, Christina D. and Romer, David H. “Does Monetary Policy Matter? A New Test in the Spirit of Friedman
and Schwartz,” in Olivier J. Blanchard and Stanley Fischer, eds., NBER Macroeconomics Annual 1989.
Volume 4. Cambridge, MA: MIT Press, 1989, pp. 121-70.
Romer, Christina D. and Romer, David H. “A Rehabilitation of Monetary Policy in the 1950’s.” American
Economic Review, May 2002a, 92(2), pp. 121-27.
Romer, Christina D. and Romer, David H. “The Evolution of Economic Understanding and Postwar Stabilization
Policy.” Presented at a symposium sponsored by the Federal Reserve Bank of Kansas City, “Rethinking
Stabilization Policy,” Jackson Hole, Wyoming, August 29-31, 2002b, pp. 11-78;
www.kc.frb.org/PUBLICAT/SYMPOS/2002/pdf/S02RomerandRomer.pdf.
Romer, Christina D. and Romer, David H. “Choosing the Federal Reserve Chair: Lessons from History.” Journal of
Economic Perspectives, Winter 2004, 18(1), pp. 129-62.
Samuelson, Paul A. and Solow, Robert M. “Analytical Aspects of Anti-Inflation Policy.” American Economic
Review, May 1960, 50(2), pp. 177-94.
Sargent, Thomas J. “Evolution and Intelligent Design.” American Economic Review, March 2008, 98(1), pp. 5-37.
Schmitt-Grohé, Stephanie and Uribe, Martín. “Optimal Fiscal and Monetary Policy in a Medium-Scale
Macroeconomic Model,” in Mark Gertler and Kenneth Rogoff, eds., NBER Macroeconomics Annual 2005,
Cambridge, MA: MIT Press, 2006, pp. 382-425; www.nber.org/books/gert06-1.
Schreft, Stacey L. “Credit Controls: 1980.” Federal Reserve Bank of Richmond, Economic Review, November/
December 1990, 6, pp. 25-55;
www.richmondfed.org/publications/research/economic_review/1990/pdf/er760603.pdf.
Sims, Christopher A. “Price Level Determination in General Equilibrium.” Plenary talk at the Society for
Economic Dynamics Annual Meeting, July 2-4, 2009, Istanbul, Turkey.
Sims, Christopher A. and Zha, Tao. “Were There Regime Switches in U.S. Monetary Policy?” American Economic
Review, March 2006, 96(1), pp. 54-81.
Smets, Frank R. and Wouters, Raf. “An Estimated Dynamic Stochastic General Equilibrium Model of the Euro
Area.” Journal of the European Economic Association, September 2003, 1(5), pp. 1123-75.
Stock, James H. and Watson, Mark W. “Has the Business Cycle Changed, and Why?” in Mark Gertler and Kenneth
Rogoff, eds., NBER Macroeconomics Annual 2002. Volume 17. Cambridge, MA: MIT Press, 2003, pp. 159-218.
Stock, James H. and Watson, Mark W. “Why Has U.S. Inflation Become Harder to Forecast?” Journal of Money,
Credit, and Banking, February 2007, 39(S1), pp. 3-33.

Tobin, James. “Monetary Policy in 1974 and Beyond.” Brookings Papers on Economic Activity, 1974, 1, pp. 219-32.
U.S. Senate. Committee on Banking, Housing, and Urban Affairs. “Nomination of Paul A. Volcker.”
Washington, DC: U.S. Government Printing Office, 1979.
U.S. Senate. Committee on Banking, Housing, and Urban Affairs. “Nomination of Alan Greenspan.”
Washington, DC: U.S. Government Printing Office, 1987.
Volcker, Paul A. “The Triumph of Central Banking?” The 1990 Per Jacobsson Lecture, Washington, DC,
September 23, 1990; www.perjacobsson.org/lectures/1990.pdf.
Volcker, Paul A. and Gyohten, Toyoo. Changing Fortunes: The World’s Money and the Threat to American
Leadership. New York: Times Books/Random House, 1992.
Wallace, Neil. “Whither Monetary Economics?” International Economic Review, November 2001, 42(4),
pp. 847-69.
Weintraub, Robert. “Monetary Policy and Karl Brunner.” Journal of Money, Credit, and Banking, February 1977,
9(1 Part 2), pp. 255-58.
Woodford, Michael. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton
University Press, 2003.
