
Central Banking in Theory and Practice
Lecture I: Targets, Instruments, and Stabilization

Alan S. Blinder
Vice Chairman
Board of Governors of the Federal Reserve System
Washington, D.C.

Marshall Lecture
Presented at
University of Cambridge
Cambridge, England
May 4, 1995

I am grateful to my Federal Reserve colleagues Janet Yellen, Jon Faust, Richard Freeman,
Dale Henderson, Karen Johnson, Ruth Judson, David Lindsey, Athanasios Orphanides,
Vincent Reinhart, Peter Tinsley and, especially, David Lebow for extensive assistance and
useful discussions. The insularity of this list does not reflect any belief that all wisdom
resides inside the Fed, but merely the fact that time and my calendar precluded any further
circulation of this draft.

1. Introduction
I realize that these are the Marshall lectures, not the
Ricardo lectures. But please pardon a momentary digression on
comparative advantage, for I have long believed that the true
test of whether a person is an economist is how devoutly he or
she lives by the principle of comparative advantage. And I don't
mean just preaching it, but actually practicing it. For example,
I have long harbored doubts about my economist friends who mow
their own lawns rather than hiring a gardener. While they
rationalize their eccentric behavior by claiming that they
actually enjoy cutting grass, such a claim is suspect on its
face. More to the point, a true believer in comparative advantage
should be constitutionally incapable of enjoying activities that
run so deeply against the Ricardian grain.
Being a devotee of comparative advantage, once I agreed to
give the Marshall lectures, the topic virtually chose itself.
Greater economic theorists and more skilled econometricians than
I have delivered these lectures in the past and doubtless will in
the future. But there must be relatively few people on earth who
have been so thoroughly steeped in the academic literature on
monetary policy and then found themselves actually living in the
world they used to theorize about. Therein, I presume, lies my
comparative advantage. So that is the topic of these two
lectures: the theory and practice of central banking.
To keep things manageable, I have pared the topic beyond
what the title may suggest. First, central bankers, I can assure

you, are busy with many matters that are related tangentially if
at all to monetary policy—such as managing the payments system
and supervising banks. But I will stick to monetary policy
proper. Second, I will deal much more with the behavior of
central banks than with the monetary transmission mechanism. In
these lectures, short-term interest rates are more often left-hand than right-hand variables.
Somewhat arbitrarily, I have divided this subject matter
into two parts: old-fashioned and new-fangled. Today's lecture
covers the old-fashioned parts. What I have to say today probably
could have been written 15 or 20 years ago, if I had had the
relevant practical experience then, which I did not. In
particular, today's lecture mostly ignores the expectational and
game theoretic issues that have been central to much of the
modern academic literature on central banking. I do this not out
of a fondness for sounding archaic, but to remind this
sophisticated audience that some of the lessons of the past are
still important and very much central to monetary policymaking in
the practical world. Tomorrow, I will turn to some of the topics
that have occupied the attention of modern academic theorists of
central banking—like credibility, dynamic consistency, and
central bank independence.
Let me give away the main theme right off the bat. It comes
in three parts. First, central banking looks rather different in
practice than it does in theory. Second, both theory and practice
could benefit from greater contact with and deeper understanding
of the other. Neither of these will surprise you, but the third
one might: It is in the old-fashioned realm, I believe, that
practical central bankers have the most to learn from the "theorists,"[1] while theorists could and should pay more attention
to practitioners in the new-fangled realm.
2. Targets and Instruments: The Rudiments
In their role as monetary policymakers, central banks have
certain objectives—such as low inflation, output stability, and
perhaps external balance—and certain instruments to be deployed
in meeting their responsibilities, such as bank reserves or
short-term interest rates. Unless it has only a single goal,[2] the central bank is forced to strike a balance among competing objectives, that is, to face up to various tradeoffs. Unless your education in economics is very thin, these two sentences immediately bring to mind Tinbergen (1952) and Theil (1961). So let us begin there, at the beginning.

[1] I mean, of course, theorists armed with appropriate econometric evidence.

[2] One example is a central bank that must fix the exchange rate. A number of people have suggested that central banks should pursue zero inflation to the exclusion of all other objectives.
In theory, it works like this. There is a known model of the
macroeconomy, which I write in structural form as:
(1)    y = F(y, x, z) + e

and in reduced form as:

(2)    y = G(x, z) + e.

Here y is the vector of endogenous variables (a few of which are
central bank objectives), x is the vector of policy instruments
(which may be of size one), and z is the vector of nonpolicy
exogenous variables. The vector e of stochastic disturbances will
fade in importance once I assume, with Tinbergen and Theil, that
F(.) is linear and the policymaker's objective function,
(3)    W = W(y),

is quadratic. In principle, the policymaker maximizes (3) subject
to the constraint (2) to derive an optimal policy "rule":
(4)    x* = H(z).

All very simple.
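For concreteness, here is a minimal numerical sketch of that calculation, written in Python. Everything in it is invented for illustration: the two-target, one-instrument reduced form, the coefficient matrices G and A, the loss weights Q, and the single exogenous variable. Nothing is estimated from any real economy.

import numpy as np

# Reduced form (2) with invented coefficients: y = G x + A z + e, E[e] = 0.
# Two target variables (an inflation gap and an output gap), one instrument
# (a short-term interest rate), and one exogenous variable.
G = np.array([[-0.4],    # effect of the instrument on the inflation gap
              [-0.8]])   # effect of the instrument on the output gap
A = np.array([[0.2],
              [0.6]])    # effect of the exogenous variable on the two gaps
Q = np.diag([1.0, 0.5])  # quadratic loss weights, as in (3)
y_star = np.zeros(2)     # targets: both gaps equal to zero

def optimal_rule(z):
    """The linear rule (4): x* = H(z) minimizes E[(y - y*)' Q (y - y*)]."""
    expected_miss = y_star - A @ z        # where y would sit with x = 0
    return np.linalg.solve(G.T @ Q @ G, G.T @ Q @ expected_miss)

print(optimal_rule(np.array([1.0])))      # optimal instrument setting for z = 1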
What's wrong with this simple framework? Both nothing and
everything. Starting with "nothing," I do believe that—once you
have added a host of complications, several of which I will speak
about today—this is the right way for a central banker to think
about monetary policy. You have an economy; except for the policy
instruments you control, you must accept it as it is. You also
have multiple objectives—your own, or those assigned to you by
the legislature—and you must weigh them somehow, though perhaps not quadratically. To a significant extent, though usually quite
informally, central bankers do think about policy this way.
But, as is well-known, there are many complications. Let me
just list a few, some of which I will dwell upon at length in the
balance of today's lecture:
1. Model uncertainty: In practice, of course, we do not know
the model, but must estimate it econometrically. Since economists
agree neither on the "right" model nor on the "right" econometric
techniques, this is a nontrivial problem. It means, among other
things, that policy multipliers—the derivatives of G(.) with
respect to x—are subject to considerable uncertainty.
2. Lags: Any reasonable macroeconometric model will have a
complex lag structure that is ignored by (1). This is not much of
a problem in principle because, as all graduate students learn,
this complication can be accommodated by the formalism simply by
appending further equations for lagged variables (cf. Chow
(1975)). However, in practice it creates serious difficulties
that bedevil policymakers.
3. Need for forecasts: With lags, execution of the
Tinbergen-Theil framework requires forecasts of the future paths
of the exogenous variables—in principle, the entire z vector,
which may be quite long. Needless to say, such forecasts are
neither easy to generate nor particularly accurate.
4. Choice of instrument: The Tinbergen-Theil framework takes
as given that some variables are endogenous and others are policy
instruments. In most cases, however, the central bank has at
least some latitude, and maybe quite a lot, in choosing its
instrument(s). One way of thinking about this is that some xs and
ys can trade places at the discretion of the central bank. For
example, the short-term interest rate can be the policy
instrument and bank reserves an endogenous variable; or the
central bank can do things the other way around. Some economists
take this idea a little too far and write models in which the
central bank can directly control, say, nominal GDP, the
inflation rate, or the unemployment rate on a period-by-period
basis. Believe me, we cannot.
5. The objective function: The next problem can be framed as
a question: Who supplies the objective function? The answer,
typically, is: no one. The political authorities, who after all
should decide such things, rarely if ever give such explicit
instructions to their central banks. So central bankers must—in a figurative, not literal, sense—create their own W(.) function
based on their legal mandate, their own value judgments, and
perhaps their readings of the political will. This last thought
brings up the independence of the central bank, to which I will
return tomorrow.
Summing up, if I wanted to be curmudgeonly, I could
summarize the problems with applying the Tinbergen-Theil program
as follows: We do not know the model (1), and we do not know the
objective function (3), so we cannot compute the optimal policy
rule (4). To some critics of "impractical" or "theoretical"
economics, including some central bankers, this criticism is a
show-stopper. But, speaking now as a practical central banker, I
think such know-nothingism is not a very useful attitude. In
fact, in my view, we must use the Tinbergen-Theil approach—with
as many of the complications as we can handle—even if in a quite
informal way. An analogy will explain why.
Consider your role as the owner of an automobile. You have
various objectives toward which the use of your car contributes,
such as getting to work, shopping, and going on pleasure trips.
You do not literally "know" the utility function which weighs
these objectives, but you presumably wish to maximize it
nonetheless. The care and feeding of your car entails
considerable expense, and you have great uncertainty about the "model" that maps inputs like gasoline, oil, and tires into outputs like safe, uneventful trips. Furthermore, there are substantial, stochastic lags between maintenance expenditures (e.g., frequent oil changes) and their payoff (e.g., greater engine longevity).
What do you do? One alternative is the "putting out fires"
strategy: Do nothing for your car until it breaks down, then fix whatever is broken and continue driving until something else breaks down. I submit that virtually none of us follows this strategy because we know it will produce poor results.[3] Instead, we all follow something that approximates—philosophically if not mathematically—the Tinbergen-Theil framework. Central banks do, too. Or at least they should, for they will surely fail in their
stabilization-policy mission if they simply "put out fires" as
they observe them. Let me review briefly how the Tinbergen-Theil
framework is used in practice.
[3] In the engineering literature on control of nonlinear systems in which the model is only an approximation to reality, smoothing of control instruments is often recommended because sudden, large reversals of instrument settings may set off unstable oscillations. A related problem in the economics literature is instrument instability (Holbrook (1972)).

To begin with, there must be a macro model. It need not be a system of several hundred stochastic difference equations, though
that is not a bad place to start. In fact, no central bank that I
know of, and certainly not the Federal Reserve, is literally wed
to a single econometric model of its economy. Some banks have
such models, and some do not. But, even if they do not, or do not
use it, some kind of a model—however informal—is necessary to
do policy, for otherwise how can you even begin to estimate the
effects of changes in policy instruments?
Some central bankers scoff at large-scale macroeconometric
models, as do some academic economists. And their reasons are not
all that dissimilar. Many point, for example, to the likelihood
of structural change in any economy over a period of several
decades, which casts doubt on the stationarity assumptions that
underlie standard econometric procedures and thus on the bedrock
notion that the past is a guide to the future. Others express
skepticism that something as complex as an entire economy can be
captured in any set of equations. Still other critics emphasize a
host of technical problems in time series econometrics that cast
doubt on any set of estimated coefficients. Finally, some central
bankers simply do not understand these ungainly creatures at all,
and doubt that they should be expected to.
Leaving aside the last, there is truth in each of these
criticisms. Every model is an oversimplification. Economies do
change over time. Econometric equations often fail subsample
stability tests. Econometric problems like simultaneity, common
trends, and omitted variables are ubiquitous in nonexperimental
data. Yet what are we to do about these problems? Be skeptical?
Of course. Use several methods and models instead of just one?
Certainly. But abandon all econometric modelling? I think not.
The criticisms of macroeconometrics are not wrong, but their
importance is often exaggerated and their implications
misunderstood. These criticisms should be taken as warnings—as calls for caution, humility, and flexibility of mind—not as
excuses to retreat into nihilism. It is foolish to make the best
the enemy of the moderately useful.
Indeed, I would go further. I don't see that we central
bankers even have the luxury of ignoring econometric estimates.
Monetary policymaking requires more than just the qualitative
information that theory provides—e.g., that if short-term
interest rates rise, real GDP growth will subsequently fall. (And
who said the theory always gets the sign right, anyway?) We must
have quantitative information about magnitudes and lags, even if
that information is imperfect. I often put the choice this way:
You can get your information about the economy from admittedly
fallible statistical relationships, or you can ask your uncle. I,
for one, do not hesitate over this choice. But I fear there may
be too much uncle-asking in government circles in general, and in
central banking circles in particular.
3. Uncertainties: Models and Forecasts
Let me now turn to the first of three important amendments
to the Tinbergen-Theil framework, beginning with the obvious fact
that no one knows the "true model." It would hardly have been
news to Tinbergen and Theil that both models and forecasts of
exogenous variables are subject to considerable uncertainties.
And subsequent developments by economists have provided ways of
handling or finessing these gaps in our knowledge.[4] Let us
consider, very briefly, three types of uncertainty.
Uncertainty about forecasts: In the linear-quadratic case,
uncertainty about the values of future exogenous variables is no
problem in principle: you need only replace unknown future
variables with their expected values (the "certainty equivalence"
principle). But here is one case in which the gap between theory
and practice is huge, because the task of generating unbiased
forecasts of dozens or even hundreds of exogenous variables is a
titanic practical problem. It is, for example, a major reason
why large-scale econometric models are not terribly useful as
forecasting tools.[5]
[4] In Knight's terminology, these methods apply to cases of "risk" rather than "uncertainty." Risk arises when a random variable has a known probability distribution; uncertainty arises when the distribution is unknown. In the real world we are normally dealing with uncertainty rather than risk. And here, almost by definition, formal modeling gives us little guidance.

[5] I should clarify what I mean. Used mechanically, the large models are not very good at forecasting "headline" variables like GDP and inflation—which is why virtually no model proprietors use them this way. (Almost all hand-adjust both equations and exogenous variables.) But other forecasting techniques—including pure judgment—also produce modest records. So perhaps big models should not be dismissed so readily. Furthermore, econometric models are an essential tool in enforcing the consistency you need to forecast the hundreds of variables in a typical macro model.

Skeptics often object to certainty equivalence on the grounds that (a) the economy is nonlinear and (b) there is no
particular reason to think that the objective function is
quadratic. Both are undoubtedly true and, if taken literally,
invalidate the certainty-equivalence principle. But I think the
importance of this point is often exaggerated by those who would
denigrate the usefulness—and thereby escape the discipline—of
formal econometric models. Policymakers almost always will be
contemplating changes in policy instruments that can be expected
to lead to small changes in macroeconomic variables. For such
changes, any model of an economy is approximately linear and any convex objective function is approximately quadratic.[6] So this
problem of principle is, in my view, not terribly important in
practice—except on those rare occasions when large changes in
policy are being considered.
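As an aside, the certainty-equivalence principle invoked above is easy to verify numerically in the linear-quadratic case. The sketch below, with invented numbers and a known multiplier g, brute-forces the expected quadratic loss for a scalar linear model with an additive disturbance and confirms that the minimizer coincides with the plug-in-the-mean answer. (Uncertainty about the multiplier itself is a different matter, taken up next.)

import numpy as np

rng = np.random.default_rng(0)
g, y_star = -0.5, 0.0                             # known multiplier and target
Z = rng.normal(loc=2.0, scale=1.5, size=20_000)   # uncertain additive push on y

def expected_loss(x):
    """Quadratic loss, averaged over the draws of Z, for y = g*x + Z."""
    return np.mean((g * x + Z - y_star) ** 2)

grid = np.linspace(0.0, 8.0, 801)                 # brute-force the stochastic problem
x_stochastic = grid[np.argmin([expected_loss(x) for x in grid])]
x_plug_in = (y_star - Z.mean()) / g               # the certainty-equivalent answer

print(round(x_stochastic, 2), round(x_plug_in, 2))  # the two coincide (about 4.0)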
Uncertainty about parameters: Uncertainty about parameters,
and hence about policy multipliers, is much more difficult to
handle, even at the conceptual level. Certainty equivalence
certainly does not apply. While there are some fairly
sophisticated techniques for dealing with parameter uncertainty
in optimal control models with learning, those methods have not
attracted the attention of either macroeconomists or
policymakers, and perhaps for good reason.
[6] Samuelson (1970) proves an analogous proposition in the context of portfolio theory.

There is, however, one oft-forgotten principle that I suspect practical central bankers can—and in a rough way do—rely upon. Many years ago, William Brainard (1967) demonstrated
that, under certain conditions,[7] uncertainty about policy
multipliers should make policymakers conservative in the
following specific sense: They should compute the direction and
magnitude of their optimal policy move in the Tinbergen-Theil way
and then do less.
Here is a trivial adaptation of Brainard's simple example.
Simplify equation (2) to:
(2')    y = Gx + Z + e,

and suppose that G and Z are independent random variables with means g and z respectively, and the policymaker wishes to minimize E(y - y*)². Interpret Z + e as the value of y in the absence of any further policy move (x = 0) and x as the contemplated change in policy. If G is nonrandom, the optimal policy adjustment is the certainty-equivalent one:

    x = (y* - z)/g,

that is, fully closing the expected gap between y* and z. But if G is random with mean g and standard deviation s, the loss function is minimized by setting:

    x = (y* - z)/(g + s²/g),
which means that policy aims to fill only part of the gap.
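A quick simulation, again with invented numbers, confirms the algebra: when the multiplier G is uncertain, the loss-minimizing move matches Brainard's formula and falls short of the certainty-equivalent move.

import numpy as np

rng = np.random.default_rng(1)
g, s = -0.5, 0.3                    # mean and standard deviation of the multiplier G
z, y_star = 2.0, 0.0                # mean of Z + e, and the target for y

G = rng.normal(g, s, size=50_000)
Ze = rng.normal(z, 1.0, size=50_000)      # Z + e, drawn independently of G

def expected_loss(x):
    return np.mean((G * x + Ze - y_star) ** 2)

grid = np.linspace(0.0, 8.0, 801)
x_best = grid[np.argmin([expected_loss(x) for x in grid])]

print(round(x_best, 2))                          # about 2.94
print(round((y_star - z) / (g + s**2 / g), 2))   # Brainard's formula: 2.94, i.e. "do less"
print(round((y_star - z) / g, 2))                # certainty-equivalent move: 4.0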
[7] One very important one is that covariances are small enough to be ignored. With sizable covariances, anything goes.

My intuition tells me that this finding is more general—or at least more wise—in the real world than the mathematics will
support.[8] And I certainly hope it is, for I can tell you that it is never far from my mind when I sit in my office at the Federal
Reserve. In my view as both a citizen and a policymaker, a little
stodginess at the central bank is entirely appropriate.
Uncertainty over model selection: Parameter uncertainty,
while difficult, is at least a relatively well-defined problem.
Selecting the right model from among a variety of non-nested
alternatives is another matter entirely. While there is some
formal literature on this problem,[9] I think it safe to say that
central bankers neither know nor care much about this literature.
I leave it as an open question whether they are missing much.
[8] With many random variables and nonzero covariances, the mathematics does not "prove" that conservatism is optimal. In some cases, parameter uncertainty will actually produce greater activism.

[9] One strand, derived from the optimal control literature, deals with choosing among rival models. Another strand, due to Hendry and his collaborators, focusses on encompassing tests. See, for example, Hendry and Mizon (1993).

My approach to this problem is relatively simple: Use a wide variety of models and don't ever trust any one of them too much. So, for example, when the Federal Reserve staff explores policy alternatives, I always insist on seeing results from (a) our own quarterly econometric model, (b) several alternative econometric models, and (c) a variety of vector autoregressions (VARs) that I have developed for this purpose. My usual procedure is to simulate a policy on as many of these models as possible, throw out the outlier(s), and average the rest to get a point estimate
of a dynamic multiplier path. I would be very grateful if some
brilliant young econometric theorist would prove that this
constitutes optimal information processing!
4. Lags in Monetary Policy
It is by now a commonplace that monetary policy operates on
the economy with "long and variable lags." As I noted previously,
the formalism of the Tinbergen-Theil framework can readily
accommodate distributed lags. The costs are two. First, the
dimensionality of the problem increases; but with modern
computing power this is not much of a problem. Second, the
optimization problem changes from one of calculus to one of
dynamic programming.[10] This latter point is significant in
practice and, I think, inadequately appreciated by practitioners.
[10] Kydland and Prescott (1977) showed that it is an error to pursue dynamic programming mechanically if private agents base decisions on expectations about future policy. In that case, expectational reactions to policy must be taken into account. I use the term "dynamic programming" generically, intending to include such reactions of expectations.

A dynamic programming problem is typically "solved backward," that is, if T is the final period and x is the policy instrument, you first solve a one-period optimization problem for period T, thereby deriving _t x_T conditional on a past history. (The postscript denotes calendar time and the prescript denotes the date at which the expectation is taken.) Then, given your solution for _t x_T, which most likely depends inter alia on _t x_{T-1}, you solve a two-period problem for _t x_T and _t x_{T-1} jointly. Proceeding similarly, by a process of backward induction you derive an entire solution path:

    _t x_t, _t x_{t+1}, _t x_{t+2}, ..., _t x_T.
Don't get me wrong. I do not believe it is important for
central bankers to acquire any deep understanding of Bellman's
principle, still less of the computational techniques used to
implement it. What really matters for sound decisionmaking is the
way dynamic programming teaches us to think about intertemporal
optimization problems—and the discipline it imposes. It is
essential, in my view, for central bankers to realize that, in a
dynamic economy with long lags in monetary policy, today's
monetary policy decision must be thought of as the first step
along a path. The reason is simple: Unless you have thought through your expected future actions, it is impossible to make
today's decision rationally. For example, when a central bank
decides to begin a cycle of tightening, it should have some idea
about where it is going before it takes the first step.
Of course, by the time period t+1 rolls around the
policymaker will have new information and may wish to change its
mind about its earlier tentative decision _t x_{t+1}. That is fine. In fact, given the information then available, it will want to plan an entirely new path:

    _{t+1} x_{t+1}, _{t+1} x_{t+2}, _{t+1} x_{t+3}, ..., _{t+1} x_T.
But that realization in no way obviates the need to think ahead
in order to make today's decision—which is the important lesson
of dynamic programming. It is an intensely practical lesson and,
I believe, one that is inadequately understood.
Too often decisions on monetary policy—and, indeed, on
other policies—are taken "one step at a time" without any clear
notion of what the next several steps are likely to be. Some
people claim that such one-step-at-a-time decisionmaking is wise
because it maintains "flexibility" and guards against getting
"locked in" to decisions the central bank will later regret. But
that is a grave misunderstanding of the way dynamic programming
teaches us to think. It is absolutely correct that flexibility
should be maintained and locking yourself in should be avoided.
But both of these notions are inherent in dynamic programming. If
there are any surprises at all, the decisions that you actually
carry out in the future will differ from the ones you originally
planned. That's flexibility. Ignoring your own likely future
actions is myopia.
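For readers who like to see the mechanics, here is a toy scalar linear-quadratic problem solved by backward induction. Everything about it (the dynamics, the loss weights, the horizon, the shock) is invented; the point is only to show a plan being computed for the whole path, the first step being carried out, and the plan being revised when a surprise arrives.

a, b = 0.9, -0.5       # invented dynamics: y_{t+1} = a*y_t + b*x_t + surprise
q, r = 1.0, 0.1        # per-period loss: q*y_t^2 + r*x_t^2
T = 8                  # planning horizon

def feedback_gains():
    """Backward induction: solve the last period first, then work back to today."""
    P, gains = q, []
    for _ in range(T):
        K = a * b * P / (r + b**2 * P)                      # optimal x_t = -K*y_t
        P = q + a**2 * P - (a * b * P)**2 / (r + b**2 * P)  # updated value-function weight
        gains.append(K)
    return gains[::-1]         # gains[0] is the rule for today's decision

K = feedback_gains()

def simulate(y0, surprises):
    """Carry out the first step of the plan, observe, and keep re-planning."""
    y, path = y0, []
    for t in range(T):
        x = -K[t] * y                       # today's step of the current plan
        path.append((round(y, 2), round(x, 2)))
        y = a * y + b * x + surprises[t]    # the economy moves, perhaps surprising us
    return path

print(simulate(2.0, [0.0] * T))                          # the plan, undisturbed
print(simulate(2.0, [0.0, 0.0, 1.0] + [0.0] * (T - 3)))  # a surprise arrives at t = 2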
Let me now make this abstract discussion more concrete.
Central banks, both in the U.S. and elsewhere, have often been
accused of making a particular type of systematic error in the
timing of policy changes. Specifically, it is alleged that they
overstay their policy stance—be it tightening or loosening—thereby causing overshoots in both directions.[11] I believe this criticism may be correct, although I know of no systematic study that demonstrates it. I furthermore believe that the error, if it exists, may be due to following a strategy I call "looking out the window."

[11] See, for example, Meltzer (1991).
A central bank following the "look out the window" strategy
proceeds as follows. Suppose, just for concreteness, that it is
in the process of tightening. At each decisionmaking juncture, it
takes the economy's temperature and, if it is still too hot,
tightens monetary conditions another notch. Given the long lags
in monetary policy, you can easily see how such a strategy can
keep the central bank tightening for too long.
Now compare "looking out the window" to proper dynamic
optimization. Under dynamic programming, at each stage the bank
would project an entire path of future monetary policy actions,
with associated paths of key economic variables. It would, of
course, act only on today's decision. Then, if things evolved as
expected, it would keep following its projected path, which would
be likely (given the lags in monetary policy) to tell it to stop
tightening while the economy was still "hot." Of course,
economies rarely evolve as expected. Surprises are the norm, not
the exception, and they would induce the central bank to alter
its expected path in obvious ways. If the economy steamed ahead
faster than expected, the bank would tighten more. If the economy
slowed down sooner than expected, the bank would tighten less or
even reverse its stance.
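The contrast can be caricatured in a few lines of code. In the invented model below, policy affects the gap only after two periods; a rule that reacts solely to today's "temperature" keeps tightening too long and overshoots, while a rule that targets the two-period-ahead forecast, allowing for the tightening already in the pipeline, stops while the economy is still hot. The coefficients and rules are illustrative, not estimates of anything.

RHO, BETA = 0.8, 0.5    # gap persistence; effect of policy two periods later

def window_watcher(gap, x_pipeline):
    """React only to today's 'temperature', ignoring policy already in train."""
    return gap

def forward_looker(gap, x_pipeline):
    """Set policy so that the forecast gap two periods ahead is zero."""
    return RHO * (RHO * gap - BETA * x_pipeline) / BETA

def simulate(rule, periods=10, gap=2.0):
    x_pipeline, path = 0.0, []
    for _ in range(periods):
        x = rule(gap, x_pipeline)
        path.append(round(gap, 2))
        gap = RHO * gap - BETA * x_pipeline   # only last period's x bites this period
        x_pipeline = x
    return path

print(simulate(window_watcher))   # keeps tightening too long and overshoots below zero
print(simulate(forward_looker))   # stops early; the gap settles at zero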
Do central banks actually behave this way? Yes and no. Like
a skilled billiards player who does not understand the laws of
physics, a skilled practitioner of monetary policy may follow a
dynamic-programming-type strategy intuitively and informally.
Lately, for example, the notion that it is wise to pursue a
strategy of "preemptive strikes" against inflation seems to have
caught on among central banks. The Federal Reserve, I am proud to
say, seems to have started this trend,[12] followed by, e.g., the Reserve Bank of Australia and the Bank of England.[13]
Such a strategy implies a certain amount of confidence in
both your forecast and your model of how monetary policy affects
the economy. But not too much. Remember the flexibility principle
of dynamic programming and the Brainard conservatism principle.
Taken together, they lead to the following sort of strategy:[14]
Step 1. Estimate how much you need to tighten or loosen
monetary policy to "get it right." Then do less.
Step 2. Watch developments.
Step 3a. If things work out about as expected, increase your
tightening or loosening toward where you thought it should be in
the first place.

Step 3b. If the economy seems to be evolving differently from what you expected, adjust policy accordingly.

[12] This is not self-praise. I was not on the Federal Open Market Committee when it began to tighten monetary policy in February 1994.

[13] The RBA began tightening in August 1994 and the BOE a month later. Neither economy had yet reached full capacity, nor was either yet experiencing an upsurge of inflation.

[14] This strategy has a temporal aspect not found in Brainard's analysis, and hence may embody a big leap of faith. But Aoki (1967) offered a dynamic generalization of Brainard's result. Nonetheless, Aoki's result, like Brainard's, is fragile and may not survive, e.g., nonnegligible covariances.
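In code, the Step 1 through Step 3b strategy amounts to something like the caricature below. The initial partial dose, the pace of catching up when the data cooperate, and the back-off when they do not are all invented numbers, not a recipe.

def next_setting(current_x, full_move, data_confirm):
    """One decision round of the hedged strategy."""
    if data_confirm:                                      # Step 3a: as expected
        return current_x + 0.5 * (full_move - current_x)  # move toward the full dose
    return 0.75 * current_x                               # Step 3b: surprised, back off a bit

full_move = 2.0        # Step 1: the "get it right" estimate of the needed tightening
x = 0.6 * full_move    # ...then do less (Brainard)
for confirm in [True, True, False, True]:   # Step 2: watch developments
    x = next_setting(x, full_move, confirm)
    print(round(x, 2))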
Two final points about preemptive strikes are worth making.
First, a successful stabilization policy based on preemptive
strikes will appear to be misguided and may leave the central
bank open to vociferous criticism. The reason is simple. If the
monetary authority tightens so early that inflation doesn't rise,
the preemptive strike is a resounding success, but critics of the
central bank will wonder—out loud, no doubt—why the bank
decided to tighten when the inflationary dragon was nowhere to be
seen. Similarly, a successful preemptive strike against economic
slack will prevent unemployment from rising, and leave critics
complaining that the authorities were hallucinating about
unemployment.
Second, the logic behind the preemptive strike strategy is
symmetrical. Precisely the same reasoning that says a central
bank should get a head start against inflation says it should
also strike preemptively against rising unemployment. That is why
Chairman Alan Greenspan told Congress in February 1995, after the
Fed had raised short-term interest rates 300 basis points within
12 months, that: "There may come a time when we hold our policy
stance unchanged, or even ease, despite adverse price data,
should we see signs that underlying forces are acting ultimately
to reduce inflationary pressures."[15] In fact, the Fed did precisely that back in the summer of 1989, when it started cutting interest rates while inflation was still rising and unemployment was below its natural rate.

[15] From testimony given to committees of both the House and Senate on February 22 and 23, 1995, printed in the Federal Reserve Bulletin, April 1995, p. 348.
The preemptive strike strategy applies more to fighting
inflation than to fighting unemployment only if:
1. the short-run Phillips curve is distinctly nonlinear, so
that inflation rises much more in response to low unemployment
than it falls in response to high unemployment. The evidence is
decidedly against this hypothesis for the United States.
2. lags in monetary policy are longer for inflation fighting
than for unemployment fighting, which appears to be true.
3. the central bank's loss function is notably asymmetric.
5. The "Debate" over "Fine-Tuning"

Sometime in the 1970s, or perhaps even in the late 1960s, it
became the height of wisdom to declare that something called
"fine tuning" is impossible because our knowledge base is
insufficient and our instruments are not that finely calibrated.
I agree with these criticisms wholeheartedly. In fact, so far as
I can tell, everybody does. Indeed, I am not sure that anyone
ever took the other side—which makes this a curious debate,
rather like defending motherhood. The only trouble is that I am
not convinced that the debate—or nondebate—has any operational
meaning. It could be that the entire concept of fine-tuning is
epistemologically empty.


Consider some possible meanings of two common statements about fine tuning—one positive, the other normative:
I. Fine tuning is impossible.
II. No central bank should try to fine-tune its economy.
One possible meaning of statement I is that stabilization policy
cannot entirely eliminate the variance of real output around
trend, nor the variance of inflation around target (possibly
zero), nor therefore any weighted average of the two. If that is
the meaning of the phrase, it is of course indisputably true. But
so what? Does it imply that central banks should therefore not
try to reduce these variances?
That question brings up a possible, though extreme,
interpretation of statement II: that it is unwise to attempt any
stabilization policy at all. In other words, monetary policy
should follow a nonreactive rule like Friedman's k-percent rule
for money growth. But this definition seems to distinguish
between some tuning and no tuning, not between fine tuning and
coarse tuning.
There is indeed a bright line between attempting to
stabilize the economy and abjuring the whole messy business. If
this were the issue, I could understand the debate, bring both
value judgments and technical knowledge to bear on it, and reach
a conclusion—as I will in tomorrow's lecture. But once you have left the realm of nonreactive rules and opted for some tuning, I fail to see any bright line—and maybe not even a dim one—between coarse tuning, which is what we central bankers are
supposed to do, and fine tuning, which is what we are supposed to
avoid. Don't you always do the best you can, mindful of a host of
uncertainties?
Another possible interpretation of statement II is as an
injunction to follow what I have called Brainard's conservatism
principle: Estimate what you should do and then do less. If so, I
have great sympathy. But I doubt very much that this is what the
anti-fine-tuners have in mind, for the strategy appears to call
for constant adjustments of policy, even small ones, as new
information is received. This sounds a bit like fine, albeit
cautious, tuning.
Another possibility is that policy changes should be
infrequent; most of the time, monetary policy should be "on
hold." Such behavior would resemble the (S,s) strategy of
inventory management. Under an (S,s) inventory policy, a firm
lets its inventory stock drift aimlessly so long as it remains
below some upper limit S and above some lower limit s. But, if
inventories get outside those bounds, it takes prompt action
either to cut stocks down or build them up. The rationale for
such behavior is that each "order" or "sale" entails a fixed
cost, so that frequent, small changes are to be avoided. But what
is the analogous fixed cost for monetary policy in a world in
which markets change interest rates all the time, whether or not
the central bank does anything?
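For concreteness, a bare-bones sketch of the two-sided band just described might look as follows. The band limits, the choice to jump back to the middle of the band, and the random drift are all invented for illustration; they are not a canonical statement of the rule.

import random

S_UPPER, S_LOWER = 100, 40           # the band: act only when the stock strays outside

def ss_adjustment(stock):
    if S_LOWER <= stock <= S_UPPER:
        return 0                                # inside the band: do nothing
    return (S_UPPER + S_LOWER) // 2 - stock     # outside: one prompt jump back inside

random.seed(0)
stock, path = 70, []
for week in range(20):
    stock += random.randint(-15, 10)   # sales and deliveries let the stock drift
    stock += ss_adjustment(stock)
    path.append(stock)
print(path)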
Part of the general hostility toward fine-tuning is surely
the notion that policymakers should not set their sights too high
and expect to iron every bump and wiggle out of the economy's
growth and/or inflation path. Once again, I agree but wonder
about the dictum's operational significance. And my brief
practical experience as a central banker has only deepened my
skepticism. Doesn't even a poor archer aim for the bull's eye,
even though he does not expect to hit it?
To make the discussion concrete, consider the situation
faced by the Federal Open Market Committee (FOMC) in recent
months. Sometime in late 1994 or early 1995 (according to
tastes), the U.S. economy reached a position which, if not ideal,
was at least excellent: the lowest unemployment and highest
capacity utilization in years plus the lowest inflation rate in a
generation. So a central bank that eschewed fine tuning would
certainly have been satisfied with the situation and not sought
to twiddle the dials further. But what does that actually mean in
practice? Hold the nominal federal funds rate constant even while
inflation, long-term interest rates, stock market values, and the
dollar's exchange rate moved? Or hold the real rate constant?
Does either represent "constant monetary policy?" And should we,
e.g., have ignored forecasts that a rise in inflation was likely
under unchanged policy?
My point is that monetary policy makers must make some
decision at each moment in time. Even doing nothing—whatever
that means—is a decision. In the event, the FOMC raised the
federal funds rate 75 basis points at our November 1994 meeting,
held rates constant at the December meeting, raised rates by 50
basis points in February 1995, and then held rates steady again
at the March meeting. Did this constitute fine tuning or not?
What would we have done differently if we were more devoutly
opposed to fine tuning? I must admit that I don't know.
6. The Choice of Monetary Instrument

I conclude today's lecture by taking up one final old-fashioned issue: the choice of monetary instrument. By labeling some variables as targets and others as instruments, as if that
was their birthright, the Tinbergen-Theil approach elides one of
the most enduring controversies in monetary policy.
In simple models, beginning with Poole (1970), the issue is
often posed as choosing between the rate of interest, r, and the
money supply, M. In one case, r is the instrument and M is an
endogenous variable. In the other case, the roles are reversed.
This dichotomy, of course, is both too confining and too simple.
In reality, there are many more choices—including various definitions of M, several possible choices for r, bank reserves,
and the exchange rate. Furthermore, it is doubtful that any
interesting definition of M or any interest rate beyond the
overnight bank rate can be controlled tightly over very short
periods of time like a day or a week. In the U.S., the federal
funds rate and bank reserves are probably the only viable
options. But other variables like the Ms become candidates if the
control period is thought of as, say, a quarter.
In principle, for any choice of instrument, you can write
down and solve an appropriately complex dynamic optimization
problem, compute the minimized value of the loss function, and
then select the minimum minimorum to determine the optimal policy
instrument. In practice, this technical feat is rarely carried
out.[16] And I am pretty sure that no central bank has ever picked
its instrument this way. But, then again, billiards players may
practice physics only intuitively.
Returning to Poole's dichotomy, let me remind you of his
basic conclusion: that large LM shocks militate in favor of
targeting interest rates while large IS shocks militate in favor
of targeting the money supply.[17] Since Poole's seminal paper,
monetary theorists have devoted much attention to the question he
posed, and have tackled it in a variety of ways. One such
contribution by Sargent and Wallace (1975), in fact, turned out
to be among the opening salvos in the rational expectations
debate.
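Poole's comparison is easy to reproduce in miniature. The sketch below simulates a static IS-LM model with additive spending and money-demand shocks under an interest-rate peg and under a money-stock peg. The parameter values are invented, both pegs are normalized to zero, and the two shocks are taken to be independent.

import numpy as np

# IS:  y = -a*r + u      LM:  m = y - c*r + v
a, c = 1.0, 2.0                     # invented IS and LM slopes
rng = np.random.default_rng(42)

def output_variances(sigma_u, sigma_v, n=200_000):
    u = rng.normal(0.0, sigma_u, n)                  # IS (spending) shocks
    v = rng.normal(0.0, sigma_v, n)                  # LM (money-demand) shocks
    y_rate_peg = u                                   # peg r at zero: y follows the IS curve
    y_money_peg = (u - (a / c) * v) / (1 + a / c)    # peg m at zero: solve IS and LM jointly
    return round(y_rate_peg.var(), 3), round(y_money_peg.var(), 3)

# Printed pairs are (variance under the rate peg, variance under the money peg).
print("mostly LM shocks:", output_variances(sigma_u=0.2, sigma_v=1.0))  # rate peg wins
print("mostly IS shocks:", output_variances(sigma_u=1.0, sigma_v=0.2))  # money peg wins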
[16] A few papers in this spirit are Tinsley and von zur Muehlen (1981), Brayton and Tinsley (1994), and Bryant, Hooper, and Mann (1993).

[17] Covariances and the slopes of the IS and LM curves also matter. I ignore them here.

Much of this debate was intellectually fascinating. But in the end, real-world events, not theory, decided the issue. Ferocious instabilities in estimated LM curves in the United States, United Kingdom, and many other countries, beginning in the 1970s and continuing to the present day, led economists and policymakers alike to the conclusion that M-targeting strategies
are simply not viable. Some facts about the U.S. monetary
aggregates illustrate just how strong this evidence is.
The cornerstone of monetarism must surely be the notion that
money and nominal income are cointegrated, for without such a long-run relationship why would anyone care about the behavior of the Ms? Yet a series of cointegration tests for M1 and nominal GDP, using rolling samples which begin in 1948 and end at various dates, fails to reject the hypothesis of no cointegration as soon as the endpoint of the sample extends into the late 1970s. That is, M1 and nominal GDP are cointegrated only for sample periods
like 1948-1975, not since then. Apparent cointegration between
either M2 or M3 on the one hand and nominal GDP on the other
lasts longer. But it also disappears into a black hole in the
1990s.[18] In a word, no sturdy long-run statistical relationship exists between nominal GDP and any of the Federal Reserve's three official definitions of M for any sample that includes the 1990s.
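The rolling-sample exercise described above is straightforward to code. The sketch below uses the Engle-Granger test from statsmodels and, because the Federal Reserve staff's data are not reproduced here, runs on synthetic series constructed so that the two variables share a common trend early in the sample and drift apart later; with actual data one would feed in logs of M1 and nominal GDP (for example, quarterly series from FRED). The series, sample lengths, and step size are placeholders.

import numpy as np
from statsmodels.tsa.stattools import coint

def rolling_coint_pvalues(log_m, log_gdp, min_obs=40, step=8):
    """Engle-Granger p-values as the sample endpoint is pushed forward."""
    out = {}
    for end in range(min_obs, len(log_m) + 1, step):
        _, pvalue, _ = coint(log_gdp[:end], log_m[:end])
        out[end] = round(pvalue, 2)
    return out

# Synthetic stand-ins: a shared stochastic trend in the first half of the
# sample, with the "money" series wandering off in the second half.
rng = np.random.default_rng(7)
n = 200
trend = np.cumsum(rng.normal(0.01, 0.02, n))
log_gdp = trend + rng.normal(0.0, 0.01, n)
wander = np.concatenate([np.zeros(n // 2), np.cumsum(rng.normal(0.0, 0.03, n // 2))])
log_m = trend + wander + rng.normal(0.0, 0.01, n)

print(rolling_coint_pvalues(log_m, log_gdp))   # p-values rise as the endpoint extends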
[18] This statement is actually too generous to monetarism since data limited to the 1948-1980 period fail to indicate cointegration. A cointegrating vector appears only when the sample is extended well into the 1980s, but then disappears as data from the 1990s are appended.

Because of facts like these, interest rate targeting won by default. I often put the issue this way: If you want the Fed to target the growth rate of M, you must first answer two questions: What definition of M? And how fast should it grow? In recent years, these questions have become show-stoppers because no one can provide coherent answers. So, in point of fact, there are
very few M advocates left in the United States. The death of
monetarism does not make it impossible to pursue a monetary
policy based on rules. But it does mean that the rule cannot be a money-growth rule. I will deal with the broader rules-versus-discretion debate tomorrow.
Was the theoretical literature therefore useless to
practitioners? Absolutely not. In fact, it is hard to think of an
aspect of monetary policy in which theory and practice interacted
more fruitfully. Poole's conclusion in theory was that
instability in the LM curve should push central banks toward
targeting short-term interest rates. In practice, LM curves
became extremely unstable and one central bank after another
abandoned any attempt to target monetary aggregates.
In the case of the Federal Reserve, the disengagement was
gradual. After a rather exciting experiment with monetarism
between 1979 and 1982, the Fed began backing away from M targets
in 1982. The target growth range for Ml was dropped in 1987, but
growth targets for M3 and, especially, M2 retained a serious role
in monetary policy formulation through 1992. Finally, in February
1993, Chairman Greenspan announced that the Fed was giving "less
weight to monetary aggregates as guides to policy."[19] As usual,
however, laws lag far behind both academic knowledge and central
bank practice. A 1978 law which is still on the books requires

the Federal Reserve to report its target ranges for money growth to Congress twice a year. We dutifully do this. But the relevance to policy eludes most of us.

[19] Statement to the Committee on Banking, Housing, and Urban Affairs, U.S. Senate, February 19, 1993, printed in the Federal Reserve Bulletin, April 23, 1993, p. 298.
7. In Conclusion
I reach the end of this lecture with a somewhat cheerful
message, one which would have made Alfred Marshall happy. Working
in their cloistered universities, Tinbergen, Theil, Brainard, and
Poole all taught valuable abstract lessons which turned out to be
of direct practical use in central banking. So did other scholars
who developed their ideas further, pointed out additional
complexities, and brought more powerful technical tools to bear—
such as econometric models and optimal control. None of these
ideas provide pat answers or can be applied mechanically by
central bankers. The world is much too complicated for that. So
there is still as much art as science in central banking.
Nonetheless, the science is still useful; at least I find it so.
As Marshall wrote: "Exact scientific reasoning will seldom
bring us very far on the way to the conclusion for which we are
seeking, yet it would be foolish to refuse to avail ourselves of
its aid, so far as it will reach:—just as foolish as would be
the opposite extreme of supposing that science alone can do all
the work, and that nothing will remain to be done by practical
instinct and trained common sense."[20]

[20] Principles of Economics, p. 779.

That's a nice phrase: trained common sense. Isn't developing
trained common sense what the intersection of theory and practice
should be all about?
