
FEDERAL RESERVE BANK OF ST. LOUIS

Contributing Authors
J. Alfred Broaddus Jr.

Federal Reserve Bank of Richmond
P. O. Box 27622
Richmond, VA 23261
Al.Broaddus@rich.frb.org
Stephen G. Cecchetti

Department of Economics
The Ohio State University
1945 N. High Street
Columbus, OH 43210
cecchetti.1@osu.edu

Alan Greenspan

Chairman, Board of Governors
of the Federal Reserve System
20th Street and Constitution Avenue, NW
Washington, DC 20551
Václav Klaus

President of the Czech Parliament
Snemovni 4, Praha 1, 118 26
CZECH REPUBLIC
kralovad@psp.cz
Georgios E. Chortareas

International Economic Analysis Division HO-2
Monetary Analysis
Bank of England
Threadneedle Street
London EC2R 8AH
UNITED KINGDOM
georgios.chortareas@bankofengland.co.uk
K. Alec Chrystal

Finance Department
City University Business School
Barbican
London EC2Y 8HB
UNITED KINGDOM
a.chrystal@city.ac.uk
Alex Cukierman

School of Economics
Tel-Aviv University
Tel-Aviv, 69978
ISRAEL
alexcuk@post.tau.ac.il

Stefan Krause

Department of Economics
Emory University
1602 Mizell Drive
Atlanta, GA 30322
Frederic S. Mishkin

Columbia School of Business
619 Uris Hall
Columbia University
New York, NY 10027-6902
fsm3@columbia.edu
Manfred J.M. Neumann

Zentrum für Europaeische Integrationsforschung
University of Bonn
Walter Flex Strasse 3
D-53113 Bonn
GERMANY
Neumann@iiw.uni-bonn.de
William Poole

Federal Reserve Bank of St. Louis
411 Locust Street
St. Louis, MO 63102
William.Poole@stls.frb.org

Charles Freedman

Bank of Canada
234 Wellington Street
Ottawa, Ontario K1A 0G9
CANADA
cholmes@bank-banque-canada.ca

Adam S. Posen

Institute for International Economics
1750 Massachusetts Avenue, NW
Washington, DC 20036-1903
aposen@iie.com

Robert H. Rasche

Research Department
Federal Reserve Bank of St. Louis
411 Locust Street
St. Louis, MO 63102
Robert.H.Rasche@stls.frb.org

Jürgen von Hagen

Zentrum für Europaeische Integrationsforschung
University of Bonn
Walter Flex Strasse 3
D-53113 Bonn
GERMANY
vonhagen@uni-bonn.de

David Stasavage

Department of International Relations
London School of Economics
London WC2A 2AE
UNITED KINGDOM
D.Stasavage@lse.ac.uk
Gabriel Sterne

International Economic Analysis Division HO-2
Monetary Analysis
Bank of England
Threadneedle Street
London EC2R 8AH
UNITED KINGDOM
gabriel.sterne@bankofengland.co.uk
Daniel L. Thornton

Research Department
Federal Reserve Bank of St. Louis
411 Locust Street
St. Louis, MO 63102
Daniel.L.Thornton@stls.frb.org


Carl E. Walsh

Department of Economics, SS1
University of California, Santa Cruz
Santa Cruz, CA 95064
walshc@cats.ucsc.edu
Mark W. Watson

Woodrow Wilson School
Princeton University
Princeton, NJ 08540
mwatson@princeton.edu

Chairman’s Remarks
Alan Greenspan

TRANSPARENCY IN MONETARY
POLICY
It is my pleasure to address this distinguished
group that President Poole and his colleagues
have assembled to consider the timely issue of
transparency in monetary policy. We at the Federal
Reserve are given two mandates that are not often
spelled out explicitly. First, to implement an effective
monetary policy to meet our legislated objectives.
But, second, to do so in a most open and transparent
manner in recognition that we, as unelected officials,
are accountable both to the Congress from which
we derive our monetary policy mission and, beyond,
to the American people.
These twin goals do not always work in concert.
In the extreme, we could achieve full transparency
if our deliberations and actions occurred only in
public fora. In principle, there is no reason this could
not be done. And I do not doubt that there exists a
select group of professionals who could deliberate
in such open fora as effectively as behind doors.
Milton Friedman—whose effect on monetary policy,
especially here at the Federal Reserve Bank of St.
Louis, is legendary—is one with such sharply refined
skills. I might be able to name a few more, but I
doubt that I would get much beyond counting the
fingers on one hand.
Human nature being what it is, the vast majority
of us are disinclined to offer half-thought-through,
but potentially useful, policy notions only to have
them embarrassingly dissected in front of a national
television audience. When undertaken in such a
medium, deliberations tend toward the less provocative and less useful. I do not say that such a system
cannot function, but I do say that in my three decades
in and out of government, I have never seen it function well. The undeniable, though regrettable, fact
is that the most effective policymaking is done outside the immediate glare of the press. But that notion
and others have been used too often in the past to
justify a level of secrecy that turned out to be an


Alan Greenspan is the Chairman of the Board of Governors of the
Federal Reserve System. His remarks were presented via videoconference.

© 2002, The Federal Reserve Bank of St. Louis.

unnecessary constraint on our obligation to be
transparent in conducting the public’s business.
We need to remember that in decades past it was
believed that monetary policy was most effective
when it was least transparent. The argument back
in the 1950s, as I remember it, was that market
uncertainty created significant differences of opinion
in the direction of the prices of short-term debt
instruments. The result was a “thick market” of bids
and asks that increased the degree of liquidity. More
recently, in the 1980s, policymakers, myself included,
were concerned that being too explicit about short-run targets would make such targets more difficult
to change, impeding necessary adjustments to evolving market and economic conditions. Not too many
years ago, the world learned of decisions of the
Federal Open Market Committee through minor
variations in the minutia of daily open market
operations—that is, effectively through faint signals
that only informed market professionals knew how
to read with accuracy. True, over time, those signals
became increasingly clear, so that in the end, market
participants never missed a policy decision or read
into our open market operations a policy action
when there was none.
As markets, experience, and the magnitude of
outstanding financial instruments changed, the
dead-weight loss created by such uncertainty—
read: “risk”—became increasingly evident, as did
the value of transparency. Simply put, financial markets work more efficiently when their participants
do not have to waste effort inferring the stance of
monetary policy from diffuse signals generated in
the day-to-day implementation of policy. And being
clear about that stance has not constrained our
ability to adjust the stance of monetary policy in
either direction.
Our current disclosure policy, one hopes,
obviates such complexities. In recent years, we
have achieved a far better balance, in my judgment,
between transparency and effective monetary policy
implementation than we thought appropriate in
the past. Accordingly, as you know, we moved to
the immediate disclosure of our policy actions and,
over time, to explaining our decision and our sense
of future risks directly after each meeting. In addition,
we now publish full transcripts of our meetings
after five years. Through these disclosures, together
with congressional testimony, speeches by Board
Governors and Reserve Bank Presidents, and the
publication of the System’s sizable research output,
we endeavor to keep the public well informed. We
have gotten to our present degree of transparency
through an incremental process, and our disclosure
policy will continue to evolve. At each step, we need
to review whether in our judgment this new degree
of openness optimizes the Federal Reserve’s ability
to implement effective monetary policy in the context of maximum feasible disclosure.
It is inherent in the complex and changeable
nature of our economy that no one can forecast
near-term outcomes with precision. However, it is
also inherent in our economy that in the long run,
the central bank has influence over only nominal
magnitudes. As a result, the Federal Reserve can be
quite explicit about its ultimate objectives—price
stability and the maximum sustainable growth in
output that is fostered when prices are stable. By
price stability, however, I do not refer to a single
number as measured by a particular price index.
In fact, it has become increasingly difficult to pin
down the notion of what constitutes a stable general
price level.
When industrial product was the centerpiece
of the economy during the first two-thirds of the
twentieth century, our overall price indexes served
us well. Pricing a pound of electrolytic copper presented few definitional problems. The price of a ton
of cold rolled steel sheet, or a linear yard of cotton
broad-woven fabrics, could be reasonably compared
over a period of years. But in our new century, the
simple notion of price has turned decidedly ambiguous. What is the price of a unit of software or a legal
opinion? How does one evaluate change in the price
of a cataract operation over a ten-year period when
the nature of the procedure and its impact on the
patient has changed so radically? Indeed, how will
we measure inflation, and the associated financial
and real implications, in the twenty-first century
when our data—using current techniques—could
become increasingly less adequate for tracing price
trends over time?
So long as individuals make contractual arrangements for future payments valued in dollars, however, there must be a presumption on the part of those involved in the transaction about the future
purchasing power of money. No matter how complex individual products become, there will always
be some general sense of the purchasing power of
money both across time and across goods and services. Hence, we must assume that embodied in all
products is some unit of output, and hence of price,
that is recognizable to producers and consumers
and upon which they will base their decisions.
Doubtless, we will develop new techniques of price
measurement to unearth those units as the years
go on. It is crucial that we do, for inflation can
destabilize an economy even if faulty price indexes
fail to reveal it.
For all these conceptual uncertainties and
measurement problems, a specific numerical inflation target would represent an unhelpful and false
precision. Rather, price stability is best thought of
as an environment in which inflation is so low and
stable over time that it does not materially enter
into the decisions of households and firms. Nonetheless, I cannot help but conclude that the progress
that the Federal Reserve has achieved over the years
in moving toward this old definition of price stability
has contributed to the improvement in our nation’s
longer-term growth prospects that became evident
in the latter part of the 1990s. So, for the time being,
our conventional measures of the overall price level
will remain useful.
President Poole has picked an appropriate topic
for this group to consider. The historical record
indicates that the increased transparency of the
Federal Reserve has helped improve the functioning
of markets and enhanced our credibility. But, to
repeat, openness is more than just useful in shaping
better economic performance. Openness is an obligation of a central bank in a free and democratic
society. U.S. elected leaders chose to vest the responsibility for setting monetary policy in an independent
entity, the Federal Reserve. Transparency of our
activities is the means by which we make ourselves
accountable to our fellow citizens to aid them in
judging whether we are worthy of that task.

Are Contemporary
Central Banks
Transparent About
Economic Models and
Objectives and What
Difference Does It Make?
Alex Cukierman

I. INTRODUCTION
Authority over monetary policy has increasingly been delegated to central banks with
substantially higher levels of independence
than in the past. This worldwide trend has propelled
the twin issues of accountability and transparency
to the forefront of the debate on monetary institutions. The current debate is particularly intense on
the European side of the Atlantic where the formation of a European Central Bank (ECB) facing 12
different fiscal authorities and different types of
labor markets has transformed those previously
mainly academic questions into practical policy
issues.
There is nowadays a good deal of consensus
about the objectives and desirable organization of
monetary policymaking institutions. In particular,
there is widespread consensus that the main objective of monetary policy should be price stability,
that the central bank (CB) should have the freedom
to set the interest rate without political interference,
and that the objectives and the procedures followed
by the CB should be reasonably transparent. The
insistence on transparency is motivated by the
desire to ultimately make the CB accountable to
the general public either directly or through the
intermediation of elected officials. But once those
general principles are translated into operational


Alex Cukierman is a professor of economics at the Berglas School of
Economics, Tel-Aviv University, and a research fellow at the Center for
Economic Research, Tilburg University, and CEPR. Previous versions
of this paper were presented at the October 2000 Bundesbank/CFS
conference Transparency in Monetary Policy and at the September
2001 CEPR/ESI conference Old Age, New Economy and Central Banking
at the Bank of Finland. The author thanks Matthew Canzoneri, Jordi
Gali, Petra Geraats, Arie Kapteyn, and Carl Walsh for useful discussions.

© 2002, The Federal Reserve Bank of St. Louis.

guidelines, some differences appear. The consensus
about transparency is most fragile to the introduction of practical guidelines, as illustrated by a recent
interchange between Buiter (1999) and Issing (1999).
Buiter’s position largely reflects what I have called
elsewhere the (new) Bank of England (BE) approach,
and Issing’s position reflects the approach of the
ECB, which has been largely shaped by the philosophy of the Bundesbank (BB) during the last several
decades.1
Both approaches agree on the principle that a
CB should be transparent and accountable but differ
on the means to achieve those goals. The most vocal
disagreements have been about the early publication
of CB forecasts and the voting record of individual
monetary policy council members. The BE approach
is in favor of early release of this information, while
the BB approach is against it. Those differences partly
reflect the BB view that there should be “collective
responsibility” at the CB, while the BE approach
puts relatively more emphasis on the accountability
of individual council members. They also reflect
the fact that since the second half of the 1990s countries such as the United Kingdom and Sweden have
put in place an explicit mechanism of inflation targeting in conjunction with a numerically specified
inflation target that is decided upon by government.2
In such systems the early publication of CB forecasts
is believed to be an essential element of accountability because it enables the principal (government) to
judge whether ex post deviations from the target
were due to poor performance by the agent (the CB)
or to unanticipated economic shocks. The colorful
debate about the publication of forecasts and CB
votes overshadowed two possibly more fundamental
areas in which most (perhaps even all) existing
central banks are rather opaque. One concerns the
economic model, or models, used in making policy
decisions, and the other concerns the operational
objectives of the CB.
This paper focuses on those issues. It has two
main parts. The first evaluates the degree of transparency about the economic models used by con-

1

A fuller discussion of the differences between those two approaches
regarding the practical implementation of transparency and other
issues appears in the concluding section of Cukierman (2001). See
also de Haan and Eijffinger (2000) for an appraisal of the Buiter-Issing
interchange.

2

Some other countries with explicit inflation targeting systems are
New Zealand, Canada, Finland, Australia, and Spain. In almost all cases
the final formal authority to set the target resides with government.
By contrast, in the case of the BB and the ECB, the target is chosen by
the CB.


temporary central banks and about their objective
functions. It argues that, in spite of the recently
acknowledged importance of transparency (particularly in some inflation-targeting countries), there is
substantial haziness about the economic models
used by CBs to generate forecasts as well as about
their objective function. Some of this haziness is
due to the absence of clear knowledge about the
“true” model of the economy and some is due to
the attempt of policymakers to hedge their positions
in the face of model and of political uncertainties.
The second part of the paper examines whether
haziness about objectives matters for credibility
when monetary policymakers are more sensitive
to negative than to positive output gaps. The initial
motivation for this exercise is the following statement from Blinder (1998, pp. 19-20), made shortly
after his resignation from the office of Vice Chairman
of the Fed: “In most situations the CB will take far
more political heat when it tightens preemptively
to avoid higher inflation than when it eases preemptively to avoid higher unemployment.”
A fuller description of the second part of the
paper is provided after the following recent literature review.
Since the early 1980s the dominant academic
paradigm for conceptualizing the positive and sustained inflation rates experienced by most countries
during the twentieth century has been the Kydland-Prescott (1977) and Barro-Gordon (1983) framework
(henceforth KPBG). This view includes an inflation
bias that is due to the fact that, owing to tax and/or
other labor market imperfections, the natural level
of employment is lower than the level targeted by
policymakers. This induces policymakers to try to
stimulate employment by means of inflationary
surprises. Because the public anticipates such behavior, it adjusts nominal wage (and other) contracts
accordingly, which leads to an equilibrium in which
inflation has a positive bias but output remains at
the natural level.
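
For readers who want the mechanics, a minimal textbook rendering of the KPBG argument (in generic notation of my own, not the paper's) runs as follows. Let the policymaker minimize

$$L = \tfrac{1}{2}\pi^2 + \tfrac{A}{2}\,(y - y_n - k)^2, \qquad k > 0,$$

subject to $y = y_n + \alpha(\pi - \pi^e)$, taking $\pi^e$ as given. The first-order condition is $\pi(1 + A\alpha^2) = A\alpha k + A\alpha^2 \pi^e$, and imposing rational expectations, $\pi^e = \pi$, gives $\pi = \pi^e = A\alpha k > 0$ while $y = y_n$: inflation acquires a positive bias with no gain in output.
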
Recently two central bankers with strong academic backgrounds have expressed the view that
decisionmakers in their respective CBs are not trying
to maintain employment above its natural level and
conclude, therefore, that the KPBG bias story is not
applicable to their respective CBs.3 In particular,
Blinder (1998, p. 43) argues that policymakers at
the Fed do not try to systematically maintain employment above the natural level. As a matter of fact,
when in office, he personally felt duty bound to
conduct monetary policy so as to hit the natural
rate. In a similar vein, while recently summarizing
the U.K. experience with inflation targeting, John
Vickers (1998, p. 369) expressed the following view:
“There is a large literature on inflation bias but it
simply is not applicable to the MPC. We have no
desire to spring inflation surprises to try to bump
output above its natural rate (wherever that may be).”
Coming from a former vice chairman of the Fed and
from an executive director and chief economist at
the BE, such introspective statements certainly
deserve serious consideration, not the least because
acceptance of this view carries with it the important
implication that the credibility problem of monetary
policy is a thing of the past.4
In parallel, recent inflation targeters such as
the (reborn with instrument independence since
1997) BE acknowledge that, although their primary
objective is price stability, they are also averse to
excessive short-run fluctuations of actual output
around potential or natural output. Hence, they
attempt to achieve the inflation target on average
rather than in each period. In Mervyn King’s words,
they are not “inflation nutters” (e.g., see King, 1997).
For example, if an adverse supply shock pushes
inflation above target for some time, they do not
seek to put inflation back on target immediately
because of the associated excessive fluctuations
this would create in the output gap.
Svensson (1997) refers, somewhat more neutrally, to such a bank as a “flexible inflation targeter”
and to King’s “inflation nutter” as a “strict inflation
targeter.” Recent inflation targeters such as the
United Kingdom, New Zealand, Canada, and Sweden
have been rather transparent about the fact that they
are flexible rather than strict inflation targeters. In
terms of the familiar quadratic loss function used
by KPBG and much of the ensuing literature, this
means that, although they do not try to maintain
output above its natural level, their loss function
assigns a positive weight also to deviations of output
from its potential level. I shall refer to the relative
weight assigned to deviations of output from target
in comparison with deviations of inflation from
3

McCallum (1995, 1997) expresses a similar view.

4

The views expressed by Blinder and Vickers are not inconsistent with
the existence of a KPBG inflationary bias prior to the 1990s, provided
that policymakers, at the time, believed in a stable tradeoff between
inflation and economic activity. As the idea of no tradeoff percolated
through policymaking circles during the 1990s, policymakers, realizing
the futility of attempting to maintain output above its natural level,
settled for the natural rate. Sargent (1999) models this process using
least-squares learning about the slope of the long-run Phillips curve.


target as the “flexibility parameter” and denote it
by A.
In any precise characterization of optimal policy
in such a context, A is obviously an important determinant of the speed with which policy seeks to put
inflation back on target following adverse shock
realizations. The larger is A, the larger is the “flexibility” allowed in returning to the inflation target following a shock. Hence, along the optimal policy plan
of a flexible inflation targeter, the parameter A determines the period-by-period deviations of inflation
from its target. In spite of its obvious importance
and of their insistence on transparency, recent inflation targeters have been rather hazy about the magnitude of the flexibility parameter. This is recognized
by Vickers (1998, p. 370) who candidly writes, “The
MPC remit is silent on this parameter of the loss
function, but optimal policy is arguably not too
sensitive to its value within a reasonable range.”5
While most explicit inflation targeters openly
admit that they are of the “flexible” variety, that
was not usually the case with the BB when it was
in charge of German monetary policy, nor is it currently the case with its successor—the ECB. In view
of the strong and unequivocal priority given to price
stability in the charter of those banks, their officials
probably prefer to view and to project to the public
an image of the bank as a strict, rather than a flexible, inflation targeter. But evidence presented in
Clarida and Gertler (1997) is consistent with the
view that the actual policy of the BB did not significantly differ from that of a flexible inflation targeter.
Thus, there seems to be substantial haziness about
the parameter A among both explicit and implicit
inflation targeters.
The second part of the paper takes the statements of Vickers (1998) and of Blinder (1998) (that
the output target of BE and Fed policymakers is the
natural level) at face value and examines the consequences of flexible inflation targeting and of haziness
about the parameter A for credibility in the presence
of asymmetric objectives. Besides the statement by
Blinder hinting at an asymmetry in the objectives
of the U.S. political establishment, this exercise is
motivated by the following considerations.
Cukierman (2000a) shows that, with a Lucas-type transmission mechanism, uncertainty about
the future state of the economy and asymmetries
in the output gap segment of the CB loss function,
there will be an inflation bias even if the CB targets
the normal level of output. This framework implies
that there should be a positive association between the variability of economic activity over the cycle
and the magnitude of the inflation bias. Preliminary
cross-sectional evidence in Gerlach (2000) supports
this implication.6 Last but not least, the quadratic
objective function originally postulated by KPBG
carries the rather unintuitive implication that, given
inflation, an upward deviation of employment from
its desired level is as costly as a downward deviation
of the same size. It is hard to see why policymakers,
or social planners for that matter, would object,
given inflation, to a positive output gap. As a matter
of fact, it’s quite likely that, in the range of positive
output gaps, the quadratic function was postulated
mainly for analytical convenience rather than for
its descriptive realism.7
Because there is substantial uncertainty about
the correct model of the economy, the consequences
of asymmetric objectives are examined also for an
economy with a New Keynesian transmission
mechanism of the type recently reviewed by Clarida,
Gali, and Gertler (1999). In this case there is an inflation bias that has two distinct origins. One of those
arises, as in the case of an expectations-augmented
Phillips curve, due to the interaction of asymmetries
in the output gap segment of the loss function with
uncertainty about the future state of the economy.
Thus, flexible inflation targeting in conjunction with
asymmetric output gap objectives leads to credibility
problems even when policymakers target the average
natural level. Furthermore, contrary to conventional
wisdom (with an expectations-augmented Phillips
curve), this bias is an increasing function of the
extent to which the CB is “flexible” in targeting
inflation as measured by the parameter A. Because
this is precisely the parameter about which contemporary CBs tend to be hazy, it follows that there is
also uncertainty about the size of the bias.
The additional inflationary tendency that arises
in the New Keynesian framework is related to the
fact that, because prices are sticky, policymakers
5

The qualifier refers to work by Bean (2000) and Batini and Haldane
(1999), who claim that for recent structural parameters of the United
Kingdom the optimal policy of a flexible inflation targeter is insensitive to the precise value of A.

6

In addition, Ruge-Murcia (2001) provides individual time-series evidence for several countries. His evidence supports the existence of
asymmetries in CB losses from deviations of unemployment from its
natural level for France and the United States but not for the United
Kingdom and Japan.

7

The quadratic function does not admit the possibility that policymakers
might have precautionary demands for expansions and for price stability. A formulation of policymakers’ objective functions that allows
for both possibilities appears in Cukierman and Muscatelli (2002).


face a long-run tradeoff within some range between
average inflation and the average output gap. Policymakers with asymmetric losses from positive and
negative output gaps choose a point along this tradeoff that is characterized by both positive average
inflation and a positive average output gap.8
Section II documents existing haziness about
the economic models used by decisionmakers in
CBs and about the level of output that they target.
It is argued that, while a large part of this haziness
is due to lack of clear consensus about the transmission mechanism within the economic profession
itself, this state of affairs leaves quite a bit of discretion to CBs and opens the door for strategic use of
information. Section III examines the extent to which
contemporary CBs are transparent about their objectives and concludes that here, too, there is quite a
bit of haziness, particularly among the new “flexible
inflation targeters.” It then reviews recent theoretical
arguments and empirical work that support the
hypothesis that at least some CBs have different
attitudes about positive and negative output gaps.
Section IV shows, for a Lucas-type transmission
mechanism, that, in the presence of such asymmetries and uncertainty about the upcoming state of
the economy, policymakers “hedge” their position
on the side of expansion to reduce the likelihood
of surprise recessions. This behavior is shown to
induce an inflationary bias even when the policymakers’ output target is potential output. Section V
first shows that a similar mechanism operates also
in sticky price, New Keynesian models of the economy. But, because policymakers can control the
real rate of interest in such frameworks, asymmetric
preferences lead to an additional inflationary tendency that is associated with average positive real
effects on the output gap.

II. HAZINESS ABOUT THE ECONOMIC
MODEL USED FOR MAKING POLICY
DECISIONS
Practically all CBs are rather noncommittal
about the economic model or models they use in
making policy decisions. Admittedly, many of the
major CBs have at least one big econometric model
of the economy in store. But the forecasts generated
by such models are only one of many inputs used
in formulating policy. Decisionmakers at major CBs
have access to a multitude of alternative “models”
and information. The aggregation of this information
by each board member and the further aggregation
of the position of each board member into a collective decision is a rather complex process; a full
description of this would require very detailed
tracking of the thought process of each board
member as well as of the interaction among the
board members. Vickers (1998, p. 370) candidly
admits that there are serious limits to how much
of this process can be put in the public domain9:
“While transparency—inflation reports, MPC
minutes, Treasury Committee hearings and so on—
increases what is in the public domain (desirably
in my view), there is surely information relevant
for policy-making that is simply incapable of being
put in the public domain.”
A substantial part of this ambiguity is caused in
the first place by the absence of consensus within
the economic profession about the correct model
of the economy. In the absence of consensus, a
“reasonable” central banker is likely to hedge his
position by intuitively assigning nonnegative weights
to alternative conceptions of the economy. This
complicates the decisionmaking process of central
bankers, makes them vulnerable to ex post criticism,
but also leaves them substantially more discretion
than they would have otherwise. As a matter of fact,
current economic literature entertains several conceptually different views of the transmission process
of monetary policy even before taking into account
differing views about length of lags, parameter magnitudes, and functional form within a given broad
conception of the transmission mechanism.
This section illustrates some of this conceptual
variety by briefly reviewing and contrasting three
well-known alternative conceptions of the transmission process of monetary policy used in the current
economic literature. One is a monetarist Lucas-type
expectations-augmented Phillips curve and the
other two are neo-Keynesian in spirit in that both
rely on staggered nominal price setting in conjunction with costs of price adjustment. In both variants
the CB is able to influence the real rate by means of
the nominal rate of interest because the price level
is temporarily sticky. In the first version, current
prices are fully backward looking in that current
pricing decisions depend only on predetermined
past prices. In the second version, they are fully
8

I refer to this second mechanism as a “tendency” rather than a “bias”
because it is associated with some gain in the average value of output.

9

Even if all those details could be put in the public domain, it is unlikely
that, because of cognitive limitations, the bulk of the (largely nonprofessional) public would absorb and digest them accurately. A fuller
discussion of those and related issues appears in Winkler (2000).


forward looking in that current pricing decisions
depend on expected future inflation rather than on
past pricing decisions.10

A Monetarist Lucas-Type Transmission
Mechanism (Model 1)
This transmission mechanism is the one most
frequently used in models of endogenous monetary
policy. The main idea is that monetary policy has
real effects only to the extent that it creates unexpected inflation. In particular, the deviation of output from its natural level is an increasing function
of unexpected inflation. Formally,
(1)

yt ≡Yt –Ynt=α (πt – Et π t ), α>0,

where Y and Yn are actual and natural output, π is
the rate of inflation, Eπ is the (rational) expectation
of that rate of inflation when output decisions are
made, and t is a time index. The instrument of
monetary policy is not modeled explicitly, but it
is assumed, at least implicitly, that the monetary
authority can set its instrument (the money supply
or the interest rate) so as to bring about the inflation
rate that it desires. Hence, from a formal point of
view the “instrument” of the monetary authority
here is the rate of inflation.11 Equation (1) is also
known as an expectations-augmented Phillips curve.
In its starkest monetarist interpretation, prices and
wages are fully flexible and monetary policy has
real effects only when inflation is not currently
fully perceived. In the presence of nominal wage
contracts, which are preset one period in advance
on the basis of expected future inflation, there are
real effects when there are deviations between the
rate of inflation that had been expected at contracting time and the subsequent realization of inflation.
In this variant, Et π t is replaced by Et –1π t .12
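
As a concrete illustration of equation (1), here is a minimal simulation sketch (my own, with purely illustrative parameter values, not anything estimated in the paper): output deviates from its natural level only when realized inflation differs from the inflation expected when nominal contracts were set.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.8   # illustrative Phillips curve slope, alpha > 0
T = 6

pi_expected = np.full(T, 0.02)                        # inflation expected at contracting time
pi_actual = pi_expected + rng.normal(0.0, 0.01, T)    # realized inflation, with surprises

# Equation (1): y_t = Y_t - Yn_t = alpha * (pi_t - E pi_t)
y_deviation = alpha * (pi_actual - pi_expected)

for t in range(T):
    print(f"t={t}: inflation surprise {pi_actual[t] - pi_expected[t]:+.4f} "
          f"-> output deviation {y_deviation[t]:+.4f}")
```
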

A Neo-Keynesian Transmission
Mechanism with Backward-Looking
Pricing (Model 2)
In this framework, the current output gap, normally defined as the deviation of actual from potential output, depends on the lagged real interest rate
and on its own lagged value. Current inflation is
positively related to the lagged value of the output
gap and to its own lagged value. A compact formulation of the model, due to Svensson (1997), is
(2) xt +1 ≡ Yt +1 − Ypt +1 = −ϕ (it − Et π t +1) + φ xt + gt ,


(3)

π t +1 = π t + λ xt + ut +1,

where Ypt is potential output; xt is the output gap;
π t+1 is the rate of inflation between period t and
period t+1; Et π t+1 is the (rational) public’s forecast
of this inflation given the information available to
it in period t ; i t is the nominal rate of interest on
one-period loans contracted in period t; ut+1 is a
cost shock; gt is a nonmonetary shock to aggregate
demand; and ϕ, φ, and λ are nonnegative parameters.
Note that although there is some analogy between
xt and yt from the first model, they are not identical
since natural and potential output are not necessarily identical concepts. The difference between them
is discussed later in this section.
In this framework, the monetary policy instrument is the nominal rate of interest. Because of
price stickiness, the CB can affect the real rate (and
through it the output gap and future inflation) by
its choice of the nominal rate. Svensson (1997) notes
that, in spite of its simplicity, this model captures
some of the essential features of more elaborate
econometric models used by some CBs. The model
reflects the declared belief of some CBs, such as
the BE, that current interest rate policy affects the
output gap with a lag of one period and the rate of
inflation only with a lag of two periods. The model
is fully backward looking in that current pricing
behavior depends only on lagged variables.
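
To make the lag structure visible, here is a minimal simulation sketch of equations (2) and (3) with the demand and cost shocks set to zero; the coefficient values and the one-off rate hike are illustrative choices of mine, not estimates or anything attributed to a central bank. A tightening in period t moves the output gap in t+1 and inflation only in t+2.

```python
import numpy as np

varphi, phi, lam = 1.0, 0.5, 0.3    # illustrative values of the coefficients in (2) and (3)
T = 8

x = np.zeros(T)        # output gap
pi = np.zeros(T)       # inflation
i = np.full(T, 0.02)   # nominal interest rate (the policy instrument), neutral at 2 percent
pi[0] = 0.02
i[2] += 0.01           # one-off policy tightening in period 2

for t in range(T - 1):
    # With g and u set to zero, equation (3) implies E_t pi_{t+1} = pi_t + lam * x_t
    expected_pi_next = pi[t] + lam * x[t]
    # Equation (2): the real rate chosen in t moves the output gap in t+1
    x[t + 1] = -varphi * (i[t] - expected_pi_next) + phi * x[t]
    # Equation (3): the gap in t moves inflation in t+1, so the hike reaches inflation only in t+2
    pi[t + 1] = pi[t] + lam * x[t]

for t in range(T):
    print(f"t={t}: i={i[t]:.3f}  x={x[t]:+.4f}  pi={pi[t]:.4f}")
```
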

A New Keynesian Transmission
Mechanism with Forward-Looking
Pricing (Model 3)
The main difference between this framework
and the previous one is that current price setting
and the current output gap depend on expectations
of future inflation and on the expected future output
gap, respectively, rather than on the lagged values
of those variables. Thus, the model is fully forward
looking. The main idea is that a change in expectations of future variables alters current pricing behavior. This modification has its origin in more explicit
microeconomic foundations with monopolistic
competition and costs of price adjustment. A stylized
10

An additional transmission channel that is not captured by either of
those models is the credit channel.

11

In some versions of this model, policymakers have only imperfect
control of inflation. In such a case the planned rate of inflation becomes
the instrument of monetary policy.

12

A fuller discussion appears in Cukierman (1992, Chap. 3).


aggregate version of such a model has recently been
summarized compactly by Clarida, Gali, and Gertler
(1999) and is reproduced as follows:
(4)

xt = − ϕ (i t − Et π t +1 ) + Et xt +1 + gt ,

(5)

π t = λ xt + β Et π t +1 + ut .

Here ϕ, λ , and β are positive coefficients. All the
variables have the same meaning as in the previous
model. The expected future output gap appears in
the output gap equation to reflect the notion that,
because individuals smooth consumption, expectations of higher consumption next period (associated
with higher expected output) lead them to demand
more current consumption, which raises current
output.
As in stylized models of sticky staggered prices
pioneered by Calvo (1983), current inflation depends
on future expected inflation. In this type of model,
only a fraction of firms has the opportunity to
adjust its price each period and, because of costs
of price adjustment, each firm adjusts its price at
discrete intervals. Hence, when it is given the chance
to adjust its price, the firm adjusts it by more the
higher is expected future inflation. This interpretation implies that β is a discount factor.
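
The forward-looking nature of equation (5) can be made explicit by a standard manipulation (not a derivation taken from the paper): substituting repeatedly for expected future inflation, and assuming the discounted terminal term vanishes, gives

$$\pi_t = \lambda \sum_{j=0}^{\infty} \beta^{\,j} E_t x_{t+j} + \sum_{j=0}^{\infty} \beta^{\,j} E_t u_{t+j},$$

so current inflation is a discounted sum of the current and all expected future output gaps and cost shocks, and anything that changes expectations of future gaps moves inflation immediately.
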

Comparison Between the Conceptions
Underlying the Different Models
The three models above are grounded in different conceptions regarding the channels through
which monetary policy affects output and inflation.
In the Lucas-type model, monetary policy affects
output only if it is unanticipated, either currently
or when relevant nominal contracts have been concluded. Inflation in those types of models is usually
thought of as being directly related to the choice of
money supply via the quantity theory of money.
By contrast, in the last two models, because output
is demand determined, a change in the rate of interest, by affecting demand, also affects output independently of whether inflation is anticipated or not.
Furthermore, the effect of policy on inflation in
those models is through the effect that policy has
on the output gap.
The main conceptual difference between the
second and third models is this: In the second model,
the current policy cannot affect current inflation
or the current output gap; in the third model, current
policy can affect the current values of both variables
by changing current expectations of future variables.

Woodford (1999) utilizes this feature of the third
model to show that, under an appropriate form of
commitment to interest rate inertia, changes in
current policy, by changing expectations, have an
immediate effect on inflation and the output gap.
This is a far cry from the BE view (illustrated by the
second model) in which policy in year t can affect
inflation only from year t+2 onward.

Haziness About the Meaning of
Potential or Normal Output
At the broad conceptual level, potential output
is meant to capture long-term supply determinants
of output. But there are several related concepts
such as the natural level of output and the NAIRU
(non-accelerating inflation rate of unemployment).
At the empirical level, those concepts are often
implemented by means of some statistical smoothing procedure such as the Hodrick-Prescott (1997)
filter.
Are those concepts identical? I believe the
answer is not necessarily. In the work of Friedman
(1968) and subsequent U.S.-based neo-monetarists
like Lucas (1972, 1973), the conception of the natural
level of employment is the level of employment
that is generated by the real general equilibrium of
the system in the absence of inflationary surprises.
Its counterpart in the United Kingdom is the NAIRU.
Layard, Nickell, and Jackman (1991, pp. 14-15)
characterize this rate as the rate of unemployment
below which inflation is accelerating and above
which it is decelerating.
Although related, the concepts developed by
Lucas and Layard, Nickell, and Jackman are not
necessarily identical. More importantly, both concepts generally differ from potential output because,
due to the existence of real business cycles, the gap
between actual and potential output may be nonzero even when inflation is fully expected and the
rate of inflation is stable. As a consequence, the output gap, xt, from neo-Keynesian frameworks is not
identical to the monetarist deviation, yt, of actual
from natural output. Nor is there a clear relation
between the output gap and the deviation of actual
output from the NAIRU.
Woodford (2002) proposes to conceptualize
potential output as the equilibrium level of output
under full price flexibility and to view the output
gap as arising from the existence of sticky prices.
Although useful and elegant, this conception of the
output gap does not provide guidance about how to measure the level of output under full price flexibility. It would appear that the relation between this
concept and the smoothing procedures used to
measure potential output in practice (such as the
Hodrick-Prescott filter, 1997) is rather tenuous.
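
For concreteness, here is a minimal sketch of the kind of statistical smoothing referred to above, using the Hodrick-Prescott filter as implemented in statsmodels; the series is simulated and nothing here reproduces any central bank's actual procedure.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(1)

# Simulated quarterly log output: a deterministic trend plus persistent noise (illustrative only)
T = 120
log_output = 4.0 + 0.005 * np.arange(T) + 0.02 * np.cumsum(rng.normal(0.0, 0.3, T))

# lamb=1600 is the conventional smoothing parameter for quarterly data
cycle, trend = hpfilter(log_output, lamb=1600)

# "Potential output" here is just the smoothed trend; the implied output gap is the cycle
output_gap = log_output - trend
print(np.round(output_gap[-4:], 4))
```
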

Implications for Model Transparency
and for Accountability
The brief survey of alternative current models
of the transmission process presented above illustrates the objective difficulties faced by the contemporary honest central banker. When faced with
those and other different conceptions of how the
economy works, what will he do? It is likely that
he is going to intuitively assign some nonnegative
weight to each of the models and to many other
bits of information and ideas not surveyed here.13
What should he do when asked to be transparent
about the economic model he is using to generate
forecasts? This is not just an academic but also a
practical question. As a matter of fact, when recently
confronted with such a demand, the president of
the ECB (Duisenberg) responded by promising to
publish, in due time, the forecasts generated by the
econometric model of the ECB. Although such an
action is desirable, it is unlikely to come close to
the actual aggregation of information and of models
that decisionmakers at the ECB, the BE, or the Fed
go through when making monetary policy decisions.
To a large extent, the inability of central bankers
to be fully transparent about the economic model
or models they are using is tied to the proliferation
of alternative views of the transmission mechanism
within the economic profession. Because central
bankers are consumers and not providers of economic models, they obviously cannot be faulted for
this state of affairs.14 But the absence of consensus
about the “correct” model of the economy endows
them with considerable discretion, which they can
also use to hedge their positions in the face of model
uncertainty and of political pressures. It also opens
the door for the strategic use of information.15
Most contemporary CBs are pretty transparent
about their inflation target, both in terms of the
index used and the numerical target value. There is
substantially less transparency about output targets.
Even in countries that insist on high levels of transparency like the United Kingdom, there is quite a
bit of murkiness about the output or employment
target that the CB is supposed to attain.
Again, a nonnegligible part of this haziness about the output target is due to (and made possible by)
the different concepts of “normal” output surveyed
above. Those different conceptions allow substantial
leeway for the measurement of potential or natural
output, leaving room for the reintroduction of discretionary monetary policy through the back door.
This is obviously the case whether or not the output target of contemporary CBs is at the natural or
the potential level of output or above them.16
In the long run, transparency and accountability
will be enhanced when better and more accurate
models of the ways monetary policy affects the
economy become available. The wider implication
of this conclusion is that, until this happens, accountability by means of transparency about the economic models used by decisionmakers at the CB will
be limited. What should be done in the meantime?
There is no easy answer to this question. My own
view is that, given the current state of economic
knowledge, the discharge of accountability should
be achieved to a large extent by two things: appointing as decisionmakers at the CB individuals with
high levels of integrity and professional standards
and making sure these decisionmakers have little
or no association with particular interest groups.

III. ARE NEW CENTRAL BANKS
TRANSPARENT ABOUT THEIR
OBJECTIVES?
In comparison with past decades, there is nowadays substantially more transparency about the
main objective of monetary policy. In most contem-

13

Jensen (2001) presents a compact hybrid neo-Keynesian model that
combines forward- with backward-looking elements. Using a more
elaborate hybrid model of the same type for the United States,
Rudebusch (2001) estimates the weight on forward-looking elements
to be around one-third and the weight on backward-looking elements
to be around two-thirds.

14

One way to bridge the gap between this proliferation of models and
practical policymaking is to look for a policy rule that is uniformly
best for many models. A recent attempt for two variants of microfounded structural models appears in McCallum and Nelson (1999).
Hansen and Sargent (2000) develop a systematic analysis for decisionmaking when policymakers cannot distinguish between economic
models within a given class.

15

Reflecting on his term in office as Chairman of the Board of the Fed,
Burns once said that when Keynesians on one side and monetarists
on the other assailed him with diametrically opposite criticisms, he
found it safe to duck in the middle.

16

Staiger, Stock, and Watson (1997) show, for the United States, that
there is substantial uncertainty about the location of the natural rate.
Faust and Svensson (2001) show that more ex post transparency about
the output target of policymakers raises social welfare.


porary CBs, the main legally mandated objective of
monetary policy is price stability and all other objectives are either nonexistent (as is nearly the case in
the charter of the ECB) or relegated to being (at least
legally) a distant second priority (as is the case with
the growth and employment objectives in the charter of the BE). This is a far cry from the 1980s and
previous decades during which most CB charters
featured several conflicting objectives with no clear
specification of the subjective tradeoffs among
them. Nowadays all explicit inflation targeters even
specify a precise numerical value in terms of a well-defined index for the target rate of inflation, and
even the ECB, which is not an explicit inflation targeter, has specified a numerical inflation target for
the euro area.
In spite of those advances, there still are nonnegligible dark spots about the output gap segment
of the loss function of modern CBs. For truly strict
inflation targeters, or inflation nutters, this murkiness is unimportant. Because the output gap is not
part of their objectives, transparency about the output gap segment of their loss function is irrelevant.
But practically all explicit inflation targeters openly
acknowledge that they also care about the output
gap, i.e., they are flexible rather than strict inflation
targeters. For such banks the features of the output
gap segment of the loss function and its importance
relative to achieving the inflation target in each
period become relevant. To illustrate, consider the
following specification of the one-period CB loss
function:
(6)

Lt=Af(xt )+π t2.

When A=0, the CB is a strict inflation targeter,
so murkiness about f(xt ) does not matter. But when
A is positive, the CB is a flexible inflation targeter so
that murkiness about the precise form of the function
f(xt ) and the magnitude of the parameter A become
important. Following Svensson (1997) I will refer
to A as the “flexibility parameter.”17 There is little
doubt that all CBs are quite opaque about the parameter A. This is admitted quite candidly in a recent
review of the U.K. experience with inflation targeting by Vickers who notes that the MPC’s remit is
silent on the parameter A (the full quote and source
appear in the latter part of the introduction).
Ironically, the lack of transparency about f(xt )
seems to matter the most in countries like the United
Kingdom, which strongly insist on formal transparency, and the least in countries like Germany,
which, judging by the BB charter, should be classified as a strict inflation targeter. But the matter is
not that simple. Recent empirical work by Clarida
and Gertler (1997) supports the view that the
Bundesbank actually conducted policy in a way that
is indistinguishable from that of a flexible inflation
targeter. As a matter of fact, the currently emerging
consensus seems to be that, whether they admit it
or not, all CBs are behaving in a manner that is consistent with flexible inflation targeting. The main
difference, on this view, is only whether the bank
and its charter admit the “flexible” part openly or
not. In terms of the loss function in equation (6),
this means that there generally is a lack of transparency with respect to the coefficient A.
How about f(xt )? Available public information
on this term is rather scant for two reasons. First,
neither the CB nor the political authorities have
taken the trouble to indicate what it is. Vickers (1998,
p. 370) ventures several remarks on the shape of
the BE’s loss function since 1997 and concludes
that, at least as far as inflation is concerned, losses
are symmetric; but he remains silent on what the
shape of f(xt ) might be. Second, as discussed at
some length in the previous section, there are
numerous ambiguities in the definition of potential,
normal, natural, and NAIRU output. Obviously the
output gap that enters into the loss function inherits
those ambiguities. In summary, existing CBs are
generally quite opaque about their output objective,
the shape of the function f(.), and the flexibility parameter A.

The Case for Asymmetries in CB Losses
from the Output Gap
In the absence of solid information about f(.),
the academic literature has assumed that f(.) is a
quadratic function implying that losses from negative and from positive output gaps are the same as
long as the absolute value of the gap is the same.18
But it is hard to see why CBs, social planners, or
political authorities would consider, given inflation,
a positive output gap of a given magnitude to be
equivalent to a negative output gap of the same
magnitude. A negative output gap means that
17

Note that A is the inverse of Rogoff’s parameter of CB conservativeness.
The terminology in the text is chosen to highlight the fact that, within
the context of the present discussion, it determines the degree of
flexibility in allowing temporary deviations from the inflation target.

18

From here on, I abstract for simplicity from the ambiguities in the
definition of the output gap and assume that the output target of
monetary authorities is equal to a well-defined and publicly known
measure of “potential or natural output.”


employment is below the normal level, whereas a
positive output gap means employment is above
the normal level. While casual observation suggests
that policymakers dislike employment below the
normal level, it does not support the notion that,
given inflation, they also dislike employment above
the normal level.19
Recently this casual empiricism got backing
from Blinder after his resignation from the office
of Vice Chairman of the Fed. Blinder expressed the
view that the Fed takes far more political heat when
it tightens preemptively to avoid inflation than when
it eases preemptively to avoid unemployment (the
precise quote and reference appear in the introduction). To the extent that the CB is not totally indifferent to the priorities of the political establishment,
this asymmetry is likely to partially affect the Fed’s
policy choices. Preliminary empirical work by
Gerlach (2000) and by Dolado, Maria-Dolores, and
Naveira (2000) supports this hypothesis for the Fed.20
Recent theoretical work by Cukierman (2000a)
shows that, with (i) a Lucas-type transmission mechanism, (ii) uncertainty about the future state of the
economy, and (iii) asymmetries in the output gap
segment of the CB loss function, there will be an
inflation bias even if the CB targets the normal level
of output. This framework implies that there should
be a positive association between the variability of
employment over the cycle and the magnitude of
the inflation bias. Preliminary cross-sectional evidence in Gerlach (2000) supports this implication.
Using a formulation that nests both symmetric and
asymmetric losses from deviations of unemployment
from its natural level, Ruge-Murcia (2001) performs
a test of the asymmetry hypothesis over time within
several countries and finds support for this hypothesis in France and the United States.
In summary, in spite of the silence of policymakers about the shape of f(.), there seem to be
sufficient early indications to warrant a more serious
investigation of the consequences of an asymmetric
f(.). The remainder of the paper investigates the
consequences of this asymmetry for the credibility
of monetary policy and related issues.

IV. IS THE CREDIBILITY PROBLEM
GONE WHEN THE CENTRAL BANK
TARGETS THE NORMAL LEVEL OF
OUTPUT?

The discussion in this section and the next one is built on two presumptions. The first is that contemporary CBs do not attempt to maintain output above its normal or natural level, and thus there is
no credibility problem because of the classical KPBG
reasons. In accepting these presumptions, this section takes at face value the statements by Blinder
and Vickers and also addresses McCallum’s (1995,
1997) criticism of the KPBG conception of the reasons for inflation. It will be recalled that those statements and McCallum’s arguments imply that the
output target of central bankers is identical to the
normal or potential level of output. The second
presumption is that the CB loss function is more
sensitive to negative than to positive output gaps.
The main results of the section are as follows:
1. The presence of asymmetries in losses from
the output gap in conjunction with uncertainty
on the part of the CB about the state of the
economy induces an inflation bias even when
the CB targets potential or natural output.
2. There is no bias when the CB is a strict inflation stabilizer (A=0).
Those results hold both for a Lucas-type, expectations-augmented Phillips curve and for many other
models including, in particular, a New Keynesian,
sticky/staggered prices transmission mechanism
of the type reviewed in Clarida, Gali, and Gertler
(1999). But in the second case there is an additional
inflationary tendency that arises even when decisionmakers at the CB are fully informed about the
relevant shocks at the time policy choices are made.
This section demonstrates the existence of a bias
within the framework of a Lucas-type expectations-augmented Phillips curve (model 1). The next section shows that, in addition to this bias, there is in
New Keynesian economies (model 3) an additional
average inflationary tendency. A third result holds
true for both a Lucas-type and a New Keynesian
transmission mechanism:
19

Given inflation, some politicians probably even like positive output
gaps on the view that the higher output is, the better it is. As a matter
of fact, it is quite likely that the quadratic function on the output gap,
so often used in the academic literature, was chosen mainly for analytical convenience rather than for descriptive realism. In the usual KPBG
setup this assumption does not make a difference as long as policymakers do not face uncertainty or are risk neutral because the equilibrium is in the range of negative output gaps in which the quadratic is
reasonable. A formulation of the KPBG framework under certainty
in which the quadratic is limited to the range of negative output gaps
without making any difference for their basic result appears in
Cukierman (1992, Chap. 3, equation (3.1)). But once it is recognized
that policymakers face uncertainty, the characteristics of their objective
function in the entire range of output gaps become important.

20

However, Dolado, Maria-Dolores, and Naveira (2000) do not find evidence of asymmetry in losses from the output gap for the BB, the
Banque de France, or the Banco de Espana.

IV. IS THE CREDIBILITY PROBLEM
GONE WHEN THE CENTRAL BANK
TARGETS THE NORMAL LEVEL OF
OUTPUT?
The discussion in this section and the next one
is built on two presumptions. The first is that contemporary CBs do not attempt to maintain output


Figure 1. The Sequence of Events: 1. E_{t−1}π_t is formed → 2. policy, m_t, is chosen → 3. ε_t realizes.

An Asymmetry-Cum-Uncertainty
Inflation Bias with a Lucas-Type
Transmission Mechanism
The results in this subsection draw on Cukierman
(2000a). Here I briefly present the basic framework,
the main result, and the intuition underlying it and
move on to discuss its wider implications. (See that
article for further details and some of the derivations.) The asymmetry in CB losses regarding the
output gap is modeled by postulating that period
t’s loss function is given by

(7)    L_t = (1/2)(A x_t^2 + π_t^2)    when x_t < 0
       L_t = (1/2) π_t^2               when x_t ≥ 0,

where xt ≡ Yt –Ypt is the output gap. This specification of the loss function states that the employment
target of policymakers is potential output and that
as long as the output gap is negative the standard
quadratic loss function is in effect. But when the
output gap is positive or zero, policymakers do not
incur any losses or gains. The kink at the zero output gap introduces an effect that is analogous to
the condition that leads to a precautionary saving
motive in the theory of savings and consumption
under uncertainty. A basic result from this literature
is that there is a precautionary saving motive if and
only if marginal utility is convex, i.e., the third derivative is positive (Kimball, 1990).21 I shall return to
the consequences of this analogy later.

21. The kink at zero in equation (7) implies that the marginal benefit from higher economic activity is globally convex.
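To make the shape of this objective concrete, the following short sketch (mine, not part of the original paper; the value of A is purely illustrative) evaluates the kinked loss in equation (7) and shows that losses from the output gap vanish for nonnegative gaps, which is the kink behind the precautionary behavior discussed below.

```python
# Illustrative sketch of the asymmetric loss in equation (7); A is hypothetical.
def loss(x_gap, inflation, A=0.5):
    """Period loss L_t: quadratic in the output gap only when the gap is negative."""
    if x_gap < 0:
        return 0.5 * (A * x_gap**2 + inflation**2)
    return 0.5 * inflation**2

# Loss from the output gap alone (inflation held at zero):
for x in (-0.02, -0.01, 0.0, 0.01, 0.02):
    print(f"x = {x:+.2f}  loss = {loss(x, 0.0):.6f}")
# Losses fall to zero at x = 0 and stay there for x >= 0: the kink that makes the
# marginal benefit of higher activity convex, in the spirit of Kimball (1990).
```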
The natural level of output is given by
(8)

Ynt=Ypt+ε t,

where ε t=Ynt –Ypt is the output gap in the absence
of inflationary surprises. Actual output is given by
the expectations-augmented Phillips curve in equation (1). For simplicity, ε t is specified as a zero-mean
stochastic shock to the natural level of output with
distribution function G(ε ). Inflation is determined

both by the choice of monetary policy and by the
realization of the shock, ε t, and is given by the following equation:
(9)

π t=mt – γεt,

where mt is the rate of inflation planned by the CB
and γ is a positive parameter that determines the
effect of shocks to employment on inflation. For
concreteness I think of ε t as a supply shock so its
effect on inflation is negative. But the basic result
of this subsection goes through also when ε t is a
demand shock so that γ is negative or when ε t is a
combination of supply and demand shocks. Equation
(9) states that, given planned inflation, actual inflation is lower the larger the supply shock to the economy is. Provided there is no instrument uncertainty,
this formulation is consistent both with situations
in which the policy instrument is the interest rate
as well as with situations in which it is some nominal stock.
I focus on a one-shot game with three stages.
The sequence of events and the structure of information is as follows. First, expectations, Et –1π t, are
formed and embedded into nominal contracts. In the
second stage, the CB picks the value of its instrument,
mt. Finally, the stochastic real shock to employment, ε t, realizes and determines, along with monetary policy, both employment and inflation. This
sequence of events is illustrated in Figure 1. A crucial
element is that, when it chooses the setting of its
instrument, the CB is uncertain about the magnitude
of the real shock to output. This is a fortiori true
for the public when they form their expectation.
The shock, ε t, affects employment directly as
well as indirectly by creating, given monetary policy,
unanticipated inflation in a direction that is opposite
to the sign of the shock. From equations (1), (8), and
(9) the combined marginal impact of the shock on
employment is
(10)

q ≡ 1– αγ.
I assume that the direct effect of the shock on

employment dominates its indirect effect by means
of unexpected inflation so that q is positive. Substituting equations (1), (8), and (9) into the loss function
in equation (7), the expected value of the CB loss
function is
(11)    (A/2) ∫_{−∞}^{b(π^e − m)} [qε + α(m − π^e)]^2 dG(ε) + (1/2) E_{t−1}(m − γε)^2,
where b ≡ (α /q), π e ≡ Et –1π t , and the time index has
been suppressed for simplicity. Minimization of
equation (11) with respect to m yields the following
reaction function for the monetary authority:
(12)    m = [1/(1 + α^2 A G[b(π^e − m)])] {α^2 A G[b(π^e − m)] π^e − α A q ∫_{−∞}^{b(π^e − m)} ε dG(ε)}.


I turn next to expectation formation which
occurs at the first stage of the game. Although
individuals do not know the realization of ε at this
stage, they do know its stochastic structure as well
as the structure of the economy and of CB objectives.
Taking the expected value of inflation in equation
(9) conditioned on this information as the operational proxy for the public’s rational expectation
of inflation, we obtain

(13)    π^e = m = −α A q ∫_{−∞}^{b(π^e − m)} ε dG(ε).

In equilibrium, both equations (12) and (13) must be
satisfied. It follows that π e – m=0 so that equation
(13) becomes
(14)    E_{t−1}π_t ≡ π^e = m = −α A q ∫_{−∞}^{0} ε dG(ε) = −α A q G(0) E[ε | ε < 0].
G(0) is the probability of a recession. More precisely
it is the probability that the realization of the employment shock, ε, is lower than the mean of this shock,
which is zero. E[ε |ε <0] is the expected value of ε
conditioned on the economy being in a recession
(ε negative). Because the probability of a recession
is positive and the expected value of ε conditioned
on the economy being in a recession is negative,
both planned and expected inflation are positive.
Furthermore, in spite of its attempt to reduce the
size of recessions, the CB has no influence on output,
which remains at its natural level. Had the CB been
committed to a zero rate of monetary expansion,
output would still be at its natural level. Hence there
is an “inflationary bias” on average.
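As a concrete illustration of equation (14), the following sketch (mine, not the paper's; all parameter values are hypothetical, and the shock is assumed normal so the truncated mean has a closed form) evaluates the bias and cross-checks it by Monte Carlo.

```python
# A minimal numerical check of equation (14) under an assumed shock eps ~ N(0, sigma^2).
import numpy as np

alpha, A, gamma, sigma = 1.0, 0.5, 0.4, 0.02   # illustrative values only
q = 1.0 - alpha * gamma                        # equation (10)

# Closed form: the integral of eps over (-inf, 0) under N(0, sigma^2) is -sigma/sqrt(2*pi),
# so equation (14) gives m = alpha*A*q*sigma/sqrt(2*pi) > 0.
bias_closed_form = alpha * A * q * sigma / np.sqrt(2.0 * np.pi)

# Monte Carlo version of the same integral, E[eps * 1{eps < 0}].
rng = np.random.default_rng(0)
eps = rng.normal(0.0, sigma, size=1_000_000)
bias_monte_carlo = -alpha * A * q * np.mean(eps * (eps < 0))

print(f"inflation bias (closed form):  {bias_closed_form:.5f}")
print(f"inflation bias (Monte Carlo):  {bias_monte_carlo:.5f}")
# Both are positive, increase with A (flexibility) and with sigma (variability of
# natural output), and vanish when A = 0, as stated in the text.
```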

Intuitively, this bias arises because the CB is more
sensitive to policy errors in which monetary policy
is too tight than to policy errors in which it is too
expansionary, in conjunction with the fact that it
does not have perfect information about the state
of the economy. The upshot is that an inflationary
bias arises even when the CB targets potential output. This bias arises whenever the CB is more averse
to negative than to positive output gaps, in conjunction with the fact that it is uncertain about the state
of the economy. The second condition is obviously
highly realistic, and the first one appears to be satisfied for at least some CBs.
Although, as in KPBG, the bias arises because
of the CB concern (at least in some states of nature)
about the output gap, the new bias identified here
does not rely on dynamic inconsistency. To see this,
note that this bias is present also if the choice of
policy in Figure 1 precedes the formation of expectations, as long as both the formation of expectations
and the choice of policy precede the resolution of
uncertainty about the shock, ε t. The origin of the
bias resides, instead, in the precautionary behavior
of the CB with respect to recessions in a world of
uncertainty, in conjunction with the public’s awareness of this asymmetry in CB objectives.22

Discussion
The expression for the inflation bias in equation
(14) implies that, other things the same, the bias is
larger the larger is the variability of natural output.
Gerlach (2000) presents preliminary cross-sectional
evidence suggesting that there is a positive association between the average level of inflation in a
country and the variance of its rate of growth. In
related work Ruge-Murcia (2001) finds a positive
(over time) relation between inflation and the conditional variance of unemployment in the United
States and France. Given his model, this finding supports the view that policymakers in those countries
are more averse to negative than to positive output
gaps. Cukierman and Muscatelli (2002) find evidence
of nonlinearity in interest rate reaction functions
for the United States, the United Kingdom, and Japan.
The pattern of these nonlinearities supports the
existence of a precautionary demand for expansions in the post-1985 period in the United States.
22. Obviously, it is not easy to verify ex post whether the CB is conducting policy so as to build in a precautionary demand for expansions. As a consequence, it is not straightforward to verify a precommitment to conduct policy in a symmetric manner.


As demonstrated earlier, this type of precautionary
demand leads to an inflation bias.
Equation (14) also implies that the bias is an
increasing function of the flexibility parameter, A.
Hence CBs of countries that are more flexible
inflation targeters have a more serious credibility
problem. Because we saw earlier that transparency
concerning the flexibility parameter is generally
rather poor, the magnitude of this bias is generally
opaque too. But, holding other things the same, it is
likely to be higher in countries such as the United
Kingdom than in the euro area. This is true if only
because the 1997 charter of the BE explicitly mentions growth and employment as objectives for the
CB, whereas that of the ECB does not.
Those rather pessimistic conclusions appear
to conflict, at first sight, with the remarkable era
of price stability that Western democracies have
recently experienced. The “new inflation bias story”
presented here is consistent with this observation
because it implies that, when the probability of
recession is low and/or its expected depth mild,
the bias will be negligible for most values of the
flexibility parameter, A. But this observation should
also be taken as a warning against overoptimism in
the long run. In particular, if and when the likelihood
of a serious recession increases, the countries of
more flexible inflation targeters are likely to experience larger inflationary accelerations.
Let me conclude this discussion with a theoretical remark regarding the analogy between the
behavior of policymakers in the “new inflation bias
story” presented above and the theory of precautionary savings. The kink at a zero output gap in
the loss function in equation (7) implies that the
marginal benefit from higher economic activity is
globally convex. As shown by Kimball (1990) there
is a precautionary saving motive if and only if the
marginal utility from consumption is convex. Similarly, asymmetric preferences with respect to the
output gap induce a precautionary demand for
expansions on the part of central bankers. This
precautionary demand induces them to conduct a
somewhat looser policy in comparison with the
benchmark case of symmetric losses from the output gap.23

23. Incidentally, this analogy also implies that there will be a tendency to inflate for all asymmetric output gap loss functions in which the marginal benefit of higher economic activity is convex in the level of output. Another specification of an asymmetric output gap loss function that satisfies this requirement appears in Ruge-Murcia (2001), who specifies losses of deviations from natural unemployment as a linex function.
But there is also a crucial difference between
the two cases. While the individual consumer “buys”
more desired future security by foregoing some
current consumption, the central banker does not
buy any improvement in economic activity because
individuals in the economy undo this potential
improvement by setting their nominal contracts in
a way that anticipates this tendency of the central
banker.

V. THE EFFECTS OF ASYMMETRIC
LOSSES FROM THE OUTPUT GAP IN
NEW KEYNESIAN FRAMEWORKS
This section investigates the consequences of
an asymmetric objective function, as specified in
equation (7), when the economic structure is characterized by a New Keynesian transmission mechanism with forward-looking pricing of the type given
by equations (4) and (5). This section discusses two
related but distinct issues. First, it shows that the
presence of asymmetries in conjunction with uncertainty about future shocks produces an inflation
bias also in New Keynesian frameworks. Second, it
shows that, in New Keynesian frameworks, there
usually is an additional inflationary tendency and
an associated positive average output gap, both of
which obtain even in the absence of uncertainty
about future shocks. For simplicity I abstract from
persistence in the stochastic behavior of the shocks
gt and ut by assuming that both are zero-mean white-noise processes.

Asymmetric Output Gap Losses
Produce a Bias Also in New Keynesian
Frameworks
The mechanism that produces the inflationary
bias in the Lucas-type transmission mechanism
depends mainly on the fact that the objective function is asymmetric in conjunction with the following: that, when choosing policy, the CB is uncertain
about the realization of shocks at the time its policy
decision is going to affect the economy. In particular,
this type of mechanism will, most likely, operate
within the framework of other transmission processes, including (linear) New Keynesian transmission processes, as long as the CB possesses the loss
function in equation (7) and is uncertain about the
relevant state of the economy. This intuitive argument is demonstrated rigorously in what follows.
The hasty reader may just take note of equation
(18) and go directly to proposition 1. Substituting
equation (4) into equation (5),
(15)    π_t = −λϕ(i_t − π_t^e) + λ x_t^e + β π_t^e + λ g_t + u_t ≡ π_t^p + λ g_t + u_t,

where x_t^e ≡ E_t x_{t+1}, π_t^e ≡ E_t π_{t+1}, and π_t^p is the rate of
inflation implicitly planned by the policymaker
when he sets the interest rate at it. Solving out for
the interest rate,
(16)    i_t = (1/(λϕ)) [−π_t + (λϕ + β) π_t^e + λ x_t^e + λ g_t + u_t].

Substituting equation (16) into equation (4), rearranging, and using the last expression in (15) to express
actual inflation, π t, in terms of its planned value, π tp,
we obtain
(17)    x_t ≡ Y_t − Y_pt = (1/λ)(π_t^p − β π_t^e + λ g_t),

which states that, given expectations and the realization of the shock gt, the output gap is more likely
to be negative the lower the planned rate of inflation.
Hence, equation (17) implies that if policymakers
desire to reduce a negative output gap, they must
plan a higher rate of inflation.
Consider now a CB whose objective is to
minimize
(18)    E_0 ∑_{t=0}^{∞} δ^t L_t,

where δ is the discount factor and Lt is given by
equation (7). Because there are no endogenous state
variables and no persistence in shocks, the minimization problem in equation (18) reduces to a series of
one-period minimization problems and the expected
values of inflation and the output gap are time invariant. I shall, therefore, omit time indices from now
on. Equation (7) implies that, in each period, the
form of the loss function depends on whether the
output gap is negative or not. Equation (17) implies
that the output gap is negative if and only if
(19)    x = (1/λ)(π^p − β π^e + λ g) < 0,

which is equivalent to
(20)    g < (1/λ)(β π^e − π^p) ≡ g_c.

In this case the loss is given by the first line in equation (7), and otherwise it is given by the second line
in that equation. Substituting (19) into equation (7)
and applying the expected value operator, the typical

one-period, time-invariant minimization problem
is to choose π p so as to minimize the following
expression:
(21)    (A/(2λ^2)) ∫_{−∞}^{g_c} (π^p − β π^e + λ g)^2 dF[g] + (1/2) E(π^p + λ g + u)^2,

where F[ g] is the density function of g and E is the
expected value operator. Differentiating with respect
to π p and rearranging yields the following policy
reaction function for the rate of inflation planned
by the CB:

(22)    π^p = A{β F[g_c] π^e − λ ∫_{−∞}^{g_c} g dF[g]} / (λ^2 + A F[g_c])
             = A{β F[g_c] π^e − λ F[g_c] E[g | g ≤ g_c]} / (λ^2 + A F[g_c]).

Because individuals understand the modus operandi
of the CB and have rational expectations, expected
inflation, π e, equals planned inflation, π p. Using
this and π p=π e in equation (22) and rearranging
yields

(23)    π^p = π^e = −λ A F[((β − 1)/λ) π^p] E[g | g ≤ ((β − 1)/λ) π^p] / (λ^2 + A(1 − β) F[((β − 1)/λ) π^p]).
This equation determines π p only implicitly
because π p also appears in the argument of the
distribution function F[.] and in the expected value
on the right-hand side of (23). It is nonetheless
possible to establish that π p must be positive. The
denominator in equation (23) is positive. Hence, the
sign of planned inflation is determined by the sign
of the numerator whose sign is opposite to that of
the conditional expected value in the numerator.
Thus, if

E[g | g ≤ ((β − 1)/λ) π^p] < 0,

then π^p must be positive. Because g has a zero
expected value, the conditional expected value,

E[g | g ≤ ((β − 1)/λ) π^p],


is negative for all possible values of π p, except for
the extreme case in which π p is equal to minus
infinity and β<1. In this case, the right-hand side
of equation (23) implies that π^p is zero, which contradicts the initial assumption that π^p is equal to
minus infinity. Hence, only positive values of planned
inflation are possible in an equilibrium with asymmetric preferences. The main conclusion is summarized in the following proposition.
Proposition 1. In the presence of asymmetric output gap objectives and CB uncertainty about future
shocks to the economy, there is an inflation bias
also in the New Keynesian framework.
This bias arises in spite of the fact that the CB
does not gain anything from having a positive output
gap. It arises instead, as was the case with a Lucas
supply function, because of a precautionary demand
for expansions by the CB. The next proposition
examines the impact of the flexibility parameter, A,
on this bias.
Proposition 2. The bias in proposition 1 is larger
the larger is the flexibility parameter, A.
Proof: Differentiating equation (23) with respect to
A,
∂π^p/∂A = −λ^3 F[((β − 1)/λ) π^p] E[g | g ≤ ((β − 1)/λ) π^p] / (λ^2 + A(1 − β) F[((β − 1)/λ) π^p])^2.

Because

E[g | g ≤ ((β − 1)/λ) π^p]


is negative and all the remaining terms are positive,
this expression is positive. QED.
Thus, as was the case with a Lucas supply function, the bias is larger the larger is the flexibility
parameter, A.
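Propositions 1 and 2 can also be checked numerically. The sketch below (not from the paper; it assumes a normal demand shock g ~ N(0, sigma_g^2) and uses illustrative parameter values) solves the implicit equation (23) by fixed-point iteration and reports how planned inflation varies with A.

```python
# Fixed-point check of equation (23) under an assumed g ~ N(0, sigma_g^2).
import numpy as np
from scipy.stats import norm

lam, beta, sigma_g = 0.3, 0.99, 0.01    # hypothetical lambda, beta, std. dev. of g

def planned_inflation(A, tol=1e-12, max_iter=10_000):
    """Iterate pi_p = -lam*A*F(c)*E[g|g<=c] / (lam^2 + A*(1-beta)*F(c)),
    with c = ((beta-1)/lam)*pi_p.  For a zero-mean normal shock,
    F(c)*E[g|g<=c] = -sigma_g*phi(c/sigma_g)."""
    pi_p = 0.0
    for _ in range(max_iter):
        c = (beta - 1.0) / lam * pi_p
        partial_exp = -sigma_g * norm.pdf(c / sigma_g)     # integral of g over (-inf, c)
        new_pi = -lam * A * partial_exp / (lam**2 + A * (1.0 - beta) * norm.cdf(c / sigma_g))
        if abs(new_pi - pi_p) < tol:
            return new_pi
        pi_p = new_pi
    return pi_p

for A in (0.0, 0.25, 0.5, 1.0):
    print(f"A = {A:4.2f}  ->  planned inflation pi_p = {planned_inflation(A):.6f}")
# pi_p = 0 when A = 0 and is positive and increasing in A otherwise, as claimed
# in propositions 1 and 2.
```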

The Additional Inflationary “Tendency”
of New Keynesian Frameworks
The previous subsection shows that the results
obtained in Section IV for a Lucas supply function
carry over to the New Keynesian framework. But in
the case of the New Keynesian framework, there is
an additional mechanism that tends to make inflation
even higher. This additional inflationary tendency
is directly related to the fact that, due to temporary
price stickiness, the CB is able to alter the real rate
of interest and through it the level of employment
and production. This happens even when the CB
knows future shocks to the economy with certainty.
The analysis in this subsection focuses on this additional inflation-creating mechanism in isolation by
assuming that the CB has full information about
relevant shocks at the time policy choices are made.
In terms of model 3 (from Section II on the specification of models, pp. 19-20) this means that the CB
knows gt and ut when it picks period t’s interest rate,
it. Because there are no endogenous state variables
and future expectations are not affected by current
policy, the minimization of the objective function
in equation (18) is again equivalent to period-by-period minimization.
In each period there are two possible alternative
interest rate rules for the CB. If the realization of
the cost shock, ut, is such that, given inflationary
expectations, the output gap is either positive or
zero when inflation is maintained at zero, the CB
picks the rate of interest that achieves the zero
inflation target. In this range the CB behaves as an
“inflation nutter,” or strict inflation targeter. If the
realization of the cost shock, ut, is such that, given
inflationary expectations, the output gap is negative
at a zero rate of inflation, the CB faces a tradeoff
between its output and its inflation objective. Hence,
given inflationary expectations, it picks the interest
rate that equalizes the marginal loss from inflation
to the marginal loss from a negative output gap. In
this range the CB behaves as a flexible inflation
targeter. Equations (4) and (5) imply that, at a zero
inflation rate,24
(24)

x_t ≥ 0 ⇔ u_t + β π_t^e ≤ 0 ⇔ CB is strict
x_t < 0 ⇔ u_t + β π_t^e > 0 ⇔ CB is flexible.

In the first case the CB just picks the nominal rate
of interest that achieves the zero inflation target.
Equations (4) and (5) imply that in this case the
interest rate rule is25
(25)    i_t^s = π_t^e + (1/ϕ)[g_t + x_t^e + (1/λ)(u_t + β π_t^e)].

In the second case there is a meaningful intraperiod tradeoff between the inflation and the output
gap targets. Hence, the CB picks the nominal interest
rate so as to minimize
(26)    L_t = (1/2)(A x_t^2 + π_t^2)

subject to equation (5). The interest rate rule that
emerges in this case is given by26

(27)    i_t^f = π_t^e + (1/ϕ)[g_t + x_t^e + (λ/(A + λ^2))(u_t + β π_t^e)].

24. Non-zero values of the demand shock, g_t, and of the expected future output gap, x_t^e, produce variability in both inflation and the output gap. Hence, non-zero realizations of these variables do not create a tradeoff between output and inflation variability and it pays to fully offset them. As a consequence, the sign of the output gap when inflation is maintained at zero is independent of g_t and of the expected future output gap, x_t^e.

25. The superscripts s and f that are attached to i_t indicate that equations (25) and (27) refer to the interest rate rules of strict and flexible inflation targeters, respectively.

26. Equation (27) is obtained by minimizing equation (26) with respect to x_t, using the resulting first-order condition to solve for x_t, equating this expression with equation (4), and solving for the implied nominal rate of interest, i_t^f.

Comparison of equations (25) and (27) reveals
that, for the same realizations of current shocks and
the same values of the expected future output gap
and inflation, both the nominal and the real interest
rates are lower in the second case. Furthermore, the
difference between the two interest rates is larger
the larger is the flexibility parameter, A. Using equation (27) in the expression for inflation (equation
(5)), the rate of inflation in the range u_t + β π_t^e > 0 is
given by
(28)    π_t = (A/(A + λ^2))(u_t + β π_t^e).

The rate of inflation does not respond to the
demand shock or to the expected future output gap
because the full offsetting of those variables improves
performance on both the inflation and the output
gap objectives. On the other hand, some of the cost
shock and inflationary expectations are allowed to
pass through to inflation because, in the case of
those variables, there is a tradeoff between the inflation and the output gap objectives. Because, in the
range ut+βπ te ≤ 0 the CB behaves as a strict inflation
targeter, inflation in this range is always at the zero
target. Using the interest rate rules for the two ranges
in equation (4) and rearranging, the output gaps in
the two ranges are given, respectively, by

(29)    x_t^s = −(1/λ)(u_t + β π_t^e)
        x_t^f = −(λ/(A + λ^2))(u_t + β π_t^e).

Thus, in the first range the output gap is always nonnegative and in the second it is always negative, but
not by as much as it would have been in the absence
of some output stabilization by the CB.
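The regime-contingent rules in equations (24), (28), and (29) can be summarized in a few lines of code. The sketch below is mine, not the paper's; parameter values are purely illustrative, and the function simply maps a cost shock and expected inflation into the inflation and output gap outcomes described above.

```python
# Per-period outcomes implied by equations (24), (28), and (29); parameters are illustrative.
def period_outcome(u_t, pi_e, A=0.5, lam=0.3, beta=0.99):
    """Return (inflation, output gap) for one period of the model."""
    pressure = u_t + beta * pi_e
    if pressure <= 0:                                # strict range in equation (24)
        inflation = 0.0                              # inflation held at the zero target
        gap = -pressure / lam                        # x_t^s in equation (29); non-negative
    else:                                            # flexible range
        inflation = A * pressure / (A + lam**2)      # equation (28)
        gap = -lam * pressure / (A + lam**2)         # x_t^f in equation (29); negative
    return inflation, gap

print(period_outcome(u_t=-0.01, pi_e=0.005))   # strict: zero inflation, positive gap
print(period_outcome(u_t=+0.01, pi_e=0.005))   # flexible: positive inflation, negative gap
```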
Demonstration That Expected Inflation Is
Positive. Because there is no persistence in shocks
and no endogenous state variables, the expected
value of the rate of inflation is the same for any
horizon and is also the same in each period.27
27. Essentially the no-persistence assumption shuts off any adjustment in inflationary expectations in response to changes in exogenous economic conditions.

Thus,

(30)    E_{t−1}π_t = E_t π_{t+1} = … ≡ Eπ ≡ π^e,

so the time index attached to the expectation can
be deleted. It follows from equation (28), and from
the fact that in the range ut ≤ –βπ te inflation is zero,
that
(31)    π^e = ∫_{−∞}^{−βπ^e} 0 dF(u) + ∫_{−βπ^e}^{∞} (A/(A + λ^2))(u + βπ^e) dF(u)
             = A ∫_{−βπ^e}^{∞} u dF(u) / {A[1 − β(1 − F(−βπ^e))] + λ^2},

where F(u) is the distribution function of u and
where, without risk of confusion, the time index
has been suppressed because the distribution of u
is time invariant. This expression determines the
expected rate of inflation, π e, but only implicitly
because π e also appears on the right-hand side of
the equation. It is nonetheless possible to establish
that expected inflation is positive, even without an
explicit solution for it. Note that π e=–∞ cannot be
a solution because, for that value of π^e, the right-hand side of the equation would be zero and the
left-hand side –∞. Hence, –βπ e>–∞. Because the
expected value of u is zero, it follows that the integral
on the extreme right-hand side of equation (31) is
positive, establishing that both average and expected
inflation are positive.
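The implicit equation (31) is easy to solve numerically. The sketch below (mine, not the paper's; it assumes a normal cost shock u ~ N(0, sigma_u^2) with illustrative parameter values) iterates on the fixed point and confirms that the resulting expected inflation is strictly positive.

```python
# Fixed-point solution of equation (31) under an assumed u ~ N(0, sigma_u^2).
import numpy as np
from scipy.stats import norm

A, lam, beta, sigma_u = 0.5, 0.3, 0.99, 0.01    # illustrative values

def expected_inflation(tol=1e-12, max_iter=10_000):
    """Iterate pi_e = A * E[u; u > -beta*pi_e] / (A*(1 - beta*(1 - F(-beta*pi_e))) + lam^2).
    For a zero-mean normal shock, E[u; u > c] = sigma_u * phi(c/sigma_u)."""
    pi_e = 0.0
    for _ in range(max_iter):
        c = -beta * pi_e
        upper_tail = sigma_u * norm.pdf(c / sigma_u)      # integral of u over (c, inf)
        denom = A * (1.0 - beta * (1.0 - norm.cdf(c / sigma_u))) + lam**2
        new_pi = A * upper_tail / denom
        if abs(new_pi - pi_e) < tol:
            return new_pi
        pi_e = new_pi
    return pi_e

print(f"expected inflation pi_e = {expected_inflation():.6f}")   # strictly positive
```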
At first blush one may be tempted to conclude
from this finding that there is an inflationary bias.
But this is premature because in the present sticky-price framework the average positive rate of inflation
may also be associated with a higher level of output.
It is thus more accurate to refer to it as an “inflationary tendency” rather than an inflationary bias. The
following subsection shows that this inflationary
tendency is associated with an output gap that may
be positive on average.
The Average Value of the Output Gap. As
was the case with average inflation, because there
is no persistence in shocks and no endogenous
state variables, the expected value of the output
gap is the same for any horizon and is also the
same in each period. I will therefore omit the time
index and just denote it by xe ≡ Ex. Using equation
(29),
(32)    x^e ≡ −(1/λ) ∫_{−∞}^{−βπ^e} (u + βπ^e) dF(u) − (λ/(A + λ^2)) ∫_{−βπ^e}^{∞} (u + βπ^e) dF(u).

Expanding and using equation (31), this expression
can be shown to be equal, after some algebra, to
(33)    x^e = ((1 − β)/λ) π^e.

Thus, provided β<1 and because average inflation
is positive, the average output gap is positive as well.
But if β=1, the average output gap is zero. It is therefore important to have an idea about the meaning
and magnitude of the parameter β. Gali and Gertler
(1999, p. 207) refer to it as the subjective discount
factor and provide empirical estimates suggesting
that it is about two standard errors below 0.99, which
is the typical value used for this parameter in the
literature (op. cit. footnote 16). Hence, existing evidence is not incompatible with the possibility that
1– β>0. It appears therefore that in a New Keynesian
world it is possible to obtain permanent gains in
output at the cost of permanently higher average
inflation. This obviously violates the long-run neutrality of monetary policy and may appear surprising at first sight. To understand the deeper origin of
this result, it is useful to digress and characterize
the behavior of the average values of inflation and
of the output gap when the CB is a strict inflation
targeter in the entire range of shock realizations.
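Before turning to that benchmark, the long-run relation in equation (33) can be verified by simulation. The sketch below is mine, not the paper's; it assumes a normal cost shock and illustrative parameters, finds the rational-expectations level of inflation by iterating on the average of equation (28), and then compares the average output gap with ((1 − β)/λ)π^e.

```python
# Monte Carlo check of equation (33) under illustrative parameters and u ~ N(0, sigma_u^2).
import numpy as np

A, lam, beta, sigma_u = 0.5, 0.3, 0.97, 0.01
rng = np.random.default_rng(1)
u = rng.normal(0.0, sigma_u, size=2_000_000)

pi_e = 0.0
for _ in range(200):                          # iterate expectations to a fixed point
    pressure = u + beta * pi_e
    inflation = np.where(pressure > 0, A * pressure / (A + lam**2), 0.0)   # (28) or zero
    pi_e = inflation.mean()                   # rational expectations: pi_e = average inflation

pressure = u + beta * pi_e                    # outcomes at the converged expectation
gap = np.where(pressure > 0,
               -lam * pressure / (A + lam**2),    # flexible range, equation (29)
               -pressure / lam)                   # strict range, equation (29)

print(f"average inflation     : {pi_e:.6f}")
print(f"average output gap    : {gap.mean():.6f}")
print(f"(1 - beta)/lam * pi_e : {(1 - beta) / lam * pi_e:.6f}")   # equation (33)
```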
Average Inflation and Output Gaps Under a
Strict Inflation Targeter as a Benchmark. In this
case the flexibility parameter, A, is equal to zero
and the interest rate rule in equation (25) applies
everywhere. Inserting the condition A=0 into equations (31) and (33) we obtain
(34)    x^e = π^e = 0.

Thus, under a strict inflation stabilizer, expected
inflation and the expected output gap are both at
their zero target values. Inserting equation (34) into
equation (25), the interest rate rule of a strict inflation stabilizer is
i_t^s = π_t^e + (1/ϕ)[g_t + (1/λ) u_t],

which implies that the expected value (as well as
the average value) of the real interest rate is zero.

Implications for Degree of Flexibility in
Targeting Inflation for Real Rates of Interest.

What are the implications for the average value of
real rates? Is it going to be above or below the average value of the real rate under strict inflation
targeting? There are two offsetting effects. On one
hand, because (1/λ)>(λ /(A+λ2)), it follows from a
comparison of equations (25) and (27) that, for the
same shock realizations and expectations the real
rate under flexible targeting is always lower than
under strict targeting in the range of negative output gaps. This effect tends to make the average value
of the real rate under flexible targeting lower than
under strict targeting. On the other hand, because
inflationary expectations are higher under flexible
targeting, a higher real rate is needed to achieve a
given rate of inflation under flexible targeting than
under strict targeting. This effect tends to make the
real rate higher under flexible targeting. The final
relation between the average level of real rates
under strict versus flexible inflation targeting
depends, therefore, on the relative strength of
those two effects. The high real rates experienced
during periods of disinflation suggest that, at least
during such periods, the second effect has dominated the first one.

Summary Thoughts on the Long-Run
Nonneutrality of the New Keynesian
Framework and the Implications for
Transparency
The analysis above suggests that, in a New
Keynesian economy, a flexible inflation stabilizer
with asymmetric preferences induces more inflation,
on average, but also more output (at least when
β<1) than a strict inflation stabilizer. This implies
that, contrary to model 1 (from Section II on the
specification of models, pp. 19-20), in such an economy the CB faces (possibly within some restricted
range of low rates of inflation) a long-run tradeoff
between the average level of inflation and the average
level of the output gap.28 The ability to affect output
arises because, due to temporarily sticky prices,
the CB can influence the real rate by means of the
nominal rate of interest.

28. The qualification restricting the statement to low rates of inflation refers to the possibility that, when inflation increases beyond a certain threshold, the intervals between price adjustments become shorter. This ultimately pushes β toward 1 and eliminates any long-run tradeoff between average inflation and the average output gap.
For a flexible inflation targeter with asymmetric
preferences it is desirable to have a positive, rather
than a zero, average rate of inflation in order to be
able to reduce the magnitude of negative output
gaps when such gaps occur. As a consequence, the
average output gap, which was zero under a strict
inflation targeter, becomes positive. It is therefore
not quite appropriate to refer to the higher inflation
produced by the flexible targeter as a “bias.” I refer
to it instead as an “inflationary tendency.” Ultimately,
whether the CB or society prefers more inflation
and more stabilization of negative output gaps to
less inflation and less stabilization of such gaps is a
matter of taste.
But, to my knowledge, no CB has ever publicly
acknowledged that there might be such a tradeoff.
Thus, to the extent that there are at least some CBs
with asymmetric preferences, they have been
remarkably silent and opaque about the tradeoff
between output stabilization and inflation and about
their attitude to alternative values of the output gap.
For example, the public stance taken by most explicit
inflation targeters is that there is no relation between
the degree of flexibility in targeting inflation and
the average rate of inflation.
One possible reason for this position is that
public acknowledgment of asymmetric attitudes to
positive and negative output gaps may raise inflationary expectations and necessitate a higher average
level of real rates, which CBs fear will depress the
average level of output and investment. Such a fear
is irrational in the models I have presented because,
by the rational expectations assumption, individuals
know what the true objectives of the CB are in any
case. But once this extreme informational assumption is released for at least some individual price
setters in the economy, it becomes rational for the
CB to de-emphasize institutional factors that might
raise inflationary expectations. Simon has been
emphasizing cognitive and related limitations on
the individual’s ability to absorb information for
many years.29 In the presence of such cognitive
threshold effects within a sufficiently large fraction
of price setters, it is rational for CBs to de-emphasize
a high flexibility parameter and asymmetric preferences in order to maintain credibility.

29. A summary view with implications for economics appears in Simon (1992). A recent enlightening discussion of Simon's view for transparency in monetary policy appears in Winkler (2000).

VI. CONCLUDING REMARKS
The main messages of the paper can be summarized as follows. First, contemporary Western CBs are
rather opaque about the economic models they use
in reaching policy decisions, as well as about major
attributes of their objective functions. Second,


although Western CBs have recently been quite
precise about their inflation targets, there is substantial haziness about output targets and about the
degree of flexibility allowed in targeting inflation.
Third, in a world characterized by uncertainty about
the future state of the economy, the shape of the
loss function over the entire range of inflation and
of output gaps shapes policy choices. All CBs have
been remarkably silent about that. This paper makes
a case for the existence of asymmetric attitudes to
positive and to negative output gaps, at least for
some CBs.30

30. Casual observation suggests that most politicians definitely have asymmetric attitudes toward positive and negative output gaps. During periods of disinflation and attempted buildups of credibility, the CB may behave as if it suffers a higher loss from an upward than from a downward deviation of inflation from target. Nobay and Peel (1998) analyze the case in which both the inflation and the output gap terms in the loss function of the CB are asymmetric. Cukierman and Muscatelli (2002) provide a general framework and related empirical work that feature both types of asymmetries and make it possible to identify the dominant asymmetry in each country.
It shows, both for sticky- and for flexible-price
transmission mechanisms, that in the presence of
such asymmetries and uncertainty about the upcoming state of the economy there is an inflation bias
even when the CB targets potential output. The
reason is that such CBs are willing to tolerate some
higher inflation in order to reduce the risk of unexpectedly deep recession. This precautionary demand
for expansions is analogous to the precautionary
saving motive in the theory of consumption under
uncertainty, as generalized by Kimball (1990).
This “new inflation bias” result implies that, even
if Blinder (1998), Vickers (1998), and McCallum
(1995, 1997) are all right in believing that contemporary CBs target potential output, the risks of inflation are not gone. Although, as in KPBG, the bias
arises because of the CB concern (at least in some
states of nature) about the output gap, the new bias
does not rely on dynamic inconsistency. The origin
of the bias resides, instead, in the precautionary
behavior of the CB, with respect to recessions in a
world of uncertainty, in conjunction with the public’s
awareness of this asymmetry in CB objectives.
Fourth, in sticky-price frameworks with forward-looking pricing there is, within some range, a long-run tradeoff between average inflation and average
output. Fifth, theory predicts that CBs with asymmetric preferences will locate at a point along this
tradeoff that is characterized by both positive average
inflation and a positive output gap. This finding
implies that asymmetrically inclined policymakers
who believe in sticky-price models of the economy
rather than in flexible-price expectations-augmented
Phillips curves are inherently more inflationary. But
this does not mean they have a larger bias, because
their policies also bring, under sufficiently low inflation, a larger level of output.
Following conventional rational-expectations
practice, the new inflation bias story presented here
assumes that all agents in the economy are perfectly
rational and fully aware of what central bankers
are doing. Individuals familiar with the decisionmaking process within CBs may argue that most
policymakers are not solving an explicit expected
utility maximization problem as postulated here.
Although probably true, this observation does not
necessarily invalidate the relevance of the new inflation bias result. Policymakers can hedge against
deeper-than-wanted recessions by means of various
rules of thumb and institutional arrangements. The
next paragraph provides an illustration of such a
rule of thumb.
The view, currently held by some European CBs,
that current monetary policy can affect inflation
only in the second year after the implementation
of the policy may be thought of as such a built-in
institutional hedging device mainly against unexpected recessions. This device builds in a “flexible
inflation targeting” hedging procedure into the
policy process from the outset. The reason is that,
given this belief, it would be foolish to immediately
attempt to put inflation back on target following,
say, a cost shock. But the belief leading to this policy
prescription of flexible targeting may be disputed.
Woodford (1999), for example, as well as many New
Keynesians, appears to believe that monetary policy
can have an immediate impact on current inflation
via expected inflation. It thus is not unreasonable
to believe that part of the “two-year lag” institutional
belief is motivated by hedging behavior in the face
of uncertainty and asymmetries in the attitudes of
CBs about positive and negative output gaps.
Part of the haziness regarding objectives is
understandable in view of the fact that, in New
Keynesian models, inflationary expectations affect
current pricing decisions.31 In particular, a flexible
inflation targeter with a nonnegligible flexibility
parameter has good reason to appear less flexible
than he really is. This may have underlain the traditional, historical public position of the Bundesbank

according to which it was not concerned about output, as well as a recent observation by Mervyn King
from the Bank of England. King’s argument is that
it is difficult to distinguish, in practice, between
strict and flexible inflation targeters because both
raise interest rates when inflation and output are
above target. I doubt that a strict inflation targeter
would have made such a statement. As a matter of
fact, CBs with asymmetric output gap concerns have,
in view of the new inflation bias result presented
here, a credibility reason for not highlighting this
fact. By contrast, simple monetary policy games
with signaling imply that a strict inflation targeter
would like to send messages that would make his
identity clear to the public.32 Such a “type” is unlikely
to claim that it is not possible to distinguish flexible
from strict inflation targeters.
Lack of transparency about objectives is probably
more easily remedied than lack of transparency
about economic models because the latter is largely
due to lack of consensus about the true model of
the economy within the economic profession. It
follows that significant advances in our understanding of the channels of monetary policy are likely to
substantially raise the transparency about models
used and with it the accountability of CBs.
During the second part of the 1990s, many
Western economies experienced remarkably low
rates of inflation. Particularly striking is the experience of the United States, in which inflation was
quite low in spite of the powerful and persistent
expansion it went through during the last decade.
Is this all due to higher CB independence and a
stronger focus on price stability? It is likely that this
is part of the explanation, but not the whole story.33
This paper suggests an additional possibility.
Believing that the probability of recession is low,
those banks behaved nearly as strict inflation targeters would have. This conjecture is supported by
the fact that inflation was low also in countries
whose CBs are flexible inflation targeters (with possibly asymmetric preferences). If correct, this conjecture also implies that, when the fears of recession

increase again, inflation may take off as the (currently
latent) new inflationary bias of those banks comes
back into being.

31. Jensen (2000) shows that in such cases full transparency about objectives is not necessarily desirable.

32. This is the implication of formal models of monetary policy games with private information. Two simple formulations appear in Vickers (1986) and in Cukierman (2000b).

33. Cukierman and Lippi (2001) identify an additional factor. The permanent effects of the "new economy" in the United States were initially underestimated, leading to overestimates of the output gap and, consequently, to more restrictive monetary policies.
Finally, to maintain the paper within manageable
proportions, I deliberately avoided a systematic
discussion of two important questions. Is full transparency feasible, and is it always desirable? The
answer to the first question is likely to be “no,” as
suggested by Vickers (1998) and Winkler (2000).
This still leaves open a question about whether it is
desirable to extend transparency as far as the feasibility constraints would allow. The answer to this
question is by no means clear cut. Recent arguments
for and against doing that appear in Faust and
Svensson (2001), Geraats (1999), Jensen (2000), and
Cukierman (2001) and are partially summarized in
the last paper. Fuller understanding of the benefits
and costs of transparency must await further economic outcomes as well as academic work.

REFERENCES
Barro, Robert J. and Gordon, David B. “A Positive Theory of
Monetary Policy in a Natural Rate Model.” Journal of
Political Economy, 1983, 91(4), pp. 589-610.
Batini, Nicoletta and Haldane, Andrew G. “Forward Looking
Rules for Monetary Policy,” in John B. Taylor, ed., Monetary
Policy Rules. Chicago: University of Chicago Press, 1999.
Bean, Charles. “The New UK Monetary Arrangements: A View
from the Literature.” Unpublished manuscript, Centre
for Economic Performance, LSE, January 2000.
Blinder, Alan S. Central Banking in Theory and Practice.
Cambridge, MA: MIT Press, 1998.
Buiter, Willem H. “Alice in Euroland.” Journal of Common
Market Studies, June 1999, 37(2), pp. 181-209.
Calvo, Guillermo A. “Staggered Prices in a Utility-Maximizing
Framework.” Journal of Monetary Economics, 1983, 12(3),
pp. 383-98.
Clarida, Richard and Gertler, Mark. “How the Bundesbank
Conducts Monetary Policy,” in C.D. Romer and D.H.
Romer, eds., Reducing Inflation: Motivation and Strategy.
Chicago: University of Chicago Press, 1997.
___________ ; Gali, Jordi and Gertler, Mark. “The Science of
Monetary Policy: A New Keynesian Perspective.” Journal of
Economic Literature, December 1999, 37(4), pp. 1661-707.


Cukierman, Alex. Central Bank Strategy, Credibility, and
Independence: Theory and Evidence. Cambridge, MA: MIT
Press, 1992.
___________. “The Inflation Bias Result Revisited.”
Unpublished manuscript, Tel-Aviv University, April 2000a.
<www.tau.ac.il/~alexcuk/pdf/infbias1.pdf>.
___________. “Establishing a Reputation for Dependability
by Means of Inflation Targets.” Economics of Governance,
February 2000b, 1(1), pp. 53-76.
<www.tau.ac.il/%7Ealexcuk/pdf/targt7991.pdf>.
___________. “Accountability, Credibility, Transparency and
Stabilization Policy in the Eurosystem,” in Charles
Wyplosz, ed., The Impact of EMU on Europe and the
Developing Countries. Oxford: Oxford University Press,
2001, pp. 40-75.
____________ and Lippi, Francesco. “Endogenous Monetary
Policy with Unobserved Potential Output.” Presented at
the NBER November/December 2001 research conference
Macroeconomic Policy in a Dynamic, Uncertain Economy.
<www.tau.ac.il/~alexcuk/pdf/upo-nber.pdf>.
___________ and Muscatelli, V.A. “Asymmetric Responses
in Monetary Policy: Evidence and Consequences for the
Inflation Bias.” Unpublished manuscript, Tel-Aviv
University and University of Glasgow, February 2002.
<www.tau.ac.il/~alexcuk/pdf/cukierman-muscatelli1.pdf>.
de Haan, Jakob and Eijffinger, Sylvester C.W. “The
Democratic Accountability of the European Central Bank:
A Comment on Two Fairy-Tales.” Journal of Common
Market Studies, 2000, 38(3), pp. 394-407.
Dolado, Juan J.; Maria-Dolores, Ramon and Naveira, M.
“Asymmetries in Monetary Policy: Evidence for Four
Central Banks.” Discussion Paper No. 2441, Centre for
Economic Policy Research, April 2000.
Faust, Jon and Svensson, Lars E.O. “Transparency and
Credibility: Monetary Policy with Unobservable Goals.”
International Economic Review, 2001, 42(2), pp. 369-97.
Friedman, Milton. “The Role of Monetary Policy.” American
Economic Review, 1968, 58, pp. 1-17.
Gali, Jordi and Gertler, Mark. “Inflation Dynamics: A
Structural Econometric Analysis.” Journal of Monetary
Economics, 1999, 44(2), pp. 195-222.


Geraats, Petra M. “Transparency and Reputation: Should
the ECB Publish Its Inflation Forecasts?” Presented at the
ECB conference Monetary Policy-Making Under Uncertainty,
Frankfurt, Germany, December 1999.
Gerlach, Stefan. “Asymmetric Policy Reactions and Inflation.”
Unpublished manuscript, Bank for International
Settlements, April 2000.
Hansen, Lars P. and Sargent, Thomas. “Wanting Robustness
in Macroeconomics.” Unpublished manuscript, June 2000.
Hodrick, Robert J. and Prescott, Edward C. “Postwar U.S.
Business Cycles: An Empirical Investigation.” Journal of
Money, Credit, and Banking, February 1997, 29(1), pp. 1-16.
Issing, Ottmar. “The Eurosystem: Transparent and
Accountable or ‘Willem in Euroland’.” Journal of Common
Market Studies, September 1999, 37(3), pp. 503-19.
Jensen, H. “Optimal Degrees of Transparency in Monetary
Policymaking.” Unpublished manuscript, University of
Copenhagen, August 2000.
___________. “Targeting Nominal Income Growth or
Inflation?” Unpublished manuscript, University of
Copenhagen, August 2001 (forthcoming in American
Economic Review).
Kimball, Miles S. “Precautionary Saving in the Small and
in the Large.” Econometrica, January 1990, 58, pp. 53-73.
King, Mervyn. “The Inflation Target Five Years On.” Bank
of England Quarterly Bulletin, November 1997, 37(4), pp.
434-42.

McCallum, Bennett T. “Two Fallacies Concerning Central
Bank Independence.” American Economic Review Papers
and Proceedings, May 1995, 85(2), pp. 207-11.
___________. “Crucial Issues Concerning Central Bank
Independence.” Journal of Monetary Economics, June 1997,
39(1), pp. 99-112.
___________ and Nelson, Edward. “Performance of
Operational Policy Rules in an Estimated Semiclassical
Structural Model,” in John B. Taylor, ed., Monetary Policy
Rules. Chicago: University of Chicago Press, 1999.
Nobay, A.R. and Peel, David A. “Optimal Monetary Policy
in a Model of Asymmetric Central Bank Preferences.”
Unpublished manuscript, Financial Markets Group, LSE,
1998.
Rudebusch, Glenn D. “Assessing Nominal Income Rules for
Monetary Policy with Model and Data Uncertainty.”
Unpublished manuscript, Federal Reserve Bank of San
Francisco, March 2001 (forthcoming in Economic Journal
and available at <www.frbsf.org/economics/economists/
grudebusch/index.html>).
Rogoff, Kenneth. “The Optimal Degree of Commitment to
a Monetary Target.” Quarterly Journal of Economics,
November 1985, 100(4), pp. 1169-89.
Ruge-Murcia, Francisco J. “The Inflation Bias When the
Central Bank Targets the Natural Rate of Unemployment.”
Unpublished manuscript, University of Montreal,
September 2001.
Sargent, Thomas J. The Conquest of American Inflation.
Princeton: Princeton University Press, 1999.

Kydland, Finn E. and Prescott, Edward C. “Rules Rather
Than Discretion: The Inconsistency of Optimal Plans.”
Journal of Political Economy, 1977, 85(3), pp. 473-91.

Simon, H.; Egidi, M.; Marris, R. and Viale, R. Economics,
Bounded Rationality and the Cognitive Revolution. Aldershot,
UK: Edward Elgar, 1992.

Layard, Richard; Nickell, Stephen and Jackman, Richard.
Unemployment—Macroeconomic Performance and the
Labour Market. Oxford: Oxford University Press, 1991.

Staiger, Douglas; Stock, James H. and Watson, Mark W. “How
Precise Are Estimates of the Natural Rate of Unemployment?”
in C.D. Romer and D.H. Romer, eds., Reducing Inflation:
Motivation and Strategy. Chicago: University of Chicago
Press, 1997.

Lucas, Robert E. Jr. “Expectations and the Neutrality of
Money.” Journal of Economic Theory, 1972, 4(2), pp. 103-24.
___________. “Some International Evidence on Output
Inflation Tradeoffs.” American Economic Review, 1973,
63(3), pp. 326-34.

Svensson, Lars E.O. “Inflation Forecast Targeting:
Implementing and Monitoring Inflation Targets.” European
Economic Review, June 1997, 41(6), pp. 1111-46.
Vickers, John. “Signalling in a Model of Monetary Policy


with Incomplete Information.” Oxford Economic Papers,
November 1986, 38(3), pp. 443-55.
___________. “Inflation Targeting in Practice: The UK
Experience.” Bank of England Quarterly Bulletin, 1998,
38(4), pp. 368-75.
Winkler, Bernhard. “Which Kind of Transparency? On the
Need for Clarity in Monetary Policy-Making.” Working
Paper No. 26, European Central Bank, August 2000.
Woodford, Michael. “Optimal Monetary Policy Inertia.”
Unpublished manuscript, Princeton University, 1999.
<www.princeton.edu/~woodford/inertia.pdf>.
___________. “Interest and Prices.” Unpublished manuscript,
Princeton University, January 2002. <www.princeton.edu/
~woodford>.


Commentary
Carl E. Walsh

Carl E. Walsh is a professor of economics at the University of California, Santa Cruz, and a visiting scholar at the Federal Reserve Bank of San Francisco. Any opinions expressed are those of the author and not necessarily those of the Federal Reserve Bank of San Francisco or the Federal Reserve System.

© 2002, The Federal Reserve Bank of St. Louis.

It is very appropriate that a conference on
monetary policy transparency begin with a
paper by Alex Cukierman. His 1986 paper with
Allan Meltzer was the first modern treatment of
transparency and the model developed in that paper
continues to serve as the basic framework for much
of the recent work in this area.
Economists at most major central banks seem
to feel the average inflation bias that occupied so
much space in academic journals has been conquered. Whether it is because they now know to
just do the right thing (McCallum 1995), because
they target only the natural rate of output (Blinder,
1998; Svensson, 1999), or because they have gained
reputations as inflation fighters through increased
transparency and greater accountability is less certain. While many central banks have adopted operating procedures that are designed to provide the public
with clearer and more complete information about
policy decisions, and this increased transparency
is often cited as critical for inflation targeters,
Cukierman argues that transparency is still incomplete. This is true even among central banks that are
quite transparent along some dimensions, publicly
announcing inflation targets, for instance. This incompleteness limits the ability of the public to hold
monetary policymakers accountable for their actions.
Cukierman highlights two aspects of the policy
environment that remain opaque—models and
objectives. Emphasizing the role of objectives in the
second half of his paper, Cukierman explores the
implications for inflation of asymmetric preferences
and, specifically, the case in which, at a given inflation rate, output expansions are viewed as beneficial
while contractions are viewed as costly.
Cukierman notes that the different notions of
the output gap implicit in alternative models is one
source of policy opaqueness. First, I want to develop
more formally the distinction between alternative
measures of the output gap and argue that different
economic models and different definitions of the


output gap lead to different policy objectives. If
central banks are opaque about the models because
of uncertainty about the true transmission mechanism of policy, then this will also be reflected in
uncertainty (and therefore opaqueness) about the
objectives of policy. Thus, uncertainty about the true
economic model and opaqueness about policy objectives are intertwined. I then show that a model
commonly used in the recent literature to analyze
policy transparency arises naturally when the central
bank targets the wrong output gap. Turning to asymmetric preferences, I provide a graphical representation of Cukierman’s model that helps to illustrate
why a positive average inflation rate arises in equilibrium, and I then touch on the nonneutrality of
money in the New Keynesian model he uses.

TRANSPARENCY: MODELS AND
OBJECTIVES
Cukierman argues that even central banks
such as the Bank of England—that is, central banks
thought of as being very transparent—are in fact
still fairly opaque because they are not transparent
about either their exact policy objectives or the
models they use in the decisionmaking process. It
might seem hard to reconcile this view of central
banks with the general perception that monetary
policymaking in many countries has become more
transparent—after all, if we think of policymakers
as solving an optimizing problem, that problem is
characterized by the policymakers’ objectives and
the constraints they face, given by their model of
how the economy operates. So if central banks are
not transparent about either their objectives or their
constraints, what is there for them to be transparent
about? Clearly inflation targeters are transparent
about at least some of their objectives. As Cukierman
notes, however, even inflation targeters appear to
care about output objectives, yet none have made
these concerns explicit.

The Lucas Supply Curve Versus New
Keynesian Inflation Adjustment
Opaqueness about models and opaqueness
about objectives are not independent—the choice
of a particular model can determine the appropriate
objectives of policy. I want to illustrate this point
using two of the alternative models Cukierman sets
out. Cukierman actually discusses three alternative
models of the monetary transmission mechanism
to make his point that one reason central banks are

not transparent about the models they use is that
economists have not reached agreement on the
“correct model” of the economy. The three models
are (i) a monetarist model based on a Lucas-type
transmission process in which it is monetary surprises that matter, (ii) a backward-looking model of
sticky price adjustment, and (iii) a forward-looking
New Keynesian model of sticky prices. I will focus
on the first and third of these models. In their most
basic form, these models imply different objectives
for monetary policy. Thus, a lack of transparency
about the central bank’s model inevitably also
reduces the transparency of its objectives.
The key equation that distinguishes the alternative frameworks links inflation and output. In the
Lucas supply curve, one has
(1)   x_t^L = \alpha (\pi_t - E_{t-1} \pi_t)

and

x_t^L = y_t - y_{nt},

where inflation is denoted by \pi, the actual (log) level of real output is y_t, y_{nt} is the log natural rate of output (both defined as deviations around the steady-state level of output), and x_t^L is the gap between actual output and the natural rate. In the basic New Keynesian model,

(2)   \pi_t = \beta E_t \pi_{t+1} + \kappa x_t^{NK}

and

x_t^{NK} \equiv y_t - y_{ft},

where y_{ft} is the log output level that would arise in the absence of nominal rigidities (expressed as a deviation around the steady-state level) and x_t^{NK} is the gap between actual output and the flexible-price equilibrium output level.1
As Cukierman notes, the two models do not
necessarily imply the same definition of the output
gaps x_t^L and x_t^{NK}, nor does either of these theoretical
constructs correspond closely to standard empirical
methods of measuring output gaps. The first issue
to address is the relationship between these alternative definitions of the output gap. The appropriate
objective of monetary policy implied by these two
models differs; so, if central banks, perhaps because
of uncertainty about the structure of the economy,
are opaque about their model, it will be difficult to
be transparent about their objectives.
Output Gaps. The output gap is the difference between actual output and some reference output level. Cukierman draws a distinction between the appropriate definition of this reference level in the Lucas neo-monetarist approach and in the New Keynesian approach. While measurement issues arise in trying to make operational any concept of the output gap, I think economic theory provides some guidance as to which one should be the focus of policy.
To contrast alternative interpretations of the output gap, it will be useful to add some more structure to the model. Suppose the aggregate production function takes the form

(3)   Y_t = e^{z_t} N_t^a,

where z is an aggregate productivity disturbance and N_t is aggregate employment. The utility of the representative agent is

(4)   U = \sum_{i=0}^{\infty} \beta^i \left[ \frac{C_{t+i}^{1-\sigma}}{1-\sigma} - \chi \frac{N_{t+i}^{1+\eta}}{1+\eta} \right].

In the absence of any nominal rigidities, labor market equilibrium would be determined by the two
conditions
(5)   a e^{z_t} N_t^{a-1} = \theta \left( \frac{W_t}{P_t} \right)

and

(6)   \mu \left( \frac{\chi N_t^{\eta}}{C_t^{-\sigma}} \right) = \left( \frac{W_t}{P_t} \right),

where 1≤ θ<∞ and 1 ≤ µ<∞ are mark-ups in the
goods and labor markets arising from the presence
of monopolistic competition. If both the goods and
labor markets are characterized by perfect competition, θ=µ=1 and (5) and (6) reduce to the familiar
condition that the marginal product of labor equals
the marginal rate of substitution between leisure
and consumption.
Letting a subscript f denote the equilibrium in the absence of nominal rigidities, and noting that in the absence of investment and government purchases C_t = Y_t = e^{z_t} N_t^a, the flexible-wage and flexible-price equilibrium level of output is

Y_{ft} = e^{z_t} N_{ft}^a = \left( \frac{a}{\theta\mu\chi} \right)^{\frac{a}{1+\eta-a(1-\sigma)}} e^{\frac{1+\eta}{1+\eta-a(1-\sigma)} z_t}.

Expressed in log terms as a deviation around the steady-state,2

(7)   y_{ft} = \left( \frac{1+\eta}{1+\eta-a(1-\sigma)} \right) z_t \equiv \gamma z_t.

1
The inflation adjustment equations in recent New Keynesian models imply that inflation is related to expected future inflation and real marginal cost. Real marginal cost is then related to the output gap to yield an equation such as (2) (see Galí and Gertler, 1999). Cukierman identifies the gap in the New Keynesian model as output minus potential. However, the standard definition of the gap in recent New Keynesian models is the difference between (log) output and the log of the flexible-price equilibrium level of output. This does not correspond to "potential output" in the sense that Cukierman uses it as reflecting long-run supply factors.

Recall that in New Keynesian models, the output
gap is identified with the deviation of actual output
around this flexible-price output level, or
x_t^{NK} = y_t - y_{ft} = y_t - \gamma z_t.
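As a quick consistency check on equation (7), the flexible-price conditions can be combined symbolically. The sketch below is only a minimal illustration in sympy (not part of the original analysis; the symbols match the notation above) and confirms that the log-linear versions of (5) and (6), together with C_t = Y_t = e^{z_t} N_t^a, deliver y_ft = γ z_t.

import sympy as sp

a, eta, sigma, z, n = sp.symbols('a eta sigma z n', positive=True)

# Log-linear labor market equilibrium without nominal rigidities
# (the markup constants drop out of deviations from the steady state):
#   marginal product of labor:     z + (a - 1)*n
#   marginal rate of substitution: eta*n + sigma*c, with c = y = z + a*n
n_f = sp.solve(sp.Eq(z + (a - 1)*n, eta*n + sigma*(z + a*n)), n)[0]
y_f = z + a*n_f

gamma = (1 + eta) / (1 + eta - a*(1 - sigma))
print(sp.simplify(y_f - gamma*z))   # prints 0, so y_f = gamma*z as in equation (7)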
How does this gap variable compare with the gap
between output and the natural rate in models based
on a Lucas supply curve? According to Friedman
(1968),
The “natural rate of unemployment,” in
other words, is the level that would be
ground out by the Walrasian system of general equilibrium equations, provided there
is imbedded in them the actual structural
characteristics of the labor and commodity
markets, including market imperfections,
stochastic variability in demands and supplies, the costs of gathering information
about job vacancies and labor availability,
the costs of mobility, and so on.
At one level, this definition could be taken to
mean the level of employment in a New Keynesian
model is always at the natural rate. After all, the
costs of adjusting prices and wages are part of the
“structural characteristics” of the economy. Fluctuations in demand induced by monetary policy alter
the level of employment ground out by the general
equilibrium model. Yet this is clearly not what economists have interpreted the natural rate to mean.
Earlier in the same paragraph from which the quotation above is drawn, Friedman speaks of the unemployment rate “consistent with equilibrium in the
structure of real wage rates” (emphasis in original).
This definition seems more consistent with the
notion of the flexible-price equilibrium level of
employment. Under that interpretation, the output gap in the New Keynesian models is, in fact,
equal to the gap between output and the natural
level of output, and the Lucas supply curve and
New Keynesian gaps are the same.
A more common interpretation of the natural level of output, however, corresponds to the equilibrium output in the absence of inflation surprises, i.e., when x_t^L = 0. How does output in the absence of inflation surprises compare with the flexible-price equilibrium level, and what is the relationship between the gap measure x_t^L and the measure x_t^{NK}? To answer this question, one needs to know where the Lucas supply function comes from.
The standard motivation for the Lucas supply
function is not the information-based story originally
developed by Lucas (1972). Instead, it is based on
Fischer (1977), who shows that equation (1) can arise
when prices are flexible and goods markets are
perfectly competitive but nominal wages are set at
the start of the period, prior to observing the current
shocks (including innovations to monetary policy).
With competitive goods markets and flexible prices, firms adjust employment to ensure the real wage is equal to the marginal product of labor. If \bar{W}_t is the period t nominal wage set at the end of period t-1, employment satisfies

a e^{z_t} N_t^{a-1} = \theta \left( \frac{\bar{W}_t}{P_t} \right).
In log deviations around the steady-state,

n_t = \left( \frac{1}{1-a} \right) (p_t - w_t + z_t).
Assume the nominal wage is set to ensure that the expected marginal rate of substitution between leisure and consumption is equal to the expected marginal product of labor:

\mu E_{t-1} \left( \frac{\chi N_t^{\eta}}{C_t^{-\sigma}} \right) = E_{t-1} \left( \frac{W_t}{P_t} \right) = \left( \frac{a}{\theta} \right) E_{t-1} \left( e^{z_t} N_t^{a-1} \right).

This implies, in terms of a log-linear approximation around the steady-state, that the nominal wage is set equal to

w_t = E_{t-1} p_t + \left( \frac{\eta+\sigma}{1+\eta-a(1-\sigma)} \right) \rho z_{t-1}.
Note that I have assumed the productivity disturbance z_t follows an AR(1) process z_t = \rho z_{t-1} + e_t, where e_t is a white noise process. Equilibrium employment is given by

n_t = \left( \frac{1}{1-a} \right) \left[ e_t + (p_t - E_{t-1} p_t) \right] + \left( \frac{1-\sigma}{1+\eta-a(1-\sigma)} \right) \rho z_{t-1},

and output is

y_t = z_t + a n_t = \left( \frac{a}{1-a} \right) (p_t - E_{t-1} p_t) + \left( \frac{1}{1-a} \right) e_t + \gamma \rho z_{t-1}.

Therefore, the natural rate of output defined as output in the absence of price surprises is

y_{nt} = \left( \frac{1}{1-a} \right) e_t + \gamma \rho z_{t-1}.

2
The log steady-state level of output is \{a/[1+\eta - a(1-\sigma)]\} \ln[a/(\theta\mu\chi)].
With nominal wages fixed, a policy that stabilizes
the price level (eliminates price surprises) keeps
the real wage unchanged in the face of productivity
innovations. Employment rises with a positive productivity shock (et>0) as firms hire more workers
until the marginal product of labor is again equal
to the (fixed) real wage. The impact of et on ynt is
et /(1– a). In contrast, the efficient, flexible-price
response is equal to \gamma e_t (see equation (7)). Since

\gamma \equiv \frac{1+\eta}{1+\eta-a(1-\sigma)} \le \frac{1}{1-a},

the natural rate fluctuates more in response to productivity innovations than does the flexible-price equilibrium output level. A policy of price
stability, when nominal wages are sticky, leads to
too much output variability. Stabilizing the output
gap defined by xtL is not the optimal policy when
nominal wages are sticky.
The flexible-price equilibrium will be replicated
if
(8)   \pi_t - E_{t-1}\pi_t = -\left( \frac{\eta+\sigma}{1+\eta-a(1-\sigma)} \right) e_t.

This fall in prices in the face of a productivity shock
raises the real wage, reducing the demand for labor.
This ensures that the marginal rate of substitution
between leisure and consumption remains equal
to the real wage. As one would expect from the
analysis of Erceg, Henderson, and Levin (2000), the
policy given by (8) would, in a sticky-wage environment, ensure that the nominal wage remains constant, thereby undoing the distortion generated by
sticky nominal wages.
Of course, the converse results arise if prices
are sticky and policymakers attempt to avoid wage
surprises.
How does the gap between output and the natural level of output compare with the gap between
output and the flexible-price equilibrium level of
output? It is straightforward to show
x_t^{NK} = x_t^L + \left( \frac{a}{1-a} \right) \left( \frac{\eta+\sigma}{1+\eta-a(1-\sigma)} \right) e_t.
Consider the impact of a positive productivity
disturbance, et>0. A policy that tries to stabilize xtL
needs to let xtNK rise. That is, output will expand above
the flexible-price equilibrium level. In contrast, if
the central bank focuses on stabilizing xtNK, it will
allow output to fall below the natural rate in the
face of a positive productivity shock.
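A short symbolic check of this wedge may be useful. The sketch below is only an illustration (it assumes the AR(1) process z_t = ρz_{t-1} + e_t introduced above) and verifies that y_nt − y_ft reduces to the expression just displayed.

import sympy as sp

a, eta, sigma, rho, e, z_lag = sp.symbols('a eta sigma rho e z_lag', positive=True)

gamma = (1 + eta) / (1 + eta - a*(1 - sigma))
z = rho*z_lag + e                      # current productivity
y_f = gamma*z                          # flexible-price output, equation (7)
y_n = e/(1 - a) + gamma*rho*z_lag      # natural rate: preset nominal wage, no price surprise

wedge = (a/(1 - a)) * ((eta + sigma)/(1 + eta - a*(1 - sigma))) * e
print(sp.simplify((y_n - y_f) - wedge))   # prints 0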
Which output gap measure should the central
bank focus on? The policy recommendation from
the Lucas model would seem to be “avoid inflation
surprises.” Yet such a policy is inefficient because
it generates economic fluctuations that are too large
in response to productivity shocks. The natural rate
is not the appropriate output benchmark for stabilization policies when nominal wages are sticky.
On the other hand, if wages are flexible and prices
are sticky, eliminating inflation surprises by maintaining zero inflation would be optimal. If the central
bank is uncertain whether the economy is characterized by sticky wages or sticky prices, it will also
be uncertain about the optimal policy it should follow. If this uncertainty means the central bank is
opaque about its views of the monetary transmission
mechanism, then it is also likely to be opaque with
respect to its objectives.
The general lesson is that policy objectives are
not independent of the structure of the economy.
In a Lucas supply curve model based on nominal
wage rigidity, price stability is not the optimal policy,
although it is in a New Keynesian model of sticky
prices. Both these models are based on a key simplifying assumption—only one nominal variable is
sticky. With a single monetary distortion, optimal
policy calls for undoing that distortion. If both wages
and prices are sticky, then neither price stability
nor nominal wage stability will be optimal.
Targeting the Wrong Gap and Models of
Transparency. Cukierman lists another gap measure—the difference between output and potential.
Since potential is a constant in my simple example,
this gap measure is just
y_t = x_t^{NK} + \gamma z_t.
Suppose the central bank does focus on output
relative to potential and the economy is actually
characterized, as in New Keynesian models, by
flexible nominal wages and sticky prices. In this
case, inflation is given by

(9)   \pi_t = \beta E_t \pi_{t+1} + \kappa x_t^{NK},

while the central bank's loss function is

(10)   L_t = (1-\beta) E_t \sum_{i=0}^{\infty} \beta^i \left[ \pi_{t+i}^2 + A y_{t+i}^2 \right] = (1-\beta) E_t \sum_{i=0}^{\infty} \beta^i \left[ \pi_{t+i}^2 + A \left( x_{t+i}^{NK} + \gamma z_{t+i} \right)^2 \right].

Notice that, by focusing on yt rather than xtNK, we
have a situation in equation (10) that is equivalent
to the presence of a stochastic output target equal
to γ zt. Alternatively, the central bank’s decision problem can be written in terms of yt. In this case, the
loss function is

L_t = (1-\beta) E_t \sum_{i=0}^{\infty} \beta^i \left( \pi_{t+i}^2 + A y_{t+i}^2 \right),

and this is minimized subject to

\pi_t = \beta E_t \pi_{t+1} + \kappa y_t + \kappa\gamma e_t.
This reveals how the productivity shock et
appears as a cost shock (and therefore leads to a
policy trade-off—see Clarida, Galí, and Gertler, 1999)
because the central bank employs the wrong measure of the output gap. In the present model, the
socially optimal policy would set π t=0 and xtNK=0;
but, when the central bank targets output relative to
potential, it is straightforward to show that inflation
fluctuates too much.
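A static numerical sketch makes the point concrete. The example below is purely illustrative (made-up parameter values, expected inflation set to zero, and a one-period version of the problem rather than anything derived in the text): when the bank minimizes π² + Ay² subject to π = κy + κγe, the productivity shock acts like a cost shock and inflation inherits its variance, whereas stabilizing x^NK would keep inflation at zero.

import numpy as np

rng = np.random.default_rng(0)
kappa, A, gamma = 0.3, 0.5, 1.2            # illustrative values only
e = rng.normal(0.0, 1.0, 100_000)          # productivity innovations

# Discretionary first-order condition with the wrong gap: y = -(kappa/A)*pi.
# Substituting into pi = kappa*y + kappa*gamma*e gives:
pi_wrong = A * kappa * gamma * e / (A + kappa**2)

print("std of inflation when targeting y (wrong gap):", pi_wrong.std())
print("std of inflation when targeting x^NK         :", 0.0)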
Transparency and a Stochastic Output Target.
When the central bank incorrectly targets output
relative to potential, we have a situation that is
equivalent to the presence of a stochastic output
target. What is interesting about this case is that
recent work on central bank transparency has often
been based on the assumption that the central bank
has a stochastic output target. Models with stochastic output targets have been used by Faust and
Svensson (2001), Jensen (2000), and Walsh (2002)
to study the role of transparency. As just shown,
this situation can arise when the central bank targets
the wrong measure of the output gap, perhaps
because of the sort of model uncertainty that
Cukierman emphasizes.3
Faust and Svensson (2001) conclude that transparency is desirable. In their model, transparency
takes the form of better information about the central
bank’s control error— transparency is increased if
the central bank provides more information about
its forecasts. Thus, greater transparency improves
the ability of the public to monitor the central bank
by distinguishing between control errors and stochastic shifts in the central bank's output objective.
Improved transparency means that any deliberate
attempt by the central bank to expand output would
quickly be discovered and lead to a rise in expected
inflation. This rise in expected inflation increases
the marginal cost of an expansion, inducing the
central bank to refrain from trying to overly expand
real economic activity. Transparency acts as a disciplinary device (see also Walsh 2000).
Cukierman (2000) and Jensen (2000) point out
that transparency may come at a cost. By making
expected inflation more sensitive to central bank
actions, the cost of engaging in policies aimed at
stabilizing output rises. This can distort stabilization
policy and lead to excessive fluctuations in real
economic activity.
This type of distortion is common in many
systems based on an imperfect measure of performance. Announcing a target for inflation, for example, establishes a measure by which the central
bank’s performance can be measured. If too much
stress is placed on achieving the target (essentially
making inflation targeting a high-powered incentive
scheme), the central bank may downplay other
potentially desirable objectives. However, greater
transparency by publishing the central bank’s forecasts would allow the public to more easily verify
whether the central bank’s short-run target for inflation is appropriate, given the central bank’s forecast
of economic conditions. In other words, greater
transparency allows the public to more closely monitor the central bank.4 Better monitoring improves
the public’s ability to hold the central bank accountable for achieving its inflation target. Thus, greater
transparency is consistent with a stricter inflation
targeting regime (i.e., a high-powered incentive
scheme with more weight placed on achieving the
target) because the public is able to determine the
appropriate state-contingent target inflation rate.

THE ROLE OF ASYMMETRIC
PREFERENCES
The second part of Cukierman’s paper develops
the implications for inflation of asymmetric central
bank preferences. Cukierman questions the assumption of symmetric preferences that is implied by
the standard quadratic specification for the central bank's loss function.

3
For a survey of the recent literature on central bank transparency, see Geraats (2002).

4
Walsh (2002) relates transparency to the ability to monitor the central bank.

[Figure 1: Asymmetric Preferences and Cost Shocks When Expected Future Inflation Is Zero. Inflation plotted against the output gap, showing the policy relationship and the inflation adjustment relationship.]

[Figure 2: Equilibrium Inflation with Asymmetric Preferences. Inflation plotted against the output gap, showing the policy relationship and the inflation adjustment relationship.]

Instead, he argues that, given
the rate of inflation, central banks prefer a 1 percent
output gap to a –1 percent gap. This assumption
strikes me as quite reasonable, and there are a number of ways of modeling it. Perhaps the simplest is
to subtract a linear term in the output gap from the
standard quadratic loss function. This makes the
marginal benefit of an expansion positive when
evaluated at a zero output gap. Ruge-Murcia (2001)
uses a linex function to allow for asymmetric preferences, although he assumes this applies to inflation,
not output.
Cukierman employs a specification that is very
simple but that captures the basic idea—he assumes
that as long as the output gap is positive, the central
bank cares only about inflation stabilization. When
the gap is negative, then the familiar quadratic preferences kick in. In his neo-monetarist model, the
central bank must act prior to observing the current
shocks. To insure against a bad output realization,
the policymaker sets the nominal money supply
above the zero inflation level. As a consequence,
an average inflation bias appears.

A Graphical Analysis in the New
Keynesian Model
As Cukierman notes, a similar effect arises in
his New Keynesian model, even if the central bank
can observe the shocks. It is easy to illustrate this
graphically. In Figure 1, the line labeled “Policy
Relationship” illustrates the inflation and output
gap combinations that are consistent with the central bank's first order condition.5 Also shown in the
figure is the inflation adjustment curve, drawn as a
solid line for the case of zero expected inflation and
a zero cost shock. Inflation occurs where the policy
relationship and the inflation adjustment curve
intersect. Assume the cost shock takes on the values
ε>0 and –ε<0 with equal probability, as indicated
by the dashed lines. Since inflation is zero when
the cost shock is –ε and positive when the shock is
ε, on average, inflation will be positive. Since private
agents will anticipate this inflation bias, expected
inflation rises, shifting the inflation adjustment
curves upward until equilibrium is established at
(x̄, π̄), as shown in Figure 2 where the inflation adjustment equation for zero shock intersects the vertical axis at βπ̄. As Cukierman also notes, the
equilibrium involves positive average inflation and
a positive average output gap.
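The equilibrium in Figure 2 can also be reproduced numerically. The sketch below is an illustration only (a two-point cost shock of plus or minus s and made-up parameter values): the bank follows the policy relationship described in footnote 5, inflation obeys π = βπ̄ + κx + ε with π̄ the public's expectation, and iterating on π̄ converges to a strictly positive average inflation rate and, through equation (2), a positive average output gap.

beta, kappa, A, s = 0.99, 0.3, 0.5, 1.0   # illustrative parameter values

def realized_inflation(pi_bar, eps):
    # Candidate from the x < 0 branch of the policy relationship (kappa*pi + A*x = 0):
    pi = A * (beta * pi_bar + eps) / (A + kappa**2)
    if pi > 0:                 # then x = -kappa*pi/A < 0, so this branch applies
        return pi
    return 0.0                 # otherwise the gap is positive and the bank sets pi = 0

pi_bar = 0.0
for _ in range(2000):          # fixed point for expected (average) inflation
    pi_bar = 0.5 * (realized_inflation(pi_bar, s) + realized_inflation(pi_bar, -s))

print("average inflation :", pi_bar)                       # strictly positive
print("average output gap:", (1 - beta) * pi_bar / kappa)  # positive, as in the text

Raising A or the shock size s in this sketch raises the computed average inflation, which is the comparative static discussed next.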
Figure 3 illustrates the effects of an increase in
the weight the central bank places on its output
objectives (an increase in the parameter A). With a
larger A, the central bank is willing to accept higher
inflation to limit declines in the output gap. As a
consequence, average inflation rises. In addition to
depending positively on A, the average inflation
rate is increasing in the variance of the cost shock.
This can be seen by increasing the distance between
the inflation adjustment curves for ε>0 and –ε<0.
5

For x>0, the central bank sets π=0. For x<0, the central bank equates
the marginal rate of substitution between the output gap and inflation,
–Ax/π, to the marginal rate of transformation, κ, or κπ+Ax=0, where
A is the weight on output fluctuations in the objective function and κ
is the marginal effect of output on inflation.


Are Preferences Asymmetric?
Asymmetric preferences over output is one
possibility, but there are other ways in which the
central bank’s preferences may be asymmetric.
Ruge-Murcia (2001) models the asymmetry as applying to inflation. He assumes that inflation-targeting
central banks are more concerned about overshooting their inflation target than they are about undershooting the target. As a consequence, he finds there
is a deflationary bias. That is, average inflation will
be systematically below the announced target. This
is the opposite of Cukierman’s conclusion that inflation will be systematically above target.6
The presence and form of asymmetric preferences seems to me an empirical issue. Cukierman
cites some evidence that supports his specification.
For instance, Gerlach (2000) finds some support
for a positive association between variability and
the level of inflation. For the inflation targeting
countries he studies, Ruge-Murcia (2001) finds that
average inflation is negatively related to the variance
of inflation, evidence that he interprets as supportive
of his specification. Clearly, there cannot be both
an inflation bias and a deflation bias, so this is an
area that will need to be resolved by further empirical testing.

IS THERE LONG-RUN NONNEUTRALITY
IN THE NEW KEYNESIAN MODEL?
Finally, I want to comment on the presence of
a long-run trade-off between average inflation
and the output gap in the New Keynesian model
Cukierman employs. Cukierman notes that the
existence of this trade-off leads to what he labels
an “inflation tendency.” A positive average rate of
inflation produces an output gap that is also positive
on average. If the average rate of inflation is π̄ > 0, then equation (2) implies the average output gap is x̄ = (1−β)π̄/κ > 0. This situation was illustrated
in Figures 2 and 3 by the positive output gap that
accompanies the positive equilibrium inflation rate.
This apparent trade-off arises in some, but not
all, derivations of the inflation equation given by
(2). For example, suppose prices are set according
to a Calvo mechanism in which a randomly drawn
fraction 1– θ of all firms adjust their prices each
period. Adjusting firms set prices to maximize the
present discounted value of profits, subject to a
constant elasticity demand for their goods. Following
Erceg, Henderson, and Levin (2000) and Christiano,
Eichenbaum, and Evans (2001), assume that the

other θ fraction of firms simply update their prices based on the average rate of inflation.7 One could think of the costs of adjusting as reflecting decisionmaking costs so that each period not all firms decide to fully optimize in setting their price.

[Figure 3: Equilibrium Inflation with a Larger Weight on Output. Inflation plotted against the output gap, showing the policy relationship and the inflation adjustment relationship.]
Let ψˆt be the firm’s real marginal cost, with ˆ
denoting percentage deviation from the steady-state,
and β the discount factor. Then one obtains
 (1 − βθ )(1 − θ )  ˆ
πˆ t = βEt πˆ t +1 + 
ψ t .
θ

By using the production function and the household’s marginal rate of substitution between leisure
and consumption, real marginal cost can be eliminated to yield a standard New Keynesian inflationadjustment equation8:

πˆ t = β Et πˆ t +1+ κ xtNK .

(11)
6

In Figure 1, make the policy relationship concave, rather than convex,
to illustrate the resulting deflationary bias.

7

Christiano, Eichenbaum, and Evans (2001) characterize this as static
pricing. They also analyze “dynamic” pricing in which firms update
price based on the lagged inflation rate.

8
The marginal cost variable can be related to the output gap by noting that from (3) and (4),

\hat{\psi}_t = \hat{w}_t - \hat{p}_t - (a-1)\hat{n}_t = \eta\hat{n}_t + \sigma x_t^{NK} - (a-1)\hat{n}_t = \left[ \frac{\eta}{a} + \sigma + \frac{1-a}{a} \right] x_t^{NK}.

So in (11),

\kappa \equiv \left[ \frac{(1-\beta\theta)(1-\theta)}{\theta} \right] \left[ \frac{\eta}{a} + \sigma + \frac{1-a}{a} \right].
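For concreteness, the slope κ in (11) is easy to evaluate for given parameter values. The sketch below is illustrative only (arbitrary numbers, and it relies on the composite marginal-cost elasticity reconstructed in footnote 8).

def nk_slope(beta, theta, a, eta, sigma):
    # Slope of the inflation-adjustment equation (11).
    calvo_term = (1 - beta*theta) * (1 - theta) / theta      # price-stickiness term
    mc_elasticity = eta/a + sigma + (1 - a)/a                # marginal cost response to the gap
    return calvo_term * mc_elasticity

# Stickier prices (a larger theta, so fewer adjusting firms) flatten the relationship:
for theta in (0.5, 0.66, 0.75):
    print(theta, round(nk_slope(beta=0.99, theta=theta, a=0.67, eta=1.0, sigma=1.5), 3))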

The critical point to note is that equation (11)
does not involve the level of the inflation rate. It is
expressed in terms of the deviation of inflation from
the steady-state. By definition, π̂ equals zero in the
steady-state, and (11) then implies that the output
gap will also be zero, regardless of the average
steady-state rate of inflation. Thus, in this version
of the New Keynesian model there is no long-run
trade-off between the average rate of inflation and
the output gap.

CONCLUSIONS
Central banks are not transparent about their
models, for the reasons Cukierman highlighted. If
policymakers are uncertain about the true model
of the economy, then opaqueness about objectives
is not surprising, since a choice of model serves to
define the appropriate objectives. In the models
Cukierman uses to illustrate differences in the transmission process, different policies are optimal. Under
the standard Lucas supply curve à la Fischer, monetary policy should stabilize nominal wages; in the
basic New Keynesian model, prices should be stabilized. Greater transparency about objectives is likely
to arise, therefore, only when there is greater agreement on models.
Transparency is important if policymakers are
to be held accountable. It is difficult to monitor the
central bank’s performance if little information is
available about the economic outlook that forms
the basis for the central bank’s policy decisions.
Greater transparency, by improving the ability to
monitor the central bank, contributes to making
policymakers more accountable for achieving the
central bank’s inflation target.
As for asymmetric preferences, I find it plausible
that central bankers are not indifferent between
expansions and recessions or between inflation
target overshoots and undershoots. How important
this is empirically, however, is an open question.

REFERENCES
Blinder, Alan S. Central Banking in Theory and Practice.
Cambridge, MA: MIT Press, 1998.
Christiano, Lawrence J.; Eichenbaum, Martin and Evans,
Charles. “Nominal Rigidities and the Dynamic Effects of
a Shock to Monetary Policy.” Working Paper 8403, National
Bureau of Economic Research, July 2001.
Clarida, Richard; Galí, Jordi and Gertler, Mark. “The Science
44

J U LY / A U G U S T 2 0 0 2

of Monetary Policy: A New Keynesian Perspective.”
Journal of Economic Literature, December 1999, 37(4),
pp. 1661-707.
Cukierman, Alex. “Accountability, Credibility, Transparency
and Stabilization Policy in the Eurosystem.” Unpublished
manuscript, The Eitan Berglas School of Economics,
Tel Aviv University, 2000.
___________ and Meltzer, Allan H. “A Theory of Ambiguity,
Credibility and Inflation under Discretion and Asymmetric
Information.” Econometrica, September 1986, 54(5), pp.
1099-128.
Dixit, Avinash. “A Repeated Game Model of Monetary
Union.” Economic Journal, October 2000, 110(466), pp.
759-80.
Erceg, Christopher J.; Henderson, Dale and Levin, Andrew
T. “Optimal Monetary Policy with Staggered Wage and
Price Contracts.” Journal of Monetary Economics, October
2000, 46(2), pp. 281-313.
Faust, Jon and Svensson, Lars E.O. “Transparency and
Credibility: Monetary Policy with Unobservable Goals.”
International Economic Review, May 2001, 42(2), pp.
369-97.
Fischer, Stanley. “Long-Term Contracts, Rational Expectations,
and the Optimal Money Supply Rule.” Journal of Political
Economy, February 1977, 85(1), pp. 191-206.
Friedman, Milton. “The Role of Monetary Policy.” American
Economic Review, March 1968, 58(1), pp. 1-17.
Geraats, Petra. “Central Bank Transparency.” Economic
Journal, 2002 (forthcoming).
Galí, Jordi and Gertler, Mark. “Inflation Dynamics: A
Structural Econometric Investigation.” Journal of Monetary
Economics, October 1999, 44(2), pp. 195-222.
Gerlach, Stefan. “Asymmetric Policy Reactions and Inflation.”
BIS, April 2000.
Jensen, Henrik. “Optimal Degrees of Transparency in
Monetary Policymaking.” University of Copenhagen,
December 2000.
Lucas, Robert E. Jr. “Expectations and the Neutrality of
Money.” Journal of Economic Theory, April 1972, 4(2),
pp. 103-24.

FEDERAL RESERVE BANK OF ST. LOUIS

Walsh

McCallum, Bennett T. “Two Fallacies Concerning Central
Bank Independence.” American Economic Review, May
1995, 85(2), pp. 207-11.
Ruge-Murcia, Francisco J. “Inflation Targeting under
Asymmetric Preferences.” Université de Montréal, June
2001.
Svensson, Lars E.O. “How Should Monetary Policy Be
Conducted in an Era of Price Stability,” in New Challenges
for Monetary Policy. Federal Reserve Bank of Kansas City,
1999, pp. 195-259.
Walsh, Carl E. “Market Discipline and Monetary Policy.”
Oxford Economic Papers, April 2000, 52(2), pp. 249-71.
___________. “Accountability, Transparency, and Inflation
Targeting.” Journal of Money, Credit, and Banking, 2002
(forthcoming).


Central Bank Structure, Policy Efficiency, and Macroeconomic Performance: Exploring Empirical Relationships
Stephen G. Cecchetti and Stefan Krause

I. INTRODUCTION
All economists agree that more information
is better than less. When people are better
informed, they make better decisions,
enhancing the efficiency of the economy in allocating resources and improving overall welfare. It
would be difficult to find an area of economic life
where this line of argument has carried more weight
than it has in central banking circles in recent years.
The job of central bankers is to conduct monetary policy in order to promote price stability, sustainable growth, and a stable financial system. They
do this in an environment fraught with unavoidable
uncertainties. But in conducting policy, there is
one uncertainty that policymakers can reduce: the
uncertainty they themselves create. Everyone agrees
that monetary policymakers should do their best to
minimize the noise their actions add to the environment. The essence of good, transparent policy is
that the economy and the markets respond to the
data, not to the policymakers.
The result of this agreement is that today we
have the nearly universal and immediate public
broadcast of all interest rate changes. As everyone
in financial markets around the world knows, the
Federal Reserve’s Federal Open Market Committee
(FOMC) makes a public statement at 2:15 p.m.
EST following each meeting. But the first public
announcement of a move in the federal funds rate
target was made on February 4, 1994, and the regular
issuance of a statement became an official feature of the FOMC's procedures only on January 19, 2000. Before that, it was customary for FOMC policy changes to be communicated to market participants through actions rather than words.

Stephen G. Cecchetti is a professor of economics at The Ohio State University and the National Bureau of Economic Research. Stefan Krause is an assistant professor of economics at Emory University. The authors thank Alfonso Flores-Lagunes, Dino Kos, Roisin O'Sullivan, and Daniel Thornton for comments and discussions, as well as Gabriel Sterne for assistance with the data.

© 2002, The Federal Reserve Bank of St. Louis.
There are still people who argue for the efficacy
of central bank secrecy in various forms, claiming
that surprises are more effective and that even accurate information can be misinterpreted, resulting
in undesirable financial market volatility. We think
that it is fair to say that these arguments have not
been persuasive and that the advocates of policy
transparency have won the day. We have been
reduced to arguments about the mechanics and
exact timing of the release of information. Should
the minutes of a meeting be released as soon as
physically possible following the meeting, as done
by the Bank of England’s Monetary Policy Committee;
or should there be a modest delay until just after
the following meeting, which is the FOMC’s practice;
or is it acceptable to wait for years, as the European
Central Bank is planning to do? Is it necessary or
advisable for the head of the interest rate–setting
body to hold regularly scheduled news conferences?
Should the policymakers be required to appear
before legislative bodies to provide descriptions of
their decisionmaking processes and justifications
for their actions? How public should the inputs—
forecasts, models, and anecdotes—into interest
rate decisions be? All of these questions concern
minor issues about the availability of information.
As for general principles, we have now progressed to the point where on September 26, 1999,
the Interim Committee of the Board of Governors
of the International Monetary Fund issued the Code
of Good Practices on Transparency in Monetary
and Financial Policies: Declaration and Principles
(which we will refer to as the IMF Code). As in the
case of other standards and codes promulgated
under the auspices of the IMF,1 the expectation is
that they will be adhered to by all of the countries
in the world.
We take the statements in the IMF Code to represent a rough version of the consensus on the value
of monetary policy transparency. Paragraph 4 of
the IMF Code states:
The case for transparency of monetary and
financial policies is based on two main
premises. First, the effectiveness of monetary
and financial policies can be strengthened if the goals and instruments of policy are known to the public and if the authorities can make a credible commitment to meeting them. In making available more information about monetary and financial policies, good transparency practices promote the potential efficiency of markets. Second, good governance calls for central banks and financial agencies to be accountable, particularly where the monetary and financial authorities are granted a high degree of autonomy.2

1
The IMF monitors compliance with codes and standards on data dissemination, fiscal transparency, banking supervision, accounting, and auditing that are issued by a variety of international agencies.
This is a concise statement of the view that the
key ingredients for an effective central bank are
independence, credibility, transparency, and accountability. Going one step further, there is general agreement that independent, transparent, accountable,
and credible central banks are able to deliver better
overall policy outcomes.3
Many people have concluded that the substantial
changes undertaken in the operational framework
of central banks over the past decade or more have
produced better overall policy outcomes. And there
is substantial prima facie evidence to support the
case. Looking at a broad array of industrialized,
transition, and emerging market economies, we
see institutional reforms that have increased both
the independence and accountability of central
banks and, in addition, made monetary policy more
transparent through clear public statement of instruments, methods, and objectives. Not only this, but
over the same decade or so, many central banks
have succeeded in establishing significant reputations for competence, acquiring considerable credibility in the process.
The data that we study here bear out that, as
the institutional framework was evolving, macroeconomic performance was improving. Both the
level and variability of inflation were lower over the
past five years than they were in the previous ten.
Looking at a broad cross-section of 63 countries, we
see that median inflation has dropped from 7.04
percent in 1985:Q1–1994:Q4 to 2.97 percent in
1995:Q1–1999:Q4. The decrease in average inflation
has been even sharper, going from 83.19 percent
to 8.59 percent. Inflation rose in only 10 of the 63
countries, and in the bulk of those the increase was
small—only in Ghana, Indonesia, and Turkey did
average inflation rise by more than 2 percentage
points.
Successful policymaking usually means more
48

J U LY / A U G U S T 2 0 0 2

than just reducing inflation. It means stabilizing
inflation and output as well. Looking at a somewhat
narrower sample of 24 countries, we see that 20
experienced lower inflation variability while output
variability was lower in 15.4 Again, this occurred as
the institutional framework for policymaking was
changing, suggesting at least the possibility of a
relationship.
The remainder of the paper explores the empirical relationship between economic performance
and the monetary policy framework. For reasons
that will become clear later, the data on transparency,
accountability, credibility, and independence force
us to study a cross-section of countries. That is, we
examine the extent to which contemporaneous
differences in institutional design are able to explain
the observed variation in performance across countries during a fixed period of time. We are not able
to study how changes in the structure of policymaking have affected changes in macroeconomic
outcomes.
With the exception of the measure of credibility,
our data on the monetary policy framework in each
country are from the Bank of England’s Center for
Central Bank Studies survey of 93 central banks
reported in Fry et al. (2000). This survey contains
an incredible wealth of information, including
measures of the degree of independence, accountability, and transparency of central banks. But Fry
et al. (2000) did their survey only once in 1998
(with revisions in 1999), and so that is all that is
available.
Our starting point in Section II is the development of measures of macroeconomic performance
and monetary policy efficiency. These measures
turn out to be related, and we describe how both
of them arise from an optimal policy problem. Following our previous work with Flores-Lagunes (Cecchetti, Flores-Lagunes, and Krause, 2002), we measure performance as a weighted average of output and inflation variability, while our measure of policy efficiency (or inefficiency) is related to the distance of the economy's performance point to the inflation-output variability frontier.
In Section III we discuss how we measure the credibility of monetary policy. This is clearly a difficult undertaking and there are a number of possible ways to proceed. One possibility would be to use surveys or press reports to examine what people think about the actions of central bankers. But since we study a large number of countries, collecting such data is an almost impossible task. Instead, we have adopted the view that credibility comes from what you do, not what you say or what someone else says about it. This premise led us to measure credibility by looking at past inflation performance, and here we define a credible central bank as one that has successfully delivered low inflation.
The remainder of the paper puts all of these pieces together and looks for correlations among them. This is the subject of Section IV, and our findings are somewhat discouraging. In the end, we conclude that credibility trumps virtually everything else: countries with a history of high inflation exhibit comparatively worse macroeconomic and policy performance, regardless of the framework in which their central banks operate.

2
The "Code of Good Practices on Transparency in Monetary and Financial Policies: Declaration and Principles" is available in its entirety at <www.imf.org/external/np/mae/mft/code/index.htm>.

3
Empirical studies by Alesina (1988), Grilli, Masciandaro, and Tabellini (1991), Cukierman (1992), Cukierman, Webb, and Neyapti (1992), and Alesina and Summers (1993), among others, find evidence of a negative correlation of central bank independence with lower and more stable inflation, within industrialized countries. Also, Chortareas, Stasavage, and Sterne (2002) examine the association between the cross-country differences in macroeconomic outcomes and the degree of transparency exhibited by monetary policy, measured by the detail with which central banks publish economic forecasts. Their results suggest that a high degree of transparency in economic forecasts is associated with a lower inflation for all countries (with the exception of the ones that target the exchange rate, for which the publication of forecasts has no significant impact on inflation).

4
See Cecchetti, Flores-Lagunes, and Krause (2002) for details on these calculations.

II. MEASURING MACROECONOMIC PERFORMANCE AND EFFICIENCY OF MONETARY POLICY
Following Cecchetti et al. (2002), we derive measures of macroeconomic performance and policy efficiency using the inflation-output variability trade-off, or efficiency frontier. To obtain these measures, we first summarize how Cecchetti et al. perform the theoretical derivation and then proceed to briefly describe the estimation method used in constructing the measures. Finally, we report the results on macroeconomic performance and policy efficiency loss for the period of 1991:Q1–1998:Q4.

[Figure 1: Efficiency Frontier and Performance Point. Variance of output plotted against variance of inflation.]

Theoretical Derivation of the Measures
The measures of interest can be derived using a two-dimensional graph, and so we begin with a simple intuitive explanation. The concept of an inflation-output variability frontier is easiest understood by considering a simple economy that is
affected by two general types of disturbances, both
of which may require policy responses. These are
aggregate demand shocks—which move output and
inflation in the same direction—and aggregate supply
shocks—which move output and inflation in opposite directions. Since monetary policy can move
output and inflation in the same direction, it can
completely offset the effect of aggregate demand
shocks. By contrast, aggregate supply shocks will
force the monetary authority to face a trade-off between the variability of output and that of inflation.5
This trade-off allows us to construct an efficiency
frontier for monetary policy that traces the points
of minimum inflation and output variability. This is
the curved line in Figure 1, known in the literature
as the Taylor curve (Taylor, 1979). The location of
the efficiency frontier depends on the variability of
aggregate supply shocks—the smaller such variability, the closer the frontier will be to the origin. If
monetary policy is optimal, the economy will be
on this curve. The location of the economy on the
frontier depends on the policymaker’s preferences
for inflation and output stability.
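A toy version of this trade-off can be traced out in a few lines. The sketch below is an illustration only (a static aggregate supply relation π_t = κy_t + u_t and a policy that offsets the supply shock by y_t = −f·u_t, with made-up numbers; it is not the model estimated later in the paper). Varying the response f traces the variance pairs on the frontier, and the preference weight selects a point on it.

import numpy as np

kappa, sigma_u = 0.3, 1.0                      # illustrative values
f = np.linspace(0.0, 1.0/kappa, 50)            # strength of the policy response to the shock

var_pi = (1 - kappa*f)**2 * sigma_u**2         # from y_t = -f*u_t and pi_t = kappa*y_t + u_t
var_y  = f**2 * sigma_u**2

for lam in (0.2, 0.5, 0.8):                    # weight on inflation variability
    best = np.argmin(lam*var_pi + (1 - lam)*var_y)
    print(lam, round(var_pi[best], 3), round(var_y[best], 3))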
When policy is suboptimal, the economy will not
be on this frontier. Instead, the performance point
will be up and to the right, with inflation and output
variability both in excess of other feasible points.
Movements of the performance point toward the
frontier are an indication of improved policymaking.

5
For a simple algebraic model and a discussion of the derivation of the output-inflation variability frontier, see Cecchetti and Ehrmann (2001).

[Figure 2: Derivation of the Optimal Variances. Variation of output plotted against variation of inflation, showing the original frontier, the shifted frontier, the performance point [Var(π), Var(y)], and the optimal variances [Var(π)*, Var(y)*].]
We require measures of an economy’s performance, in terms of output and inflation variability,
as well as the distance of that point from the efficiency frontier. To compute these, we assume that
the objective of the central banker is to minimize a
weighted sum of inflation and output variability.
This is the standard quadratic loss function used in
most contemporary analyses of central bank policy.
We can summarize this loss through the following
specific representation:
(1)   Loss = \lambda \, Var(\pi) + (1-\lambda) \, Var(y), \qquad 0 \le \lambda \le 1,

where π is inflation, y is output, and λ is the policymaker’s preference parameter—Cecchetti and
Ehrmann (2001) call this the policymaker’s inflation
variability aversion.
But measuring the loss associated with a particular performance point requires that we have an
estimate of the preference parameter, λ. Our
approach is to consider a set of plausible values of
λ for each of the analyzed countries based on the
estimates obtained elsewhere by Cecchetti and
Ehrmann (2001) and Krause (2002). This procedure
means that we do not have to identify a single value
of this parameter for each individual country. In
the following section, we show that our results are
robust to this choice. With this in mind, we set λ
equal to 0.8 for all countries, with the exception
of Israel, Mexico, Chile, and Greece, for which we
choose a value of 0.3. These four countries experienced very high levels of inflation during the 1980s,

suggesting that inflation variability must have had
a much lower weight in the policymaker’s loss
function.
Before we proceed with describing the measure
of policy efficiency, we need to discuss how we
derive the optimal variances of output and inflation.
Beginning with Figure 1, we shift the efficiency
trade-off homothetically outward until it passes
through the performance point representing the
observed variances of inflation and output. Figure 2
shows the original and shifted frontiers. Graphically,
the optimal variances are at the intersection of the
original frontier with a line from the origin to the
performance point. Cecchetti et al. show with more
detail how to derive these variances analytically.
We can now define the measures of performance
and policy efficiency that we will use in our empirical
computations. To compute macroeconomic performance, we combine the observed variances of output
and inflation together to construct a single measure
of stability. We define performance, P, as
(2)   P = \lambda \, Var(\pi) + (1-\lambda) \, Var(y).

The lower P, the more stable the economy.
We gauge monetary policy efficiency by looking
at how close the actual performance is to the performance under optimal policy. Policy inefficiency
is measured by
(3)   E = \lambda \left[ Var(\pi) - Var(\pi)^* \right] + (1-\lambda) \left[ Var(y) - Var(y)^* \right],
where Var(π )* and Var( y)* are the variances of
inflation and output under optimal policy, respectively. The more efficient policymakers are at
implementing the optimal policy, the closer E will
be to zero.
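Equations (2) and (3) translate directly into code. The sketch below is a minimal illustration (the variances and λ are placeholders, not estimates from the paper).

def performance(var_pi, var_y, lam):
    # Performance measure P from equation (2): lower values mean a more stable economy.
    return lam * var_pi + (1 - lam) * var_y

def inefficiency(var_pi, var_y, var_pi_opt, var_y_opt, lam):
    # Policy inefficiency E from equation (3): zero when policy is on the frontier.
    return lam * (var_pi - var_pi_opt) + (1 - lam) * (var_y - var_y_opt)

# Placeholder numbers for one hypothetical country with lambda = 0.8:
P = performance(var_pi=2.0, var_y=3.0, lam=0.8)
E = inefficiency(var_pi=2.0, var_y=3.0, var_pi_opt=1.2, var_y_opt=2.1, lam=0.8)
print(P, E)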

Estimating the Efficiency Frontier
As we described above, in Cecchetti et al. we
construct an efficiency frontier for the countries in
the sample in order to compute macroeconomic
performance and policy efficiency loss. The basic
procedure is as follows. Beginning with the quadratic
loss function representing trade-offs among combinations of inflation and output variability, we treat
policy as a solution to a control problem in which
the interest rate path is chosen to place the economy
at the point on the variability frontier that minimizes
the loss. Formally, we compute the policy reaction
function that minimizes the loss, subject to the
constraint that is imposed by the structure of the


economy. For a given loss function, with a particular
weighting of inflation and output variability, we are
able to plot a single point on the efficiency frontier.
As we change the relative weight assigned to the
variance of inflation and output in the loss function,
we are able to trace out the entire efficiency frontier.
Our econometric procedure has two main steps.
First, we estimate simple structural models of inflation and output for each of the 24 countries in our
sample. Next, we describe the construction of the
efficiency frontier from the model estimates. This
will allow us to compute the macroeconomic performance and policy efficiency loss.
We consider linear two-equation systems for
each country based on a dynamic aggregate demand/
aggregate supply model. The basic model consists
of the following two equations:

(4)   y_t = \sum_{l=1}^{2} \alpha_{1l} \, i_{t-l} + \sum_{l=1}^{2} \alpha_{1(l+2)} \, y_{t-l} + \sum_{l=1}^{2} \alpha_{1(l+4)} \, \pi_{t-l} + \alpha_{17} x_{t-1} + \varepsilon_{1t}

(5)   \pi_t = \sum_{l=1}^{2} \alpha_{2l} \, y_{t-l} + \sum_{l=1}^{2} \alpha_{2(l+2)} \, \pi_{t-l} + \alpha_{25} x_{t-1} + \varepsilon_{2t}.

The first equation represents an aggregate
demand curve. It relates (demeaned and detrended)
log industrial production, y, to two of its own lags;
to two lags of the nominal interest rate, i; to two
lags of demeaned inflation, π ; and to one lag of
demeaned external price inflation, x. The second
equation is an aggregate supply curve. Here, inflation
is assumed to be a function of two of its own lags,
representing inflation expectations, two lags of
(demeaned and detrended) log industrial production, and one lag of demeaned external price inflation. The error terms ε_1 and ε_2 are assumed to have mean zero and constant variance.
We estimate equations (4) and (5) for each country separately using ordinary least squares.6 The
Durbin h test allows us to determine whether additional lags of the variables were required to correct
for autocorrelation.7 In some cases we also include
dummy variables to account for currency crises,
sharp recessions, or structural changes.
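As an illustration of this step, the sketch below estimates equations (4) and (5) by ordinary least squares for a single country. It is only a sketch: the quarterly data frame df is filled with random placeholder numbers, the column names ('y', 'pi', 'i', 'x') are assumptions, and because the series are treated as already demeaned no constant is included.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(120, 4)), columns=['y', 'pi', 'i', 'x'])  # placeholder data

def lags(series, name, ls):
    return pd.DataFrame({f"{name}_L{l}": series.shift(l) for l in ls})

# Regressors for the aggregate demand equation (4) and the aggregate supply equation (5)
X_ad = pd.concat([lags(df['i'], 'i', [1, 2]), lags(df['y'], 'y', [1, 2]),
                  lags(df['pi'], 'pi', [1, 2]), lags(df['x'], 'x', [1])], axis=1).dropna()
X_as = pd.concat([lags(df['y'], 'y', [1, 2]), lags(df['pi'], 'pi', [1, 2]),
                  lags(df['x'], 'x', [1])], axis=1).dropna()

ad_fit = sm.OLS(df.loc[X_ad.index, 'y'], X_ad).fit()    # equation (4)
as_fit = sm.OLS(df.loc[X_as.index, 'pi'], X_as).fit()   # equation (5)
print(ad_fit.params, as_fit.params, sep="\n\n")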
The next step consists of employing the estimated model to construct the efficiency frontier.
We assume that the policymaker’s objective is to
minimize an objective function (given by the loss
function in (1)) subject to the constraints imposed
by the dynamic structure of the economy given by
equations (4) and (5). This optimization allows us


to obtain a pair of optimal variances of inflation
and output for a given value of λ . By varying the λ
over the interval [0.001, 0.999] with an increment
of 0.001, we are able to trace out an entire frontier
similar to the one in Figure 1.
Finally, given the values chosen for λ and the
optimal variances for each country, we can compute
the measures of interest.

Estimates of Macroeconomic
Performance and Policy Efficiency Loss
We now look at the estimates of performance
and efficiency loss for the 24 countries in the
Cecchetti et al. study, using data for 1991:Q1–
1998:Q4. The results are plotted in Figures 3A and
3B and the estimates of the measures are reported
in Table A1 in the appendix.8 For each country, the
vertical height of the bar measures the performance
loss, P. This is divided into two portions: (i) the minimal performance loss, which measures what would
be attained if the economy were on its inflation-variability frontier, and (ii) the remainder, which
measures policy inefficiency. The differences in
scale require that we divide the countries into two
groups: those with relatively stable performance in
Figure 3A and those with higher output and inflation variability in Figure 3B.
Overall, the results suggest that there is high
variation in both performance and policy efficiency.
The Netherlands, for example, has the lowest value
for both P and E, while Israel has the most inefficient
policymakers and most volatile economy. There are
also cases between these, such as Finland, where
policy is efficient but the economy is relatively
unstable, and Switzerland, where performance is good but policy is not.9

6
Since we are estimating a system of two equations separately, there might exist some cross-correlation between the error terms of the equations that can be exploited to obtain more efficient estimators with a system estimator such as seemingly unrelated regressions (SUR). To check whether the separate estimation of each equation is efficient relative to system estimation, we tested the contemporaneous correlation of the error terms of the two-equation model for each period in each of the countries in our sample. We were not able to reject the null hypothesis of zero contemporaneous correlation at a 10 percent level or higher in both periods for all countries with the exception of two. Still, in neither of these two cases are the SUR coefficients and standard errors significantly different from the ones obtained through the OLS estimation.

7
We tested for nonstationarity of the error terms in both equations using the Phillips-Perron test. We were able to reject the null hypothesis of nonstationarity at the 1 percent significance level in all countries for both subperiods.

8
Both the performance loss and efficiency measure have been scaled up by a factor of 100.

[Figure 3: Macroeconomic Performance Loss. Two bar charts decompose each country's performance loss into the minimal performance loss and the efficiency loss. Panel A covers the relatively stable economies (Australia, Austria, Belgium, Canada, Denmark, France, Germany, Italy, the Netherlands, Switzerland, the United Kingdom, and the United States); panel B covers the countries with higher output and inflation variability (Chile, Finland, Greece, Ireland, Israel, Japan, Korea, Mexico, New Zealand, Portugal, Spain, and Sweden).]
Our goal is to examine whether the cross-sectional variation in these measures of performance
and policy efficiency can be explained by differences
in central bank independence, accountability,
transparency, and credibility. Before undertaking
this task, we need to describe the data on monetary
policy framework variables, which we do in the
following section.

III. MEASURES OF MONETARY
FRAMEWORK CHARACTERISTICS
In order to relate macroeconomic performance
and policy efficiency to central bank features, we
require quantitative measures of several institutional characteristics of the central bank. For this
purpose we employ the measures of central bank
independence, accountability, and transparency
derived by Fry et al. (2000) and based on survey
information. We first describe these and then proceed to discuss our construction of a measure of
policy credibility that is based on past inflation
performance.10

Cecchetti and Krause

degree and frequency at which each central bank
provides reports on its policy decisions, assessments
about the state of the economy, and public explanations of forecasts. The index of transparency is
obtained as a simple average of these three criteria.

Central Bank Credibility
We now turn to the derivation of the credibility
index. Cukierman and Meltzer (1986) define monetary policy credibility as “the absolute value of the
difference between the policymaker’s plans and
the public’s beliefs about those plans.”11 The further
realized inflation is from the announced target level,
the less credible is the policymaker. If the monetary
authority has an explicit inflation target, credibility
can be measured by the distance from the expected
inflation to the target (Svensson, 1999).
Consistent with these suggestions, we construct
an index of policy credibility that takes into account
the deviations of expected inflation from the central
bank’s target level. In order to normalize this index
between 0 and 1, we define it as
(6)

1
if E (π ) ≤ π t


1


t
t
20
if
IC = 1 −
E
−
<
E
<
%
π
π
π
π
(
)
(
)
.
t

 0.2 − π
if E (π ) ≥ 20%

0

Central Bank Independence
Fry et al. construct a weighted index for independence by studying the responses to five questions on
their survey. These questions look at the following
elements: how important is price stability as an objective; how important is the role of the central bank
in choosing the levels of the target variable (goal
independence) and the policy instrument (instrument independence); to what extent does the government rely on central bank financing; and how long
is the term of office of the governor/chairman?

Central Bank Accountability
The Fry et al. survey looks at two main forms
of accountability. First, it asks whether the policy
contract between the government and the central
bank incorporates a numerical target for the goal
variable, what the role of the government is in setting this target, and which procedures take place if
the target is missed. Second, accountability measures how the government and parliament monitor
the central bank. The index of accountability is constructed by taking the average of these two measures.

(

The index of credibility takes a value of 1 if expected
annual inflation is less than or equal to the target
level of inflation, π t, and it decreases linearly as
expected inflation rises. If expected inflation is
greater than 20 percent, we assign the index a value
of 0.
Finally, to compute this index we assume that
the target level for inflation is equal to 2 percent
for all countries and we proxy E(π ) as the average
realized inflation for the period between 1985:Q1
and 1989:Q4 for all 63 countries in our sample.
The data on the index of credibility is presented in
Table A2 in the appendix.
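For concreteness, the short Python sketch below implements the piecewise rule in equation (6) under the assumptions just stated (a 2 percent target and expected inflation proxied by average past inflation); the function name and the decimal convention for inflation rates are illustrative choices, not part of the original construction.

# Illustrative sketch of the credibility index in equation (6); inflation rates
# are decimal fractions and the 2 percent target follows the assumption in the text.
def credibility_index(expected_inflation, target=0.02):
    """Return the index IC in [0, 1] defined piecewise in equation (6)."""
    if expected_inflation <= target:
        return 1.0
    if expected_inflation >= 0.20:
        return 0.0
    # Linear decline between the target and 20 percent expected inflation.
    return 1.0 - (expected_inflation - target) / (0.20 - target)

# Example: a country with 6.5 percent average inflation over 1985:Q1-1989:Q4.
print(round(credibility_index(0.065), 2))  # 0.75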

IV. EMPIRICAL RESULTS
We now turn to an examination of all of the
information on performance, efficiency, and institutional structure.

9

For a more detailed discussion of these results, as well as an examination of changes in performance and policy efficiency over time, see
Cecchetti et al. (2002).

10

We report the values of these indices in Table A2 in the appendix.

11

Cukierman and Meltzer (1986, p. 1108).

Central Bank Transparency
To derive a measure of transparency or policy explanations, Fry et al. look at the responses to the degree and frequency at which each central bank provides reports on its policy decisions, assessments about the state of the economy, and public explanations of forecasts. The index of transparency is obtained as a simple average of these three criteria.

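As a minimal illustration of how such survey-based indices are aggregated, the sketch below averages hypothetical question-level scores; the individual scores and the weights are placeholders, since the Fry et al. weighting scheme is not reproduced here.

# Hypothetical sketch of the aggregation described above; all question-level
# scores and weights are made-up placeholders, not Fry et al. (2000) data.
def weighted_average(scores, weights):
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Independence: weighted average of the five survey questions listed above.
independence = weighted_average([0.8, 0.6, 0.9, 0.7, 0.5],
                                [0.25, 0.25, 0.25, 0.15, 0.10])
# Accountability: simple average of the two criteria described above.
accountability = (0.75 + 0.50) / 2
# Transparency: simple average of the three reporting and explanation criteria.
transparency = (1.00 + 0.50 + 0.75) / 3
print(round(independence, 2), round(accountability, 2), round(transparency, 2))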

Table 1
Performance, Efficiency, and Monetary Policy Framework (Correlation Coefficients)

                   Average inflation     Macro performance     Policy inefficiency
                   (1995-99)             (1990-97)             (1990-97)
Independence        0.072 (0.74)          0.055 (0.80)         –0.129 (0.17)
Accountability     –0.093 (0.48)          0.019 (0.93)          0.012 (0.96)
Transparency       –0.349 (0.04)         –0.254 (0.24)         –0.257 (0.24)
Credibility        –0.571 (0.00)         –0.757 (0.00)         –0.753 (0.00)

NOTE: Better macroeconomic performance and more efficient policy are identified with values closer to zero, while higher independence, accountability, transparency, and credibility are identified with higher values. The heteroskedasticity-corrected p values are in parentheses.

We expect that countries with
more independent, transparent, accountable, and
credible central banks will in general exhibit better
macroeconomic outcomes. We take this hypothesis
to the data and consider the relationships between
macroeconomic performance (as measured by P and
by average inflation) and policy efficiency (measured
by E and by policy framework variables described
in the previous section). We look at both simple
correlations and multivariate analysis.
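The sketch below illustrates one way such correlations and heteroskedasticity-robust p values could be computed; the file and column names are hypothetical, and the regression-based robust p value is a common convention rather than necessarily the exact procedure used here.

# Illustrative sketch (hypothetical file and column names): pair each framework
# index with an outcome, report the simple correlation, and obtain a
# heteroskedasticity-robust p value from a bivariate regression.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("framework_and_outcomes.csv")  # hypothetical cross-country data

def correlation_with_robust_p(data, outcome, index_var):
    sub = data[[outcome, index_var]].dropna()
    corr = sub.corr().iloc[0, 1]
    fit = sm.OLS(sub[outcome], sm.add_constant(sub[index_var])).fit(cov_type="HC1")
    return corr, fit.pvalues[index_var]

for var in ["independence", "accountability", "transparency", "credibility"]:
    corr, p = correlation_with_robust_p(df, "avg_inflation_1995_99", var)
    print(f"{var:15s} corr = {corr:+.3f}   robust p = {p:.2f}")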

Simple Correlations
Table 1 displays the simple correlations among
the four indices of central bank framework and our
measures of macroeconomic performance and
policy efficiency, as well as average inflation for
the period of 1995:Q1–1999:Q4.
First, we observe that there is a positive correlation between central bank independence and the
performance and efficiency loss measures, while
for the broader cross-section of countries independence is negatively correlated with average inflation.
This relationship has been extensively documented
for industrialized countries, and it is still present
when considering transition and developing economies. Nevertheless, none of these correlations is
significantly different from zero at even the 10 percent level.
Proceeding down the table, we see that the index
of central bank accountability is negatively correlated
with average inflation and positively correlated
with performance and efficiency loss. But as with
the case of independence, neither of these correlations is significant, suggesting that this particular
characteristic of the monetary framework, at least
by itself, does not play a crucial role in explaining
the cross-country differences in inflation, performance, and policy efficiency.
Looking at the one-dimensional relationship
with transparency, we see that all of the correlations
are negative. The result for average inflation is also
significantly different from zero at the 5 percent
level. Furthermore, our point estimate of –0.35 in
this case is basically indistinguishable from the
correlation between average log inflation and the
alternative (Guttman) index of transparency reported
in Chortareas, Stasavage, and Sterne (2002).12
Finally, we find that the correlation between
the index of credibility and the three outcome measures is negative and significant at the 1 percent
level. This is our most clear result. Countries that
maintained low inflation in the past are expected
to exhibit lower current inflation and less variable
inflation and output. Good policymaking is positively
serially correlated.

Multivariate Analysis
We now turn to a simple multivariate analysis.
Table 2 reports the results of regressing, simultaneously, average inflation, macroeconomic performance, and policy efficiency on the four monetary
framework variables and compares these results
with the ones arising from excluding the credibility
index as an explanatory variable. All three regressions are dominated by the presence of the credibility measure, which enters with a negative coefficient
and is estimated very precisely.13 The coefficients
12

Chortareas, Stasavage, and Sterne (2002) also use a larger data set,
which includes 87 countries.

13

This result is even sharper when we use the data on average inflation
for the period 1990:Q1–1994:Q4 to construct the index of credibility.
Under these circumstances, both the R2 and the coefficients associated
with credibility rise, giving further support to the argument that countries with high past inflation exhibit poor performance regardless of
their framework.

Table 2
Performance, Efficiency, and Monetary Policy Framework (Regression Results)
Average inflation
(1995-99)
Intercept

0.289 (0.00)

0.439 (0.63)

0.970 (0.38) –0.059 (0.97)

0.803 (0.47) –0.206 (0.90)

0.478 (0.37)

0.479 (0.38)

0.003 (0.97)

0.026 (0.74)

Accountability

–0.062 (0.28)

–0.030 (0.60)

Transparency

–0.108 (0.23)

–0.217 (0.05)

Credibility

–0.172 (0.00)

No. of observations

0.364
60

0.514 (0.73)

Policy inefficiency
(1990-97)

0.222 (0.01)

Independence

R2

Macro performance
(1990-97)

0.313 (0.72)

0.545 (0.55)

0.619 (0.67)
0.318 (0.71)

–0.444 (0.17) –0.563 (0.29)

–0.462 (0.16) –0.579 (0.27)

–1.405 (0.00)

–1.378 (0.00)

0.121
60

0.677
22

0.072
22

0.665
22

0.073
22

NOTE: Better macroeconomic performance and more efficient policy are identified with values closer to zero, while higher independence, accountability, transparency, and credibility are identified with higher values. The heteroskedasticity-corrected p values are in
parentheses.

on the remaining three regressors are negative in
only four of the nine cases, and they are all estimated
very imprecisely, as indicated by the relatively high
p values reported in Table 2. If we drop the credibility measure from the specification, then we observe,
as expected, a sharp drop in the goodness of fit.
The other variables remain insignificant, with the
exception of transparency, which enters the regression of average inflation on the framework variables
with a negative sign and a significant (at the 5 percent level) coefficient. These results provide further
evidence supporting the view that central bank
credibility—represented by past inflation performance—is the main determinant of current macroeconomic performance and policy efficiency.14
Since our results suggest that credibility and, to
a somewhat lesser extent, transparency are the two
factors that explain most of the cross-country variation in macroeconomic outcomes, it is interesting
to ask how large the impact is. To address this, we
calculate the extent to which changes in the levels
of transparency and credibility translate into lower
average inflation. That is, we find the inflation that
would have had to take place (as a deviation from 2
percent) after these changes, holding output variation in the loss function constant. For the case of
Spain, an increase in transparency from 0.59 to the
sample median, 0.79, is equivalent to a drop of 0.53
percentage points in average inflation. An increase
in credibility for the United Kingdom from 0.82 to 0.91 (the value estimated for the
United States and France) would represent a drop
in average inflation of 1.19 percentage points.

Interpreting the Results
Our findings suggest that credibility is the primary factor explaining the cross-country variation
in macroeconomic outcomes, trumping the contribution of the other framework variables. This result
is consistent with Jensen’s (2000) argument that a
committed (i.e., credible) central bank will not necessarily provide economic agents with substantial
information about the behavior of instruments and
targets. He derives an optimal level of transparency,
which will depend on the initial credibility of the
bank and the amount of information available to
the agents. The model suggests that a high degree
of transparency need not always be an advantage
to the central bank.
On the other hand, it is reasonable to believe
that independence, accountability, and transparency
actually lead to increased central bank credibility.
Given that we lack a time-series for the data on the
policy framework, we are unable to examine this
claim head on. All we can do is see whether credibility is highly correlated with accountability, independence, and transparency. We find that credibility
and transparency have a correlation of 0.31, but
that credibility is virtually uncorrelated with the
measures of accountability and independence.
Looking back at the performance and efficiency
14

We also tested whether the policy framework variables were associated with the cross-country differences in the sacrifice ratio (which
we approximate using the estimated efficiency frontier for 24 countries), but we failed to find any significant relationship.


measures plotted in Figure 3B, we see that Chile
and Mexico are substantial outliers. This naturally
leads us to ask whether our results are dominated
by these two countries. Deleting them from the
sample, we find that the general character of the
results is largely unchanged. While coefficients on
the other framework variables remain statistically
insignificant, the coefficient associated with the
credibility index changes from –1.405 to –0.429 in
the macroeconomic performance regression (with
the R2 actually increasing from 0.677 to 0.811) and
from –1.378 to –0.386 in the policy efficiency
regression (with the R2 rising from 0.665 to 0.852).
The coefficients still remain significant at the 1
percent level.

V. CONCLUSIONS
This paper explores the empirical relationships
between economic outcomes and the monetary
policy framework. Our findings suggest that better macroeconomic performance and more efficient policy are found in countries with more credible and, to some
extent, more transparent central banks. Independence and accountability, to the extent that we are
able to measure them, do not seem to explain much
of the cross-country variation in macroeconomic
outcomes, either individually or in conjunction
with other variables. Further exploration of the relationship between the monetary policy framework and macroeconomic performance awaits new time-series data.

REFERENCES
Alesina, Alberto. “Macroeconomics and Politics,” in Stanley Fischer, ed., NBER Macroeconomics Annual. Cambridge,
MA: MIT Press, 1988.
___________ and Summers, Lawrence H. “Central Bank
Independence and Macroeconomic Performance: Some
Comparative Evidence.” Journal of Money, Credit, and
Banking, May 1993, 25(2), pp. 151-62.
Cecchetti, Stephen G. and Ehrmann, Michael. “Does Inflation
Targeting Increase Output Volatility? An International
Comparison of Policymakers’ Preferences and Outcomes,”
in Norman Loayza and Klaus Schmidt-Hebbel, eds.,
Monetary Policy: Rules and Transmission Mechanisms,
No. 4 in the Series on Central Banking, Analysis and
Economic Policies. Santiago, Chile: Central Bank of Chile,
2001, pp. 247-74.
___________; Flores-Lagunes, Alfonso and Krause, Stefan.

“Has Monetary Policy Become More Efficient? A Cross-Country Analysis.” Unpublished manuscript, 2002.
Chortareas, Georgios; Stasavage, David and Sterne, Gabriel.
“Does it Pay To Be Transparent? International Evidence
from Central Bank Forecasts.” Federal Reserve Bank of
St. Louis Review, July/August 2002, 84(4), pp. 99-118.
Cukierman, Alex. Central Bank Strategy, Credibility and
Independence: Theory and Evidence. Cambridge, MA: MIT
Press, 1992.
___________. “Central Bank Independence and Monetary
Control.” The Economic Journal, November 1994, 104,
pp. 1437-48.
___________ and Meltzer, Allan H. “A Theory of Ambiguity,
Credibility, and Inflation under Discretion and Asymmetric
Information.” Econometrica, September 1986, 54(5), pp.
1099-128.
___________; Webb, Steven B. and Neyapti, Bilin. “Measuring the Independence of Central Banks and Its Effect on Policy
Outcomes.” World Bank Economic Review, September
1992, 6(3), pp. 353-98.
Fry, Maxwell; Julius, DeAnne; Mahadeva, Lavan; Roger,
Sandra and Sterne, Gabriel. “Key Issues in the Choice of
Monetary Policy Framework,” in Lavan Mahadeva and
Gabriel Sterne, eds., Monetary Policy Frameworks in a
Global Context. London: Routledge (Bank of England), 2000.
Grilli, Vittorio, Masciandaro, Donato and Tabellini, Guido.
“Political and Monetary Institutions and Public Financial
Policies in the Industrial Countries.” Economic Policy: A
European Forum, October 1991, 6(2), pp. 341-92.
Jensen, Hans E. “Optimal Degrees of Transparency in
Monetary Policymaking.” Working paper, University of
Copenhagen, 2000.
Krause, Stefan. “Measuring Monetary Policy Efficiency in
European Union Countries.” Unpublished manuscript,
2002.
Svensson, Lars E.O. “How Should Monetary Policy Be
Conducted in an Era of Price Stability?” Prepared for the
symposium New Challenges for Monetary Policy, Federal
Reserve Bank of Kansas City, 26-28 August 1999.
Taylor, John B. “Estimation and Control of a Macroeconomic
Model with Rational Expectations.” Econometrica,
September 1979, 47(5), pp. 1267-86.

Appendix

Table A1 presents the estimates for the measures of macroeconomic performance and policy
inefficiency obtained by Cecchetti et al. (2002).
Table A2 reports the data on inflation and the
monetary policy framework variables. Average
inflation is obtained from the simple mean of quarterly data of consumer price index (CPI) inflation
for the period 1995:Q1–1999:Q4, from the IFS
statistics. The data for the indices used for independence and accountability are obtained from the
weighted total scores in Tables A.5 and A.6 of Fry
et al. (2000), respectively, while transparency is
measured using the unweighted total score for
explaining policy, presented in Table A.7 of Fry et al.
Finally, the index of policy credibility is constructed
as specified in Section III, using the average CPI
inflation for the period between 1985:Q1 and
1989:Q4, from the IFS statistics.

Table A1
Macroeconomic Performance and Monetary Policy Inefficiency

Country         Macro performance (1991-98)    Policy inefficiency (1991-98)
Australia            0.0217                         0.0001
Austria              0.0491                         0.0369
Belgium              0.0301                         0.0161
Canada               0.0566                         0.0130
Chile                2.4625                         2.4188
Denmark              0.0333                         0.0202
Finland              0.1630                         0.1103
France               0.0232                         0.0134
Germany              0.0473                         0.0254
Greece               0.3620                         0.3062
Ireland              0.0716                         0.0551
Israel               0.4360                         0.4048
Italy                0.0689                         0.0610
Japan                0.0880                         0.0707
Korea                0.2383                         0.2200
Mexico               1.6003                         1.5711
Netherlands          0.0127                         0.0008
New Zealand          0.1059                         0.0957
Portugal             0.2598                         0.2413
Spain                0.0971                         0.0930
Sweden               0.2311                         0.0757
Switzerland          0.0473                         0.0391
UK                   0.0338                         0.0292
US                   0.0521                         0.0313
Average              0.2747                         0.2479

NOTE: Better macroeconomic performance and more efficient policy are identified with values closer to zero.


Table A2
Average Inflation and Policy Framework Variables

Country             Average inflation    Index of        Index of          Index of        Index of
                    (%) (1995-99)        independence    accountability    transparency    credibility
Argentina                0.77                0.79            1.00              0.53            0.00
Australia                1.97                0.73            0.83              0.78            0.68
Austria                  1.38                0.68            0.67              0.27            0.99
Bahamas                  1.32                0.39            1.00              0.50            0.83
Bahrain                  1.08                0.54            0.75              0.18            1.00
Barbados                 2.46                0.24            0.92              0.73            0.89
Belgium                  1.45                0.77            0.33              0.68            0.98
Belize                   1.66                0.43            0.42              0.48            0.97
Canada                   1.61                0.91            1.00              0.79            0.87
Chile                    6.04                0.93            0.17              0.83            0.00
China, P.R.              5.20                0.68            1.00              0.63            0.28
Croatia                  4.53                0.79            0.83              0.42            0.00
Cyprus                   2.62                0.77            0.58              0.48            0.93
Denmark                  2.15                0.88            0.75              NA              0.87
Eastern Caribbean        2.17                0.49            0.92              0.48            0.98
Ecuador                 33.14                0.93            0.75              0.59            0.00
Egypt                    7.09                0.53            0.83              0.47            0.06
Fiji                     3.26                0.73            0.17              0.64            0.78
Finland                  1.07                0.91            0.92              0.74            0.84
France                   1.24                0.90            0.83              0.53            0.91
Germany                  1.31                0.96            0.17              0.70            1.00
Ghana                   32.44                0.60            0.58              0.36            0.00
Greece                   6.02                0.86            0.33              0.36            0.16
Hungary                 18.85                0.86            0.83              0.49            0.51
Iceland                  2.13                0.59            0.92              0.65            0.00
India                    8.89                0.83            0.67              0.75            0.68
Indonesia               21.03                0.66            0.83              0.83            0.73
Ireland                  1.95                0.87            0.83              0.78            0.90
Israel                   8.22                0.66            1.00              0.68            0.00
Italy                    2.97                0.88            0.58              0.81            0.77
Jamaica                 14.19                0.39            0.42              0.65            0.33
Japan                    0.41                0.93            NA                0.89            1.00
Jordan                   3.39                0.74            0.75              0.60            0.72
Kenya                    6.06                0.66            0.67              0.52            0.56
Korea                    4.42                0.73            0.83              0.88            0.88
Kuwait                   2.01                0.63            0.67              0.38            1.00
Malaysia                 3.92                0.75            0.67              0.71            1.00
Malta                    2.82                0.83            0.83              0.67            1.00
Mauritius                6.63                0.70            0.33              0.20            0.77
Mexico                  24.67                0.82            0.92              0.69            0.00
Namibia                  8.33                0.50            0.33              0.56            0.36
Netherlands              2.06                0.91            0.83              0.79            1.00
New Zealand              1.68                0.89            1.00              0.92            0.48
Nigeria                 26.08                0.42            0.92              0.37            0.00
Norway                   2.18                0.57            0.50              0.89            0.75
Peru                     8.41                0.89            0.92              0.38            0.00
Poland                  16.47                0.86            0.58              0.69            0.00
Portugal                 2.90                0.85            0.83              0.78            0.41
South Africa             7.34                0.85            0.75              0.70            0.24
Sierra Leone            27.53                0.62            0.83              0.47            0.00
Singapore                0.97                0.90            0.25              NA              1.00
Spain                    2.87                0.80            0.83              0.59            0.73
Sri Lanka                9.49                0.54            0.58              0.48            0.64
Sweden                   0.77                0.97            0.83              0.95            0.80
Switzerland              0.80                0.90            0.17              0.86            1.00
Tanzania                17.12                0.60            0.92              0.51            0.00
Thailand                 5.11                0.82            0.50              0.67            0.93
Tonga                    2.86                0.52            0.00              0.30            0.46
Turkey                  81.60                0.70            0.42              0.24            0.00
UK                       2.79                0.77            1.00              0.94            0.82
Uruguay                 21.54                0.70            0.83              0.04            0.00
US                       2.36                0.92            0.83              0.95            0.91
Zambia                  35.17                0.66            0.17              0.57            0.00


Commentary
K. Alec Chrystal
This paper builds on earlier work by Steve
Cecchetti and his colleagues that looks at the
institutional characteristics of central banks
and the regimes they operate and analyzes their
influence on macroeconomic performance. This is
stimulating work that makes considerable progress
in monetary policy analysis. It is also elegant in the
sense that it makes a great deal of progress with
simple tools logically applied. I have learned a lot
from reading this paper and some earlier related
work by the same authors and their collaborators.
The job of a discussant, of course, is to point
out problems and limitations of the research. I cannot criticize much of the data on which the study is
based, as it was collected by my former colleagues
at the Bank of England; and I agree with the main
conclusion of the current paper, which is that credibility matters a lot for monetary policy. However, I
shall argue that the way in which credibility is measured leaves a lot to be desired, and so the main
empirical result in the paper should be treated with
some caution. There is, as usual, plenty of room for
further work on this fascinating issue.
The current paper builds on measures of macro
performance and efficiency derived in a previous
paper (Cecchetti, Flores-Lagunes, and Krause, 2001
[CFK]). As the meaning and measurement of these
terms bears directly on the results obtained in the
current paper, it is worth discussing briefly what
these concepts mean and how they are measured.
“Macroeconomic performance” relates to whether
a preference-weighted average of inflation and output variances has increased or decreased, i.e., in
effect whether the Taylor curve has shifted closer
to the origin. “Efficiency” relates to the extent to
which a performance gain can be attributed to policy
better offsetting demand shocks as opposed to being
a result of reduced variance of supply shocks.
CFK estimate a two-equation linear aggregate
supply/aggregate demand (AS/AD) model for 23
countries and then use the estimated structure in a
Theil-Tinbergen type policy optimization exercise
to solve for the optimal policy rule. The optimum


K. Alec Chrystal is a professor of Money and Banking and head of the
Finance faculty at City University Business School, London.

© 2002, The Federal Reserve Bank of St. Louis.

is compared with the actual outcome; by comparing
1980s and 1990s results, they derive estimates of the
extent to which the improvements in actual performance can be attributed to more efficient policy and
the extent to which they are due to reduced supply
shocks. The conclusion according to the CFK estimates is that nearly all countries in their sample
showed an improvement in performance between
the 1980s and 1990s, and the bulk of this improvement was due to increases in the efficiency of policy
rather than to reductions in supply shocks.
These are interesting and important results. But
as with all empirical work there are some questions
one can ask about implicit assumptions on which
the key results depend. My first question relates to
time periods. The authors have compared two time
periods of equal length, but these represent different
and partial phases of two different business cycles.
Roughly speaking the 1980s cycle is measured
trough to peak while the 1990s cycle is close to
being peak to peak (or at least peak to more than
half way back up). This distinction may not be critical, but it would surely be desirable when the ultimate historical research on these topics is done to
compare complete cycles in terms of policy impacts.
My second question relates to the spillovers
between countries. Is it really a coincidence that
most countries have improved their macroeconomic
performance at the same time? It could be that
central bankers have all been on the same courses
or attended the same conferences where they have
learned the secrets from their colleagues in other
countries. However, it could also be that successful
stabilization policy in one country makes policy
much easier for neighboring countries. This does
not diminish the achievement of better performance
but it does affect who should get the credit. CFK do
make allowances for external prices in their empirical models, but there are no other ways in which it
is apparent that a more stable external environment
makes domestic policymaking easier.
This point is given greater force when one looks
at the countries with the lowest policy inefficiency
loss in the 1990s (as shown in Appendix Table A1 of
Cecchetti and Krause as derived from the estimates
in CFK). Five of the six most efficient central banks
are those of the Netherlands, Belgium, Denmark,
Ireland, and France, all members of the exchange rate
mechanism (ERM) with mutually pegged exchange
rate bands. In the cases of France, the Netherlands,
and Belgium especially, these were pegged in a
narrow fluctuation band and thus they had minimal
discretionary ranges for domestic monetary policy.
Since they were not free to alter monetary policy
to offset demand shocks, what should we make of
this result? Certainly we could not conclude that it
was the optimal manipulation of the domestic policy
interest rate that delivered the efficient outcome,
since policy rates in these countries were focused
on the exchange rate target rather than domestic
aggregate demand or inflation.
So how can we claim that policy was efficient
if there was no room for activist policy? Surprisingly,
CFK do not comment on this outcome. It could be
that the optimal policy is indeed to tie the hands of
the authorities, but then it would be hard to argue
that they were being efficient in offsetting demand
shocks when they have no way of doing so. Could
it be instead that a pegged exchange rate regime
has some role in reducing shocks? If so, how can
we explain the improvements in policy outcomes
in those countries that had pegged rates in both the
1980s and the 1990s? An alternative interpretation
for the European Monetary Union (EMU) member
countries is that it was German monetary policy
that improved between the 1980s and 1990s and
by pegging to the Deutsche mark they imported
this policy gain. This of course requires us to alter
the analysis of each country optimizing its policy
in isolation, and it raises the further question of
how policy could be more efficient in the five countries pegged to the Deutsche mark (mentioned above)
than in Germany itself.
What is new in the paper presented to this conference (Cecchetti and Krause, 2001) is the bringing
together of the results from CFK with some measures
of institutional differences between central banks.
Three of these measures—independence, accountability, and transparency—are taken directly from
indexes constructed by Maxwell Fry et al. (2000)
for the Bank of England study on which this paper
draws. A new measure of credibility is constructed,
and, since this (and the results associated with it) is
the key innovation of the paper, I shall concentrate
on discussing this variable. It turns out that, of the
other factors, transparency is the only one that has
even marginal significance.
The credibility index is based on actual average
inflation in the period 1985:Q1 to 1989:Q4. Credibility is zero if inflation in this period exceeded 20
percent and it is unity if it was less than 2 percent.
Otherwise, credibility is assigned a number between
0 and 1 depending on where inflation sits in the
range of 2 to 20 percent.
The key results are (i) that, for those 23 countries studied in CFK, the measure of credibility is
highly correlated (negatively) with macroeconomic
performance and with policy efficiency and (ii) that,
for a larger sample of countries, credibility is the
characteristic that most contributes to lower inflation in the 1995-99 period.
The key issue is whether we think this measure
of credibility is itself credible. I do not. Why, for
example, should the credibility of the U.K. Monetary
Policy Committee (MPC) after 1997 be judged by
inflation in the United Kingdom ten years before
the MPC was established and even several years
before inflation targeting was first contemplated?
The answer surely is that it makes no sense at all.
It is no real surprise that macroeconomic performance and this measure of credibility should be
highly correlated because credibility (by this measure) and performance are both related to the level
of inflation—those countries with high inflation in
the late 1980s will still have had relatively high inflation in the 1990s. Furthermore, the fact that this
“credibility” (as measured by the inflation of the
late 1980s) appears to cause lower inflation in the
late 1990s could simply mean that that inflation is
autocorrelated—high-inflation countries in the late
1980s are still, on average, high-inflation countries
in the late 1990s. Two particular countries stand out
as being clearly misrepresented by this credibility
measure. The first is Indonesia, which is rated as
having high credibility on the basis of its relatively
low inflation in the late 1980s. But could there be
any country with lower credibility after 1997? The
other is Chile, which managed a highly credible
(and creditable) disinflation in the late 1990s yet is
accorded zero credibility on the basis of its high
inflation in the late 1980s.
Another obvious point is that virtually no country had an inflation-targeting regime in the late
1980s, and yet many did have such regimes by the
late 1990s. How can it make sense to judge the credibility of these new regimes from the outcomes in
some earlier regime?
So how should we measure credibility? I would
suggest that it has to be some measure that can be
taken within the period of operation of a regime
rather than from earlier periods. Also it cannot be
based purely upon economic outcomes because
that fails to identify the separate effects of beliefs
and actions. In an inflation-targeting regime, credibility must surely be measured by the deviations
between expectations of inflation and the stated

target. These expectations could be measured either
from expectation surveys or from inflation expectations implied by comparisons between nominal
and indexed bonds. Of course these measures are
not available for many countries. But this does not
alter the fact that using actual inflation from some
time ago doesn’t do it. Any measure based upon
inflation outcomes in backward-looking data fails
to identify the separate influences of credibility,
policy actions, shocks, and history.
Any convincing attempt to measure the impact
of credibility must also surely do more than look at
a one-shot cross section of countries. In the paper
under discussion it is just about acceptable to calculate policy efficiency in a first stage and then see if
it correlates with “credibility” later. However, in a
panel study in which credibility within individual
countries was allowed to evolve over time, it would
be important to calculate efficiency conditional
on credibility. Only this way could we potentially
answer the most interesting question relating to
successful monetary policies: To what extent was
the actual policy outcome achieved due to the interest rate changes themselves and to what extent was
it due to the credibility of the authorities? It is no
great surprise to find that in a one-off cross section
the countries that had the best macro performance
(lowest weighted combination of inflation and
output variance) also had the most efficient policy
(closest outcome to the optimum) and the most credible regimes. However, we cannot say from this work
whether credibility was a by-product of the good
policy outcome or whether credibility helped produce it.
One argument in defense of the specific measure
of credibility used by Cecchetti and Krause might
be that, because it is measured prior to the years in
which the inflation impact and policy efficiency
are estimated, then it must be credibility that causes
the outcomes and not vice versa. However, this is
not very convincing because most of the leverage
in the regressions reported (in column 1 of Table 2
of Cecchetti and Krause) is achieved by the extreme
classification (in effect a 0,1 dummy variable) of
totally credible and totally incredible countries, and
most who fit these extreme categories would continue to be in the same class in the late 1990s as
they were in the 1980s. For all of such countries we
cannot say that their better policy outcome was due
to credibility because their credibility was identical
in both decades (according to the measure used in
this study). At best we can only attribute credibility

as the cause of an improved policy outcome where
some increase in credibility has been demonstrated.
And a measurement exercise along these lines has
not been attempted; we only have an index of credibility at one point in time. Credibility surely does
matter, but more work needs to be done to answer
the question: How much?
So why am I persuaded that credibility matters
while being skeptical about the apparently strong
results achieved by Cecchetti and Krause? As I have
stressed, the doubts about the Cecchetti and Krause
results relate to the way they measure credibility.
My belief that credibility must matter comes from
a related perspective on the same issue. Is the macroeconomic performance of the 1990s superior to
that of the 1980s simply because the monetary
authorities learned how to pull the strings of the
monetary puppet show in a more timely and accurate manner than their predecessors? The order of
magnitude of interest elasticities that come out of
most macro models makes it difficult to conclude
that interest rate decisions more accurately offset
demand shocks, that is, that central bankers simply learned
to be better optimal controllers. The Bank of England
model, for example, suggests that a 100-basis-point
change in the official rate today will have a 30-basis-point effect on output growth after about one year
and a 30-basis-point effect on inflation after about
another year. It is highly implausible that the relatively modest official rate changes we have seen in
the last decade could have been sufficient to control
the aggregate economy if the demand and supply
shocks had been of similar magnitudes to those
experienced in earlier periods.
A much more likely explanation is that, at the
world level, most aggregate demand and supply
shocks are endogenous and influenced by the policy
regime. The greater monetary policy credibility
across the world (but especially in major countries)
has significantly reduced the demand and supply
“shocks” to which monetary authorities have to
react. This has meant that the macro outcome has
been improved even though the policy responses
(in terms of interest rates changes) needed to achieve
this outcome have been relatively modest.
Some might call this the “Greenspan effect.” U.S.
inflation has stayed under control even at a high
level of activity because agents have confidence
that the FOMC, and Chairman Greenspan in particular, has things under control. Surely this belief is not
based solely on the direct effects of specific policy
rate decisions and the fact that they worked in some
mechanistic way. Rather, it is based on the self-fulfilling prophecy—if enough people believe that
the Fed will successfully stabilize output and inflation, that will generate the desired outcome on its
own irrespective (almost) of what the Fed actually
does.
Some day the world will find out if there really
is a “Greenspan effect.” I hope that we will not settle this issue for many years yet. Good health, Mr.
Chairman.

REFERENCES
Cecchetti, Stephen G.; Flores-Lagunes, Alfonso and Krause,
Stefan. “Has Monetary Policy Become More Efficient? A
Cross-Country Analysis.” The Ohio State University, May
2001.
Fry, Maxwell; Julius, DeAnne; Mahadeva, Lavan; Roger,
Sandra and Sterne, Gabriel. “Key Issues in the Choice
of Monetary Policy Framework,” in Lavan Mahadeva and
Gabriel Sterne, eds., Monetary Policy Frameworks in a
Global Context. London: Routledge (Bank of England),
2000.

Market Anticipations
of Monetary Policy
Actions
William Poole, Robert H. Rasche, and
Daniel L. Thornton
The purpose of this paper is to investigate the
extent to which market participants anticipate
Federal Reserve policy actions. The topic is
central to macroeconomics. Since the early 1970s
theorists have emphasized that a complete model
of the economy requires a full specification of the
behavior of policymakers. Otherwise, there is no
way to model the expectations upon which private
agents base their decisions.
The recent trend in monetary policy has been
toward greater transparency, accountability, and
credibility. This trend is largely explained by two
ideas. First, the economics profession has accepted
the proposition that monetary policy is the fundamental determinant of inflation in the long run.1
Second, central bank credibility and clear market
expectations about monetary policy are critical to
policy success.2
The key theoretical development in this context
was the application of rational expectations to
macroeconomics and the statement of the famous
Lucas critique. Lucas (1976) argued that the economy
and policymakers are interdependent. Specifically,
the public forms expectations of the dynamic feedback rule that policymakers follow to implement
policy. This line of argument led naturally and immediately to the distinction between expected and
surprise policy actions and a number of papers
exploring their different effects on the economy.3
For example, the more transparent the central bank,
the less likely that it will be able to institute a surprise inflation to temporarily raise output growth.
Our purpose is not to add to the extensive theoretical literature, but instead to document in considerable detail the extent to which U.S. monetary
policy has become increasingly open and transparent.


William Poole is the president, Robert H. Rasche is a senior vice
president and director of research, and Daniel L. Thornton is a vice
president and economic advisor at the Federal Reserve Bank of St. Louis.
Kathy Cosgrove and Charles Hokayem provided research assistance.

© 2002, The Federal Reserve Bank of St. Louis.

The trend toward greater transparency has
been especially evident in recent years.4 In 1994,
the FOMC began the practice of announcing policy
actions immediately upon making them, and in
1995 the practice was formally adopted.5 Since
August 1997 the FOMC has included a numeric
value of the “intended federal funds rate” in each
directive. Since May 1999 a press statement has
been released at the conclusion of every meeting.
These press statements initially included a numeric
value for the “intended federal funds rate” and a
statement of the “policy bias.”
In February 2000 the FOMC replaced the “policy
bias” in the Directive that had been used since
February 1983 with a statement of the “balance of
risks.”6 In this statement the FOMC indicates its
beliefs about how the risks of heightened inflation
pressure and economic weakness are balanced over
the foreseeable future. The new language was not
intended to indicate the likely direction or timing
of future policy moves.
These moves toward greater openness and
transparency should have increased the ability of
markets to anticipate policy actions. Poole and
Rasche (2000) and Kuttner (2001) used data from
the federal funds futures market to estimate the
extent to which the market has anticipated the Fed’s
actions. While their methodologies differ slightly,
1

There is a continuing debate, however, about exactly how central
banks control the long-run inflation rate and the relative importance
of money. For further discussion, see McCallum (2001).

2

In the final analysis, credibility is earned—central banks will be known
by their actions, not by their words. The Swiss National Bank and the
Bundesbank had considerable credibility because they kept the inflation rate low. See Meyer (2001) for a discussion of the need to earn
credibility.

3

A number of arguments have been advanced for why only surprise
policy actions matter. Recently, Woodford (2001) presented arguments
against several of these propositions. Indeed, he shows that in models
with forward-looking expectations, what matters is the market’s expectation of future policy. The remaining argument against expected
policy having real effects occurs if prices adjust very rapidly to expected
policy actions. In such an environment, policymakers would be unable
to change the stock of real money and, consequently, unable to affect
any real variable. For a recent attempt to differentiate empirically
between the effects of expected and unexpected policy actions, see
Hoover and Jorda (2001).

4

In its landmark Freedom of Information Act case (Merrill vs. FOMC)
that was argued before the U. S. Supreme Court in 1976, the Fed vigorously defended the need for secrecy. See Goodfriend (1986) for a
discussion of the Merrill case and the Fed’s arguments.

5

For a detailed history of the Fed’s disclosure practice, see Rasche (2001).

6

See Thornton and Wheelock (2000) for a detailed analysis of the policy
bias statement.

both looked at the reaction of the federal funds
futures rate on days when the Fed changed the funds
rate target; in this way, they estimated the extent to
which the market was surprised by Fed actions. The
expected target change is obtained by subtracting
this estimate from the actual target change. These
measures were then used to estimate the response
of market rates to unexpected changes in policy.
Both analyses find that Treasury rates responded
significantly to unexpected target changes, but not
to expected target changes.
This paper extends this literature in several
important directions. First, because this methodology
requires that market participants know that the Fed
has changed the funds rate target, we perform the
analysis separately over the two periods: pre-1994
and post-1993. (Pre-1994 refers to the period
before the February 4, 1994, FOMC meeting; post-1993 refers to the period after that meeting.) As of
February 4, 1994, there is no doubt that the market
has been aware each time the target was changed
because each change has been announced. Before
1994 the market’s knowledge of Fed actions cannot
be taken for granted. Consequently, we undertake a
detailed analysis of what the market knew about Fed
policy actions before 1994 to determine instances
when market participants were and were not aware
that the target had changed.
Second, we show that the Poole/Rasche and
Kuttner methodology eliminates part, but not all,
of the measurement error associated with identifying unexpected changes in the funds rate target.
Failure to account for the remaining source of measurement error results in a downward bias in the
estimate of the response of the Treasury rates to
unexpected target changes. We implement an errors-in-variables estimator to correct for this bias.
Third, we attempt to identify the extent to
which market participants were surprised by the
Fed’s inaction. That is, we identify dates when the
market expected the Fed to act but no action was
taken. This is particularly relevant for the post-1993
period. Given the FOMC’s practice since 1993 of
changing the target primarily at regularly scheduled
meetings, it is reasonable to assume that there may
have been instances when the market was expecting
an action that the FOMC did not take. The absence
of action may have prompted market participants
to revise their expectation for the future funds rate.
Fourth, we investigate how far in advance the
market appeared to correctly anticipate a policy
action. The Poole/Rasche and Kuttner methodology
indicates only whether the market anticipated the
Fed’s action at the time the action was taken; it does
not provide information about how far in advance
the market expected the action. This measurement
required a detailed analysis of what the market
expected and the behavior of longer-term federal
funds futures rates.
Finally, we provide additional evidence that
the recent trend toward greater transparency has
significantly increased market participants’ ability
to anticipate Fed actions.

IDENTIFYING UNEXPECTED MONETARY
POLICY ACTIONS
One problem in estimating the response of the
economy to exogenous policy actions of the Fed has
been that it has been difficult to isolate a variable
that measures such actions. The search for a single
measure of exogenous Fed policy actions has been
hampered by the fact that the Fed has changed its
emphasis in conducting monetary policy over the
years.
The practice of changing operating procedures,
and in some instances changing policy objectives,
combined with the lack of transparency about either
the Fed’s objectives or its operating procedure makes
it very difficult to isolate one variable that reflects
Fed policy actions. It is hardly surprising that a number of variables—growth rates of monetary and
reserve aggregates, changes in the discount rate,
and short-term interest rates, particularly the overnight federal funds rate—have been used as measures of Fed policy actions.
Knowing the Fed’s policy instrument is an
important element for assessing the effect of monetary policy actions, but it is not the only element. If
markets are efficient, anticipated policy actions are
already reflected in economic variables—markets
respond only to unexpected policy actions. To identify the effect of policy actions on the economy, the
observed policy instrument must be partitioned
into its expected and unexpected components.
Failure to distinguish between expected and unexpected policy actions gives rise to a measurement
problem, which biases downward the estimated
response of economic variables to a change in the
policy instrument. To correctly assess the impact of
policy actions, then, the policy instrument must be
known, observed, and partitioned into its expected
and unexpected components.
There is little difficulty in identifying policy

actions since the late 1980s. For one thing, the Fed
has explicitly targeted the federal funds rate during
this period and there is evidence that the market
was aware that the Fed targeted the funds rate as
early as 1989. In addition, in October 1988 the
Chicago Board of Trade began trading federal funds
futures contracts. A federal funds futures contract
is a bet on the average effective federal funds rate
for the month in which the contract matures. Consequently, the federal funds futures rate reflects the
market’s expectation for the average level of the
federal funds rate for that month. In this environment, the federal funds futures rate is a nearly ideal
measure of the market’s expectation of Fed policy.
To illustrate, let fffth denote the rate on the h-month
federal funds futures contract on day t. Note that
(1)   fff_t^h ≡ (1/m) ∑_{i=1}^{m} E_t ff_i^h ,

where ffih denotes the federal funds rate on day i of
the hth month, Et denotes the expectation on day
t, and m denotes the number of days in the month.
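As a hypothetical illustration of equation (1): in a 30-day month in which the market expects the funds rate to average 5.25 percent over the first 10 days and 5.50 percent over the remaining 20 days, the futures rate for that month would be (10 × 5.25 + 20 × 5.50)/30, or about 5.42 percent.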
Now assume that the Fed is targeting the federal
funds rate and that the funds rate stays very close
to the target, i.e.,
(2)

fft = fft* + ηt ,

where fft* denotes the Fed’s target for the federal
funds rate on day t and ηt denotes a mean zero, but
not necessarily i.i.d., random variable.7 Substituting
(2) into (1) yields
(3)   fff_t^h ≡ (1/m) ∑_{i=1}^{m} E_t ff_i^{*h} .8

On day t, the change in the h-month federal
funds futures rate would be
(4)   ∆fff_t^h ≡ (1/m) ∑_{i=1}^{m} ( E_t ff_i^{*h} − E_{t−1} ff_i^{*h} ) .

Suppose that on day t there is a change in the
intended funds rate that is expected to persist for
h months or longer. If market participants correctly
anticipate both the timing and the magnitude of
the Fed’s action, the h-month-ahead federal funds
futures rate would not respond to the action, i.e.,
∆fff_t^h = 0. The change in the h-month-ahead federal
funds futures rate on days when the market knows
that the Fed has changed its funds rate target is a
measure of the unexpected change in the target, so
long as the new target is expected to persist for the
term of the futures contract. The expected target
change can be calculated by subtracting this number
from the actual target change.
Poole/Rasche and Kuttner use this procedure

Poole, Rasche, Thornton

to identify unexpected policy actions. Poole and
Rasche use the change in the 1-month federal funds
futures rate on the day the target was changed. On
the first day of the month fff_{t−1}^1 is replaced by the
futures rate on the 2-month contract for the last
day of the previous month.
In contrast, Kuttner estimates the unexpected
target change using the current month’s futures
rate contract. Specifically, Kuttner’s estimate of the
unexpected target change is
(5)   ∆ff_t^{*u} ≡ [m/(m − t)] ( fff_t^0 − fff_{t−1}^0 ) ,
where fff_t^0 is the value of the current month’s federal funds futures rate on the tth day of the month and m is the number of days in the month. On the first day of the month, fff_{t−1}^0 is replaced by the futures
rate on the 1-month contract on the last day of the
previous month. On the last three days of the month,
Kuttner uses the Poole/Rasche measure of the unexpected target change.
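The following Python sketch restates the two surprise measures just described with hypothetical inputs; the endpoint conventions for the first and last days of the month, noted above, are omitted for brevity.

# Illustrative sketch of the two surprise measures (hypothetical inputs; rates
# are decimal fractions, e.g., 0.0550 = 5.50 percent).
def poole_rasche_surprise(fff1_today, fff1_yesterday):
    """Unexpected target change: the event-day change in the 1-month futures rate."""
    return fff1_today - fff1_yesterday

def kuttner_surprise(fff0_today, fff0_yesterday, day_of_month, days_in_month):
    """Unexpected target change from the current-month contract, scaled as in equation (5)."""
    scale = days_in_month / (days_in_month - day_of_month)
    return scale * (fff0_today - fff0_yesterday)

# Example: on day 10 of a 30-day month the 1-month rate rises 6 basis points and
# the current-month rate rises 4 basis points.
print(round(poole_rasche_surprise(0.0556, 0.0550), 4))     # 0.0006 (6 basis points)
print(round(kuttner_surprise(0.0504, 0.0500, 10, 30), 4))  # 0.0006 (6 basis points)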

Knowledge of Fed Actions
These measures presume that market participants are aware of the target change. If the market
participants are unaware that the target has changed,
expectations for the funds rate would not necessarily reflect expectations for the Fed’s policy instrument. Even if market participants were aware that
the Fed had taken some policy action, evidenced,
for example, by a change in the discount rate, the
change in the federal funds futures rate would not
necessarily reflect the “unexpected change in the
funds rate target.”
Likewise, if market participants do not know
that the target has changed on a particular day,
that day’s change in the federal funds futures rate
could not measure the unexpected change in the
funds rate target. Indeed, on such days the change
in the futures rate would normally be relatively
small, which might be interpreted as the market
having expected the target change. In truth, however, market participants would be simply unaware
that the target had changed.
After 1994, knowledge of FOMC actions is not
an issue. As previously stated, at its February 4, 1994,
7

There is some well-documented persistence in deviations of the funds
rate from the funds rate target. For example, see Taylor (2001).

8

This and subsequent analyses ignore the possibility of a small premium
in the futures market, documented by Robertson and Thornton (1997),
because any such premium is so small that its existence would have
a negligible impact.


meeting, the FOMC began the practice of announcing target changes immediately.9 Knowledge of
target changes before 1994, when target changes
were not announced, is problematic. The process
of knowing when the target was changed was further
complicated by the fact that during this period most
target changes were made between, rather than at,
FOMC meetings. Furthermore, until late 1989 (when
the Fed appears to have adopted the practice of
changing the target only in multiples of 25 basis
points), target changes of various amounts smaller
than 25 basis points were common.

THE MARKET REACTION TO
UNEXPECTED TARGET CHANGES—
POST-1993
In this section we estimate the response of
market rates to unanticipated changes in the funds
rate target. We begin by analyzing the post-1993
period. The policy action on February 4, 1994, is
excluded from our analysis because this is the first
time that the FOMC announced its decision. Since
there was no information prior to the conclusion
of that meeting to indicate that such an announcement would be forthcoming, market reaction was
conditioned on less information than at subsequent
meetings.
To estimate the response of various Treasury
rates to changes in the funds rate target, Poole/
Rasche and Kuttner estimated the equation
(6)

∆it = α + β1∆ fft*e + β2 ∆ fft*u + ε t ,

where ∆it denotes the change in the selected
Treasury rate and ∆fft*e denotes the expected change
in the funds rate target, i.e., ∆fft*e=∆fft*– ∆fft*u.
Ordinary least-squares (OLS) estimates of β1
and β2 are biased because the measures of the unexpected target change suffer from measurement error.
The measurement error arises because each day
markets process information that comes in various
forms. While special attention is paid to headline
news—reports of major government statistics,
announcements of funds rate target changes, etc.—
market participants process information from a
variety of other sources that are less easily identified.
Hence, federal funds futures rates change even on
days when there is no headline news or a target
change. Such ambient news is included in the Poole/
Rasche and Kuttner measures of the unexpected
change in the funds rate target.
We adjust for the errors-in-variable bias using a
classic econometric approach. It is convenient to
rewrite (6) so there is only one variable that is measured with error:
(7)

∆ it = α + β1( ∆ fft* − ∆ fft*u ) + β2 ∆ fft*u + ε t ,

which simplifies to
(8)

∆it = α + β1∆ fft* + ( β2 − β1 ) ∆ fft*u + ε t .

Classic Errors-in-Variables Model
Errors-in-variables bias arises when one of the
variables is measured with error. To illustrate the
problem and the corresponding errors-in-variables
estimation, assume that
(9)

∆ fft*um = ∆ fft*u + ut ,

where ∆fft*um is an estimate of the true unexpected
change in the funds rate target and ut is a random
measurement error that is uncorrelated with ∆fft*u.
Substituting (9) into (8) yields
(10)

∆it = α + β1∆ fft* + ( β2 − β1 )( ∆ fft*um − ut ) + ε t
= α + β1∆ fft* + ( β2 − β1 ) ∆ fft*um + ϖ t ,
where ϖ t ≡ ε t − ( β2 − β1 )ut .

It is clear from (10) that ∆fft*um is negatively correlated with ϖt, which will bias the estimate of ( β2 – β1)
down. The classic errors-in-variables estimator
makes use of the assumptions that Eut=Eεt=0 and
Eut ε t=0. Under these assumptions, the covariance
between ∆fft*um and ϖt is –( β2 – β1 )σ u2, where
σ u2 is the variance of the measurement error.10
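A minimal sketch of one standard method-of-moments version of this correction is given below, assuming σu2 has already been estimated (as described in the next subsection) from futures-rate changes on days with no policy surprise; variable names and the organization of the data are hypothetical, and this is not presented as the authors' exact implementation.

# Illustrative errors-in-variables correction: subtract the measurement-error
# variance from the moment-matrix entry of the mismeasured regressor before
# solving the normal equations. Inputs are hypothetical NumPy arrays.
import numpy as np

def ev_corrected_coefficients(d_rate, d_target, d_surprise, sigma_u2):
    """Estimate (alpha, beta1, beta2 - beta1) in equation (10), correcting for
    measurement error in the surprise regressor."""
    n = len(d_rate)
    X = np.column_stack([np.ones(n), d_target, d_surprise])
    Sxx = X.T @ X / n
    Sxy = X.T @ np.asarray(d_rate) / n
    correction = np.zeros((3, 3))
    correction[2, 2] = sigma_u2  # only the surprise term is measured with error
    return np.linalg.solve(Sxx - correction, Sxy)

# Hypothetical usage, with sigma_u2 taken from the "no surprise" event days:
# alpha, b1, b2_minus_b1 = ev_corrected_coefficients(d_rate, d_target, d_surprise, sigma_u2)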

Identifying Ambient Variation in the
Futures Rate
The application of the classic errors-in-variables
estimation technique requires a measure of the
variance of the shock associated with the ambient
news. We accomplish this by identifying all of the
policy events since 1994. A policy event is either a
meeting of the FOMC or an intermeeting target
change. During this period all but four of the target
changes occurred at regularly scheduled FOMC
meetings. There were 62 such events from March
1994 through May 2001. We then read the front page
and the Credit Markets column of the Wall Street
Journal (WSJ ) at least two days before each of these
events to infer what the market anticipated would
happen on “event days.”
9

The FOMC formally adopted this practice as a procedure at its January-February 1995 meeting.

10

For more details, see Johnston (1963, pp. 168-70).

On meeting days when the target was not
changed, we concluded that the market anticipated
that no action would be taken if the commentary
suggested that market analysts overwhelmingly
believed that no action would be taken. We inferred
that the market anticipated no action when the WSJ
reported there was a “consensus” or “unanimity”
among market analysts.
When the funds rate target was changed, we
required market analysts to correctly anticipate
the magnitude of the target change. In many cases
the WSJ reported the results of a survey. In these
instances, we inferred that the market correctly
anticipated the FOMC’s action if more than three-fourths of the survey respondents correctly predicted
the action.
This procedure resulted in the contingency
table shown in Table 1. The dates for each of these
groups and the corresponding Poole/Rasche and
Kuttner shock measures are presented in Table 2.11
Of the 62 events since March 1994, we conclude
the market fully anticipated 44. For most of these
events the FOMC did not change the funds rate. On
only four occasions when there was no target change
did we conclude the market was surprised. The target
was changed 24 times during this period. We conclude market participants were surprised on 14 of
these occasions.
Our classification using the WSJ is generally
supported by the shock measure. There are only two
occasions when the Poole/Rasche shock measure
was larger than 5 basis points when our reading
of the WSJ indicated that the market expected the
FOMC’s action. On both of these occasions, the target
was changed. Moreover, when our reading of the
WSJ indicated that the market was surprised by the
action, the Poole/Rasche shock measure is larger
than 5 basis points on all but three occasions. Market
participants appear to have been surprised by all
four of the intermeeting target changes. Indeed,
three of the four largest shocks by either measure
occurred on these days. This suggests that, while
the market may be able to anticipate the direction
and size of the next target change, predicting the
timing of an action is difficult unless the FOMC follows a rule, such as only adjusting the funds rate
target at regularly scheduled meetings.

The Results
The variance of the observed change in the 1-month federal funds futures rate for the 44 events
in the second row of Table 1 is our estimate of σ u2,
the variance of the measurement error. OLS estimates

Table 1
Contingency Table of Anticipated and Unanticipated Events Obtained from the Wall Street Journal

                 No target change    Target change    Total
Surprise                 4                 14           18
No surprise             34                 10           44
Total                   38                 24           62

and estimates obtained using a classic errors-in-variables estimation technique (EV) are presented in
Tables 3 and 4 for the post-1993 period using the
Poole/Rasche and Kuttner shocks, respectively. Not
surprisingly, the OLS estimates suffer from errorsin-variables bias. In all cases, EV estimates of β 2 are
larger than the corresponding OLS estimates. The
response of these rates to target shocks is larger with
the Poole/Rasche measure than with the Kuttner
measure, but the differences are generally small.
Figure 1, which shows the two measures of target
shocks, reveals that there is close correspondence
between these measures.12 Hence, it is hardly surprising that these measures yield very similar results.
As a result, only the Poole/Rasche shock will be
presented in the remainder of the paper.
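To make the errors-in-variables adjustment concrete, here is a minimal sketch, assuming simulated data and hypothetical variable names of our own (an illustration of the classic correction described in Johnston, 1963, not the authors' code): the measurement-error variance is estimated from futures-rate changes on days with no policy surprise, and the OLS slope is then rescaled.

    # Sketch of the classic errors-in-variables correction. All data are simulated;
    # sigma_u^2 plays the role of the ambient-news variance estimated from the
    # fully anticipated event days.
    import numpy as np

    rng = np.random.default_rng(0)
    true_shock = rng.normal(0.0, 0.10, 200)                    # unobserved Delta ff*u
    measured_shock = true_shock + rng.normal(0.0, 0.03, 200)   # futures-based proxy
    dtb3 = 0.8 * true_shock + rng.normal(0.0, 0.05, 200)       # 3-month bill response

    # Step 1: estimate sigma_u^2 from futures changes on no-surprise days.
    quiet_changes = rng.normal(0.0, 0.03, 500)
    sigma_u2 = quiet_changes.var(ddof=1)

    # Step 2: OLS slope, then the attenuation correction
    #         beta_EV = beta_OLS * Var(x) / (Var(x) - sigma_u^2).
    var_x = measured_shock.var(ddof=1)
    beta_ols = np.cov(measured_shock, dtb3, ddof=1)[0, 1] / var_x
    beta_ev = beta_ols * var_x / (var_x - sigma_u2)
    print(f"OLS: {beta_ols:.3f}  EV-corrected: {beta_ev:.3f}  (true slope 0.8)")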

Do Markets Respond to Expected
Target Changes?
One unexpected result is the finding that the
3-month rate responds significantly to actual target
changes, suggesting that the market responds to
expected changes. The estimated coefficient on the
target change in the regression for the 3-month rate
is statistically significant at the 5 percent level. This
result is at odds with the efficient markets hypothesis
and with Poole/Rasche and Kuttner, who found
that markets did not respond to anticipated target
changes. It is also at odds with our findings (presented in the next section) for the pre-1994 period.
Kuttner (2001) reports a similar result when he
used monthly average data. Specifically, he found
that both the 3- and 6-month T-bill rates responded
significantly to his measure of the surprise target
11. As Kuttner (2001) has noted, the change on October 15, 1998, was announced at 3:15 p.m. Eastern time, after the markets closed. Consequently, for the purpose of the empirical analysis, this change is dated as October 16.

12. The simple correlation between these measures is 0.98.


Table 2
Dates and Poole/Rasche and Kuttner Shock Measures Corresponding to Table 1

Date         Poole/Rasche   Kuttner   Figure reference number

Surprise/no target change
9/27/94         –0.08        –0.20    Figure 7
12/20/94        –0.11        –0.17    Figure A1-A
9/24/96         –0.13        –0.12    Figure 6
5/20/97         –0.09        –0.11    Figure A1-B

Surprise/target change
3/22/94         –0.04        –0.03    Figure 4
4/18/94*         0.10         0.10    Figure 2
5/17/94          0.05         0.13    Figure 3
8/16/94          0.10         0.14    Figure A2-A
11/15/94         0.09         0.14    Figure A2-B
7/06/95         –0.07        –0.01    Figure A2-C
12/19/95        –0.11        –0.10    Figure A2-D
1/31/96         –0.07        –0.07    Figure A2-E
10/16/98*       –0.20        –0.26    Figure A2-F
11/17/98        –0.06        –0.06    Figure A2-G
11/16/99         0.08         0.09    Figure A2-H
1/03/01*        –0.29        –0.38    Figure A2-I
3/20/01          0.03         0.06    Figure 5
4/18/01*        –0.42        –0.43    Figure A2-J

No surprise/no target change
7/06/94         –0.02        –0.05    NA
3/28/95          0.00         0.10    NA
5/23/95          0.01         0.00    NA
8/22/95          0.02         0.00    NA
9/26/95          0.04         0.00    NA
11/15/95         0.01         0.06    NA
3/26/96          0.01        –0.03    NA
5/21/96          0.01         0.00    NA
7/03/96         –0.05        –0.05    NA
8/20/96         –0.01        –0.04    NA
11/13/96         0.01         0.00    NA
12/17/96         0.00         0.10    NA
2/05/97         –0.02        –0.03    NA
7/02/97         –0.01        –0.02    NA
8/19/97          0.01        –0.01    NA
9/30/97          0.00         0.00    NA
11/12/97        –0.02        –0.04    NA
12/16/97        –0.01        –0.01    NA
2/04/98          0.01         0.00    NA
3/31/98          0.00         0.00    NA
5/19/98         –0.02        –0.03    NA
7/01/98         –0.01        –0.01    NA
8/19/98          0.00         0.00    NA
12/22/98         0.00         0.00    NA
2/03/99         –0.01         0.00    NA
3/30/99          0.00         0.00    NA
5/18/99         –0.01        –0.02    NA
10/05/99         0.00        –0.04    NA
12/21/99         0.00         0.03    NA
6/28/00         –0.02        –0.02    NA
8/22/00          0.00         0.00    NA
10/03/00         0.00         0.00    NA
11/15/00         0.00         0.00    NA
12/19/00         0.05         0.05    NA

No surprise/target change
2/01/95          0.02         0.05    Figure A3-A
3/25/97          0.04         0.03    Figure A3-B
9/29/98          0.06         0.06    Figure A3-C
6/30/99         –0.04        –0.04    Figure 9
8/24/99          0.03         0.02    Figure 10
2/02/00         –0.04        –0.05    Figure A3-D
3/21/00         –0.01        –0.03    Figure 8
5/16/00          0.04         0.05    Figure A3-E
1/31/01          0.00         0.00    Figure A3-F
5/15/01         –0.07        –0.08    Figure A3-G

NOTE: *Indicates an intermeeting target change.


Table 3
OLS and EV Estimates of the Response of Treasury Rates to Target Surprises Using the Poole/Rasche Measure (Post-1993)

∆i_t = α + β1 ∆ff*_t + (β2 − β1) ∆ff*u_t + ε_t

                                  OLS                                                        EV
Rate     α              β1             β2             R̄²/se          α              β1             β2             R̄²/se
∆tb3     –0.112 (0.01)   0.083 (0.03)   0.757 (0.24)   0.600/0.086    –0.122 (0.01)   0.071 (0.03)   0.808 (0.28)   0.597/0.086
∆tb6     –0.035 (0.01)   0.056 (0.03)   0.586 (0.18)   0.531/0.074    –0.035 (0.01)   0.045 (0.04)   0.635 (0.20)   0.528/0.074
∆tb12    –0.035 (0.01)   0.034 (0.03)   0.502 (0.17)   0.384/0.080    –0.035 (0.01)   0.024 (0.03)   0.546 (0.19)   0.381/0.080
∆T2yr    –0.027 (0.02)   0.023 (0.04)   0.334 (0.18)   0.115/0.096    –0.027 (0.02)   0.015 (0.04)   0.364 (0.20)   0.114/0.096
∆T5yr    –0.029 (0.02)  –0.023 (0.04)   0.159 (0.22)   0.000/0.106    –0.028 (0.02)  –0.028 (0.04)   0.182 (0.23)   0.000/0.106
∆T10yr    0.025 (0.02)  –0.049 (0.04)   0.014 (0.21)   0.000/0.098    –0.024 (0.02)  –0.052 (0.04)   0.027 (0.22)   0.000/0.098
∆T30yr   –0.029 (0.01)  –0.048 (0.03)  –0.083 (0.13)   0.003/0.073    –0.029 (0.01)  –0.050 (0.04)  –0.075 (0.13)   0.003/0.073

NOTE: Estimated standard errors are in parentheses.

Table 4
OLS and EV Estimates of the Response of Treasury Rates to Target Surprises Using the Kuttner Measure (Post-1993)

∆i_t = α + β1 ∆ff*_t + (β2 − β1) ∆ff*u_t + ε_t

                                  OLS                                                        EV
Rate     α              β1             β2             R̄²/se          α              β1             β2             R̄²/se
∆tb3     –0.017 (0.01)   0.077 (0.03)   0.662 (0.20)   0.607/0.085    –0.018 (0.01)   0.065 (0.04)   0.706 (0.23)   0.604/0.085
∆tb6     –0.040 (0.01)   0.059 (0.04)   0.489 (0.15)   0.506/0.076    –0.040 (0.01)   0.048 (0.04)   0.528 (0.17)   0.502/0.076
∆tb12    –0.041 (0.01)   0.044 (0.03)   0.392 (0.16)   0.332/0.083    –0.041 (0.02)   0.034 (0.04)   0.426 (0.17)   0.329/0.084
∆T2yr    –0.034 (0.02)   0.046 (0.04)   0.204 (0.17)   0.053/0.100    –0.034 (0.02)   0.041 (0.05)   0.224 (0.19)   0.052/0.100
∆T5yr    –0.035 (0.02)   0.008 (0.04)   0.027 (0.21)   0.000/0.108    –0.035 (0.02)   0.005 (0.04)   0.038 (0.23)   0.000/0.108
∆T10yr   –0.029 (0.02)  –0.021 (0.04)  –0.087 (0.20)   0.000/0.097    –0.029 (0.02)  –0.022 (0.04)  –0.083 (0.21)   0.000/0.097
∆T30yr   –0.032 (0.01)  –0.028 (0.03)  –0.142 (0.11)   0.035/0.072    –0.031 (0.01)  –0.029 (0.03)  –0.141 (0.12)   0.035/0.072

NOTE: Estimated standard errors are in parentheses.

change when monthly data were used. He interprets
this result as being consistent with the expectations
theory of the term structure, suggesting that “the
anticipated rate changes are associated with expectations of further actions in subsequent months.”13
While market participants may revise their expectation of future rate changes in response to an unanticipated target change, we do not believe that
they would do so in response to an expected target
change. Consequently, we suspect there is another
explanation for this result.

One possible explanation comes from noting
that before 1994 there were relatively few occasions
when the funds rate target and the discount rate
were changed simultaneously. After 1994 things are
very different. Of the 24 target changes considered
in the post-1993 period, 16 were accompanied by
a change in the discount rate. Thornton (1996)
found that the 3-month T-bill rate responded differently to target changes when the discount rate was
13. Kuttner (2001, p. 541).


[Figure 1. Poole/Rasche and Kuttner Measures of Unexpected Funds Rate Target Changes (Post-1993). Scatter plot of the Kuttner measure (vertical axis) against the Poole/Rasche measure (horizontal axis), both ranging from –0.5 to 0.5. NOTE: Marked points denote unexpected target changes associated with intermeeting changes in the funds rate target.]

changed. Discount rate changes appear to have an
independent effect on market rates. Hence, it is
possible that the significant response to expected
target changes (reported in Tables 3 and 4) is due to
the fact that, on some occasions, the Fed provided
additional information by simultaneously changing
the discount rate.
To investigate this possibility, the equations were
reestimated with target changes partitioned into
those when the discount rate was changed and
those when it was not. Specifically, the equation
(11)    ∆i_t = α + β1 ∆ff*_t|∆dr + β1′ ∆ff*_t|no ∆dr + β3 ∆ff*u_t + ε_t
was estimated.
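A minimal sketch of how equation (11) can be estimated, assuming simulated data and our own variable names, and ignoring the EV correction for brevity:

    # Equation (11): partition target changes by whether the discount rate also moved.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 62
    target_change = rng.choice([-0.25, 0.0, 0.25], size=n)    # Delta ff*
    with_dr = rng.random(n) < 0.6                              # discount rate changed too?
    surprise = rng.normal(0.0, 0.08, n)                        # Delta ff*u
    d_rate = 0.1 * target_change * with_dr + 0.7 * surprise + rng.normal(0.0, 0.05, n)

    X = np.column_stack([np.ones(n),
                         target_change * with_dr,              # Delta ff* | dr change
                         target_change * (~with_dr),           # Delta ff* | no dr change
                         surprise])
    coef, *_ = np.linalg.lstsq(X, d_rate, rcond=None)
    for name, b in zip(["alpha", "beta1 (with dr change)",
                        "beta1' (no dr change)", "beta3 (surprise)"], coef):
        print(f"{name:>24}: {b: .3f}")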
EV estimates of equation (11) are reported in
Table 5. They suggest that changes in the funds
rate target that are accompanied by changes in the
discount rate provide additional (unanticipated)
information. In the absence of such additional
information, the market does not respond significantly to expected target changes. The market only
responds to “expected” target changes when new
information is simultaneously provided. In this case,

the new information comes in the form of a discount
rate change.

THE MARKET REACTION TO
UNEXPECTED TARGET CHANGES—
PRE-1994
To apply the Poole/Rasche and Kuttner methodology to target changes before 1994, we must first
identify whether the market realized on the day of
the event that a target change had occurred. To determine the market’s knowledge of a target change, we
read the front page and the Credit Markets column
from the WSJ for at least two days after each change
in the funds rate target. This procedure is complicated by the fact that there is some difference of opinion about when the funds rate target was changed.
We started with a widely used series of target changes
reported by the Federal Reserve Bank of New York.
Recently, however, Thornton and Wheelock (2000)
presented an alternative series prepared by the staff
of the FOMC Secretariat’s office. Before 1989 these
series sometimes differ in the dating and magnitude
of Fed actions. The dates considered are the union


Table 5
EV Estimates with Target Changes Partitioned into Those That Were and Were Not Accompanied by a Change in the Discount Rate

∆i_t = α + β1 ∆ff*_t|∆dr + β1′ ∆ff*_t|no ∆dr + β3 ∆ff*u_t + ε_t

Rate      α              β1              β1′             β3             R̄²/se
∆tb3      –0.122 (0.11)   0.071 (0.03)    0.075 (0.06)    0.738 (0.30)   0.577/0.088
∆tb6      –0.035 (0.01)   0.033 (0.04)    0.098 (0.08)    0.605 (0.23)   0.509/0.076
∆tb12     –0.036 (0.01)   0.005 (0.05)    0.112 (0.08)    0.546 (0.21)   0.370/0.081
∆T2yr     –0.029 (0.02)  –0.016 (0.05)    0.161 (0.13)    0.388 (0.22)   0.127/0.096
∆T5yr     –0.030 (0.02)  –0.064 (0.05)    0.142 (0.14)    0.256 (0.25)   0.000/0.105
∆T10yr    –0.026 (0.02)  –0.083 (0.05)    0.093 (0.11)    0.118 (0.24)   0.000/0.097
∆T30yr    –0.030 (0.01)  –0.072 (0.04)    0.055 (0.10)    0.002 (0.10)   0.014/0.107

NOTE: Estimated standard errors are in parentheses.

of the two data sets. As a further check on the dating
of the target change, we consulted the Report of Open
Market Operations and Money Market Conditions
(hereafter ROMO), which is prepared biweekly by the
Manager of the Trading Desk of the Federal Reserve
Bank of New York (the Desk). A detailed analysis of
these differences led us to use the Secretariat’s date
of July 6, 1989, rather than the New York Fed’s date
of July 7. The boxed insert provides a discussion of
the more interesting dating conflicts, including the
July 1989 conflict.
The market began to focus more attention on
interest rates, including the federal funds rate, in
1987. Earlier in the decade of the 1980s, much of
the discussion of policy was in terms of the effect
of policy actions on the rate of money growth. Aware
that Fed actions to increase or decrease reserve pressure influenced the federal funds rate, the market
increasingly gauged policy by movements in the
funds rate. However, market analysts frequently
were unable to determine whether changes in the
funds rate signaled a monetary policy action. In
the early part of 1988, it appears that the market
became more aware that the Fed was relying heavily
on the funds rate to implement policy and market
analysts began to surmise the Fed’s intentions for
the funds rate by observing Desk operations relative
to the behavior of the funds rate.
Table 6 reports the amounts and dates of all
funds rate target changes and the new effective target
level reported by the Federal Reserve Bank of New
York between August 1987 and December 1993.14
If there is a difference between the New York series

and the Secretariat’s series, the Secretariat’s dating
of the action is also indicated. The table also indicates
whether the discount rate was changed. Under
Chairman Greenspan the funds rate target was
changed whenever the discount rate was changed.
This was not the case previously; more often than
not the discount rate and the funds rate target were
changed on different days.
Despite the increased awareness that the Fed
was paying attention to the funds rate in conducting
monetary policy, there is little indication that the
market was aware that the Fed was setting an explicit
objective for the federal funds rate before 1989. We
believe that the first time in the 1980s that market
participants knew that policy action occurred was
May 9, 1988, when the Desk injected fewer reserves
than analysts expected. This action sparked speculation that the Fed was increasing its fight against
inflation, and market analysts concluded that the
action would cause the funds rate to trade at 7 percent or slightly higher.15
14. As Kuttner (2001) has noted, the target change that occurred at the December 1990 FOMC meeting was effectively revealed to the market with the announcement of a 50-basis-point cut of the discount rate on December 18. The announcement was made at 3:30 p.m., however, after the markets had closed. Consequently, this change is dated as December 19. It should also be noted that there are two dates in Table 6 that differ from those reported in Thornton and Wheelock (2000, Appendix B). The first is October 16, 1989; Thornton and Wheelock originally used October 18. The second is January 9, 1991, originally dated January 8.

15. This is also one of the dates where there is a discrepancy on exactly when the change was implemented. The Secretariat's series suggests that the change took place on May 7. There is no indication that the market was aware of an action on that date, however.


CONFLICT IN THE DATING OF TARGET CHANGES

There are a few cases, deserving of special attention, where there is conflict in the dating of the change in the federal funds rate target as reported by the New York Fed compared with the dating provided by the staff of the Secretariat. The first occurred in January 1989. The New York Fed suggests the funds rate target was increased by 25 basis points on January 5, 1989. The staff of the Secretariat is less precise, putting the change early in January. From our reading of the WSJ, it is apparent that the market was aware that the Fed changed policy before January 5; however, the precise date cannot be determined. On January 5 the WSJ merely indicates that analysts thought that the Fed had tightened credit earlier in the week. The Report of Open Market Operations and Money Market Conditions (hereafter, ROMO), however, clearly indicates that "on the second Thursday—January 5—the borrowing allowance was increased to $600 million, in line with the Committee's decision at the December meeting." Hence, while the market thought that the Desk took actions consistent with changing policy in the first few days of January, the Desk indicates that the action was not taken until January 5.

The second case occurred in July 1989. The New York Fed dates the change on July 7, the Secretariat on July 6. Market analysts agree with the Secretariat staff's dating of the action and thought the Fed moved on the 6th, when the funds rate traded significantly below its previous trading range of 9.5 percent and the Desk made no attempt to offset the rate move. The ROMO indicates that "after the Committee's July 5-6 meeting, the borrowing allowance was set at $600 million. This adjustment represented a slight intended easing of pressures on reserve positions, while also recognizing the recent rise in seasonal borrowing. (In the FOMC's discussion, 'unchanged' conditions of reserve availability were associated with a borrowing level of $650 million; at any event, the Desk continued to view the borrowing allowance with some flexibility.)" The July 5-6 FOMC meeting adjourned at 11:50 a.m. Eastern time on July 6. Hence, it is very unlikely the Desk implemented the FOMC's decision on the 6th. Because of this, the New York Fed dates the change on July 7, but because the decision was made on July 6, the staff of the Secretariat dates the change on the 6th. Nevertheless, the market interpreted the Desk's failure to act on July 6, when the funds rate traded significantly below the previous trading level, as a policy action. While the decision is somewhat arbitrary, we have decided to use the Secretariat's dating of this target change.

The third case occurred in October 1989. The New York Fed dates the change on October 16 and the staff of the Secretariat dates it on October 19. Market analysts thought that the Fed had taken a policy action on October 16, when the Desk did not attempt to offset a significant decline in the federal funds rate. Indeed, fff¹ declined by 16 basis points on October 16, suggesting that a very significant revision in the market's expectation for the federal funds rate occurred on that day. Market analysts also thought that the Fed took an action on the 19th, when the Desk added reserves despite the fact that the funds rate had drifted below the previous trading level.

The ROMO points to the source of the confusion. The ROMO for the maintenance period ending October 19, 1989, indicates that

    The financial markets were jittery after the second weekend, in the wake of the 190-point plunge in the Dow Jones Industrial Average in late afternoon trading on October 13. News reports over the weekend had cited a "Fed official" as saying that the System would assure the provision of adequate liquidity. As a result, market participants widely expected a reserve injection on Monday and these anticipations appeared to exert additional downward pressure on the funds rate. The Desk responded to the unsettled conditions in financial markets by executing customer-related repurchase agreements on the second Monday and Tuesday [October 16 & 17]. A final round of customer RPs was arranged on the settlement date [October 18], against the background of a bit firmer Federal funds rate that morning—8 3/4 percent—which appeared to stem partly from market uncertainties in the wake of Tuesday night's earthquake in San Francisco. Also, a background factor by this point was the decision discussed at Wednesday's FOMC conference call to begin implementing a slightly more accommodative reserve posture in light of recently incoming economic information: it was now expected that Fed funds trading would tend to center around 8 3/4 percent.

The New York Fed and the staff of the Secretariat are obviously disputing the dating of the same policy action that could not have occurred on the same day. The discussion in the ROMO gives rather weak support to the Secretariat's dating, but the Desk's action of injecting reserves on October 16 when the funds rate was declining suggests that the Desk was pursuing a lower funds rate on Monday. Consequently, we use the New York Fed's dating of this action.

The market was not consistently aware of target changes at the time they happened until late 1989. This is about the time that the Fed began the practice of making target changes in multiples of 25 basis points. After late 1989 market analysts appear to have become adroit at identifying target changes when they occurred. In most cases analysts determined that the target had changed based on signals from the Desk. In many cases, however, the precise nature of the signal was not specified.

Did Market Analysts Anticipate Fed Actions?

In order to make the EV adjustment, we must again identify days on which the market was affected only by ambient news. Hence, the relevant question is, did market analysts anticipate Fed actions? The answer is yes and no. There were many occasions when actions to increase or decrease pressure in the reserve market came as no surprise. Information on the state of the economy, inflation, or movements in the short-term interest rate fueled speculation that the Fed would soon change the discount rate or take other actions to alter the availability of credit. In this sense, there appear to be relatively few cases where the market was completely surprised by an action.

On the other hand, the precise dating of the Fed action nearly always surprised the market. Unlike the post-1993 period, we could find few instances where there was a widespread expectation that the Fed would take an action on a particular day. Moreover, we found no instance where there was a widespread expectation that the Fed would take an action on the day the funds rate target was actually changed. Hence, in this respect, all target changes before 1994 were unexpected. Because the market frequently saw the need for an action, not all "unexpected" target changes resulted in large adjustments to federal funds futures rates.

We were unable to identify any occasion when the market correctly anticipated the Fed's action on a particular day, other than at scheduled FOMC meetings. Consequently, we determined the variance of the ambient news, σ_u², by using days when there was no headline announcement, no FOMC meeting, and no change in either the funds rate target or the discount rate.

The OLS and EV estimates for the pre-1994 period are presented in Table 7. The response of Treasury rates for the pre-1994 period is somewhat larger than for the post-1993 period, especially at the longer end of the term structure. Moreover, the R̄²s indicate that a much larger proportion of the variance in Treasury rates on days when the market knew that the Fed changed the funds rate target is explained by unexpected target changes. This is particularly true at the very long end of the term structure where all rates respond significantly to unexpected target changes. Furthermore, as the efficient market hypothesis suggests, none of the rates responds significantly to anticipated target changes.

INTERPRETING THE RESPONSE OF
TREASURY RATES
Interpreting the response of Treasury rates to
unexpected changes in the funds rate target requires
an economic structure. While the simple expectations hypothesis (EH) of the term structure of interest
rates is nearly always rejected, longer-term instruments are clearly forward looking.16 Consequently,
16. For evidence of the EH when the short-term rate is the effective federal funds rate, see Hardouvelis (1988), Simon (1990), Roberds, Runkle, and Whiteman (1996), and Thornton (2002).


Table 6
Knowledge of Fed Actions Obtained from Reading the Credit Markets Column of the Wall Street Journal

Date          ff*       ∆ff*      Secretariat   Poole/Rasche shock   Knowledge
8/27/87       6.7500    0.1250                         NA               No
9/03/87       6.8750    0.1250                         NA               No
9/04/87†      7.2500    0.3750                         NA               No
                        0.1250    9/22/87              NA               No
9/24/87       7.3125    0.0625                         NA               No
10/22/87      7.1250   –0.1875                         NA               No
                       –0.3750    10/23/87             NA               No
10/28/87      7.0000   –0.1250                         NA               No
11/04/87      6.8125   –0.1875                         NA               No
1/28/88       6.6250   –0.1875                         NA               No
                       –0.1250    2/10/88              NA               No
2/11/88       6.5000   –0.1250                         NA               No
                        0.2500    3/29/88              NA               No
3/30/88       6.7500    0.2500                         NA               No
                        0.2500    5/07/88              NA               No
5/09/88       7.0000    0.2500                         NA               Yes
5/25/88       7.2500    0.2500                         NA               No
6/22/88       7.5000    0.2500                         NA               No
7/19/88       7.6875    0.1875                         NA               No
8/08/88       7.7500    0.0625                         NA               No
8/09/88†      8.1250    0.3750                         NA               Yes
10/20/88      8.2500    0.1250                         0.00             No
11/17/88      8.3125    0.0625                         0.07             No
11/22/88      8.3750    0.0625                         0.07             No
                        0.4000    12/14/88             0.02             No
12/15/88      8.6875    0.3125                         0.05             Yes
12/29/88      8.7500    0.0625                        –0.06             No
                        0.3125    Early 1/89           NA               Yes
1/05/89       9.0000    0.2500                         0.00             No
2/09/89       9.0625    0.0625                         0.01             No
2/14/89       9.3125    0.2500                         0.04             Yes
2/23/89       9.5625    0.2500                         0.14             Yes
2/24/89†      9.7500    0.1875                         0.14             Yes
5/04/89       9.8125    0.0625                         0.02             No
6/06/89       9.5625   –0.2500                         0.01             Yes
                       –0.2500    7/06/89              0.03             Yes
7/07/89       9.3125   –0.2500                        –0.05             No
7/27/89       9.0625   –0.2500                        –0.06             No
8/10/89       9.0000   –0.0625                         0.02             No
10/16/89      8.7500   –0.2500                        –0.16             Yes
                                  10/19/89             0.00             Yes
11/06/89      8.5000   –0.2500                         0.03             No
                       –0.2500    12/19/89             0.00‡            No
12/20/89      8.2500   –0.2500                        –0.17‡            Yes
7/13/90       8.0000   –0.2500                        –0.09             Yes
10/29/90      7.7500   –0.2500                        –0.02             Yes
11/14/90      7.5000   –0.2500                         0.02             No
12/07/90      7.2500   –0.2500                        –0.14             Yes
12/19/90†     7.0000   –0.2500                        –0.16             Yes
1/08/91       6.7500   –0.2500                        –0.10             Yes
2/01/91†      6.2500   –0.5000                        –0.20             Yes
3/08/91       6.0000   –0.2500                        –0.13             Yes
4/30/91†      5.7500   –0.2500                        –0.17             Yes
8/06/91       5.5000   –0.2500                        –0.09             Yes
9/13/91†      5.2500   –0.2500                        –0.04             Yes
10/31/91      5.0000   –0.2500                        –0.05             No
11/06/91†     4.7500   –0.2500                        –0.12             Yes
12/06/91      4.5000   –0.2500                        –0.11             Yes
12/20/91†     4.0000   –0.5000                        –0.26             Yes
4/09/92       3.7500   –0.2500                        –0.21             Yes
7/02/92†      3.2500   –0.5000                        –0.32             Yes
9/04/92       3.0000   –0.2500                        –0.20             Yes

NOTE: †Indicates the target change was accompanied by a change in the discount rate.
‡The Poole/Rasche measure is unavailable on these days, so the Kuttner measure is reported.
Rows with a date in the Secretariat column report the Secretariat staff's alternative dating (and, where different, magnitude) of the adjacent New York Fed action.

it is reasonable to assume that the long-term rate is determined, at least in part, by the market's expectation of the funds rate target. The simple EH hypothesizes that the long-term rate is equal to the market's expectation for the overnight federal funds rate over the holding period of the long-term rate plus a constant risk premium, π, i.e.,

(12)    i_t^n = (1/n) Σ_{i=0}^{n−1} E_t ff_{t+i} + π^n,

where i_t^n denotes the n-day maturity Treasury rate on day t and π^n denotes a maturity-specific constant risk premium. It is perhaps more reasonable to assume that there is a time-varying component to the risk premium, so that the EH can be more generally written as

(13)    i_t^n = (1/n) Σ_{i=0}^{n−1} E_t ff_{t+i} + π^n + ω_t + ν_t^n,

where ω_t denotes the unobserved time-varying component of the risk premium and ν_t^n denotes a random idiosyncratic shock to the n-day maturity Treasury rate.

Substituting (2) into (13) and taking the first difference yields

(14)    ∆i_t^n = (1/n) Σ_{i=0}^{n−1} [E_t ff*_{t+i} − E_{t−1} ff*_{t+i−1}] + ∆ω_t + ∆ν_t^n.

Table 7
OLS and EV Estimates of the Response of Treasury Rates to Target Surprises Using the Poole/Rasche Measure (Pre-1994)

∆i_t = α + β1 ∆ff*_t + (β2 − β1) ∆ff*u_t + ε_t

                                  OLS                                                        EV
Rate     α              β1             β2             R̄²/se          α              β1             β2             R̄²/se
∆tb3     –0.017 (0.01)   0.067 (0.06)   0.774 (0.09)   0.844/0.042    –0.017 (0.01)   0.027 (0.07)   0.823 (0.10)   0.840/0.042
∆tb6     –0.023 (0.02)  –0.012 (0.11)   0.840 (0.09)   0.816/0.047    –0.023 (0.01)  –0.059 (0.12)   0.899 (0.11)   0.811/0.048
∆tb12    –0.006 (0.01)   0.014 (0.05)   0.860 (0.09)   0.861/0.042    –0.006 (0.01)  –0.032 (0.06)   0.918 (0.10)   0.856/0.042
∆T2yr     0.002 (0.01)   0.040 (0.08)   0.715 (0.12)   0.545/0.078     0.003 (0.01)   0.003 (0.09)   0.761 (0.14)   0.543/0.078
∆T5yr     0.002 (0.01)   0.021 (0.08)   0.534 (0.13)   0.413/0.073     0.002 (0.01)  –0.007 (0.09)   0.569 (0.14)   0.411/0.074
∆T10yr    0.010 (0.01)   0.008 (0.07)   0.399 (0.10)   0.304/0.067     0.011 (0.01)  –0.014 (0.07)   0.426 (0.11)   0.302/0.067
∆T30yr    0.008 (0.01)   0.062 (0.07)   0.264 (0.09)   0.187/0.072     0.008 (0.01)   0.051 (0.07)   0.277 (0.10)   0.186/0.065

NOTE: Estimated standard errors are in parentheses.

To see how our results can be interpreted, we impose the restriction that the market only responds to unexpected target changes, so that equation (6) can be rewritten as

(15)    ∆i_t^n = α + β2 ∆ff*u_t + ε_t.
Given these assumptions, the OLS estimator of β2 is equal to

(16)    β̂2 = { (1/T) Σ_{t=1}^{T} [ (1/n) Σ_{i=0}^{n−1} (E_t ff*_{t+i} − E_{t−1} ff*_{t+i−1}) + ∆ω_t + ∆ν_t ] [∆ff*u_t] } / { (1/T) Σ_{t=1}^{T} [∆ff*u_t]² }.

The problem is that ∆ff*u_t is unobservable. To see the potential problems associated with using the federal funds futures rate, assume that the h-month-ahead federal funds futures rate is equal to the market's expectation for the average effective federal funds rate h months into the future, adjusted for term premiums and idiosyncratic shocks, i.e.,

(17)    fff_t^h = (1/m) E_t Σ_{k=1}^{m} ff_k^h + ϕ^h + θ_t^h + η_t^h,

where ϕ^h + θ_t^h denotes the potential constant and time-varying components of a term premium for the h-month-ahead federal funds futures rate, ff_k^h denotes the effective federal funds rate on day k of month h in the future (m being the number of days in that month), and η_t^h denotes an idiosyncratic shock to the h-month federal funds futures rate. Taking the first difference of (17) yields
(18)    ∆fff_t^h = (1/m) Σ_{k=1}^{m} [E_t ff_k^h − E_{t−1} ff_k^h] + ∆θ_t^h + ∆η_t^h.
Assuming that the target change (γ) is expected to be constant over the next month and that E_t ff_k^1 = E_t ff_k^*, the Poole/Rasche measure of the unexpected target change is ∆fff_t^1 = γ + ∆θ_t^1 + ∆η_t^1, so that γ = ∆fff_t^1 − ∆θ_t^1 − ∆η_t^1. Substituting this expression into (16) yields

(19)    β̂2 = { (1/T) Σ_{t=1}^{T} [ (1/n) Σ_{i=0}^{n−1} (E_t ff*_{t+i} − E_{t−1} ff*_{t+i−1}) + ∆ω_t + ∆ν_t ] [γ − ∆θ_t − ∆η_t] } / { (1/T) Σ_{t=1}^{T} [γ − ∆θ_t − ∆η_t]² }.17

Assume that (i) the idiosyncratic shocks are independent of each other and of the time-varying term premiums, (ii) ρ̂ is an estimate of the coefficient of correlation between the changes in the time-varying components of the term premiums, and (iii) s²_∆ω and s²_∆θ are estimates of the variances of the time-varying components for the Treasury and federal funds futures rates, respectively. If participants in the Treasury market revise their expectation for the funds rate target permanently, i.e., E_t ff*_{t+i} − E_{t−1} ff*_{t+i−1} = γ for all i, (19) can be rewritten as
17. The maturity superscripts have been dropped for notational convenience.

(20)    β̂2 = [ (1/T) Σ_{t=1}^{T} (γ + ∆ω_t + ∆ν_t)(γ − ∆θ_t − ∆η_t) ] / [ (1/T) Σ_{t=1}^{T} (γ − ∆θ_t − ∆η_t)² ]
             = (Tγ² − ρ̂ s_∆ω s_∆θ) / (Tγ² + s²_∆θ + s²_∆η).

If there are neither time-varying risk premiums nor idiosyncratic shocks to the federal funds futures rate, β̂2 = 1. To the extent that we have corrected for the bias due to common shocks, the estimate of β̂2 should be close to 1 if market participants permanently revise their expectation for the funds rate target point-for-point with the unexpected target change and if the idiosyncratic variation in the 1-month futures rate is relatively small.
Estimates of β̂2 will be less than 1 if the market believes that the change in the target will last for a period that is shorter than the maturity of the instrument. Note that the estimate of β̂2 could also be greater than 1. This could occur if market participants believe that the unexpected target change will lead to further changes in the same direction.18 If the market correctly anticipates the magnitude of the Fed's action but misses the timing, the size of the response will depend on the extent to which the market missed the timing—the larger the miss, the larger the response.
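The following stylized calculation (our own, with hypothetical horizons) illustrates the point: under the expectations hypothesis, a 25-basis-point unexpected target change that is expected to persist for k days moves an n-day rate by roughly (k/n) times 25 basis points, so β2 is near 1 only when the revision is expected to outlast the instrument's maturity.

    # Stylized expectations-hypothesis arithmetic for the size of the response.
    def eh_response(gamma_bp, n_days, persist_days):
        """Response of an n-day rate to a gamma_bp target surprise expected to
        last persist_days (capped at the instrument's maturity)."""
        return gamma_bp * min(persist_days, n_days) / n_days

    gamma = 25.0  # basis points
    for n_days, label in [(91, "3-month bill"), (365, "1-year bill"), (3650, "10-year note")]:
        print(f"{label:>13}: permanent {eh_response(gamma, n_days, 10**9):5.1f} bp, "
              f"one-year {eh_response(gamma, n_days, 365):5.1f} bp")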
The estimates of β2 in Tables 3 and 7 for the
post-1993 and pre-1994 periods suggest that
Treasury rates respond significantly to unexpected
changes in the Fed’s funds rate target. For both the
3- and 6-month T-bill rates the estimated coefficient
is not significantly different from 1, suggesting that
the market revises its expectation for the funds rate
target several months into the future point-for-point
with the unexpected change in the target. During
the pre-1994 period, the estimated coefficients on
the 12-month and 2-year rates are also not significantly different from 1, suggesting that the market
revised its expectation for the funds rate target over
a longer horizon before 1994. In most of these
instances, however, the point estimates are quite
different from 1. It is impossible to say whether this
is due to missing the timing of the Fed’s action or
to the relative importance of idiosyncratic variation
in the futures rate.
For both periods, the response of the Treasury
rate to unexpected target changes declines as the
term lengthens. For the post-1993 period, the
response is not significantly different from zero for

Poole, Rasche, Thornton

maturities beyond 12 months. Indeed, for the 10- and 30-year rates the point estimates are essentially zero. In contrast, for the pre-1994 period the
response is statistically significant for all maturities.
One possible interpretation for the general
result that the response declines as the maturity
lengthens is that the market believes that the funds
rate will stay at its new level for a relatively short
period of time. For the pre-1994 period, the response
is nearly the same for maturities up to 12 months
and then declines. Kuttner (2001) and Cook and
Hahn (1989) interpret this result to “mean reversion”
of the federal funds rate. Specifically, they suggest
that beyond one year, the market expects the funds
rate to revert to its mean level. The cycles in the
nominal federal funds rate are very long, however.
It seems unlikely that the market would anticipate
that the funds rate would start to return to its mean
level in just over a year. Moreover, for the post-1993
period, the estimated coefficients begin to decline
after three months. For this explanation to account
for the post-1993 results, the market would have to
anticipate mean reversion after three months—an
incredibly short period.

CASE STUDIES
A potential problem in interpreting the estimate
of β2 arises from the fact that all interest rates are
affected by publicly available information. Case
studies can shed light on this and several other issues.
To illustrate the potential problem, note that (15) is actually

(21)    ∆i_t^n = α + β [∆fff_t^1 | ∆ff*u_t ≠ 0] + ε_t.

Now assume that equation (21) is mistakenly estimated using days when there were no unexpected changes in the funds rate target, i.e.,

(22)    ∆i_t^n = α + β [∆fff_t^1 | ∆ff*u_t = 0] + ε_t.

Substituting (14) and (18) into (22), it is easy to show that the OLS estimate of β is equal to

(23)    β̂ = ρ̂ s_∆ω s_∆θ / (s²_∆θ + s²_∆η).

Estimates of β will be zero if and only if ρ̂ = 0. If the term premiums are positively correlated, the estimate of β will be larger the larger is ρ̂ and the smaller is the idiosyncratic variance in the federal funds futures rate. If the magnitude of ρ̂ declines as the term to maturity on Treasury rates lengthens, so too will estimates of β.

18. It cannot be exactly zero because the term of the bill rate shortens from one day to the next.
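A small simulation (entirely our own, with made-up variances) illustrates equations (22) and (23): on days with no target surprise, regressing Treasury-rate changes on futures-rate changes still yields a positive slope whenever the two term premiums move together.

    # Equation (23) on non-event days: the slope reflects only correlated premiums.
    import numpy as np

    rng = np.random.default_rng(2)
    T = 5000
    rho, s_dw, s_dth, s_deta = 0.5, 0.04, 0.03, 0.02
    cov = [[s_dw**2, rho * s_dw * s_dth], [rho * s_dw * s_dth, s_dth**2]]
    d_omega, d_theta = rng.multivariate_normal([0.0, 0.0], cov, size=T).T
    d_treasury = d_omega + rng.normal(0.0, 0.03, T)   # Delta i, no target surprise
    d_futures = d_theta + rng.normal(0.0, s_deta, T)  # Delta fff(1), no target surprise

    beta_hat = np.cov(d_futures, d_treasury, ddof=1)[0, 1] / d_futures.var(ddof=1)
    beta_23 = rho * s_dw * s_dth / (s_dth**2 + s_deta**2)
    print(f"regression slope {beta_hat:.3f} vs. equation (23) value {beta_23:.3f}")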


[Figure 2. Funds Futures for April 1994 FOMC Event: daily rates on the April, May, and June 1994 federal funds futures contracts, the funds rate target, and the April average target, February-May 1994.]

[Figure 3. Funds Futures for May 1994 FOMC Event: daily rates on the May, June, and July 1994 federal funds futures contracts, the funds rate target, and the May average target, March-May 1994.]

Identifying times where there were unexpected
changes in the funds rate target is critical for interpreting the results, because of the potential for correlation between changes in Treasury rates and
changes in the federal funds futures rate even when
there are no changes in the funds rate target or
expectations thereof. We have been careful; nevertheless, it is important to check the robustness of
our interpretation of the results. As a check on our
interpretation, we undertook a case-by-case investigation of the response of federal funds futures
rates to each unexpected target change noted in
Table 2. In each instance, we examined the rates on
federal funds futures contracts for the month of
the event and for the months leading up to and just
after the surprise events identified in Table 2.
Before discussing the findings in general, it is
useful to get an idea of the methodology with two
illustrative examples (a detailed analysis of each
of the surprise events is presented in the appendix).
The first example is for the intermeeting target
change that occurred on April 18, 1994. The commentary indicated that the market anticipated that
the Fed would raise the funds rate, but the timing
of the April move was unexpected. For the period
leading up to and just after the April 18, 1994,
increase in the funds rate target, Figure 2 shows
the rates on the April, May, and June federal funds
futures contracts, the funds rate target, and the

average funds rate target for April. The average target
is the weighted average of the target of 3.5 percent
for 18 days and 3.75 percent for 12 days.
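Written out (our arithmetic, using the day counts just cited), the weighted average is

    (18 × 3.50 percent + 12 × 3.75 percent) / 30 = 3.60 percent,

which is the "Apr Average Target" level plotted in Figure 2.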
During March (at least as early as the release
of the report on the employment situation for
February 1994 on March 4), the prevailing expectation was as follows: there was a high probability of
a 50-basis-point increase before the beginning of
May, with an even higher intended funds rate on
average during June. The increase in the funds rate
target that occurred in March was expected and
there was no revision of the market’s expectation
for the future funds rate target.
The situation after the April intermeeting move
is very different. Figure 3 shows that there was a
significant revision in the market’s expectation for
the funds rate in May and June immediately upon
the Fed’s April action. For most of the period subsequent to the intermeeting change in the intended
funds rate in April, market participants assigned a
high probability to an additional increase of 50 basis
points at the May FOMC meeting. Market participants
had come to expect that a 50-basis-point increase
over the target established in April would prevail
during May and were assigning a high probability
of an additional 25-basis-point increase at the June
FOMC meeting. In late March, expectations of even
higher intended funds rates for April, May, and June
prevailed; however, these expectations were reversed
by early April. Consistent with our interpretation
of the regression results, the April action appears
to have caused market participants to significantly


[Figure 4. Funds Futures for March 1994 FOMC Event: daily rates on the March, April, and May 1994 federal funds futures contracts, the funds rate target, and the March average target, January-March 1994.]

[Figure 5. Funds Futures for March 2001 FOMC Event: daily rates on the March, April, and May 2001 federal funds futures contracts, the funds rate target, and the March average target, February-March 2001.]

revise their expectations for the funds rate in May
and June.
For 9 of the 14 target changes where our analysis
of market commentary suggested that the market
was surprised by the Fed’s action, there was a clear
indication that the market revised its expectation
for the funds rate out two months. On one of these
occasions (July 6, 1995), however, the market’s
expectation for the funds rate out two months was
significantly revised in the weeks following the
target change.
There appeared to be no significant revision of
the market’s expectation for the funds rate out two
months on five occasions. One of these occasions
occurred on March 22, 1994, shown in Figure 4. The
market had revised its expectation for the funds
rate in May, a couple of weeks prior to the March
FOMC meeting. While our analysis of the commentary suggested that the March action was a surprise,
both the Poole/Rasche and Kuttner measures of the
unexpected target change were very small. Hence,
it may be that the commentary did not reflect the
true market expectations at the time of the action.
Another instance when there was no revision
of the market’s expectation occurred on March 20,
2001, shown in Figure 5. At the time the FOMC
reduced the funds rate target by 50 basis points,
market participants were anticipating a 75-basis-point reduction; however, there was no immediate
revision of the market’s expectation following the
announcement.
On three of the five occasions, the Poole/Rasche


measure of the unexpected target change was 6
basis points or less—about two standard deviations
of the variation in this measure associated with
ambient news, suggesting that these actions were
perhaps less of a surprise than the market commentary suggested. Moreover, on all occasions when
the Poole/Rasche measure was larger than 10 basis
points, the market appeared to revise its expectation
for the funds rate at least two months out, suggesting that market participants might not revise their
longer-run outlook for the funds rate target except
in cases where they make a relatively large error in
forecasting the Fed’s action.
Market participants should not only revise their
expectations when there is a surprise change in
the funds rate, but also when they are surprised
that the target was not changed. We identified only
four such events. Our analysis suggests that of these
four cases, three were instances when market participants revised their expectations for the future federal
funds rate when the Fed failed to act as expected.
The most dramatic of these occurred in
September 1996. The commentary indicated that
market participants expected an FOMC action.
Figure 6 shows the rates on the September, October,
and November futures rate contracts before and
after the September 1996 meeting. Both the futures
rates and the market commentary suggest that market participants were expecting the FOMC to raise
the target at the September FOMC meeting and were
expecting additional subsequent increases. When
the FOMC unexpectedly left the target unchanged,

market participants significantly revised down their expectations for the funds rate in October and November. In the three cases where the Fed's inaction prompted markets to revise their longer-term expectations for the funds rate, the revision in expectations appears to be large relative to those cases where the FOMC took a surprise action. This finding is consistent with our previous interpretation in that, if the market expects a 25-basis-point change in the target and the FOMC does nothing, the unexpected action is relatively large.
The exception occurred in September 1994, shown in Figure 7, when the surprise decision not to change the rate at the FOMC meeting of September 27, 1994, had essentially no effect on the market's expectation for the federal funds rate in October and November.

[Figure 6. Funds Futures for September 1996 FOMC Event: daily rates on the September, October, and November 1996 federal funds futures contracts and the funds rate target, August-September 1996.]

[Figure 7. Funds Futures for September 1994 FOMC Event: daily rates on the September, October, and November 1994 federal funds futures contracts and the funds rate target, August-September 1994.]

DOES GREATER TRANSPARENCY HELP?

The FOMC has made a number of procedural changes that should have helped the market anticipate policy actions. Analysis of the period before the 1994 implementation of the practice of announcing target changes is hampered by the fact that most target changes were made during the intermeeting period. Because the market could never be sure when a change was most likely to occur, market commentary never predicted the date or the magnitude of Fed actions before 1994. Hence, market commentary at that time cannot be used to determine target changes that were or were not expected,

as was done for the post-1993 period. Table 2, however, shows that for actions since 1994 that were not
surprise actions, the Poole/Rasche measure of the
unexpected target change was nearly always less
than 6 basis points (about two estimated standard
deviations of the variation in the 1-month futures
rate associated with ambient news). Hence, one way
to determine expected target changes is to assume
that the market anticipated the Fed’s action when
the Poole/Rasche measure of the unexpected target
change is 6 basis points or smaller. Using this criterion, of the 24 target changes before 1994 that the
market was aware had occurred, only 6 were anticipated; 18 target changes were unanticipated.
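As a sketch, the criterion reduces to a one-line threshold test; the handful of dates and shock values below are taken from Table 6, and the function name is ours.

    # Classify target changes the market knew about by the 6-basis-point rule.
    shocks = {"12/15/88": 0.05, "2/23/89": 0.14, "7/06/89": 0.03,
              "10/29/90": -0.02, "2/01/91": -0.20, "9/13/91": -0.04}

    def anticipated(shock, threshold=0.06):
        """Treat a known target change as anticipated if |shock| <= threshold."""
        return abs(shock) <= threshold

    for date, s in shocks.items():
        print(f"{date}: {'anticipated' if anticipated(s) else 'unanticipated'} ({s:+.2f})")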
Moreover, if one assumes that changes in the current or 1-month federal funds futures rate measure the degree of the unexpected target change,
there were only three instances, on days when the
market knew that the target had been changed,
when there were large unexpected target changes.
All three of these are associated with intermeeting
target changes.
Using the same criterion for the post-1993
period indicates that 10 of the 24 target changes
were unanticipated. Our analysis of market commentary suggested that 14 target changes were
unanticipated, but we concluded that the market
anticipated the FOMC’s action only if market participants correctly anticipated the size and the timing
of the action.
While the above analysis is simple, it suggests
that the market has been able to better forecast Fed


[Figure 8. Funds Futures for March 2000 FOMC Event: daily rate on the March 2000 federal funds futures contract, the funds rate target, and the March average target, December 1999-March 2000.]

[Figure 9. Funds Futures for June 1999 FOMC Event: daily rate on the July 1999 federal funds futures contract, the funds rate target, and the July average target, March-July 1999.]

actions since the 1994 procedural change. In this
regard, more transparency appears to help. This
finding is not too surprising, however, since it is
reasonable to expect that the market does a better
job of anticipating policy actions when the timing
of those actions is somewhat constrained by the
FOMC’s practice. Hence, somewhat more compelling evidence of the value of transparency can be
obtained by determining whether the market is
better able to predict Fed actions further in advance.

How Far in Advance Does the Market
Anticipate Fed Actions?
Our results suggest that after 1994, market
participants usually have anticipated changes in
the funds rate target by the time they have occurred.
The more transparent the Fed is, the further in
advance the market should be able to predict policy
actions. To get an idea of how far in advance the
market anticipates Fed actions, we once again use
the case study approach. Specifically, we plot (i) the
rate on the federal funds futures contract for the
month of target changes that we classified as “no
surprise” in Table 2 and (ii) the average federal funds
rate target for that month. Care must be taken
because of the possibility of changes in the term
premium. Nevertheless, if the market correctly
anticipates the event, the rate on the federal funds
futures contract for the month of the event should
move to the level of the average effective federal
funds rate before the event and stay close to that
rate until the time of the event.
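To make the check concrete, the sketch below (our own; the day-count convention, under which the new target takes effect the day after the change, is an assumption) computes the average target for an event month and asks whether a pre-event futures quote had already converged to it.

    # Compare the event-month futures rate with the month's average target.
    import datetime as dt

    def average_target(year, month, old_target, new_target, change_day):
        """Average target for the month, assuming the new target applies from the
        day after change_day (so change_day itself counts at the old target)."""
        first = dt.date(year, month, 1)
        nxt = dt.date(year + month // 12, month % 12 + 1, 1)
        days = (nxt - first).days
        return (change_day * old_target + (days - change_day) * new_target) / days

    # April 1994: target raised from 3.50 to 3.75 percent on April 18.
    avg = average_target(1994, 4, 3.50, 3.75, 18)
    futures_quote = 3.56   # hypothetical pre-event quote on the April contract
    print(f"April average target: {avg:.2f}%  anticipated: {abs(futures_quote - avg) < 0.03}")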


[Figure 10. Funds Futures for August 1999 FOMC Event: daily rate on the August 1999 federal funds futures contract, the funds rate target, and the August average target, June-August 1999.]

There were ten such events during the post-1993
period. Of these, our analysis suggests that on seven
of these occasions the market anticipated the change
two or more weeks in advance. For the change on
March 21, 2000, shown in Figure 8, market participants appear to have anticipated the action about
12 weeks in advance. Indeed, before Christmas 1999
market participants correctly anticipated both the
February and March actions.
Two of the more remarkable cases are associated
with the target changes that occurred on June 30
and August 24, 1999. Figures 9 and 10 show the July

and August federal funds futures rates and the average effective funds rate target for those months, respectively. By early June the market had come to expect not only the action taken on June 30, but the action taken on August 24 as well.
On the remaining three occasions, the actions appear not to have been expected until just days prior to the meetings. Hence, while the commentators were correct that these actions were widely anticipated, it appears that the market did not figure out what the FOMC was about to do until just days before the meeting.
To see whether the market's ability to predict Fed actions has improved since the beginning of 1994, we considered the six instances prior to 1994 where the market expected the Fed's action, using the criterion that the market expected the action if the Poole/Rasche measure of the unexpected target change is 6 basis points or less. Trading in the federal funds futures contracts began only two months prior to one of these occasions, December 1988. Moreover, there was no evidence that the market was aware that the Fed was targeting the funds rate at that time. For both of these reasons, analysis of this event is inappropriate.
Of the remaining five instances, there is only one instance, July 6, 1989, when the market appears to have anticipated the Fed's action well in advance.19 Figure 11 suggests that the 25-basis-point target change made at that time was anticipated by early June.

[Figure 11. Funds Futures for July 1989 FOMC Event: daily rates on the July, August, and September 1989 federal funds futures contracts, the funds rate target, and the July average target, June-July 1989.]

This analysis also supports the conclusion that transparency is important. After 1994, not only is the market better able to anticipate when the Fed will act, but, more importantly, there is some evidence that the market is able to predict those actions further in advance. Greater clarity should enable the market to better predict how the Fed is likely to respond to incoming information about economic fundamentals.

DISCUSSION

This paper investigates the extent to which
market participants anticipate Fed actions, focusing
on the period since the late 1980s. This period is
nearly ideal. The Fed has been explicitly targeting
the overnight federal funds rate during the entire
period that the federal funds futures rate has been
available to measure the market’s expectation for
the federal funds rate and, consequently, the funds
rate target.
A natural way to proceed in this environment
is to use the change in the futures rate as a proxy
for the unexpected change in the funds rate target
and then estimate the response of longer-term
rates to the unexpected target change. A significant
response of longer-term rates suggests that the
unexpected change in the funds rate target caused
markets to revise their longer-term expectations
for the funds rate. While this procedure can provide
useful information about how market participants
revise their longer-run expectations, we note that
care is required. For one thing, there is a measurement error associated with using the change in the
futures rate to proxy the unexpected target change;
it arises because idiosyncratic and other shocks
cause variation in federal funds futures rates even
when there are no changes in the funds rate target.
This measurement error also can bias down the
estimated response of other rates to the unexpected
target change. In addition, this procedure requires
that market participants know that the Fed has
changed its funds rate target. If market participants
do not know that the target has been changed, the
change in the futures rate does not reflect the unexpected target change. This problem, of course,
applies to the pre-1994 period when target changes
were not announced.
Accounting for both of these problems, we estimate the response of Treasury rates of various maturities from 3 months to 30 years to unexpected
target changes for periods before and after the
19. The figures for the other four dates and for December 1988 are presented in the appendix.

FOMC’s 1994 procedural change. We find that the
response of the 3-month T-bill rate is nearly identical
before and after this procedural change. The magnitude and significance of the response of longer-term
rates, however, declines after this procedural change.
One possible explanation for the smaller response
of longer-term rates is that the Fed has been more
transparent about its longer-run policy intentions.
Under this interpretation, the market would have
relatively firm expectations that the Fed will change
the funds rate target at some point in the future, but
may have less-firm expectations of exactly when
that change will occur. If only the timing of the target
changes were unexpected, shorter-term futures
rates would respond more to announcements of a
target change than would longer-term rates.
We note that the interpretation of the response
of Treasury rates to unexpected changes in the
funds rate target is complicated by the possibility
that all forward-looking rates might respond to common information, such as information that alters
the market’s expectation of the term premium. For
this reason, extreme care must be exercised in identifying unexpected changes in the funds rate target.
To address this issue, we undertake a case-by-case
analysis of occasions when market commentary
indicated that the market was surprised by the Fed's
action or inaction. This analysis suggests that, in
most of those cases, market participants revised
their expectations for the funds rate at least two
months out in response to an unexpected target
change. Moreover, there is some indication that the
larger the unexpected target change, the more likely
it is that the market will revise its expectation for
the funds rate.
Our most important finding is that greater
transparency appears to help. Not only is the market
better able to anticipate funds rate target changes,
but it appears that the market is able to anticipate
such changes further in advance. This is important
since changes in the funds rate target can have a
significant effect on economic variables only by
generating changes in longer-term interest rates.
The Fed can only affect long-term rates by affecting
market participants’ expectations for the future funds
rate. The further in advance the market can anticipate changes in the funds rate, other things the same,
the larger will be the corresponding changes in
longer-term rates. Moreover, in such an environment,
market responses in anticipation of policy actions
begin to stabilize the economy long before the policy
actions themselves occur.


The interaction of economic policy and market
expectations has been a core feature of macroeconomics for 30 years. In this paper we documented
the substantial change in the predictability of monetary policy that occurred in 1994. The period since
1994 has also been one of remarkable economic
stability. We believe that the greater transparency
of monetary policy has contributed to this outcome.

REFERENCES
Cook, Timothy and Hahn, Thomas. “The Effect of Changes
in the Federal Funds Rate Target on Market Interest Rates
in the 1970s.” Journal of Monetary Economics, November
1989, 24(3), pp. 331-51.
Goodfriend, Marvin. “Monetary Mystique: Secrecy and
Central Banking.” Journal of Monetary Economics, January
1986, 17(1), pp. 63-92.
Hardouvelis, Gikas A. “The Predictive Power of the Term
Structure During Recent Monetary Regimes.” Journal of
Finance, June 1988, 43(2), pp. 339-56.
Hoover, Kevin D. and Jorda, Oscar. “Measuring Systematic
Monetary Policy.” Federal Reserve Bank of St. Louis
Review, July/August 2001, 83(4), pp. 113-37.
Johnston, J. Econometric Methods. New York: McGraw-Hill,
1963.
Kuttner, Kenneth N. “Monetary Policy Surprises and Interest
Rates: Evidence from the Fed Funds Futures Market.”
Journal of Monetary Economics, June 2001, 47(3), pp.
523-44.
Lucas, Robert E. Jr. “Econometric Policy Evaluation: A
Critique.” Journal of Monetary Economics, 1976, 1(2), pp.
19-46.
McCallum, Bennett T. “Monetary Policy Analysis in Models
Without Money.” Federal Reserve Bank of St. Louis Review,
July/August 2001, 83(4), pp. 145-60.
Meyer, Laurence H. “Does Money Matter?” Federal Reserve
Bank of St. Louis Review, September/October 2001, 83(4),
pp. 1-15.
Poole, William and Rasche, Robert H. “Perfecting the
Market’s Knowledge of Monetary Policy.” Journal of
Financial Services Research, December 2000, 18(2-3), pp.
255-98.


Rasche, Robert H. "The World of Central Banking: Then and Now," in Reflections on Economics: Essays in Honor of Martin M.G. Fase. Amsterdam: De Nederlandsche Bank NV, 2001.

Roberds, William; Runkle, David and Whiteman, Charles H. "A Daily View of Yield Spreads and Short-Term Interest Rate Movements." Journal of Money, Credit, and Banking, February 1996, 28(1), pp. 34-53.

Robertson, John C. and Thornton, Daniel L. "Using Federal Funds Futures Rates to Predict Federal Reserve Actions." Federal Reserve Bank of St. Louis Review, November/December 1997, 79(6), pp. 45-53.

Simon, David P. "Expectations and the Treasury Bill–Federal Funds Rate Spread over Recent Monetary Policy Regimes." Journal of Finance, June 1990, 45(2), pp. 467-77.

Taylor, John B. "Expectations, Open Market Operations, and Changes in the Federal Funds Rate." Federal Reserve Bank of St. Louis Review, July/August 2001, 83(4), pp. 33-47.

Thornton, Daniel L. "Does the Fed's New Policy of Immediate Disclosure Affect the Market?" Federal Reserve Bank of St. Louis Review, November/December 1996, 78(6), pp. 77-86.

___________. "The Conventional Test of the Expectations Theory: Resolving Some Anomalies at the Short End of the Term Structure." Unpublished manuscript, Federal Reserve Bank of St. Louis, February 2002.

___________ and Wheelock, David C. "A History of the Asymmetric Policy Directive." Federal Reserve Bank of St. Louis Review, September/October 2000, 82(5), pp. 1-16.

Whitesell, William; Lange, Joe and Sack, Brian. "Anticipations of Monetary Policy in Financial Markets." Unpublished manuscript, Board of Governors of the Federal Reserve System, March 2001.

Woodford, Michael. "Monetary Policy in the Information Economy." Unpublished manuscript, August 2001.

Appendix

Figure A1-A: Funds Futures for December 1994 FOMC Event
Figure A1-B: Funds Futures for May 1997 FOMC Event
Figure A2-A: Funds Futures for August 1994 FOMC Event
Figure A2-B: Funds Futures for November 1994 FOMC Event
Figure A2-C: Funds Futures for July 1995 FOMC Event
Figure A2-D: Funds Futures for December 1995 FOMC Event
Figure A2-E: Funds Futures for January 1996 FOMC Event
Figure A2-F: Funds Futures for October 1998 FOMC Event
Figure A2-G: Funds Futures for November 1998 FOMC Event
Figure A2-H: Funds Futures for November 1999 FOMC Event
Figure A2-I: Funds Futures for January 2001 FOMC Event
Figure A2-J: Funds Futures for April 2001 FOMC Event
Figure A3-A: Funds Futures for February 1995 FOMC Event
Figure A3-B: Funds Futures for March 1997 FOMC Event
Figure A3-C: Funds Futures for September 1998 FOMC Event
Figure A3-D: Funds Futures for February 2000 FOMC Event
Figure A3-E: Funds Futures for May 2000 FOMC Event
Figure A3-F: Funds Futures for January 2001 FOMC Event
Figure A3-G: Funds Futures for May 2001 FOMC Event
Figure A4-A: Funds Futures for December 1988 FOMC Event
Figure A4-B: Funds Futures for February 1989 FOMC Event
Figure A4-C: Funds Futures for June 1989 FOMC Event
Figure A4-D: Funds Futures for October 1990 FOMC Event
Figure A4-E: Funds Futures for September 1991 FOMC Event

[Figures not reproduced in this extraction: each panel plots daily federal funds futures rates for the current and nearby contract months in the weeks around the indicated FOMC event, together with the funds rate target and, where shown, the relevant monthly average target.]

Commentary
Mark W. Watson
This paper addresses three questions related
to market anticipations of monetary policy
actions. First, how can “anticipations” and
“surprises” be measured? Second, has there been
a change in the market’s ability to anticipate monetary policy? Third, how far in advance does the
market anticipate changes in the Federal Reserve’s
policy instrument?
These are important questions, and this paper
makes four distinct contributions as it attempts to
provide answers. Following earlier work by Poole
and Rasche (2000) and Kuttner (2001), the paper
uses the federal funds futures market to construct
measures of anticipated and surprise movements
in the target federal funds rate. The first contribution
of the paper is a comparison of two versions of these
measures. In February 1994, the Federal Open
Market Committee (FOMC) began the practice of
issuing a press release after each meeting that summarized their deliberations. The second contribution of this paper is an analysis of how this change
in FOMC procedure affected the ability of the market
to anticipate future changes in the federal funds
rate. Regressions involving variables that measure
“expectations” are prone to econometric problems
that are technically similar to the classical problem
of “errors-in-variables.” The third contribution of
this paper is an adjustment for this problem. Much
of the paper’s analysis is made possible by a new
dataset that provides a qualitative summary of the
market’s expectations about changes in the target
federal funds rate. The fourth contribution of the
paper is the development of this dataset that was
constructed by a careful analysis of reports that
appeared in the Wall Street Journal.
I will begin this discussion by stepping outside
the authors’ analysis to address the general problem
of measuring the forecastability of a time series and
ask how futures prices might help with this task. I
will then provide a brief and selective summary of
the paper's main results. One of the important results in the paper is that the February 1994 change in FOMC procedure presaged an increase in the market's ability to anticipate changes in the target federal funds rate.

Mark W. Watson is a professor of economics and public affairs at the Department of Economics and Woodrow Wilson School, Princeton University, and a research associate at the National Bureau of Economic Research.

© 2002, The Federal Reserve Bank of St. Louis.

HOW FORECASTABLE IS THE FEDERAL
FUNDS RATE?
To begin, consider the decomposition of the change in the federal funds rate, $ff$:

(0.1)   $ff_t = (ff_t - ff_{t/t-1}) + (ff_{t/t-1} - ff_{t/t-2}) + \cdots + (ff_{t/t-h+1} - ff_{t/t-h}) + ff_{t/t-h},$

where $ff_{t/t-k} = E(ff_t \mid \text{information available at } t-k)$. The first term on the right-hand side of (0.1) represents the information about $ff_t$ that is unknown at time $t-1$ and revealed at time $t$; the second term represents the information revealed at time $t-1$, etc. All of the terms on the right-hand side of this equation are mutually uncorrelated, and this implies that the variance of $ff_t$ can be decomposed as

(0.2)   $\operatorname{var}(ff_t) = \sum_{k=0}^{h-1}\operatorname{var}(ff_{t/t-k} - ff_{t/t-k-1}) + \operatorname{var}(ff_{t/t-h}).$

This decomposition of variance means that the fraction of the variability in $ff_t$ associated with information revealed at time $t-k$ is

$$R_k^2 = \frac{\operatorname{var}(ff_{t/t-k} - ff_{t/t-k-1})}{\operatorname{var}(ff_t)}.$$

In many ways, values of $R_k^2$ provide an ideal summary of the ability of the market to anticipate changes in the federal funds rate. For example, $\sum_{k=i}^{\infty} R_k^2$ shows the fraction of the variability of $ff_t$ associated with information revealed at $t-i$ or earlier.
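As an illustration (not part of the original commentary), the following minimal Python sketch computes the $R_k^2$ shares from purely hypothetical simulated data in which $ff_t$ is the sum of news terms revealed at $t, t-1, \ldots, t-h$; the shares, together with the share attributable to $\operatorname{var}(ff_{t/t-h})$, sum to roughly one.

import numpy as np

rng = np.random.default_rng(0)
T, h = 5000, 6

# Hypothetical data: news[:, j] is the information about ff_t revealed at time t - j
news = rng.normal(size=(T, h + 1))
ff = news.sum(axis=1)                                        # ff_t

# forecast[:, k] = ff_{t/t-k}: the part of ff_t already known k periods before t
forecast = np.column_stack([news[:, k:].sum(axis=1) for k in range(h + 1)])

# R_k^2 = var(ff_{t/t-k} - ff_{t/t-k-1}) / var(ff_t), for k = 0, ..., h-1
r2 = [np.var(forecast[:, k] - forecast[:, k + 1]) / np.var(ff) for k in range(h)]
tail_share = np.var(forecast[:, h]) / np.var(ff)             # share of var(ff_{t/t-h})

print(np.round(r2, 3), round(tail_share, 3), round(sum(r2) + tail_share, 3))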
Can $R_k^2$ be estimated using data from the futures market? In principle, yes. In practice, no. To see this, consider a futures contract with a payoff that is tied to the value of $ff_t$. Then, abstracting from changes in risk and discounting, changes in the price of the contract between periods $t-k-1$ and $t-k$ can be used to construct $ff_{t/t-k} - ff_{t/t-k-1}$. The variance of these changes is the numerator of $R_k^2$, and the denominator is the variance of $ff_t$. Thus, these futures prices make it possible to estimate $R_k^2$.
In practice, federal funds rate futures contracts have payoffs that depend on the average value of the federal funds rate over a month, rather than the value on a particular day. This means that changes in futures prices can be used to compute averages of expected changes in the federal funds rate, such as $m^{-1}\sum_{i=0}^{m-1}(ff_{t+i/t-k} - ff_{t+i/t-k-1})$. So, in general, the variance of changes in the federal funds rate futures price will depend on the variance of $ff_{t+i/t} - ff_{t+i/t-1}$ for all days in the month as well as the covariance between each of these terms. This makes it impossible to estimate $R_k^2$ from the futures data.

THE APPROACH USED BY POOLE,
RASCHE, AND THORNTON
Complications like this mean that additional
assumptions must be made if the federal funds
futures market is to be used to summarize market
anticipations. This paper uses assumptions made
in earlier papers by Poole and Rasche (2000) and
Kuttner (2001). The assumptions are similar, and
here I will review Kuttner’s version. The federal
funds futures contract for the current month has a
payoff that depends on the average federal funds
rate in the current month. Thus, if there are 30
days in the month, the current date is denoted by t,
and the month ends at date t+k, then
(0.3)   $FFF_t - FFF_{t-1} \approx \frac{1}{30}\sum_{i=0}^{k}\left(ff_{t+i/t} - ff_{t+i/t-1}\right),$

where $FFF_t$ denotes the price of the futures contract and $\approx$ reflects the fact that changes in risk and discounting have been ignored. Now, consider a date when the federal funds rate changes unexpectedly (so that $ff_t - ff_{t/t-1} \neq 0$), and no other changes are expected during the month (so that $ff_{t+i/t} - ff_{t+i/t-1} = ff_t - ff_{t/t-1}$, for $i = 1, \ldots, k$). For this date,

(0.4)   $ff_t - ff_{t/t-1} \approx \frac{30}{k+1}\left(FFF_t - FFF_{t-1}\right).$

Thus, date $t$ surprises in the federal funds rate can be measured by scaling up changes in the price of the federal funds futures.
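To make the scaling in (0.4) concrete, here is a minimal Python sketch with hypothetical inputs; generalizing the 30 in (0.3)-(0.4) to the actual number of days in the month is my assumption about a typical implementation rather than something stated above.

import calendar
import datetime as dt

def funds_rate_surprise(fff_today, fff_yesterday, event_date):
    """Date-t funds rate surprise implied by the change in the current-month
    federal funds futures rate, scaled as in equation (0.4).

    fff_today, fff_yesterday: futures-implied rates (in percent) on the event day
    and the prior trading day; event_date: datetime.date of the target change.
    """
    days_in_month = calendar.monthrange(event_date.year, event_date.month)[1]
    remaining_days = days_in_month - event_date.day + 1      # days t, ..., t+k
    return (days_in_month / remaining_days) * (fff_today - fff_yesterday)

# Hypothetical example: a 10 basis point move in the futures rate on November 15
print(funds_rate_surprise(5.35, 5.25, dt.date(1994, 11, 15)))   # about 0.19 (19 bp)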
Earlier researchers (Poole and Rasche, 2000,
and Kuttner, 2001) used these estimates of surprise
movements in the federal funds rate in regressions
of the form
(0.5)   $i_t = \alpha + \beta\left(ff_t - ff_{t/t-1}\right) + \varepsilon_t,$

where $i_t$ is a longer-term interest rate and $ff_t - ff_{t/t-1}$ is estimated by (0.4). These papers estimated (0.5) for dates when the approximations in (0.3) and (0.4) seemed reasonable a priori: that is, those dates when the target federal funds rate changed. This paper refines this earlier analysis by explicitly incorporating measurement error in (0.4). The authors estimate the magnitude of this measurement error using a variety of methods, all focusing on days when the target component of $ff_t - ff_{t/t-1}$ was zero, as determined from their reading of the business press. Reassuringly, they find that these measurement error corrections have little effect on the estimates of $\beta$ in (0.5).
In addition, this paper compares estimates of (0.5) using the Poole and Rasche estimates of $ff_t - ff_{t/t-1}$ and the Kuttner estimates. They find little difference between the estimates, suggesting comparability of the Kuttner and Poole/Rasche measures.
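A minimal sketch of how regression (0.5) might be run on event days; the data frame and column names are hypothetical, and using the same-day change in the longer-term yield (rather than its level) as the dependent variable is my assumption about the usual implementation, not something stated above.

import pandas as pd
import statsmodels.api as sm

def estimate_eq_0_5(events: pd.DataFrame):
    # events: one row per target-change date, with the futures-based surprise
    # ('surprise') and the same-day change in a longer-term Treasury yield ('d_yield')
    x = sm.add_constant(events["surprise"])
    return sm.OLS(events["d_yield"], x).fit()

# res = estimate_eq_0_5(events); print(res.params["surprise"])   # the estimate of beta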

DID FORECASTABILITY CHANGE IN 1994?
An important empirical conclusion in this paper is that the market was better able to anticipate changes in the target federal funds rate after February 1994. I offer two pieces of confirmatory empirical evidence.

First, consider the decomposition of the changes in the federal funds rate target, $ff_t^*$:

$$\Delta ff_t^* = a_t + u_t,$$

where $a_t$ denotes the anticipated component and $u_t$ is the unanticipated component. The fraction of the variability of the changes in $ff_t^*$ that is unanticipated is $E(u_t^2)/E[(\Delta ff_t^*)^2]$, and the fraction anticipated is 1 minus this value. Using the Poole/Rasche (2000) measures of $u_t$, this fraction can be estimated from the data reported in the paper. For dates before February 1994, 80 percent of the variance of $\Delta ff_t^*$ was anticipated. For dates after February 1994, this fraction increases to 91 percent. Thus, in both periods, the market correctly anticipated the bulk of changes in the target rate, but there does appear to be a marked improvement in market expectations in the post-February 1994 sample period.
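The fraction-anticipated calculation is easy to reproduce given series of target changes and surprises; a minimal sketch with hypothetical numbers:

import numpy as np

def fraction_anticipated(target_changes, surprises):
    """1 - E(u_t^2) / E[(delta ff*_t)^2]: share of target-change variability anticipated."""
    target_changes = np.asarray(target_changes, dtype=float)
    surprises = np.asarray(surprises, dtype=float)
    return 1.0 - np.mean(surprises ** 2) / np.mean(target_changes ** 2)

# Hypothetical example: three target changes of 25-50 bp with small surprises
print(fraction_anticipated([0.25, -0.25, 0.50], [0.10, -0.05, 0.12]))   # about 0.93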
The second piece of empirical evidence is an estimate of how well long-term interest rates forecast the federal funds rate. Let

$$W_{t+90} = \frac{1}{90}\sum_{i=0}^{89} ff_{t+i}$$

denote the average value of the federal funds rate over the next 90 days, and let $R_t^{90}$ denote the 90-day interest rate. From the expectations theory of the term structure, $R_t^{90} \approx W_{t+90/t}$. Consider the regression:

(0.6)   $W_{t+90} - ff_t = \alpha + \beta\left(R_t^{90} - ff_t\right) + \varepsilon_t.$

If changes in $ff_t$ are not predictable, $\beta = 0$ in (0.6) and the regression $R^2$ is also zero. If changes in $ff_t$ are predictable, then $\beta = 1$ and the regression $R^2$ is nonzero. More generally, the $R^2$ from (0.6), or its generalization containing other variables as well as the term spread, measures the predictability of change in the federal funds rate.
Table 1 shows the results from estimating (0.6) using monthly averages of federal funds rates and monthly 3-month Treasury bill rates over the two sample periods considered in this paper. The results are quite striking. Evidently, there is a marked increase in the predictability of federal funds rate changes, post 1994, at least at the 3-month forecast horizon.

Table 1
Predicting Changes in the Federal Funds Rate Using 3-Month Treasury Bills

Sample period          β̂ (SE)           R²
1987:01–1994:01        –0.02 (0.14)      0.00
1994:02–2001:06         0.97 (0.14)      0.61

NOTE: Estimates of (0.6) over the sample period are shown in the first column. Data are monthly. SE denotes the estimated heteroskedastic autocorrelation–robust standard error.
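A minimal sketch of the Table 1 regression; the monthly data frame and its column names ('ff' for the average funds rate, 'tbill3m' for the 3-month Treasury bill rate) are assumptions, and the 3-lag Newey-West window is one reasonable choice given the overlapping 3-month forecast horizon.

import pandas as pd
import statsmodels.api as sm

def estimate_eq_0_6(df: pd.DataFrame):
    # Monthly analogue of W_{t+90}: average funds rate over months t, t+1, t+2
    w = df["ff"].rolling(3).mean().shift(-2)
    y = (w - df["ff"]).rename("y")                           # W_{t+90} - ff_t
    spread = (df["tbill3m"] - df["ff"]).rename("spread")     # R90_t - ff_t
    data = pd.concat([y, sm.add_constant(spread)], axis=1).dropna()
    # HAC (Newey-West) standard errors because the 3-month forecasts overlap
    return sm.OLS(data["y"], data[["const", "spread"]]).fit(
        cov_type="HAC", cov_kwds={"maxlags": 3})

# Usage (hypothetical): res = estimate_eq_0_6(monthly_df.loc["1994-02":"2001-06"]); print(res.summary())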

FINAL COMMENTS
In this paper, Poole, Rasche, and Thornton
have further refined the use of federal funds futures
prices for decomposing changes in the federal funds
rate into anticipated and unanticipated components.
They develop a new qualitative dataset that complements the quantitative data in the futures prices. Their results suggest that changes in FOMC procedures adopted in February 1994 have improved the
Their results suggest that changes in FOMC procedures adopted in February 1994 have improved the
market’s ability to anticipate changes in the target
federal funds rate. My crude calculations, summarized above, are consistent with these conclusions.
These results focus on very short-run forecasts (3
months in my analysis above).
A more important question involves the market’s
ability to forecast over longer horizons, particularly
to form conditional forecasts: “If the path of inflation
is ___ and the path of GDP growth is ___, then the
path of the federal funds rate will be ___.” Accurate
long-run conditional forecasts follow from consistency of long-run Federal Reserve policy. Evaluating
long-run conditional forecasts poses interesting and
important questions, and I look forward to seeing
extensions of this paper in that direction.


Does It Pay To Be Transparent? International Evidence from Central Bank Forecasts
Georgios Chortareas, David Stasavage, and
Gabriel Sterne

I. INTRODUCTION
The past decade witnessed an increased
interest in the institutional framework of
monetary policy. The benefits of central bank
independence have been demonstrated in much
academic research and have become conventional
wisdom among policymakers.1 New questions have
emerged, however, about the institutional characteristics of central banks and their effect on economic
performance; recent analyses have attempted to
identify optimal degrees of independence, accountability, and transparency in monetary policy.
Relative to the abundant literature on the effects
of central bank independence, only limited research
exists so far on the issues of transparency and
accountability in monetary policy. Furthermore,
empirical analyses have mostly focused on financial
markets and used time-series data.2 In this paper
we examine how monetary policy transparency is
associated with inflation and output in a cross-section of 87 countries. We use a particular concept
of transparency that relates to the detail in which
central banks publish economic forecasts (henceforth “transparency in forecasting”). We employ a

new data set based on a survey conducted by Fry, Julius, Mahadeva, Roger, and Sterne (2000) (henceforth FJMRS). To our knowledge these are the only data covering transparency in monetary policy across such a wide cross-section of countries.

Georgios Chortareas is an economist in the International Economic Analysis Division, Bank of England. David Stasavage is a lecturer in the Department of International Relations, London School of Economics. Gabriel Sterne is an economist in the International Economic Analysis Division, Bank of England. The authors thank the following for their helpful comments and suggestions: Andrew Bailey, Lawrence Ball, Alec Chrystal, Rebecca Driver, Petra Geraats, Charles Goodhart, Andrew Haldane, Andrew Hauser, Marion Kohler, Kenneth Kuttner, Lavan Mahadeva, Adam Posen, Daniel Thornton, Peter Westaway, Mark Zelmer, and the participants of the 26th Annual Economic Policy Conference of the Federal Reserve Bank of St. Louis, Bank of England seminars, the 2001 Eastern Economic Association meetings, the 2001 Public Choice Society meetings, and the 2001 Congress of the European Economic Association. The views expressed are those of the authors and not necessarily those of the Bank of England.

© 2002, The Federal Reserve Bank of St. Louis.
Our results show that a higher degree of transparency in monetary policy is associated with lower
inflation. The relationship is robust to various
econometric specifications and holds regardless
of whether the domestic nominal anchor is based
more on an inflation or a money target. In contrast,
our results suggest that the publication of forecasts
has no significant impact on inflation in countries
that target the exchange rate. In addition, we do not
find evidence to support the proposition that a high
degree of transparency is associated with higher
output volatility.
The rest of this paper is organized as follows.
The next section reviews the relevant empirical and
theoretical literature. Section III provides a discussion of our survey dataset. The econometric analysis
and the discussion of our results are contained in
Section IV, and Section V assesses the robustness
of those results.

II. REVIEW OF THE LITERATURE
The currently expanding theoretical literature
on central bank transparency identifies various
channels through which increased transparency
may affect economic policy outcomes. Not all of
these move in the same direction. And neither is
there a universally accepted definition of central
bank transparency.3 Various authors conceptualize
transparency in different ways, focusing on preferences, models, knowledge about the shocks hitting
the economy, the decisionmaking process, or the
implementation of policy decisions.4 The models
by Faust and Svensson (2000, 2001), Jensen (2000),
Geraats (2001a), and Tarkka and Mayes (1999) all
assume private information about the central bank’s
objectives/intentions. Transparency is modeled as
the degree of asymmetric information about control
1. See Blinder (2000).

2. Some exceptions are the papers by Briault, Haldane, and King (1996) and Nolan and Schaling (1996). Their focus, however, is on accountability rather than on transparency, and these accountability measures involve only 14 countries.

3. Blinder et al. (2001) assess why, how, and what central banks do and should talk about. Winkler (2000) discusses issues related to the definition of transparency.

4. For example, see Geraats (2001a) for a classification.


errors (Faust and Svensson, 2001, and Jensen, 2000)
or (anticipated) economic shocks reflected in the
policy instrument (Cukierman, 2000a,b, and Tarkka
and Mayes, 1999).
In this paper we focus on the detail in which
central banks publish forecasts, since this variable
is of common interest both in theoretical models
of transparency and in related policy debates.5
Furthermore, publication of forecasts may allow
dissemination of information relating to the central bank’s view of the world (economic models),
stochastic shocks, or preferences.
For any form of central bank transparency to
be relevant, some asymmetry of information in
monetary policy must exist. Recent empirical work
provides evidence suggesting central banks may
possess superior information. Romer and Romer
(2000), for example, show that if commercial forecasters had access to the Federal Reserve’s inflation
forecasts, they would generally find it optimal to
adopt them, discarding their own forecasts. Peek,
Rosengren, and Tootell (1998, 1999) also find that
the Fed’s forecasts benefit from an informational
advantage over the public that assists the Fed in conducting monetary policy. Superior information here
is a product of the Fed’s supervisory function and
includes information about non-publicly traded
banks.
Increased central bank transparency may reduce
uncertainty in financial markets. Studies employing
various methodologies provide evidence that market
participants react to the dissemination of macroeconomic information by the central bank. For example, Clare and Courtenay (2001) employ an event
study methodology and use tick-by-tick exchange
rate data from London International Financial
Futures Exchange (LIFFE) futures contracts, finding
that the publication of forecasts in the form of the
Inflation Report has an information content for U.K.
market participants. Kuttner and Posen (2000) examine how shifts in the Federal Reserve’s and the Bank
of Japan’s degrees of transparency over time contributed to the reduction of exchange rate volatility.6
Additional arguments in favor of transparency in
monetary policy include the insulation of monetary
policy from political pressures, increased accountability, facilitation of fiscal and monetary policy
coordination, and improved internal organization
of central bank analysis.7
In Faust and Svensson (2001), a high degree of
transparency in monetary policy is, in general,
welfare improving. Increased transparency reduces
the inflation bias, inflation variability, and employment variability. Faust and Svensson (2001) use a
modified Barro-Gordon model. The central bank’s
employment target is not announced and varies over
time according to an idiosyncratic component.
Fluctuations in this component of the employment
target tempt the central bank to deviate from an
announced inflation target. The central bank controls
inflation imperfectly and the inflation outcome has
two components: the central bank’s intentions and
a control error. The central bank decides upon the
extent to which it will reveal its knowledge of the
control error to the public. By revealing the control
error, the central bank renders its intentions for
inflation observable and thereby enables the public
to infer the central bank’s employment goal. Thus
the degree of central bank transparency increases
as the central bank reveals a greater proportion of
the observable component of the control error.
Analytically, Faust and Svensson (2001) distinguish among three different regimes of transparency.
In the first (least transparent) regime, neither the
employment objective nor the intentions of the
central bank are observable by the public. In the
second regime, with a high degree of transparency,
the inflation intentions of the central bank become
observable. Increased transparency in inflation intentions results in lower inflation because it increases
the sensitivity of a central bank’s reputation to its
actions, making it more costly for the central bank
to pursue a high-inflation policy. The third regime is
one the authors classify as “extreme” transparency
where both the employment goal and the intentions
of the central bank are observable. The central bank’s
actions no longer convey additional information
about the inflation bias, and its reputation is no
longer affected by its actions. An inflationary bias
reemerges resulting in higher inflation, inflation
volatility, and unemployment variability.8
5. See, for example, Buiter (1999) and Issing (1999) for a lively debate about transparency and accountability among central bankers.

6. Other relevant studies include Dotsey (1987) and Haldane and Read (2000). Thornton (1999) provides evidence on whether the Fed controls the funds rate primarily through open market or "open mouth" operations.

7. These views were expressed by Josef Tošovský, who was at that time Governor of the Czech National Bank. His views, and those of various other central bank governors, are contained in Mahadeva and Sterne (2000, pp. 186-205). For a discussion of policy-related arguments for transparency in monetary policy, see Blinder et al. (2001).

8. This result is consistent with the results of the more general model of policymaking by Morris and Shin (2001).

Jensen (2000) adopts an informational structure
similar to Faust and Svensson (2001), assuming
that the output target is private information to the
central bank and that the public’s capacity to deduce
it increases as the central bank publishes a greater
percentage of the inflation control error. In contrast
to Faust and Svensson (2001), who focus on the credibility effects of central bank actions in the future,
Jensen (2000) uses a model with New Keynesian
elements (staggered price-setting and monopolistic
competition) and focuses on the marginal costs of
inflation within the current period. More transparency increases the reputational costs of deviations
from the inflation target and therefore increases its
discipline and credibility.
The literature does not suggest that a high degree
of transparency is unconditionally desirable. In
Jensen’s model, when central bank preferences are
already public information, the credibility-enhancing
effect of increased transparency becomes redundant. Furthermore, in the presence of a shock that
requires counter-cyclical monetary policy, transparency becomes a straightjacket. Thus, the choice
of the optimal degree of transparency is related to
the trade-off between flexibility and credibility. A
high degree of transparency is desirable for central
banks with poor credibility but may be costly in
terms of flexibility for high-credibility central banks.
Increased transparency may have the disadvantage of eliminating the central bank’s strategic
advantage, thereby reducing its capacity to stabilize
the economy. “Cheap talk” and “optimal ambiguity”
arguments are characteristic expressions of this
view.9 Other papers focus less on the reputational
aspects of transparency and more on the consequences of the central bank releasing information
about stochastic shocks. In Cukierman’s (2000b)
one-period model, the central bank’s private information is about an upcoming shock. He uses a neoclassical transmission mechanism, relying on an
expectations-augmented Phillips curve (i.e., a standard Barro-Gordon model) and a model along the
most recent neo-Keynesian lines that focuses on the
interest rate instrument. He examines the welfare
implications of different degrees of transparency
in each model. Under a regime of “limited” transparency, the central bank reveals its information
about the upcoming shock after the public’s inflation expectations have been set; conversely, under
“full” transparency this information is released
before the public forms its expectations.
Different degrees of transparency in the neoclassical version of the model merely affect the variability of inflation and not its average level. This is
because the public becomes aware of the supply
shock, and thus the central bank loses its informational advantage and cannot generate inflation
surprises to stabilize the economy. Expected social
welfare, however, is always higher under a limited
transparency regime compared with the full transparency regime. This is because, under full transparency, unexpected inflation is always zero and
therefore the central bank cannot affect employment.
This result holds under assumptions of both perfect
and imperfect (noisy) central bank forecasts. Under
perfect central bank forecasts, however, only the
variance of the policy outcomes is affected, whereas
under noisy forecasts the average policy outcomes
are affected as well.10
In the neo-Keynesian model of Cukierman
(2000b), society is indifferent between the two
regimes provided that interest rate variability does
not enter its loss function. When the social loss
function includes interest rate variability, however,
the limited-transparency regime is superior to the
full-transparency regime. Because the model incorporates a typical instrument rule, premature forecast
publication requires more nominal interest rate
variability in order to stabilize the ex ante real rate
and through it the output gap and inflation.
Geraats (2001a) uses a two-period Barro-Gordon
model with a real-interest-rate transmission mechanism and focuses explicitly on the publication of
central bank forecasts. The central bank has private
information about both demand and supply shocks
and does not publish its inflation target. More transparency in the first period allows the private sector
to observe the first period’s demand and supply
shocks and make inferences about the central bank’s
inflation target. More transparency therefore makes
the central bank’s reputation more sensitive to its
actions, so an “opaque” monetary policy regime is
characterized by higher inflation in the first period.
This is because the non-publication of the central
bank’s forecasts implies a reputation loss. Given the
9. For example, in the "cheap talk" model of Stein (1989), the central bank can generate inflation surprises. In the "optimal ambiguity" model of Cukierman and Meltzer (1986), imprecise control of the money supply allows the central bank to generate inflation surprises according to its time-varying preferences.

10. Cukierman's (2000b) model does not include an explicit inflation bias, but our analysis shows that the results are similar when the model is extended to incorporate such a bias in the central bank's objective function.

uncertainty about whether the central banker is
“weak” or “strong” in its aversion to inflation, the
public tends to interpret the non-publication as an
indication that the central bank is “weak.” Transparency reduces the variability of inflation, but the
effect on output is ambiguous. More precisely, under
transparency, supply shocks lead to greater variability of output, whereas demand shocks lead to
less. The reason is that under opacity, the central
bank has less flexibility to adjust the interest rate
in response to shocks. So under opacity, supply
shocks lead to more variability in inflation and less
in output; thus the demand shocks are no longer
completely offset, leading to greater variability of
both inflation and output.
Tarkka and Mayes (1999) suggest that publishing
the central bank’s forecasts leads to better macroeconomic performance because the released information reduces the private sector’s uncertainty about
the central bank’s intentions. The authors use a
Barro-Gordon model and assume that the central
bank does not publish its inflation target.
Our assessment of the literature points toward
appropriate measures of transparency for empirical
tests, possible implications for the macroeconomy,
and channels through which transparency may
affect inflation:
• Transparency is generally conceptualized as
the publication of central bank forecasts,
since this allows the public to observe the
control error.11
• The literature identifies a number of channels
by which transparency affects the macroeconomy. These are conditional on model
choice and specification (e.g., neoclassical
versus neo-Keynesian models, presence of
inflation bias) and assumptions such as the
initial degree of credibility enjoyed by the
central bank, the precise degree of transparency, and whether the models are specified
over one or more periods.
• The effects of increased monetary policy transparency in the existing theoretical models
are associated with variables such as average
inflation, output, inflation volatility, output
volatility, and interest rate volatility. Thus
the hypotheses we test in this paper are, in
general, consistent with the theoretical propositions of the recent literature.
• A common element in the majority of the
models is that increasing transparency makes
the central bank’s reputation more sensitive
to its actions and therefore reduces the incentive to pursue inflationary policies. Transparency has less impact on the sensitivity of
reputation to the actions of the central bank
when its preferences are already known.
Regardless of the different implications of
increased transparency about social welfare
in the above models, more transparency
never results in higher inflation outcomes.
• Another common element is that the improvement in inflation performance may be offset
by a reduction in the capacity of the central
bank to stabilize the economy by surprising
the private sector with a policy-induced
demand shock.

III. A NEW DATA SET ON CENTRAL
BANKING INSTITUTIONS
In measuring transparency of central bank forecasts, we seek to establish the scope and coverage
of macro-forecasts published by central banks. Data
are taken from a survey of central banks contained
in FJMRS.12 They provide estimates of many transparency characteristics. We focus on central bank
publication and explanation of macroeconomic
forecasts, since this emphasis is closest to that of
both theoretical and policy-oriented work on transparency in monetary policy.
The great majority of central banks in our sample
publish some form of forward-looking analysis—
79 percent of the 94 covered in the FJMRS survey.13
Forward-looking analysis may, of course, take many
forms, some of which may help to guide expectations more than others. For some central banks, the
publication of a money target is in itself a form of
forward-looking analysis, since such targets are often
benchmarks rather than rules, and other forecasts must underpin the target. Other central banks
have attempted to guide inflation expectations by
presenting forecasts of a number of variables in
11. An exception in the recent theoretical literature is Cukierman (2000a), who focuses on the economic model and the operational objectives of the central bank rather than central bank forecasts and votes.

12. The characteristics covered in the FJMRS survey include numerical measures of how policy decisions are explained and the quantity of current analysis, research, and speeches provided by the central bank. They also assess and provide scores for various aspects of accountability, independence, and target setting, each of which may contribute to transparency and clarity in the monetary framework.

13. A total of 82 of these observations are included in our estimates. The other 12 are excluded because other data do not match up with them.

Table 1
Measure of Explanations of Forecasts and Forward-Looking Analysis: Questions and Distributions of Responses

Questions                               Categories of answers,         All   Industrial   Transitional   Developing
                                        distribution of results

Form of publication of forecasts        Words and numbers               35       16            5             14
                                        Either words or numbers         25        8            6             11
                                        Unspecified                     13        0            4              9
                                        None                            21        4            7             10

Forward-looking analysis in standard    More than annually              39       18            7             14
bulletins and reports                   At least annually               24        4            4             16
                                        Unspecified                     10        2            4              4
                                        Otherwise                       21        4            7             10

Discussion of past forecast errors      Yes                             21        8            3             10
                                        Sometimes                        9        7            2              0
                                        No                              64       13           17             34

Risks to forecast published             Words and numbers                9        7            2              0
                                        Either words or numbers         23        9            4             10
                                        None                            62       12           16             34

considerable detail including, for example, a discussion of risks.
The questions in the survey ask not only whether
the central bank provides forward-looking analysis.
They also consider the quality, scope, and frequency
of forecasts and the extent to which forecast errors
are monitored and publicly discussed. The exact
wording of the questions, along with the motivation
behind them, is provided below, with the distribution
of the results for each question shown in Table 1.
The questions are:

• What is the form of publication of forecasts? Is it in words only, or is it also presented formally in terms of numbers?14
Motivation: The "bottom line" of a forecast is usually presented in a numerical or graphical format, which may help to influence expectations and discipline policy, since the forecast may then be directly compared with a target, and subsequently outcomes may be compared with the forecast. The analysis underpinning the forecast may, however, be more important than the precise number, since the accuracy of numerical forecasts may sometimes be attributable to luck as well as judgment. The questionnaire distinguishes between those central banks that publish forecasts: (i) using both words and numbers, (ii) using either words or numbers, and (iii) using neither.

• With what frequency does the central bank publish forward-looking analysis in standard bulletins and reports?
Motivation: Published annual targets for money and inflation may help to guide expectations, but they only do so over a particular horizon. Forecasts published more frequently will guide/anchor expectations and may discipline policy over different forecast horizons.

• Are risks to the forecast published; if so, in
what form?
Motivation: A number of central banks use
their forecast as a vehicle for highlighting
the relative likelihood of various outcomes,
rather than to focus on a particular number.
The argument for publishing risks to a forecast is that a forecast that rests on a single
number for each time period may be accurate
for spurious reasons. An assessment of risks
can convey a more accurate representation
of the forecasters’ subjective assessment of
monetary conditions. As with the first question, the quality of risk assessment is judged
according to whether both numbers and words are used.

14. Graphs are treated as identical to numbers in this analysis.
• Is there a discussion of past forecast error; if
so, is this a standard feature of discussion?
Motivation: Attempts to build credibility may
rest on becoming more open about the capacity of the central bank (and other institutions)
to forecast accurately. An open assessment of
forecast errors may also reinforce the quality
of future forecasts.

Data Reliability
The FJMRS survey data are the most comprehensive description available of central bank efforts to
explain policy. The questions are worded objectively
and cover a number of aspects of forecasting whose
publication could enhance transparency to varying
degrees, yet there are a number of reasons that
might suggest caution in interpreting and using the
data. We assess the implications of each in turn.
First, there could be a problem of sample
selection bias to the extent that only the “best
performers” respond. We are confident that the
FJMRS survey is largely immune to this problem
because of the very high response rate. Of 114
questionnaires, 94 were completed, and the survey
covers over 95 percent of world gross domestic
product (GDP). Furthermore, as the discussion of
forecasts was only one facet of a broad survey, it is
less likely that central banks were deterred by this
particular part of the questionnaire.
Second, there could be problems with the subjective nature of the responses. For example, the
distinction between publishing regular targets and
forecasts may become blurred in some cases. Some
respondents may have interpreted publishing an
intermediate money target as providing forward-looking analysis. Such a target, after all, must be
based upon output and inflation projections. Other
countries, however, interpreted the publication of
an intermediate target as distinct from publishing a
forecast. This potential subjectivity bias may not
be serious, however, since the questionnaire asked
about the nature of publication, its frequency, and
the discussion of risks and forecast errors.
A third problem is that it may be relatively easy
to change some transparency characteristics. Some
of the transparency measures in the survey have
been implemented only recently, and so they may
not have had an impact on inflation in the sample.
If the impact of these measures represents a significant change in central bank behavior, the effect may
also take some time to influence inflation expectations. We consider this problem in the discussion
of the robustness of our empirical results.

IV. EMPIRICAL METHODS AND RESULTS
As noted above, theoretical work on transparency
has generated a number of different propositions
about the effect of publishing central bank forecasts.
In order to evaluate these alternative models, in
this section we provide empirical tests of the effect
of transparency on inflation and on the volatility
of output, using a cross-section of 87 countries
over the period 1995-99. Our results show that there
is a statistically significant negative correlation
between transparency and inflation and, in particular, in countries with flexible exchange rate
regimes. At the same time, there is no evidence of a
cost of transparency in terms of increased output
volatility.

Constructing an Index for Transparency
of Forecasts
The FJMRS data set provides four separate indicators that can be used to assess the detail in which
a central bank publishes its inflation forecasts.
These include the frequency with which forecasts
are published and whether past forecast errors and
risks to the forecast are discussed in publications.
These indicators are highly correlated, implying
that any regression that included each would exhibit
multicollinearity. This factor argues in favor of
aggregating the four to produce a composite measure
of transparency.
Rather than creating an aggregate measure by
simply taking the average of the different transparency measures in the FJMRS data set, we considered to what extent the FJMRS indicators can be
arranged to form a Guttman scale. Its major advantage is that, unlike an average of several variables, a
Guttman scale constructed from several indicators
does not result in a loss of information through
aggregation. A Guttman scale is constructed by
arranging binary variables in a sequence such that
a positive value for one indicator implies a positive
value for all previous variables in the sequence. To
construct a Guttman scale for transparency, we have
ordered our variables according to the decision tree
in Figure 1. Although a few of the central banks in
our sample do not fit this pattern (for example, they
discuss risks to their forecast but not past forecast

errors), the vast majority did. A common criterion for judging whether data can be ordered in a Guttman scale is if the "coefficient of reproducibility," defined as the number of errors/total responses, is less than 0.10. ("Errors" are cases where ordering according to a Guttman scale results in a false prediction for a response.) Our transparency data set easily satisfies this criterion, with a ratio of errors to total responses of 0.08.15

Figure 1
A Guttman Scale of Transparency in Forecasting

Forecast published? (if no, index = 0)
Forward analysis annually? (if no, index = 1)
Past forecast errors discussed? (if no, index = 2)
Risks to forecast discussed? (if no, index = 3; if yes, index = 4)
The advantage of Guttman scaling is that, based
on the aggregate index, one can determine exactly
how a central bank scores on each of the four separate sub-indicators. So, for example, a score of 2 on
our transparency index implies that a central bank
publishes forecasts and that it does so on at least
an annual basis, but it does not discuss either past
forecast errors or risks to the current forecast.16 In
contrast, if we took the simple average of the four
indicators, then a score of 2 could imply a positive
response on any two of the four sub-indicators. Furthermore, we later show that our results are robust
to the use of either a Guttman scale or the simple
average of our four sub-indicators of transparency
in forecasting. The distribution of the Guttman scores
is as follows: Of the 82 countries, 25 have a Guttman
score of 0; 8 have a score of 1; 24 have a score of 2;
6 have a score of 3; and 19 have a score of 4.
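To make the scaling concrete, here is a minimal Python sketch, with hypothetical column names, of how such an index and the errors-to-responses ratio might be computed; the exact error-counting convention used in the paper may differ from the one assumed here.

import pandas as pd

# Four binary indicators ordered so that a "yes" on one item implies a "yes"
# on all earlier items for (almost) every central bank (column names assumed).
ITEMS = ["publishes_forecast", "forward_analysis_annual",
         "discusses_forecast_errors", "discusses_risks"]

def guttman_score(row: pd.Series) -> int:
    """Index 0-4: number of consecutive 'yes' answers, starting from the first item."""
    score = 0
    for item in ITEMS:
        if row[item] == 1:
            score += 1
        else:
            break
    return score

def error_ratio(df: pd.DataFrame) -> float:
    """Share of responses inconsistent with the scale (the paper requires < 0.10):
    here a 'yes' that follows a 'no' earlier in the ordering counts as an error."""
    errors = 0
    for _, row in df[ITEMS].iterrows():
        seen_no = False
        for item in ITEMS:
            if row[item] == 0:
                seen_no = True
            elif seen_no:          # positive answer after a negative one
                errors += 1
    return errors / (len(df) * len(ITEMS))

# Usage (hypothetical data frame of 0/1 survey answers):
# df["transparency_index"] = df.apply(guttman_score, axis=1); print(error_ratio(df))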

Transparency and Inflation

As a first step toward investigating the effect of transparency in forecasting in monetary policy, we examined whether our index is negatively correlated with average inflation across our 87-country sample. Because the FJMRS data set examined transparency at one specific point in time (1998), we are limited to tests that consider only cross-country variation in inflation, rather than variation over time. Given that many reforms to increase central bank transparency are quite recent, we also chose to use a brief period for calculating average inflation (1995-99). This is based on consumer price index (CPI) data from the International Monetary Fund's International Financial Statistics. As discussed later, our results are nonetheless robust to using different time periods and to running regressions based on data from individual years.

Table 2 presents pairwise correlations between levels of transparency and average inflation. We use both our overall index and individual measures from the FJMRS data set. There is a significant negative correlation between all of these indicators and both the level and the variability of inflation, and this correlation is significant for the Guttman index in both cases.

Table 2
Transparency and Inflation: Pairwise Correlations

                                          Log inflation
Guttman scale of transparency             –0.37 (p < 0.01)
Publication?                              –0.29 (p = 0.01)
Forward analysis at least annually?       –0.15 (p = 0.15)
Past forecast errors discussed?           –0.21 (p = 0.05)
Risks to forecast considered?             –0.28 (p = 0.01)
Number of observations                    87

15. Alternative orderings, such as scaling in the following order of (i) forecasting, (ii) forward analysis, (iii) risks to forecast, (iv) past forecast errors, generate virtually identical results for the 82-observation sample that we use in our regression.

16. This highlights the importance of having the overall data set closely approximate a perfect Guttman scale, in order to be able to make this inference.

As a next step, we examined whether this relationship holds when controlling for other determinants of average inflation.17 To do this we followed
existing cross-country empirical literature on inflation including Campillo and Miron (1996), Lane
(1997), Bleaney (1999), Romer (1993), and Ghosh,
Gulde, and Ostry (1995). First, we included the log
of real GDP per capita, based on the possibility that
lower-income countries may rely more heavily on
the inflation tax to finance government expenditures.
Second, we included a measure of openness,18 following Romer (1993) and Lane (1997) who argue that
incentives for policymakers to generate “surprise”
inflation are weaker in more open economies. We
also included a measure of political instability as a
control variable, based on the prediction from a number of different political economy models that a high
frequency of government turnover may shorten the
time horizons of politicians, prompting them to
adopt more inflationary macroeconomic policies.19
Finally, we added a dummy variable to control for
a country’s exchange rate regime (fixed=1).20 This
follows the theoretical arguments that emphasize
how pegging can serve as a commitment device. It
also follows empirical findings of Ghosh, Gulde, and
Ostry (1995), Bleaney (1999), and others who show
that there is a clear negative correlation between
exchange rate pegs and average inflation.
Table 3 reports the results of four cross-country
regressions. Regression (1) includes each of our
control variables in addition to our Guttman scale
for transparency in forecasting. The coefficient on
the scale is negative and highly significant. Our
second regression adds an interaction term, which
allows the effect of transparency to vary between
countries with fixed exchange rates and those
with flexible exchange rate regimes. This tests our
hypotheses about transparency in forecasting with
106

J U LY / A U G U S T 2 0 0 2

greater precision because arguments in favor of
publishing inflation forecasts apply, above all, to
economies with floating exchange rates where the
monetary authorities have greater control over the
domestic money supply. In small open economies
with a fully credible fixed exchange rate regime and
with full convertibility, publishing forecasts should
have no effect on average inflation since the central
bank has little or no control over domestic interest
rates or the money supply. Following Canavan and
Tommasi (1997) and Herrendorf (1999), exchange
rate pegs can be seen as an alternative strategy for
establishing transparency, since they provide the
public with an easily observable indicator over
which the government has direct control.
The results of regression (2) in Table 3 correspond to those predicted in theory. In countries with
floating exchange rates, transparency in forecasting
is negatively correlated with average inflation. The
coefficient on our transparency index is highly significant and becomes more negative when compared
with the result from regression (1).21 The significance is accounted for by a high point estimate of the effect of transparency on inflation coupled with relatively wide error bands. In a country with a
floating exchange rate that began with an inflation
rate of 12 percent per annum, we estimate that a
decision by the central bank to begin publishing
regular inflation forecasts (a move on the index from
0 to 2) would lead to a reduction in inflation of
between 1.8 percent and 7 percent per annum (the
95 percent confidence interval). In contrast, in countries with fixed exchange rates, transparency in
forecasting has less effect on inflation. According
to our estimates, the effect of a similar increase in
transparency in forecasting in a fixed exchange rate
country would be much smaller (reducing inflation
from 12 percent to 11.8 percent per annum).
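As a rough check on these magnitudes (my own back-of-the-envelope calculation from the regression (2) estimates reported in Table 3, not a calculation in the original text): with log inflation as the dependent variable, a move of the index from 0 to 2 in a floating-rate country scales inflation by

$$\exp(2\hat\beta) = \exp\bigl(2 \times (-0.26)\bigr) \approx 0.59,$$

so inflation of 12 percent falls to roughly $12 \times 0.59 \approx 7.1$ percent. Using $\hat\beta \pm 1.96 \times 0.09$ gives scale factors of about $e^{-0.87} \approx 0.42$ and $e^{-0.17} \approx 0.84$, that is, inflation between roughly 5.0 and 10.2 percent, a reduction of between about 1.8 and 7 percentage points, consistent with the interval reported above.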
17. We restrict our attention to average inflation here because existing empirical work focuses on this variable.

18. We define openness as (x+m)/GDP, where x and m stand for exports and imports, respectively.

19. Drawn from a database created by Beck et al. (1999), this variable measures the percentage of key decisionmakers (executive, legislative majority[ies], coalition members) that change in a given year.

20. Based on the classifications in the IMF's Annual Report on Exchange Arrangements and Exchange Restrictions.

21. The results are very similar from a regression that excludes countries with pegged exchange rates.

Table 3
Transparency in Forecasting and Average Inflation

Dependent variable: log inflation          (1)                 (2)                 (3)
Log GDP per capita                         –0.47*** (0.07)     –0.45*** (0.07)     –0.50*** (0.066)
Openness                                   –0.001 (0.002)      –0.001 (0.002)      –0.002 (0.002)
Political instability                       1.13* (0.63)        0.97 (0.64)         1.07* (0.63)
Exchange rate peg (peg = 1)                –0.47** (0.23)      –0.95** (0.43)      –0.74* (0.40)
Transparency in forecasting index          –0.16** (0.07)      –0.26*** (0.09)
Peg × transparency                                              0.25* (0.13)        0.01 (0.10)
Inflation target × transparency                                                    –0.15* (0.08)
Money target × transparency                                                        –0.24** (0.01)
Constant                                    6.04*** (0.54)      6.11*** (0.51)      6.40*** (0.55)
R²                                          0.52                0.54                0.52
N                                           82                  82                  82

NOTE: Heteroskedastic-consistent standard errors are in parentheses; ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.

We also investigated whether the effect of transparency in forecasting on inflation might depend on whether countries are inflation targeters or
whether they target monetary aggregates. A number of authors have defined transparency as a key
ingredient of inflation targeting (e.g., Mishkin, 2000),
while some have gone further by arguing that
transparency is a prerequisite to inflation targeting
(Masson, Savastano, and Sharma, 1997). The latter
argument would suggest that transparency should
have a greater impact on the credibility of monetary
policy when adopted in conjunction with the use
of an inflation target. In regression (3) in Table 3,
we include two multiplicative dummy variables
representing transparency in countries whose frameworks are based on inflation targets and money
targets. We first construct two variables, inflation
target and money target, each of which is a binary
variable compiled from several different indicators
in the FJMRS data set.22 These two variables are
then multiplied by the Guttman scale such that the
inflation (money) target multiplicative dummy is
equal to the value of the Guttman scale when the
country’s framework is based more upon an inflation (money) target and is equal to zero otherwise.
The results of regression (3) suggest that the effect
of transparency on inflation may be stronger for
money-targeting frameworks, but tests reveal that
the difference between the coefficients is insignificant. In unreported results, we find that when
the binary dummy variables, inflation target and money target, are included separately, neither is
significant.
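A minimal sketch of the multiplicative-dummy construction described above (hypothetical column names; the FJMRS-based variables are not publicly available in this form):

```python
import pandas as pd

# Hypothetical example data: the transparency index (0-4) plus two binary
# indicators for whether the framework is based more on an inflation target
# or more on a money target.
df = pd.DataFrame({
    "guttman":          [0, 2, 4, 3],
    "inflation_target": [0, 1, 0, 1],
    "money_target":     [1, 0, 1, 0],
})

# Multiplicative dummies: equal to the Guttman scale when the framework is
# based on the given target, and zero otherwise.
df["inflation_target_x_transparency"] = df["inflation_target"] * df["guttman"]
df["money_target_x_transparency"] = df["money_target"] * df["guttman"]
print(df)
```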
Our estimates of the effects of our control variables on inflation are consistent with previous studies. Income per capita is negatively correlated with
inflation, while political instability tends to be associated with higher inflation. As in previous studies,
there is a very large and very significant negative
correlation between exchange rate pegs and inflation. One finding that may appear surprising is the
result that greater openness of an economy to trade
is not associated with lower inflation. Earlier studies
by both Romer (1993) and Lane (1997) using data
covering the 1970s and 1980s found evidence of
a negative openness-inflation correlation. More
recently Bleaney (1999) has reproduced earlier
findings with regard to the 1970s and 1980s, while
also concluding that there is no significant correlation between openness and inflation in data from the
1990s. Given that our data cover the period 1995-99,
our results are consistent with those obtained by
Bleaney. We also investigated whether these results with regard to openness were attributable to outliers with very high levels of openness, but our results rejected this possibility.
22

The questions ask central banks (i) to classify their regime, (ii) to report
if an explicit target was published for each variable, (iii) to rank objectives in practice, and (iv) to indicate which variable prevails in policy
conflicts. Each country was allocated a score for each of the following:
exchange rate, money and inflation focus, and discretion. The maximum of these scores was classified as the “targeted” variable. This
definition is broader than that used in other papers (e.g., Mishkin and
Schmidt-Hebbel, 2000).

Table 4
Transparency and Output Volatility: Pairwise Correlations

                                         Standard deviation annual     Standard deviation quarterly
                                         GDP growth (p value)          GDP growth (p value)
Guttman scale of transparency            –0.08 (0.47)                  –0.29 (0.13)
Publication?                             0.06 (0.59)                   –0.10 (0.60)
Forward analysis at least annually?      0.02 (0.86)                   –0.25 (0.19)
Past forecast errors discussed?          –0.22 (0.06)                  –0.20 (0.29)
Risks to forecast considered?            0.09 (0.43)                   0.16 (0.40)
Number of observations                   76                            29

Table 5
Transparency and Output Volatility: Controlling for Terms of Trade Variability

Dependent variable                            Standard deviation        Standard deviation
                                              annual GDP growth         quarterly GDP growth
Guttman scale of transparency                 –0.03 (0.12)              –0.005 (0.004)
Standard deviation terms of trade shocks      0.53*** (0.14)            0.23 (0.29)
Number of observations                        71                        28

NOTE: Heteroskedastic-consistent standard errors are in parentheses; ***, **, and * indicate significance at the 1, 5, and 10 percent
levels, respectively.


Transparency and Output Volatility
In addition to making predictions about the
effect of transparency on average inflation, models
of transparency in monetary policy also produce
comparative statics about the volatility of output.23
As noted, one’s prediction here depends heavily on
underlying assumptions. Our empirical investigation of the effect of transparency on output volatility
is limited by the lack of obvious controls to be used
in estimating cross-country differences in output
volatility. To construct measures of output volatility
(based on the standard deviation of GDP growth),
annual data were available for our entire sample
(1993-99), while quarterly GDP data were available
for 30 of our sample countries (also 1993-99).
Table 4 reports the results of pairwise correlations, using both the Guttman scale for transparency
and the individual indicators from the FJMRS data
set. There are several extreme outliers in our output
volatility data, and in order to obtain more robust
results we have excluded these countries from the
correlations reported in the table.24 The results
show that the correlation between transparency
and output volatility is often negative, especially in
the sample using quarterly data, but in only one case
is a correlation significant at conventional levels.
While this evidence certainly does not suffice to
demonstrate that publishing inflation forecasts
reduces output volatility, it does appear to be fairly
strong prima facie evidence against claims that increasing transparency increases output volatility. Results obtained before outliers were excluded were also consistent with this finding.25
23

We also tested for the effects of transparency on the volatility of
inflation, and our tentative results showed no significant positive or
negative impact.

24

In the sample based on annual data, Kuwait and the Kyrgyz Republic
were outliers in terms of having very high standard deviations of GDP
growth, while in the quarterly data Turkey was the only severe outlier.
We defined a "severe" outlier, x, using the following rule, where "pctile" refers to the percentiles of the entire sample: x < 25th pctile – 3 × (75th pctile – 25th pctile) or x > 75th pctile + 3 × (75th pctile – 25th pctile).
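A minimal sketch of the severe-outlier rule in footnote 24 (hypothetical data and variable names; only the cutoff itself is taken from the text):

```python
import numpy as np

def severe_outliers(x):
    """Flag severe outliers: x < Q1 - 3*(Q3 - Q1) or x > Q3 + 3*(Q3 - Q1)."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    spread = q3 - q1
    return (x < q1 - 3 * spread) | (x > q3 + 3 * spread)

# Hypothetical standard deviations of GDP growth; the last value would be flagged.
print(severe_outliers([1.2, 2.5, 3.1, 1.8, 25.0]))
```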

We also estimated several ordinary least squares
(OLS) regressions of output volatility on the Guttman
scale of transparency, controlling for the variance
of terms of trade shocks.26 The results, reported in
Table 5, show no significant effect of increased
transparency on output volatility.

V. INTERPRETING THE ROBUSTNESS
OF OUR RESULTS
The effort required for a central bank to publish
detailed forecasts may not appear to be particularly
arduous relative to the benefits of securing lower
inflation. Why, then, do many more central banks
not introduce detailed forecasts?27 We base our
detailed discussion of the robustness of our results
on five complementary explanations of this empirical conundrum:
• The result (that greater transparency in forecasting leads to lower inflation) is valid and
could be exploited by more central banks
than at present, but some central banks have
not yet completed the transition to greater
transparency.
• The result is valid overall but may not be true
of all frameworks.
• The result is valid, but there may be offsetting
costs to transparency, which deter some
central banks from introducing it.
• The results may be overstated or invalid
because of endogeneity and reverse causality.
• There may be other statistical biases.
Econometric techniques are necessary but
insufficient for judging the robustness of our results.
In this section we also include a detailed discussion
of how such tests relate to the theory and practice
of monetary frameworks.

A Transition to Greater Global Transparency in Monetary Policy?

One possible reason why only a relatively small number of central banks publish detailed forecasts may be that policymakers have not yet fully acted upon the evidence that transparency can contribute to lower inflation. The theoretical and empirical evidence on the effects of transparency is relatively new. Goodfriend's (1986) landmark paper was among the first to discuss the costs and benefits of secrecy
in monetary policy, in the context of the Merrill vs.
FOMC case.28 His paper was framed by questions
relating to how central banks might respond to
increasing evidence of the importance of expectations in economic decisionmaking. The theoretical
literature began to increase rapidly only at the end
of the 1990s, and our paper is among the first to
provide cross-country empirical evidence using
macroeconomic data. Similarly, the practical precedents of frameworks in which published forecasts
contributed significantly in building credibility have
emerged only in the 1990s.29
Framework designers have not always been
quick to adjust their frameworks in response to new framework innovations,30 yet recent developments in global framework design suggest that
central banks are on a transition path toward much
higher average levels of transparency. Even since
FJMRS constructed their data, several countries have
markedly increased the information about their
forecasts.31 And the rapid global proliferation of
explicit money and inflation targets in the 1990s is,
according to Mahadeva and Sterne (2001), part of a
global trend whereby disinflating countries use targets more as a forecasting device than as a policy
rule.
25

Before exclusion of outliers, all correlations were negative and seven
of ten were significant at conventional levels.

26

The variable was based on an indicator from the World Bank’s World
Development Indicators. We then calculated this effect as a share of
GDP and took the standard deviation of this indicator over the period
1992-97.

27

In the FJMRS survey, only three central banks (Norway, Sweden, and
the United Kingdom) satisfy every criterion by which the authors
judged the detail in which central banks explain forecasts.

28

One of the Fed’s arguments for resisting greater transparency, that it
was difficult in the early 1980s to provide information evenly to all
market participants, has been eroded over time by advances in information technology.

29

The discussion of central bank governors in Mahadeva and Sterne
(2000, pp. 182-205) illustrates that inflation-targeting countries have
made transparency a key aspect of their framework. The Bundesbank
has, according to Posen (2000), a long history of explaining its policies well, yet its independence is more widely perceived as the stronger contributor to its credibility.

30

For example, if regimes are classified according to money targeting,
exchange rate targeting, inflation targeting, and discretion, then only
three countries (Australia, the United Kingdom, and Uruguay) have
changed their regime as much as four times since the breakdown of
the Bretton Woods agreement.

31

Brazil, Chile, South Africa, and Thailand each now publish fan charts
for inflation and provide explicit discussion of risks to inflation forecasts in regular inflation reports.



Figure 2
Estimated Effect of Transparency on Inflation: Results of Recursive Estimation
[Chart: the estimated coefficient on the Guttman scale, with its 95 percent confidence interval, plotted on a vertical axis running from –0.4 to 0.4 against the number of observations in the sample, from 50 to 77.]

Transparency May Have Significantly
Different Effects on Inflation Across
Frameworks
Our discussion suggests that more explanation
of policy does not significantly reduce inflation under
all circumstances. Moreover, our point estimates of
the overall effect, though large, were surrounded by
relatively wide error bands, suggesting that there
are a number of frameworks that are exceptions to
the overall result. The governance structure of the
central bank may affect the willingness of the central bank to publish forecasts. In some central banks,
senior policymakers are responsible for the published forecast; in others, the central bank’s staff
are the sole authors.32 Such differential arrangements
may affect the perceptions of policymakers and
the public alike regarding the closeness of the link
between published forecasts and policy decisions,
and this in turn may affect the transmission channels between transparency and inflation outcomes.
To the extent that transparency operates by enhancing credibility, as is predicted by a number of the
models we have discussed, the effect of transparency
on inflation may be smaller when credibility has
been secured by actions rather than words. This
applies to exchange rate targeters (see results noted
previously) and may also apply to countries with
low inflation.33
Given that our sample includes countries both
with very low and very high average rates of inflation,
we examine the extent to which our results are
stable when we exclude high-inflation countries
from the sample. As a first step, we used a standard
procedure to determine whether the coefficient on
the Guttman scale was influenced by outliers. This
resulted in the exclusion of five observations, after
which the coefficient on the Guttman scale remained
significant.34 We then used a recursive estimation
procedure to examine how our results changed as we
progressively excluded high-inflation observations
from the remaining sample. This was an iterative
procedure which involved the following: (i) estimating regression (2) in Table 3 using a sample of the
50 countries with the lowest average rates of inflation, (ii) adding the observation with the next highest
rate of inflation, and (iii) reestimating the regression
and then repeating the process until we reached
maximum sample size. Figure 2 plots the estimated
coefficient on the Guttman scale, together with
bounds for the 95 percent confidence interval
according to sample size. The coefficient becomes
progressively more negative as we include high-inflation countries in the sample, suggesting that
the estimated anti-inflationary effect of publishing
a forecast in our Table 3 regressions may be somewhat inflated by the inclusion of high-inflation
countries.
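A schematic version of this recursive procedure (hypothetical column names; statsmodels OLS with heteroskedasticity-robust standard errors stands in for the authors' estimator):

```python
import statsmodels.api as sm

def recursive_guttman_coefficients(df, controls, start=50):
    """Re-estimate the log-inflation regression on progressively larger samples,
    adding one country at a time in order of average inflation, and collect the
    coefficient and standard error on the Guttman scale."""
    ordered = df.sort_values("avg_inflation")
    results = []
    for n in range(start, len(ordered) + 1):
        sub = ordered.iloc[:n]
        X = sm.add_constant(sub[["guttman"] + controls])
        fit = sm.OLS(sub["log_inflation"], X).fit(cov_type="HC1")
        results.append((n, fit.params["guttman"], fit.bse["guttman"]))
    return results
```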

Costs to Publishing Forecasts
There may be political and economic costs
associated with a central bank publishing forecasts,
and these may offset the benefits of potential reductions in inflation. To the extent that fiscal policy may
in some circumstances be the root of high inflation,
detailed forecasts are likely to pinpoint the source
of the problem and could, in some cases, lead to
tensions between the central bank and the government. Transparency may, in such circumstances,
also be proxying for a degree of central bank independence that may be very difficult to measure in conventional surveys.35

32

Kohn (2001) and Svensson (2001) include discussions on ownership of the forecast in their respective reports.

33

A related issue is the optimal degree of transparency. It is conceivable
that there exist circumstances when increased transparency might
lead to a deterioration in welfare or an increase in inflation. Telling the
public about a likely financial or exchange rate crisis might precipitate
the crisis. And many central banks have developed well-resourced
press offices to manage the clarity of published information.

34

The countries excluded were Bahrain, Indonesia, Mauritius, Turkey,
and Russia. We tested for outliers based on the dfbeta statistic, which
measures the impact of an individual observation on a specific coefficient. Following standard practice, we excluded observations for which
the absolute value of the dfbeta statistic was greater than 2/√n, where n is the number of observations. The coefficient on the
Guttman scale was –0.29 (0.08), p<0.01, after five outliers were eliminated (based on regression (2)).
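A sketch of the dfbeta screen described in footnote 34 (assuming the scaled dfbetas reported by statsmodels; data and variable names are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def dfbeta_outliers(y, X, col):
    """Flag observations whose scaled dfbeta for `col` exceeds 2/sqrt(n) in absolute value."""
    X = sm.add_constant(X)                 # X: pandas DataFrame of regressors
    fit = sm.OLS(y, X).fit()
    dfb = fit.get_influence().dfbetas      # n x k array of scaled dfbetas
    j = list(X.columns).index(col)
    return np.abs(dfb[:, j]) > 2 / np.sqrt(len(y))
```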

There may also be economic costs to introducing transparency that prevent central banks from
publishing forecasts. The discussion of theoretical
literature pointed to circumstances in which greater
transparency may be associated with higher volatility in inflation and output. Where there is a risk of
a banking or exchange rate crisis, for example, it is
questionable whether or not a central bank should
highlight such an issue by publishing forecasts.
Cross-country evidence presented in Chortareas,
Stasavage, and Sterne (2002) indicates that increased
transparency may reduce the costs of disinflation
in a sample of mainly industrialized economies.

Endogeneity and Reverse Causality?
There is a possibility that the results may be
affected by reverse causality whereby low inflation
may lead to greater transparency as well as being
caused by it. Similarly, there may exist endogeneity caused by cross-country differences in institutional circumstances or macroeconomic conditions that imply systematic variation in transparency and
inflation. In this section we seek to address these
issues that have potentially serious implications
for bias in our results.
Could it be the case that low inflation dissolves
a central bank’s preference for secrecy? Geraats
(2001a) models the effect of transparency on the
utility of both strong and weak central banks. Strong
central banks are defined as having lower (unpublished) inflation targets than weak ones. She considers two alternative scenarios that shed light on
the issue of endogeneity. In the first, transparency
is exogenous, being imposed by the public. Weak
central banks prefer secrecy since it affords them
an opportunity to conduct stabilization policies
with a lower probability of their preferences for
relatively high inflation being revealed. In the case
of transparency being an endogenous choice of
the central bank, however, weak central banks also
choose greater transparency. They overcome an
inclination toward secrecy because they appreciate
that secrecy will itself be interpreted by rational
agents as a sign of weakness.36
We attempt to assess empirically the extent of
any endogeneity. First, we can demonstrate that
although transparency is positively correlated with
other measurable characteristics of a country’s
economic, legal, and political environment, our
results remain robust even when we control for the


fact that transparency might be endogenous to these
other factors. The second column in Table 6 shows
simple correlation coefficients between our Guttman
scale of transparency and other variables to which
it might arguably be endogenous. These include
measures of development (per capita GDP, OECD
membership), other features of monetary policy
(a focus on inflation objectives, legal central bank
independence, quality of central bank analysis37),
and measures for the political environment (democracy, political instability, political polarization, type
of legal system38). As one might expect, transparency
is positively correlated with a number of these variables, but in no case is the correlation high enough
to suggest that transparency is perfectly correlated
with another variable. Two of these variables, per
capita GDP and political instability, are already
included in our Table 3 regressions. As a next step,
we reestimated regression (2) from Table 3 while
adding one of the variables that may affect transparency. We repeated this procedure for each variable. In every single case, the coefficient on the
Guttman scale remains negative, significant, and
of roughly the same magnitude as in the original
regression. The Guttman scale coefficient also
remained significant when we included all variables
in Table 6 simultaneously.
Although we can demonstrate that our transparency index is not merely proxying for levels of
income or the level of democracy, it remains possible
that our index may, to some extent, be influenced
by some other political or economic variable x
which may be difficult to measure directly. It may be possible to investigate this indirectly, though.
35

Fry (1998) questions the extent to which survey measures are capable
of fully capturing central bank independence.

36

Geraats qualifies this channel in her paper and provides possible
reasons why, in spite of her results under endogenous choice of
transparency, not all central banks are transparent. Furthermore, in
Geraats (2001b) the author shows that the desirability of transparency
depends critically on the institutional framework. In this model, when
the central bank has limited independence, less transparency reduces
the government’s information about the economy, which discourages
it from overriding the central bank.

37

The FJMRS study collected data on the extent to which central banks
conduct detailed analysis of inflation expectations (based on market
information) and on the sophistication of the models used to generate
forecasts.

38

The type of legal system is a dummy variable distinguishing whether
countries have a common legal system. Data are taken from La Porta
et al. (1998).

Table 6
Correlation of Transparency to Economic, Political, and Legal Variables

                                           Correlation      Coefficient on Guttman    Number of
                                           with Guttman     after inclusion†          observations
Democracy                                  0.23             –0.34 (0.10)              64
GDP per capita                             0.34             –0.26 (0.09)              82
OECD member                                0.39             –0.21 (0.09)              82
Inflation target                           0.17             –0.26 (0.10)              82
Central bank independence (FJMRS)          0.32             –0.28 (0.10)              82
Central bank independence (Cukierman‡)     –0.10            –0.39 (0.12)              47
Type of legal system                       0.15             –0.26 (0.10)              82
Quality of CB analysis                     0.44             –0.32 (0.09)              82
Political instability                      0.07             –0.26 (0.09)              82
Political polarization                     0.02             –0.24 (0.09)              82

NOTE: “Democracy” from the Polity III data set; “inflation target,” “central bank independence,” and “quality of analysis” from the
FJMRS data set; “political instability” and “polarization” from Beck et al. (1999). Heteroskedastic-consistent standard errors are in
parentheses; ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.
†Each

coefficient may be compared to –0.26, the result from regression (3) in Table 3.

‡Cukierman’s

central bank independence measure for 1980-89. A number of smaller countries in Cukierman’s data set were not
included in the FJMRS survey.

Table 7
Endogeneity of Transparency to Broader Policy Measures

                                   Correlation      Coefficient on Guttman    Number of
                                   with Guttman     after inclusion           observations
Fiscal surplus                     0.30             –0.25 (0.11)              59
Foreign currency bond rating       0.37             –0.20 (0.08)              58

NOTE: “Foreign currency bond rating” from Standard & Poor’s, January 2000; “fiscal surplus” from International Financial Statistics.
Heteroskedastic-consistent standard errors are in parentheses; ***, **, and * indicate significance at the 1, 5, and 10 percent levels,
respectively.

If this unmeasurable variable x involves some broad change in the economic or political conditions that leads to both increased transparency and lower
average inflation, then we might expect it to lead
also to improvements in other policy outcomes that
are exogenous to inflation. For example, in many
countries dramatic turnarounds in economic policy
often involve both reductions in inflation and
improvements in a government’s fiscal balance, to
the extent that fiscal balance can be seen as being
exogenous to inflation. Likewise, a policy turnaround
is also likely to lead to an improvement in the rating
on a government’s foreign currency bonds, which
should be independent of domestic inflation. This
suggests using the fiscal balance and the rating on
foreign currency–denominated bonds in order to
proxy for x. We can then perform the same test that
we performed for variables such as “democracy.”
Table 7 shows the results, while also showing the
simple correlation of each variable with the Guttman
scale. The results strongly suggest that our original
results with respect to transparency and inflation
cannot be attributed entirely to broader policy
improvements.
Table 8
Endogeneity of Transparency to Past Inflation and Output Outcomes

                                               Correlation      Coefficient on Guttman
                                               with Guttman     after inclusion
Past deviation of output from desirable        –0.16            –0.24 (0.10)**
Past output volatility                         –0.26            –0.23 (0.09)***
Past deviation of inflation from desirable     –0.42            –0.16 (0.10)*
Past inflation volatility                      –0.43            –0.20 (0.11)*

NOTE: “Past deviation of output from desirable” is average absolute deviation from 2 percent real GDP growth over 1990-94. “Past
output volatility” is the mean deviation of real GDP growth with respect to the average level of GDP growth. “Past deviation of inflation from desirable” is average absolute deviation from 2.5 percent inflation over 1990-94. “Past inflation volatility” is the log of the
mean deviation of inflation 1990-94 with respect to the average level of inflation for the same period. Heteroskedastic-consistent standard errors are in parentheses; ***, **, and * indicate significance at the 1, 5, and 10 percent levels, respectively.

We also considered directly the possibility of reverse causality, whereby the negative correlation between transparency and inflation could reflect the fact that central banks are more likely to publish forecasts when they have greater control over macroeconomic outcomes. If this assessment is accurate,
one would expect central banks to decide whether
to make their forecast public based on the level and
the volatility of past inflation (and potentially output). A bias would be introduced in our results, then,
to the extent that lagged inflation or lagged inflation
volatility is correlated with the current level of
inflation.
For each of our sample countries using the five
years preceding our sample period (1990-94), we
calculated the mean absolute deviation of inflation
and output from their desirable levels during this
same period (2 percent inflation and 2.5 percent
annual output growth). We also calculated the mean
absolute deviation of inflation and output from their
average level for the period, in order to measure
volatility. As the endogeneity critique would suggest,
our Guttman scale for transparency is in fact negatively correlated with lagged inflation outcomes
over the 1990-94 period (see Table 8). We then
included each of these four measures as control
variables in regressions using the specification from
regression (3) in Table 3. As can be seen in Table 8,
the coefficient on the Guttman scale is essentially
unchanged when we control for both lagged output
and lagged output volatility. However, when we control for lagged inflation and lagged inflation volatility,
the coefficient on the Guttman scale is less negative
and somewhat less significant in each case (p=0.10
and p=0.07). The reduction in the significance of
the coefficient after the inclusion of lagged inflation
is to some extent inevitable. Lagged inflation is itself
likely to have been caused partly by lagged transparency, measures of which are not at our disposal.
So although we acknowledge that it is difficult to be
certain that there is not some endogeneity between
transparency and inflation, we are reassured that
the association is clearly detectable even when we
control for the average level or volatility of past output and inflation.

Other Robustness Issues
We also considered several other robustness
issues, including whether or not our results are
stable when we consider subsamples of low-inflation
countries, whether changes in the time period affect
the results, and whether modifications in the Guttman
scale lead to significantly different inferences.
In addition to investigating outliers, we also
determined the extent to which our results are robust
with regard to modification of the time period considered. When we performed regressions based on
inflation data for individual years between 1995 and
1999, the coefficient on our transparency index was
always negative and generally statistically significant
at conventional levels.39
We also examined the possibility that the
Guttman scale might not be the most appropriate
technique for examining the relationship between
average inflation and the transparency indicators
collected as part of the FJMRS survey. We compared
the results of our regressions using a Guttman scale
with two alternative specifications. The first alternative was to take the simple average of the four indicators.
39

Coefficients and standard errors for each successive year were –0.29
(0.10) for 1995, –0.15 (0.17) for 1996, –0.30 (0.09) for 1997, –0.28
(0.21) for 1998, and –0.41 (0.20) for 1999.

To test which of the two specifications
(Guttman vs. average) provided more explanatory
power, we used a simple non-nested test developed
by Davidson and MacKinnon (1981); the test results
supported using the Guttman scale.40
The second alternative to the existing Guttman
scale involved creating a matrix of dummy variables,
each of which takes a value of 1 for a particular
range of values of the Guttman scale. This method
allows the estimated effect of each step on the
Guttman scale to vary, whereas introducing the
Guttman scale as a single variable constrains the
estimated effect of each successive step upward on
the Guttman scale to be constant. Our sample countries can be divided into three groups of roughly
equal size for this purpose. First, there are 25 countries that do not publish any form of inflation forecast (Guttman = 0). Second, there are 32 countries
that publish a basic forecast that, in most cases,
includes forward analysis on at least an annual basis
(Guttman=1 to 2). Finally, there is a third group
of 25 countries that publish an inflation forecast
including a discussion of previous forecast errors
and, in most cases, a discussion of risks to the forecast (Guttman=3 to 4).
We repeated regression (2) from Table 3, while
substituting two dummy variables for the Guttman
scale: one for countries with Guttman values of 1
and 2, and the other for countries with Guttman
values of 3 and 4. Both dummy variables had the
expected negative sign, and the dummy for Guttman
values of 3 and 4 was both more negative and more
statistically significant than the dummy for Guttman
values of 1 and 2.41 These results suggest that while
there may be significant gains from publishing a
basic inflation forecast, the marginal gain in terms
of inflation performance from publishing a more
detailed forecast may be even larger. It should be
noted, though, that because the coefficient on the
dummy for Guttman values of 1 and 2 was not
highly significant, using a standard F test, we were
unable to reject the null hypothesis that the coefficients on the two dummies were equal.
A final potential robustness issue involves the
measurement of our dependent variable. While
much of the cross-country literature on the determinants of inflation estimates a semi-log model, which minimizes
the effect of high-inflation outliers, Bleaney (1999)
argues that using log inflation as a dependent variable results in too much weight being given to countries with very low inflation. As an alternative, he
suggests estimating an equation where the dependent variable is (πi )/(1+πi ), where πi is inflation in
the ith country. All of our results from Table 3 remain
robust when we use transformed inflation instead
of log inflation as our dependent variable. As a further
alternative, we also repeated our Table 3 regressions
using a Box-Cox model, and the results of this estimation were nearly identical to our original semi-log
specification.
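For concreteness, the three dependent-variable choices compared here can be written as follows (the Box-Cox form shown is the standard one; the transformation parameter the authors estimated is not reported):

$$y_i = \ln \pi_i, \qquad y_i = \frac{\pi_i}{1+\pi_i}, \qquad y_i(\lambda) = \frac{\pi_i^{\lambda}-1}{\lambda},$$

where $\pi_i$ is average inflation in country i.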

Our Bottom Line on Robustness
We have subjected our results to numerous
econometric tests, and they remain reassuringly
robust. But how far have we gone in explaining the
apparent empirical conundrum we highlighted at
the start of the section—that few central banks publish forecasts in full detail in spite of the evidence
that such acts would facilitate lower inflation?
Although we have controlled for a number of additional variables in this section, it remains possible
that the negative correlation we observe between
transparency and inflation is biased by our inability
to control for unobserved country effects.
To be absolutely confident that our results are
subject to zero econometric bias, we would need
more data. To eliminate the possibility of reverse
causality affecting our results, for example, we would
need to distinguish those central banks that were publishing forecasts merely to rubber-stamp their reputation from those that were reluctant to publish because inflation was high. Such causality analysis would benefit from time-series or panel data on transparency, yet so far these data are unavailable.
We feel comforted, however, that we know of no
example of a framework in which policymakers
have reduced transparency in response to an
increase in inflation. Furthermore, to the extent that
transparency locks in low-inflation policies even if
it is introduced when inflation is already low, then the issue of reverse causality becomes less important, since transparency may be effective both in reducing inflation and in maintaining low inflation.
40

The J test involves estimating each specification and saving the fitted
values as a first step. Then, in the second step the fitted values from
each specification are included as an additional explanatory variable
in the alternative specification. The t statistic on the coefficient for
the fitted values can then be used as a test of the null hypothesis that
the alternative specification does not add any explanatory power. Using
this test we rejected the null hypothesis that the Guttman specification
did not add explanatory power to the “average” specification. In contrast, we could not reject the null hypothesis that the “average” specification does not add explanatory power to the Guttman specification.
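A minimal sketch of the J test described in footnote 40 (assuming pandas inputs; here the null and alternative design matrices would be the Guttman and "average" specifications):

```python
import statsmodels.api as sm

def j_test(y, X_null, X_alt):
    """Davidson-MacKinnon (1981) J test: add the fitted values from the
    alternative specification to the null specification and test whether
    their coefficient is significant."""
    fit_alt = sm.OLS(y, sm.add_constant(X_alt)).fit()
    X_aug = sm.add_constant(X_null).copy()
    X_aug["fitted_alt"] = fit_alt.fittedvalues
    fit_null = sm.OLS(y, X_aug).fit()
    return fit_null.tvalues["fitted_alt"], fit_null.pvalues["fitted_alt"]
```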

41

The coefficient (and standard error) for the dummy Guttman values
of 1 and 2 was –0.48 (0.38). The coefficient for the dummy Guttman
values of 3 and 4 was both larger and highly statistically significant:
–0.98 (0.37), p<0.01.

Of greater practical relevance could be the possibility that some central banks have attempted to
improve macroeconomic policy by simultaneously
altering policy preferences, transparency, and other
aspects of the institutional framework, which could
be argued to be the case in some inflation-targeting
countries.42 Cukierman (2000c) develops a model
where there is a possibility of a policymaker being
dependable or weak, yet inflation control errors
are sufficiently large to offer weak policymakers a
possible cloak of disguise. Dependable policymakers
like to raise the probability of being revealed as such,
whereas opportunistic policymakers like to reduce
the probability of being revealed as weak. An interpretation of his results is that a decision to become
transparent and a decision to become dependable
may be observationally inseparable. Even with good
time-series data, it would be difficult to identify
the precise empirical role of transparency in such
circumstances, yet our conclusion that publishing
forecasts can lead to lower inflation is unaffected
by this sort of endogeneity.

42

See Schaecter, Stone, and Zelmer (2000).
Overall, we acknowledge that in spite of the
battery of tests we employ, we cannot be sure that
our tests using cross-section data eliminate all possible biases. Yet our existing tests have gone far
enough to make us confident that we have identified
empirically an established theoretical channel for
attaining and maintaining low inflation. Furthermore, there are important global policy implications: many central banks around the world could
secure improved credibility and lower inflation by
publishing their forecasts in greater detail.

VI. CONCLUSION
There are a number of aspects to central bank
transparency, yet recent theoretical models and
much of the policy debate focus on the role of the
publication of central bank forecasts. The existing
literature provides mixed suggestions and evidence
on the welfare effects of monetary policy transparency. It is virtually unanimous, however, about
the main proposition tested in this paper: greater
transparency in monetary policy leads to lower
inflation. Furthermore, one of the most important
channels identified by the theoretical literature is
entirely consistent with the practical experiences
of the numerous central banks that have chosen
to explain policy more thoroughly: transparency

Chortareas, Stasavage, Sterne

makes a central bank’s credibility more sensitive to
its actions.
This paper is the first to consider detailed cross-country evidence for a wide range of countries
covering the effects of central bank transparency
on monetary policy outcomes. We construct an
index of central bank transparency based on forecast
publications by central banks. The main empirical
result is that greater transparency in publishing
forecasts is associated with lower inflation. We
acknowledge that it is difficult to be certain that
there is not some endogeneity between transparency
and inflation. We are, however, reassured that the
result is robust to a comprehensive set of econometric specifications and robustness checks, and
the association between transparency and inflation
is detectable even when we control for the effect of
the average inflation rate or volatility in past output
and inflation.
Our results suggest that transparency contributes
to lower inflation whether policy is based on an inflation-targeting or a money-targeting anchor. In countries that target the
exchange rate, the publication of forecasts does not
appear to have a significant impact on inflation.
Finally, we do not find evidence supporting the
proposition that a high degree of transparency is
associated with higher output volatility.

REFERENCES
Beck, Thorsten; Clarke, Anthony; Groff, Alberto; Keefer,
Phillip and Walsh, Patrick P. “Database on the Institutions
of Government Decision Making.” Unpublished manuscript,
Development Research Group, The World Bank, 1999.
Bleaney, Michael. “The Disappearing Openness-Inflation
Relationship: A Cross-Country Analysis of Inflation Rates.”
Working Paper No. 99/161, International Monetary Fund,
December 1999.
Blinder, Alan S. “Central Bank Credibility: Why Do We Care?
How Do We Build It?” American Economic Review, 2000,
90(5).
___________; Goodhart, Charles A.E.; Hildebrand, Philipp;
Lipton, David and Wyplosz, Charles. “How Do Central
Banks Talk?” Centre for Economic Policy Research,
October 2001.
Briault, Clive B.; Haldane, Andrew G. and King, Mervyn A. "Independence and Accountability." Working Paper No. 49, Bank of England, April 1996.
Buiter, Willem. “Alice in Euroland.” Journal of Common
Market Studies, June 1999, 37(2), pp. 181-209.
Canzoneri, Matthew B. “Monetary Policy Games and the
Role of Private Information.” American Economic Review,
December 1985, 75(5), pp. 1056-70.
Campillo, Marta and Miron, Jeffery A. “Why Does Inflation
Differ Across Countries?” Working Paper No. 5540,
National Bureau of Economics Research, 1996.
Canavan, Chris and Tommasi, Mariano. “On the Credibility
of Alternative Exchange Rate Regimes.” Journal of
Development Economics, October 1997, 54(1), pp. 101-22.
Chortareas, Georgios E.; Stasavage, David and Sterne, Gabriel.
“Monetary Policy Transparency, Inflation and the Sacrifice
Ratio.” International Journal of Finance and Economics,
2002 (forthcoming).
Clare, Andrew and Courtenay, Roger. “Assessing the Impact
of Macroeconomic News Announcements on Securities
Prices Under Different Monetary Policy Regimes.” Working
Paper No. 125, Bank of England, February 2001.
Cukierman, Alex. “Are Contemporary Central Banks
Transparent About Economic Models and Objectives and
What Difference Does It Make?” Working paper, Tel-Aviv
University, 2000a.
___________. “Establishing a Reputation for Dependability
by Means of Inflation Targets,” in Lavan Mahadeva and
Gabriel Sterne, eds., Monetary Frameworks in a Global
Context. London: Routledge, 2000b.
___________. “Accountability, Credibility, Transparency and
Stabilization Policy in the Eurosystem,” in Charles Wyplosz,
ed., The Impact of EMU on Europe and the Developing
Countries. New York: Oxford University Press, 2001.
___________ and Meltzer, Allan H. “A Theory of Ambiguity,
Credibility, and Inflation Under Discretion and Asymmetric
Information.” Econometrica, September 1986, 54(5), pp.
1099-1128.
Davidson, Russell and MacKinnon, James G. “Several Tests
for Model Specification in the Presence of Alternative
Hypotheses.” Econometrica, May 1981, 49(3), pp. 781-93.
Dotsey, Michael. “Monetary Policy, Secrecy, and Federal
Funds Rate Behavior.” Journal of Monetary Economics,
December 1987, 20(3), pp. 463-74.
Faust, Jon and Svensson, Lars E.O. “The Equilibrium Degree
of Transparency and Control in Monetary Policy.”
Discussion Paper No. 2195, Centre for Economic Policy
Research, October 2000.
___________ and ___________. “Transparency and Credibility:
Monetary Policy with Unobservable Goals.” International
Economic Review, May 2001, 42(2), pp. 369-97.
Fry, Maxwell J. “Assessing Central Bank Independence in
Developing Countries: Do Actions Speak Louder than
Words?” Oxford Economic Papers, July 1998, 50(3), pp.
512-29.
___________; Julius, DeAnne; Mahadeva, Lavan; Roger,
Sandra and Sterne, Gabriel. “Key Issues in the Choice of
Monetary Policy Framework,” in Lavan Mahadeva and
Gabriel Sterne, eds., Monetary Frameworks in a Global
Context. London: Routledge, 2000.
Geraats, Petra M. “Why Adopt Transparency? The Publication
of Central Bank Forecasts.” Working Paper No. 41,
European Central Bank, 2001a.
___________. “Transparency of Monetary Policy: Does the
Institutional Framework Matter?” Unpublished manuscript,
University of Cambridge, 2001b.
Ghosh, Atish R.; Gulde, Anne-Marie; Ostry, Jonathan D. and
Wolf, Holger C. “Does the Nominal Exchange Rate
Regime Matter?” Working Paper WP/95/121,
International Monetary Fund, 1995.
Goodfriend, Marvin. “Monetary Mystique: Secrecy and
Central Banking.” Journal of Monetary Economics,
January 1986, 17(1), pp. 63-92
Haldane, Andrew G. and Read, Vicky. “Monetary Policy
Surprises and the Yield Curve.” Working Paper No. 106,
Bank of England, 2000.
Herrendorf, Berthold. “Transparency, Reputation, and
Credibility Under Floating and Pegged Exchange Rates.”
Journal of International Economics, 1999, 49(1), pp. 31-50.
International Monetary Fund. Supporting document to the
Code of Good Practices on Transparency in Monetary and
Financial Policies. Washington, DC: 2000. <www.imf.org/external/np/mae/mft/sup/index.htm>.


Issing, Otmar. “The Eurosystem: Transparent and
Accountable or ‘Willem in Euroland’.” Journal of Common
Market Studies, September 1999, 37(3), pp. 503-19.
Jensen, Henrik. “Optimal Degrees of Transparency in
Monetary Policymaking.” Working paper, University of
Copenhagen, 2000.
Kohn, Donald. “The Kohn Report on MPC Procedures.” Bank
of England Quarterly Bulletin, Spring 2001, pp. 35-54.
Kuttner, Kenneth N. and Posen, Adam S. “Inflation, Monetary
Transparency, and G3 Exchange Rate Volatility.” Working
Paper 00-06, Institute for International Economics, 2000.


Peek, Joe; Rosengren, Eric S. and Tootell, Geoffrey M.B. “Does
the Federal Reserve Have an Informational Advantage?
You Can Bank on It.” Working Paper No. 98-2, Federal
Reserve Bank of Boston, April 1998.
____________; ___________ and ___________. “Does the
Federal Reserve Possess an Exploitable Informational
Advantage?” Working Paper No. 99-8, Federal Reserve
Bank of Boston, 1999.
Posen, Adam S. “Lessons from the Bundesbank on the
Occasion of Its Early Retirement,” in Lavan Mahadeva
and Gabriel Sterne, eds., Monetary Frameworks in a
Global Context. London: Routledge, 2000.

Lane, Philip R. “Inflation in Open Economies.” Journal of
International Economics, May 1997, 42(3-4), pp. 327-47.

Romer, David H. “Openness and Inflation: Theory and
Evidence.” Quarterly Journal of Economics, November
1993, 108(4), pp. 869-903.

La Porta, Rafael; Lopez-de-Silane, Florencio; Shleifer, Andrei
and Vishny, Robert W. “The Quality of Government.”
Journal of Law Economics and Organization, April 1998,
15(1), pp. 222-79.

Romer, Christina D. and Romer, David H. “Federal Reserve
Information and the Behavior of Interest Rates.” American
Economic Review, June 2000, 90(3), pp. 429-57.

Mahadeva, Lavan and Sterne, Gabriel, eds. Monetary
Frameworks in a Global Context. London: Routledge, 2000.
___________ and ___________. “Inflation Targets as a
Stabilisation Device.” The Manchester School, 2002
(forthcoming).
Masson, Paul R.; Savastano, Miguel A. and Sharma, Sunil.
“The Scope for Inflation Targeting in Developing
Economies.” Working Paper No. 97/130, International
Monetary Fund, October 1997.
Mishkin, Frederic S. "Inflation Targeting for Emerging-Market Countries." American Economic Review Papers
and Proceedings, May 2000, 90(2), pp. 105-09.
___________ and Schmidt-Hebbel, Klaus. “A Decade of
Inflation Targeting in the World: What Do We Know and
What Do We Need to Know?” in Norman Loayza and
Raimundo Soto, eds., Inflation Targeting: Design,
Performance, Challenges. Santiago: Central Bank of Chile,
2002.

Schaecter, Andrea; Stone, Mark R. and Zelmer, Mark.
“Adopting Inflation Targeting: Practical Issues for Emerging
Market Countries.” Occasional Paper No. 202, International
Monetary Fund, December 2000.
Stein, Jeremy C. “Cheap Talk and the Fed: A Theory of
Imprecise Policy Announcements.” American Economic
Review, March 1989, 79(1), pp. 32-42.
Sterne, Gabriel. “Inflation Targets in a Global Context,” in
Norman Loayza and Raimundo Soto, eds., Inflation
Targeting: Design, Performance, Challenges. Santiago:
Central Bank of Chile, 2002.
Svensson, Lars E.O. “Independent Review of the Operation
of Monetary Policy in New Zealand: Report to the Minister
of Finance.” <http://www.princeton.edu/~svensson/NZ/
RevNZMP.htm>, 2001.
Tarkka, Juha and Mayes, David G. “The Value of Publishing
Official Central Bank Forecasts.” Discussion Paper 22/99,
Bank of Finland, 1999.

Morris, Stephen and Shin, Hyun-Song. “Welfare Effects of
Public Information.” Unpublished manuscript, Cowles
Foundation, Yale University, January 2001.

Thornton, Daniel L. “The Fed’s Influence on the Federal
Funds Rate: Is It Open Market or Open Mouth Operations?”
Presented at The Bundesbank/CFE Conference
Transparency in Monetary Policy, October 1999.

Nolan, Charles and Schaling, Eric. “Monetary Policy
Uncertainty and Central Bank Accountability.” Working
Paper No. 54, Bank of England, October 1996.

Winkler, Bernhard. “Which Kind of Transparency? On the
Need for Clarity in Monetary Policy-Making.” Working
Paper No. 26, European Central Bank, August 2000.

Commentary
Adam S. Posen

Adam S. Posen is a senior fellow at the Institute for International Economics.

© 2002, Institute for International Economics.
In the span of 15 years, central bank transparency
has gone from being highly controversial to
motherhood and apple pie (or knighthood and
fish and chips to the Bank of England–based authors
of this paper). It is now an accepted broad goal to
which all central banks pay at least lip service. Yet,
like many other broad concepts in macroeconomic
policy, such as “fiscal discipline” or “price stability,”
what central bank transparency actually means
remains rather open to debate. Chortareas,
Stasavage, and Sterne make a valiant attempt to
test whether one particular aspect of transparency—
the release of economic forecasts by central banks
to the public—confers the benefits that some theories predict it should.
Recent monetary theory has had difficulties in
generating much in the way of operational hypotheses about transparency for empirical examination.
The bulk of today’s theoretical models applied to
central bank transparency—including those in the
formal analysis of inflation targeting—cast the issue
as whether or not a representative agent of the
public can discern the central bank’s “type” (wet
or dry; that is, soft or hard on inflation) and therefore
whether it is more or less “credible.”1 This is simply
the wrong question to frame, especially in the developed economies: no one really has any doubts about
the commitment of any current central banks to
low inflation, and any reasons for doubt in this area
would quickly become self-evident.2 Even in the
developing economies (which make up the bulk of
the authors’ sample), discerning runaway fiscal
positions, overt political pressures upon central
bank governors, or economic world views at odds
with today’s (perhaps questionable but evident)
consensus on a vertical long-run Phillips curve is
rather easy. Moreover, the all-or-nothing trigger
strategy in these models implies that, once a central
bank type is revealed, all is determined. This unrealistically reduces the conversation between central
banks and the private sector to a simple long-lasting
thumbs up or thumbs down. For purposes of even
applied research, the failure of the predictions of

I

Adam S. Posen is a senior fellow at the Institute for International
Economics.

© 2002, Institute for International Economics.

these widely used models raises further questions3
about much of the theoretical time-inconsistency
framework that has been the workhorse of monetary economics in the last 20 years.4
The authors, presumably in pursuit of rigor
and microfoundations, go to great pains to survey
the extant literature in order to claim a source for
their two testable hypotheses: that greater transparency reduces average inflation and increases
output volatility. Yet, the fact that these hypotheses
can easily be generated by a host of differing models
and say nothing specific about which (measurable)
aspect of transparency is at issue only underscores
how irrelevant these microfoundations are. The two
real issues are, instead, as follows: (i) to come up
with hypotheses that are specific to transparency
as distinct from just one more set of circular statements indicating that more credible central banks
have better inflation performance and (ii) to derive
reproducible measures of transparency that differentiate among the various types of information that
may be disclosed by central banks. Unfortunately,
the authors stick with the broad hypotheses and
arbitrarily focus on a particular aspect of transparency, idiosyncratically measured. This puts them
somewhat at odds with those few rigorous empirical
investigations of central bank transparency that
have already been done. Does it pay to be transparent? Yes, but not in the way the authors suggest.

SUMMARY OF THE ARGUMENT
The authors’ plan of attack is deceptively simple.
They go through the current theoretical literature
on central bank transparency (primarily the works
of Cukierman, Faust and Svensson, Geraats, and
Jensen). They acknowledge the relative lack of clear
consensus on operational predictions:
1

The seminal article starting this approach is Cukierman and Meltzer
(1986). See Faust and Svensson (2000a) and Geraats (2001), among
others, for examples of these models.

2

Despite the constant invocation of the word “credibility,” it remains
unclear that this concept does any meaningful work, except as a circular validation of successful central banks’ success. See Posen (1998).

3

Broader problems with this framework, such as the observation that
removal of the inflation premium proved rather easy once central
banks chose to remove it, have been noted previously by Blinder (1998),
McCallum (1997), and others.

4

The founding papers being Kydland and Prescott (1977), Calvo (1978),
and Barro and Gordon (1983), with the aforementioned Cukierman
and Meltzer (1986) setting up a new subfield in this area. In the spirit
of transparency, I should acknowledge my own reliance on such models
in, for example, Kuttner and Posen (1999), despite earlier published
misgivings.


The currently expanding theoretical literature on central bank transparency identifies
various channels through which increased
transparency may affect economic policy
outcomes. Not all of these move in the same
direction. And neither is there a universally
accepted definition of central bank transparency. Various authors conceptualize
transparency in different ways…
They then abruptly decide in their investigations to
“focus on the detail in which central banks publish
forecasts…,” suggesting that this is of wide interest
without any particular justification for its saliency
over preferences, targets, models, decisionmaking
processes, or other aspects of central bank transparency. Fair enough, were they to make this an
empirically driven exploration of simply what difference forecast disclosures make.
But the authors then underline the arbitrariness
of their focus by spending several pages discussing
the inconsistent theoretical models, most of which
are concerned with the revelation of central bank
preferences (over inflation versus output goals and
for the target level of inflation). At the conclusion
of the paper’s first section, the authors assert that
“transparency is generally conceptualized as the
publication of central bank forecasts, since this
allows the public to observe the control error.” This
claim is incorrect in two senses. First, in terms of
the theory, the public is only able to discern the control error, given the forecast, if they are also informed
of the model of the economy, of the nature of any
revealed shocks (and/or the central bank’s perception of those shocks), and most importantly of the
central bank’s true preferences. The forecast simply is not enough to reveal what the authors claim
it does. Second, in terms of the empirical investigations, it is extremely difficult to say what specifically
should be the impact of forecast releases on observable macroeconomic outcomes without considering
what other information releases or institutional
frameworks the central bank in question exercises.
In any event, the authors then identify two general hypotheses about the effects of transparency
for testing: (i) that increasing transparency reduces
the incentive to inflationary policies, never resulting
in higher inflation outcomes and (ii) that improvement in inflation performance may be offset by a
reduction in the capacity of the central bank to
stabilize the economy. The authors then pull in “a
new data set on central banking institutions,” created
from a survey of central banks conducted (under
the leadership of one of the authors) by the Centre
for Central Banking Studies of the Bank of England.
The results on the subset of questions on “…the
quality, scope, and frequency of forecasts and the
extent to which forecast errors are monitored and
publicly discussed” are to be used as the measure
of the independent variable of transparency. It should
be kept clear that they are testing joint hypotheses—
their offered hypotheses plus the idea that forecast
releases are a sufficient measure of transparency
plus the idea that the results of their measure accurately portray forecasts—and not just the hypotheses
about transparency per se.
In fact, this is critical, since the availability of
this survey data determines the scope of the authors’
investigations. The authors proceed to conduct a
cross-sectional analysis of the effects of this measure
of forecast disclosure on the level of inflation and
of the volatility of output, in a sample of 87 countries
over the period 1995-99. The four aspects of forecast disclosure are amalgamated into a four-point
Guttman scale, a new twist on the standard additive
indices for such measures. They find a significant
negative correlation between their measure of forecast transparency and average inflation, even when
controlling for such institutional factors as fixed
exchange rate pegs, political instability, and central
bank independence. They find no such significant
correlation between the disclosure of forecasts and
average output volatility. Finally, the authors take on
a large number of what they consider robustness
checks to their results, but they rephrase those as
the question, since “the effort required for a central
bank to publish detailed forecasts may not appear
to be particularly arduous relative to the benefits of
securing lower inflation…Why, then, do many more
central banks not introduce forecasts?” They end up
raising a number of questions about the possibility
of reverse causality, to which I will return.

SOME FRIGHTENING FOOTNOTES ON
THE RESULTS
Leaving aside for the moment the questions of
whether the dependent and independent variables
are properly defined, consider the authors’ results
on their own terms. Is the significant negative correlation between this measure of forecast disclosure
and the average level of inflation (and the absence
of any such correlation with output volatility) well
established? Given the commendable clarity with which the authors conduct their investigations, there are a number of details that unfortunately raise
some doubts. These have to do for the most part with
the construction of the forecast disclosure index
and with the nature of the sample determined by
the availability of this index.
A particular concern is the absence of discussion
of subsamples beyond Table 1, where the distribution
of responses across industrial, transitional, and
developing economies’ central banks is displayed.
As shown in Table 1, however, there is a strong correlation between level of development and positive
response to the survey regarding forecast disclosure:
56 percent of industrial countries publish forecasts
with words and numbers versus less than 25 percent
of transitional and less than 33 percent of developing
countries; 25 percent of industrial countries publish
“words and numbers” risks to the forecast, while
only 9 percent of transitional and no developing
countries do. While the authors later include per
capita gross domestic product (GDP) as a control
variable, this is likely insufficient to account for
such differences. In fact, in their robustness checks,
as displayed in the authors’ Figure 2, the authors
note that “[t]he coefficient becomes progressively
more negative as we include high-inflation countries
in the sample, suggesting that the estimated anti-inflationary effect of publishing a forecast in our
Table 3 regressions may be somewhat inflated by
the inclusion of high-inflation countries.” This is
more than somewhat inflated—as the authors move
from 50 observations to 63 (in their total sample of
94), the estimated coefficient drops below the lower
bound of significance on the point estimate at 50,
while at 50 it was not significantly different from
zero. Considering that industrial economies make
up only 28 observations of their full sample, this
hardly is convincing of subsample stability. This
problem is exacerbated when one considers that,
by dummying out the pegged exchange rate countries in their Table 3, they are disproportionately
taking out (European) developed countries from
their sample.
A second set of concerns has to do with embedding these results in the other aspects of central
bank structure. In short, for us to believe in these
results about the effects of forecast disclosure, we
have to believe that other aspects of central bank
transparency, independence, exchange rate regimes,
and the like are appropriately held constant. While
the authors make some attempt to do so, notably
by including the fixed exchange rate yes/no classification from the International Monetary Fund's (IMF)
lists in their Table 3 regressions, these are insufficient.
The authors’ controls for inflation or money [sic]
targets in Table 3 are based on central banks' self-reporting from the same central bank survey (see
their footnote 22). As there is great dispute over who
should call themselves “inflation targeters” (including those who are “implicit targeters”) and similar
dispute over “monetary targeters” (with many who
assume this label having proven their tendency to
ignore their stated monetary targets while making
policy decisions), this measure should be replaced
with a standardized, independently observable
means of verifying targeter status.
Moreover, given the narrowness of the authors’
“focus” on forecast disclosure, it is difficult in the
time period they consider (the late 1990s) to disentangle high scores on this measure from the adoption
of inflation targeting writ large. The authors themselves state in footnotes 30 and 31 that there is some
coincidence of the two and that their reporting of
self-assessed measures of targeter status leads to
some strange results (e.g., that only three countries
have changed their regimes four or more times since
Bretton Woods, which points up a very awkward
definition of regime). In their controls for central
bank independence, they use a very noisy measure
made up of five elements even though the literature
has long since established that only one aspect of
central bank independence has predictive power for
the industrial economies (restrictions on direct
central bank purchases of government debt) and a
different single aspect has power for developing
economies (turnover of central bank governors).5

5. Berger, de Haan, and Eijffinger (2001) and Eijffinger and de Haan (1996).
There are also some plain strange results which
are disclosed in the authors’ discussion, but not taken
sufficiently seriously. In footnote 23, they point out
that, while they get their expected results for a negative correlation between forecast disclosure and
inflation level, they find no link between their measure and inflation volatility, despite the long-standing
correlation of inflation’s volatility and level. Given
that the relationship between inflation volatility
and level increases with the level of inflation, this
may be another indicator that their desired correlation is being driven unduly by the low-inflation
countries. In footnote 27, the authors note that only
3 of their 87 countries get a top score on their four-point Guttman index of forecast disclosure; but footnote 41 shows that all the major variation across

countries is in the jump between scores 3 and 4—
in other words, it is only whether the risks to forecast
are discussed that matters, and this should have
been done as a simple dummy. But, definitionally,
this is the aspect of the authors’ forecast disclosure
that has most to do with preferences, least to do
with forecast narrowly defined, and probably least
dependable as gathered in a self-description. This
re-raises the questions about the authors’ definition of the independent variable as a measure of
transparency.
Finally, the authors’ Table 8, meant as a robustness check using lagged inflation and volatility to
test for reverse causation between low inflation
and forecast disclosure, shows that the greater the
past inflation and output deviations (from “desirable,” presumably high and low, respectively) and
the greater the past volatility of each, the less disclosure of forecasts.6 Given that this runs opposite
to the theoretical models (e.g., those of Faust and
Svensson) that the authors invoke to justify their
investigations suggesting that central banks with
less credibility will need to disclose more, this re-raises questions about the authors' definition of the
dependent variables of interest as average inflation
and average output volatility. It also suggests, as the
authors acknowledge, “that it is difficult to be certain
there is not some endogeneity between transparency
[as defined by their forecast disclosure measure]
and inflation…”

6. Another problem with this examination of endogeneity is that all countries' desirable real growth level is presumed to be 2 percent, and the desirable inflation level is presumed to be 2.5 percent, which is rather inappropriate for a sample that includes numerous developing and transitional economies.

WHAT DEFINITION OF TRANSPARENCY
IS RELEVANT?
Returning to the design of the authors’ investigation, we have to reconsider the explanatory variable.
As discussed in Posen (1999), we can use the control
theory view of monetary policy to come up with
the aspects of central bank behavior that can be
revealed by transparency. Essentially, the central
bank has preferences over macroeconomic outcomes, an intermediate target linked to a model of
the economy, and a forecast of that economy based
on shocks revealed to date. If the issue is to determine the central bank’s preferences on the relative
weight of inflation versus output goals or for a
longer-term target for inflation, the public and markets need two of these three plus the outcome in
the economy in order to determine the remaining
one. A forecast alone, even combined with economic
results, does not necessarily reveal preferences, without a clear revelation of the central bank’s model.
Yet, in the real world, central banks never have one
exact model that is relied upon consistently, especially when there are multiple monetary policy
decisionmakers (voters on a committee) and changing economic structures; and, in the real world, the
public and even markets are unable to discern such
models and reason backwards from them and from
forecasts and shocks to determine central bank
preferences, even if it is possible theoretically.
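A stylized illustration, under assumptions of my own choosing (a quadratic loss and a linear Phillips curve, neither taken from the authors' paper), makes the point. Suppose the central bank minimizes

\[
L = (\pi - \pi^{*})^{2} + \lambda\, y^{2}, \qquad \pi = \pi^{e} + \kappa\, y + \varepsilon ,
\]

choosing the output gap $y$ after forming an estimate $\hat{\varepsilon}$ of the shock. The optimal plan implies a published inflation forecast of

\[
E[\pi] = \frac{\lambda\,(\pi^{e} + \hat{\varepsilon}) + \kappa^{2}\,\pi^{*}}{\lambda + \kappa^{2}} ,
\]

a single number in which the preference parameters $(\lambda, \pi^{*})$, the model parameter $\kappa$, and the perceived shock $\hat{\varepsilon}$ are entangled; many different combinations generate the same forecast. Only when two of the three ingredients are known, along with realized outcomes, can the remaining one be backed out.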
Ultimately, central bank transparency is about
the broad communication of general preferences
by the central bank. When the public trusts in the
preferences of the central bank, its inflation expectations will respond differently to shocks than when
trust is less (King, 1997; Kuttner and Posen, 1999).
In this regard, it is important to have a forecast definition that is not just the revelation of numbers, but
a mechanism for a structured conversation with the
public that conveys the central bank’s evaluation
of shocks and reinforces the longer-term goals. Posen
(2000) sets out this framework and identifies it as
the source of the Bundesbank’s success. This would
imply that the independent variable to measure is
the institutional framework committing the central
bank to regularly report on its activities and explain
its performance against its stated goals (Kuttner and
Posen, 2001, give one way of operationalizing such
a measure for empirical work).
Interestingly, this is consistent with the fact noted
in the previous section that the bulk of variation
and explanatory power in the authors’ measure has
to do with whether or not (in their survey) central
banks discuss risks to the forecast. Thus, it is not
the forecast specifically or the implication of reasoning backwards to a model (to “discern control errors”
as the authors have it) that counts. It is the provision
of context for economic decisionmakers. This underlines the difficulties the authors have in “controlling”
for the broader institutional structures of inflation
targeting, central bank independence, and exchange
rate pegs in their investigations.
Given that the authors have only a cross-section
of central bank self-definitions, because of their
commitment to use survey data from a specific one-time survey, they cannot distinguish whether the
disclosure of forecasts is the result of a prior or
contemporaneous adoption of inflation targeting
or of something called (but not necessarily indicative
of) central bank independence. Their use of the IMF’s
definition of exchange rate pegs is particularly misleading in this set-up: it is impossible to tell whether
inflation forecasts do not offer “transparency
benefits” when pegs are in place (i) because of a
lack of discretion or (ii) because inflation is irrelevant
to the goal of the central bank (especially because, as
is now well-known, many floaters seem like fixers).
Ultimately, the authors’ use of their idiosyncratic
measure of forecast disclosure as a measure of
transparency, and their reliance on the one-time
observation of it, renders their definition of the
explanatory variable irrelevant.

WHAT EXPECTED IMPACT OF
TRANSPARENCY IS REASONABLE
TO TEST?
Whatever the measure of transparency, the
authors have another hurdle in deciding which
variables should be affected by it. As mentioned,
they test two—average inflation level and average
output volatility—finding support for a negative
correlation between transparency and the first
and no correlation with the second. But are these
the right variables to test? It is not clear that average
macroeconomic outcomes, particularly of real variables, are the right dependent variables for investigations of transparency. The obvious problem,
particularly on the real side, is the inability to control
for a sufficient range of factors, including the degree
of structural change induced (or not) by changes in
monetary regime, a la Lucas.
More important, though, is the question of what one thinks central bank transparency is for. Even if the central bank has a greater commitment to low inflation, or can be held to one because of increased transparency (which seems to be the authors' vision), depending upon the shocks that the bank faces, it can choose to let inflation rise
temporarily. For example, after the second oil shock
in 1979, the Bundesbank allowed its “unavoidable
rate of price increase” to climb from 2 percent to
over 4 percent in 1980 and then transparently
brought it down year-by-year to 1986 (see Laubach
and Posen, 1998). This may well have been the
optimal response to a supply shock for a (nearly)
inflation-targeting central bank (see Bernanke et al.,
1999, Chap. 4). What made the Bundesbank’s policy
a success was that this optimal easing did not result
in pass-through of a second round of inflation
increases, or even a particularly costly disinflation.

Kuttner and Posen (1999, 2001a,b) argue that the real issue of central bank transparency is therefore the reaction of inflation expectations
and (long-bond) markets conditional on central bank
action. The implication is that the macroeconomic
variables likely to be affected most directly by transparency are inflation persistence, not inflation level,
and the ratio of inflation and output volatility, not
the level of output volatility. Following King’s (1997)
articulation of the optimal state-contingent rule
strategy for an inflation-targeting central bank,
increased transparency should remove Svensson’s
(1997) stabilization bias, reducing inflation persistence, and should free the central bank to be more
aggressive about stabilizing output. From this point
of view, the authors’ focus on inflation level as the
dependent variable is misguided, even if one accepts
that the point of transparency is to make the central
bank more sensitive about its reputation.

THE CELL PHONE–LIKE USES OF
CENTRAL BANK TRANSPARENCY7
What are the hypotheses that might be tested
about central bank transparency, and what are the
appropriate measures that might be utilized instead
of the ones in the authors’ paper? Think of the relationship between a central bank and the attentive
public as analogous to the relationship between a
married couple. Good communication is key if the
relationship is to cope well with the bumps and
bruises of everyday life. While familiarity removes
the need for too much explicitness in communication, changing surroundings and personal needs
over time make it dangerous to take too much previous understanding for granted. Presumably, the
relationship is for the long-term and day-to-day misunderstandings do not imperil the relationship, but
they can make it less pleasant or mutually beneficial.
My wife already has a (subjective) estimate of
how considerate a husband I am, that is, my “type”
on a scale of wet to dry. While she may update it if
I were to do something extraordinarily bad or good
more than once, she is unlikely to do so as a result
of our quotidian existence. In fact, small variations
in the day-to-day signals she gets from me are likely
to be ignored, while any big changes will be easily
noticed, whatever the day-to-day signals. Communication between us, therefore, is not about her judging
my type or my commitment—instead, it is about
the smaller, practical issues of coordination.
7. This section is drawn and adapted from Posen (2002).


This fall, in response to the more worrisome
world in which we find ourselves, my wife had me
get a cell phone. This cell phone increases the transparency with which I live my life: I can be reached
at any time we are apart, and similarly I can reach
her; in an emergency (God forbid), I can make a
call; and, most concretely, we can use it to update
each other on our schedules, such as who is likely
to get home first from work. I can be more or less
considerate about updating her by using the cell
phone (probably well within one standard deviation
of how considerate I was prior to having this transparency mechanism). Yet, her primary concern is
to know where I am for practical reasons and not
to have a means of monitoring my commitment to
her well-being.
Being a bit more explicit, there are six conceivable channels through which my use of the cell
phone could affect her:
• She could be more relaxed in general if
updates via cell phone about my comings
and goings reduce her worry.
• She could find life a little easier if this device
makes it simpler for us to adjust our schedules.
• She could find that, after all, she really does
not care about what I say on the cell phone,
just that I am no less prompt or responsive
than I was before.
• She could herself become more cognizant of
my activities and use this to demand greater
responsiveness, perhaps interfering with my
normal habits.
• She could become annoyed if I were to say
that I would call at a specified time and am
late in doing so.
• Or, she could be more (rather than less) worried if she came to count on my calls and
events beyond my control, even innocuous
ones, prevented me from calling.
It is ultimately an empirical matter which of
these various, occasionally contradictory, but all
theoretically plausible, effects will turn out to be of
practical import to the day-to-day functioning of our
relationship. To repeat, none of this, however, should
change her basic estimate of what type of husband
I am and therefore of my level of commitment.
Now, consider the analogous situation of a
central bank communicating with its public (including financial markets) as part of an ongoing relationship based on a fundamental assumption of trust
and good will. The addition of various recent measures of transparency to monetary policymaking—
announced inflation targets, disclosure of votes,
timely publication of minutes, explicit forecasts, and
so on—in hopes of showing sensitivity to markets
and the public’s concerns are the equivalent of my
acquisition and use of the cell phone in response to
my wife’s concerns. Being a bit more explicit, there
are six conceivable channels through which central
banks’ enhanced transparency could influence the
public’s and markets’ reaction to monetary policy
(see Table 1):
• The public could be reassured in general if
updates via regular releases about policy
decisions reduce worry about what is going
on in the short-term.
• The public, and particularly markets, could
find it a little easier to plan their actions if
transparency about the details of the economy
makes the world more predictable.
• The public could find that, after all, what
central banks say is irrelevant, so long as the
central banks are no less responsive to shocks
than before.
• The public, and particularly markets, could
become more cognizant of central bank
activities and use this to demand greater
responsiveness contingent on specific targets,
perhaps interfering with central banks’ normal habits.
• The public could become annoyed, adding
political pressures, if central banks were to
announce a specific target or forecast, and
fail to meet it.
• Or, the public could be more (rather than less)
worried in general if it demanded adherence
to announced targets, diverting central banks
from responding optimally to shocks.
As Table 1 summarizes, each of these six practical views of central bank transparency (reassuring,
detailed, irrelevant, contingent, annoying, and
diverting) focuses on a specific set of information
releases, with a specific hypothesis for the impact
of those releases upon expectations and central
bank behavior and for the mechanism by which
this impact is transmitted. None of these hypotheses
can be ruled out a priori on theoretical grounds,
and these multiple options show the diversity of
implications possible from (and proposed in) the
current literature on central bank transparency. All
are subject to empirical examination, and some
work has already been completed.


Table 1
Practical Views of Central Bank Transparency

View of transparency | Information released | Hypothesized impact | Cause of impact | Testable impact | Result
Reassuring | Regime, speeches | Greater flexibility | Greater trust | Inflation persistence | Supported by evidence
Detailed | Forecasts, models | Greater disclosure | Greater predictability | Market response | Supported by evidence
Irrelevant | Whatever | None | Only actions matter | Inflation level | Exact opposite—lower inflation
Contingent | Mandate, votes | Stronger reputation | Greater credibility | Inflation volatility | Unsupported by evidence
Annoying | Minutes, targets | Greater confusion | Increased politicization | Effect of target misses | Unsupported by evidence
Diverting | Targets, goals | Less discretion | Increased oversight | Output volatility | Unsupported by evidence

On the reassuring view, as discussed in the preceding section, Kuttner
and Posen (1999, 2001a,b) have offered evidence
that the announcement of explicit inflation targets
is associated with a decline in inflation persistence
and no rise in output volatility; this tends to contradict the diverting view, which is the mirror image
with opposite predictions. On the details view,
Daniel Thornton has done a series of papers (including Poole, Rasche, Thornton, 2002, for this conference) indicating the benefits of disclosure for the U.S.
Treasuries market.8 The authors of “Does It Pay To
Be Transparent?” can find in these papers some
evidence that their answer of yes is correct, even
though the authors’ own method of assessing the
contingent view is unconvincing.

8. Though Clare and Courtenay (2001) and Bomfim and Reinhart (2001) find indications of some less beneficial effects at very short time horizons.

REFERENCES

Ball, Laurence. “Efficient Rules for Monetary Policy.” International Finance, 1999, 2(1), pp. 63-83.

Barro, Robert J. and Gordon, David B. “Rules, Discretion and Reputation in a Model of Monetary Policy.” Journal of Monetary Economics, 1983, 12(1), pp. 101-21.

Berger, Helge; de Haan, Jakob and Eijffinger, Sylvester C.W. “Central Bank Independence: An Update of Theory and Evidence.” Journal of Economic Surveys, 2001, 15(1), pp. 3-40.

Bernanke, Ben S.; Laubach, Thomas; Mishkin, Frederic S. and Posen, Adam S. Inflation Targeting: Lessons from the International Experience. Princeton: Princeton University Press, 1999.

Blinder, Alan S. Central Banking in Theory and Practice. Cambridge, MA: MIT Press, 1998.

Bomfim, Antulio N. and Reinhart, Vincent R. “Making News: Financial Market Effects of Federal Reserve Disclosure Practices.” Discussion Paper No. 2000-14, Federal Reserve Board Finance & Economics, 2000.

Calvo, Guillermo. “On the Time Consistency of Optimal Policy in the Monetary Economy.” Econometrica, 1978, 46(6), pp. 1411-28.

Clare, A. and Courtenay, R. “Assessing the Impact of Macroeconomic News Announcements on Securities Prices Under Different Monetary Policy Regimes.” Working Paper No. 125, Bank of England, 2001.

Cukierman, Alex. “Are Contemporary Central Banks Transparent About Economic Models and Objectives and What Difference Does It Make?” Federal Reserve Bank of St. Louis Review, July/August 2002, 84(4), pp. 15-36.

___________ and Meltzer, Allan H. “A Theory of Ambiguity, Credibility, and Inflation Under Discretion and Asymmetric Information.” Econometrica, 1986, 54, pp. 1099-128.


Eijffinger, Sylvester C.W. and de Haan, Jakob. “The Political
Economy of Central Bank Independence.” Princeton
Special Papers in International Economics, No. 19, 1996.
Faust, Jon and Svensson, Lars E.O. “Transparency and
Credibility: Monetary Policy with Unobservable Goals.”
Discussion Paper No. 1852, Centre for Economic Policy
Research, 2000a.
___________ and ___________. “The Equilibrium Degree of
Transparency and Control in Monetary Policy.” Discussion
Paper No. 2195, Centre for Economic Policy Research,
2000b.
Geraats, Petra A. “Why Adopt Transparency? The Publication
of Central Bank Forecasts.” Working Paper No. 41,
European Central Bank, January 2001.
Jensen, Henrik. “Optimal Degrees of Transparency in
Monetary Policymaking.” Discussion Paper No. 2689,
Centre for Economic Policy Research, 2000.
King, Mervyn. “Changes in UK Monetary Policy: Rules
and Discretion in Practice.” Journal of Monetary Economics,
June 1997, 39(1), pp. 81-97.
Kuttner, Kenneth N. “Monetary Policy Surprises and Interest
Rates: Evidence from the Fed Funds Futures Market.”
Journal of Monetary Economics, June 2001, 47(3), pp.
523-44.
___________ and Posen, Adam S. “Does Talk Matter After All?
Inflation Targeting and Central Bank Behavior.” Working
Paper No. 99-10, Institute for International Economics,
1999.
___________ and ___________. “Inflation, Monetary
Transparency, and G3 Exchange Rate Volatility,” in M.
Balling, E. Hochreiter, and E. Hennesy, eds., Adapting to
Financial Globalisation. London: Routledge, 2001a.
___________ and ___________. “Beyond Bipolar: A Three-Dimensional Assessment of Monetary Frameworks.”
International Journal of Finance and Economics, October
2001b, 6(4), pp. 369-87.
Kydland, Finn E. and Prescott, Edward C. “Rules Rather Than Discretion: The Inconsistency of Optimal Plans.” Journal of Political Economy, June 1977, 85(3), pp. 473-90.
Laubach, Thomas and Posen, Adam S. “Disciplined
Discretion: Monetary Targeting in Germany and
Switzerland.” Princeton Essays in International Finance,
December 1998.
Mahadeva, Lavan and Sterne, Gabriel, eds. Monetary Policy
Frameworks in a Global Context. London: Routledge
(Bank of England), 2000.
McCallum, Bennett T. “Crucial Issues Concerning Central
Bank Independence.” Journal of Monetary Economics,
June 1997, 39(1), pp. 99-112.
Poole, William; Rasche, Robert H. and Thornton, Daniel L.
“Market Anticipations of Monetary Policy Actions.” Federal
Reserve Bank of St. Louis Review, July/August 2002, 84(4),
pp. 65-94.
Posen, Adam S. “Central Bank Independence and
Disinflationary Credibility: A Missing Link?” Oxford
Economic Papers, July 1998, 50(3), pp. 335-59.
___________ . “No Monetary Masquerades for the ECB,” in
E. Meade, ed., The European Central Bank: How
Accountable? How Decentralized? Washington, DC: American
Institute for Contemporary German Studies, 1999.
___________ . “Lessons from the Bundesbank on the
Occasion of Its Early Retirement,” in Lavan Mahadeva
and Gabriel Sterne, eds., Monetary Policy Frameworks in
a Global Context. London: Routledge (Bank of England),
2000.
___________ . “Six Practical Views of Central Bank
Transparency,” in P. Mizen, ed., Central Banking, Monetary
Theory and Practice: Essays in Honour of Charles Goodhart.
Vol 1. London: Edward Elgar, 2002 (forthcoming).
Svensson, Lars E.O. “Inflation Forecast Targeting:
Implementing and Monitoring Inflation Targets.” European
Economic Review, June 1997, 41(6), pp. 1111-46.

Does Inflation
Targeting Matter?
Manfred J.M. Neumann and Jürgen von Hagen

I. INTRODUCTION
Since it was first introduced by New Zealand
and Chile in 1990, Canada in 1991, and the
United Kingdom in 1992, inflation targeting (IT)
has received a lot of attention in the public and
academic debate over the design of monetary policy
regimes. In part, this attention reflects the growing number of countries that have adopted an IT
regime over the past decade. Schaechter, Stone,
and Zelmer (2000) count 13 countries with IT
experience as of February 2000: Australia, Brazil,
Canada, Chile, the Czech Republic, Finland, Israel,
New Zealand, Poland, South Africa, Spain, Sweden,
and the United Kingdom. Corbo, Landerretche
Moreno, and Schmidt-Hebbel (2001) add Korea
and Thailand to this list. Most recently, Hungary
and Switzerland have introduced inflation targets.
Since the Bundesbank declared a normative target
inflation rate as the principal goal of its monetary
policy, Mishkin and Posen (1997), following von
Hagen (1995), classify Germany as an early case
of IT, although the German inflation objective was
formulated for the medium run, while the short-run
focus of the Bundesbank’s monetary strategy was
on the annual monetary target.
As early as 1994, an academic conference
reviewed the experience with IT (Leiderman and
Svensson, 1995). A number of more recent studies
summarize the experience gained with IT over
the past decade (Bernanke et al., 1999; Corbo,
Landerretche Moreno, and Schmidt-Hebbel, 2001).
These papers focus on a variety of questions related
to the choice of monetary regimes, including the
improvement in inflation performance, in monetary


Manfred J.M. Neumann is a professor of economics at the University
of Bonn and a member of the Academic Advisory Council of the
German Federal Ministry of Economics and of the Bundesbank’s
Research Advisory Council. Jürgen von Hagen is a professor of economics and director of the Center for European Integration Studies at
the University of Bonn, a research fellow of CEPR, and a member of
the Academic Advisory Council of the Federal Ministry of Economics,
the Bundesbank’s Research Advisory Council, and the French Comité
Economique de la Nation. The authors thank their discussants, Frederic Mishkin and Stephen Cecchetti, for valuable comments and suggestions.

© 2002, The Federal Reserve Bank of St. Louis.

policy credibility, and in the sacrifice ratio, i.e., the
cost of lowering inflation.
The debate over IT exposes a couple of odd
characteristics. One is that, despite a lot of effort,
empirical studies on IT have consistently failed to
show convincingly that IT has been an important
factor in speeding up disinflation, achieving lower
inflation rates, lowering the cost of disinflation, or
raising the credibility of the central bank’s commitment to low inflation. An important challenge for
IT supporters comes from the observation that the
environment of the 1990s, when IT was first undertaken, was generally benign, implying that the particular strategy of IT may have done little to improve
monetary policy outcomes over what any reasonable strategy could have achieved (Cecchetti and
Ehrmann, 2000). We will review this literature in
more detail in Section II.
The other oddity is that, despite the lack of
empirical evidence supporting the advantages of
IT, its proponents consistently argue that the failure
to adopt it jeopardizes the ability of a central bank to
deliver price stability. For example, Bernanke et al.,
after presenting pages upon pages of rather inconclusive evidence regarding the superiority of IT,
nevertheless submit a plea for the Fed to adopt IT
in the end, arguing that this is critical to secure price
stability in the United States in the post-Greenspan
era. Similarly, Alesina et al. (2001), in a discussion of
the European Central Bank’s (ECB) monetary policy,
boldly claim that the ECB could improve its policy
by adopting a version of IT, although they neither
present supporting evidence for this claim nor even
indicate where such evidence might be found. It is
understandable that some academics find IT intellectually attractive for the outright declaration of central
bank intentions and the increase in accountability
implied by the announcement of an inflation target.
Yet, others remain skeptical: Both the ECB (2001)
and the Fed (Gramlich, 2001) have argued that they
do not regard IT as an appropriate monetary policy
framework.
In this paper, we contribute to the assessment
of IT in several ways. After reviewing earlier studies
of IT experiences, we examine the changes of short-term interest rates, consumer price inflation, and output gaps at different frequencies and show that IT has reduced short-term variability in central bank interest rates and in headline inflation.
We interpret this as evidence that IT has induced
central banks to pay less attention to short-run news
and noise and adopt a steadier course of monetary

policy. Next, we study central bank behavior and
ask whether IT has resulted in a change in central
bank reactions to key monetary policy variables.
We estimate Taylor rules to describe central bank
policies and find that these rules indeed indicate
changes in the reaction of IT central banks to output
and inflation. Furthermore, we find that this fact
distinguishes them from a group of other central
banks that we use as a benchmark. This difference
suggests that IT has affected central bank behavior.
Third, we take an event-study approach to compare the performance of IT and non-IT central banks
under two similar, exogenous shocks, namely, the
oil price hikes of 1978 and 1998. We find that IT
countries realized a credibility gain in the second
episode compared with the first, allowing them to
keep interest rates lower and face these shocks with
a much less contractionary monetary policy. Our
paper thus suggests that IT has indeed changed
central bank behavior and that this policy yields
benefits under those circumstances that central
banks have historically found difficult to cope with.
But comparing IT and non-IT central banks shows
that the former have conformed to the standards of
monetary policy set by the Bundesbank, the Fed,
and the Swiss National Bank in the late 1970s and
1980s. Thus, we cannot confirm the superiority of
IT over other monetary policy strategies geared at
price stability.

II. BENEFITS OF INFLATION TARGETING:
WHAT ARE THEY?
The literature on the design of monetary policy
under IT and experiences with the new regimes has
expanded rapidly in the past six years, partly reflecting the growing number of countries adopting such
a regime. Most of the studies presented in the literature look at the time-series behavior of inflation,
output, unemployment, and interest rates to see
whether the new regime affected the dynamic interaction of these variables.
Early studies by Ammer and Freeman (1995)
and Freeman and Willis (1995) present vector autoregression (VAR) models for real gross domestic
product (GDP), price levels, and interest rates. Ammer
and Freeman compare inflation forecasts generated from their VARs with actual outcomes in New
Zealand, Canada, and the United Kingdom and with
actual time series. They find that inflation fell by
more than was predicted by the models in the early
1990s, an indication of the effect of the new regime.

The evidence regarding the cost of disinflation is
more mixed. Real GDP fell and recovered in New
Zealand and the United Kingdom, but fell and
remained low in Canada. Freeman and Willis (1995)
note that long-term interest rates fell in the three
IT countries in the early 1990s, an indication of
improving monetary policy credibility. However,
long-term rates came back a few years later. This
occurrence could indicate that the credibility effect
of IT did not last long, although Freeman and Willis
ascribe most of the resurgence in long-term rates
to a rise in interest rates worldwide.
Mishkin and Posen (1997) present careful
accounts of the IT experiences in New Zealand,
Canada, and the United Kingdom and estimate VARs
of core inflation, GDP growth, and short-term central
bank rates for the same countries. They point out
that the disinflation had actually been almost
completed in New Zealand, Canada, and the United
Kingdom before the introduction of IT. This suggests
that IT might have served to lock in the gains from
disinflation rather than to facilitate disinflation.
Mishkin and Posen then ask whether IT helped these
countries to keep inflation rates down following the
initial disinflation. Comparing dynamic simulations
with actual outcomes, they find that inflation and
interest rates remained below their counterfactuals
after the introduction of IT, while output did not.
In particular, actual inflation did not rise with the
upswing in the business cycle, as it would have prior
to IT. One shortcoming of these results is the absence
of confidence bands in their dynamic forecasts,
which implies that their positive conclusion relies
on visual inspection alone. Laubach and Posen
(1997) find further evidence supporting these results
by analyzing interest rates and consumer expectations. Kahn and Parrish (1998) observe a number of
inflation blips in New Zealand and Canada during
the 1990s, suggesting that the central banks did
not necessarily achieve better control through IT.
Debelle (1997) looks at a larger sample of IT
countries, including the former three plus Sweden,
Finland, Spain, and Australia. He notes the decline
in inflation rates and long-term bond rates achieved
in these countries but points out that unemployment
increased in the same countries during the disinflation, indicating that the latter did not come without
cost. Furthermore, Debelle points out that other
countries achieved similar reductions in inflation
rates during the first half of the 1990s, making it
difficult to conclude that the disinflation is a success
of the IT regime. Siklos (1999) argues that the introduction of IT should change the persistence of inflation rates, as central banks no longer tolerate lasting
movements of the actual rate outside the target
range. Using univariate time-series techniques and
quarterly inflation data, he finds that first-order autocorrelation of inflation rates has declined significantly after the introduction of IT in Australia,
Canada, and Sweden, but not so in Finland, New
Zealand, Spain, and the United Kingdom.1

1. Siklos finds that the first-order autocorrelation lost statistical significance after the introduction of IT in Finland, Spain, and the United Kingdom, but this result may be due to the relatively short sample he has in quarterly data.
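Mechanically, the persistence comparison in these studies is straightforward; the following is a minimal sketch (with an assumed data layout, not code from any of the studies cited) of the before-and-after calculation for a single country:

```python
# Minimal sketch of the persistence comparison described above: the first-order
# autocorrelation of quarterly inflation before and after an (assumed) IT adoption date.
import pandas as pd

def ar1(series: pd.Series) -> float:
    """First-order autocorrelation, a simple measure of inflation persistence."""
    return series.autocorr(lag=1)

def persistence_change(inflation: pd.Series, adoption: str) -> tuple:
    """Split a date-indexed quarterly inflation series at the adoption date and
    return the pre- and post-adoption AR(1) coefficients."""
    return ar1(inflation[:adoption]), ar1(inflation[adoption:])

# Hypothetical usage, assuming `cpi_inflation` is quarterly inflation with a DatetimeIndex:
# pre, post = persistence_change(cpi_inflation, "1993-01-01")
```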
Other empirical studies have focused more on
the behavior of central banks before and after the
introduction of IT. Kahn and Parrish (1998) note that
the volatility of official central bank interest rates
(both nominal and real) has declined substantially
after the introduction of IT. They argue that this
could reflect a change in monetary policy away
from activist policies, but it might also be due simply
to a more stable economic environment in the
1990s. The fact that interest rate volatility decreased
in the United States, too, lends some support to the
second interpretation over the first. Kahn and Parrish
also estimate monetary policy reaction functions
for New Zealand, Canada, Sweden, and the United
Kingdom relating current official rates to their own
lags as well as lagged inflation, unemployment, and
exchange rates. They find significant structural
breaks in these functions for New Zealand and the
United Kingdom. In the case of New Zealand, this
break is associated with a stronger reaction of the
official rate to lagged inflation and unemployment
and a weaker reaction to lagged exchange rates. In
the U.K. case, the break mainly reflects the loss of
significance of the exchange rate in the reaction
function. In neither case is it obvious that the
changes in the reaction function are consistent with
a shift to inflation as the primary goal of monetary
policy after the adoption of IT.
Kuttner and Posen (1999) interpret the introduction of IT as a change in the central bank “type,”
i.e., a shift in the parameters of the central bank’s
preference functions, toward a stronger commitment to price stability and less discretionary policy.
According to their model, such a shift should imply
a decline in inflation persistence. The response of
short-term interest rates to inflation shocks could
increase or decrease, however, depending on the
central bank type prior to IT. Kuttner and Posen estimate VARs for inflation, unemployment, and short- and long-term interest rates to test the impact of IT.
Their results are rather ambiguous. For Canada and
the United Kingdom, they find no change in the
persistence of inflation after the introduction of IT,

nor a change in the central bank reaction functions.
For New Zealand, they do find a reduction in the
persistence of inflation, but also a stronger reaction
to unemployment with no change in the reaction
to inflation in the central bank’s reaction function.
Cecchetti and Ehrmann (2000) look at a sample
of 23 countries, both developed and less developed,
9 of which have central banks pursuing inflation
targets. A first observation from their data is that
inflation rates generally came down in the 1990s
compared with the 1980s independent of the geographical region of the country, their pursuit of
inflation targets, or whether they were striving to
enter the European Monetary Union at the end of
the decade. This indicates that the 1990s were a
period friendly to increased price stability. Cecchetti
and Ehrmann then ask whether this improvement
in price stability reflects a change in central bank
aversion to inflation and whether this is particular
to inflation targeters or not. They find that their
measure of inflation aversion indeed rises between
the mid-1980s and the 1990s among IT central
banks. Unfortunately, their methodology provides
no standard errors for testing whether these changes
are statistically significant. A similar observation
of rising inflation aversion holds for other central
banks, too. Furthermore, inflation aversion of the
IT central banks rises to no more than the levels of
non-IT central banks. Thus, rather than being a
product of the IT regime, the rise in inflation aversion
may just reflect the general culture among central
bankers in a decade that provided an environment
conducive to price stability and, therefore, an opportunity to move away from the inflationary policies
of the 1970s and 1980s.
Corbo, Landerretche Moreno, and Schmidt-Hebbel (2001) build on this study and show that
inflation aversion increased during the 1990s among
IT countries that do not belong to the group of
industrialized economies, most notably Israel and
Chile. Among industrialized countries, inflation targeters do not show an increase in inflation aversion.
The same authors also suggest that IT central banks
lowered the dynamic reactions to current inflation
and output gap shocks. They also find that inflation
persistence has declined substantially among IT
countries since the introduction of the new regime.
According to their results, inflation persistence was much higher in IT countries than in others before the introduction of IT, i.e., the new regime has produced more similar inflation dynamics.
A central feature of IT regimes is the publication
of inflation forecasts and surrounding analysis to
explain the central bank’s assessment of monetary
conditions and its monetary policy actions to the
public. IT has thus contributed to improving the
transparency of central bank policy. This is the focus
of Chortareas, Stasavage, and Sterne (2000), who
develop a measure of monetary policy transparency
and use it to compare monetary policy performance
across countries. These authors construct a panel
data set for 87 countries and show that transparency
has a significant negative impact on average inflation rates over time. This corroborates the impression that inflation targeters were able to bring and
hold inflation down in the 1990s; at the same time,
their results also show that IT is but one way of
achieving that.

III. INFLATION TARGETING:
NEW TIME-SERIES EVIDENCE
In this section, we present new empirical evidence on the performance of IT central banks. Following the comparative approach of previous papers,
we consider a group of IT countries (viz., Australia,
Canada, Chile, New Zealand, Sweden, and the United
Kingdom) and a group of non-IT countries (Germany,
Switzerland, and the United States). Our reference
group thus contains two countries that used monetary aggregates as their intermediate targets of monetary policy in the past, Germany and Switzerland.
We are primarily interested in this question: Did
central bank behavior change under inflation targeting and, if so, how?
We use monthly as well as quarterly data spanning the period from September 1978 to March 2001.
For Germany, we end the sample in December 1998
to account for the start of the European Monetary
Union. The sample period is divided into two subperiods in order to test whether IT made any noticeable difference. The first sample runs up to June
1992, and the second sample starts in January 1993.
We leave out the second half of 1992 to eliminate
the interest rate effects of the crisis of the European
Monetary System. The choice of subperiods is somewhat arbitrary, as some IT countries such as Chile
and New Zealand had already adopted the new
policy regime in 1990, whereas countries such as
Sweden and the United Kingdom only started in
1993. Since we do not focus on any single country
but are looking for cross-country evidence, we found
this choice preferable as it allows us to use the same
subperiods for all countries considered.2

2. The main data source is the International Financial Statistics (IFS), supplemented by data for Australia and New Zealand from the Organization for Economic Cooperation and Development (OECD).

Volatility of Interest Rates, Inflation,
and Output Gaps
We begin this section by studying the volatility
of consumer price (CPI) inflation, short-term interest
rates, and output gaps. The interest rates are overnight money rates; exceptions are Chile and New
Zealand where, due to data availability, we use 3-month interest rates. Since output gaps calculated
from GDP are not available at a monthly frequency,
we generally use the index of industrial production;
exceptions are Australia, New Zealand, and
Switzerland, where monthly data on industrial
production are not available. For Australia and New
Zealand the output gap is calculated from quarterly
GDP data; for Switzerland, monthly GDP data are
used. The output gap is defined as the percentage
difference between the actual index value and a trend
derived by applying a Hodrick-Prescott (HP) filter.
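A minimal sketch of this construction, assuming monthly data in pandas and a conventional smoothing parameter (neither detail is specified in the text), might look as follows:

```python
# Minimal sketch (illustrative, not the paper's code) of the output-gap construction
# described above: percentage deviation of an industrial production index from an
# HP-filter trend.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def hp_output_gap(ip_index: pd.Series, lamb: float = 129600) -> pd.Series:
    """Return the percentage gap between the index and its HP trend.
    lamb=129600 is one common smoothing choice for monthly data (assumed here)."""
    cycle, trend = hpfilter(np.log(ip_index), lamb=lamb)
    return 100 * cycle  # log deviation x 100 approximates percentage deviation from trend

# Hypothetical usage, assuming `ip` is a monthly index with a DatetimeIndex:
# gap = hp_output_gap(ip)
```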
Panel A of Table 1 shows average annual inflation rates together with the standard deviation of
annualized monthly, annual, and biannual relative
changes in the CPI for the two sample periods. These
standard deviations provide a simple measure for
the volatility of inflation at different frequencies.
We first note the well-known fact that the level of
inflation has been reduced everywhere. In the pre-IT
period, the IT countries were less determined to
squeeze out the inflation inherited from the 1970s
and hence were troubled by much higher average
inflation than the non-IT countries. Thus, the adoption of IT can be regarded as the consequence of
this poor performance. With regard to the level of
inflation, the new policy regime has been successful.
Average inflation in IT countries has come down to
the level observed for non-IT countries—Chile being
an exception. Note that average inflation in the post-IT sample matches the medium-run target rates for
Germany (1.9 percent) and the United Kingdom (2.6
percent), while it undercuts the 2 percent target
rates of Canada and Sweden by half a percentage
point. Similar to average inflation, the volatility of
inflation has fallen in IT as well as non-IT countries.
Again, similar to average inflation, the volatility of
inflation in IT countries has converged from relatively

Table 1
Volatility of Inflation, Interest Rates, and Output Gaps
A. Volatility of inflation
1978:09–1992:06

1993:01–2001:03

Standard deviation of
Average inflation

Industrial countries
Australia
Canada
Chile
New Zealand
Sweden
UK

5.9
7.3
5.9
20.1
9.8
7.5
7.4

Germany
Switzerland
US

3.1
3.7
5.5

1 month 12 months 24 months

Standard deviation of
Average inflation

7.9
8.0

3.0
2.8
3.1
8.1
5.8
3.1
4.4

2.3
1.5
6.1
5.1
2.7
3.8

2.0
2.5
1.5
6.3
1.8
1.5
2.6

4.1
6.8
4.3

2.0
1.9
3.4

1.9
1.8
3.0

1.9
1.1
2.6

4.8
22.1

1 month

12 months 24 months

4.2
4.5

0.5
1.9
1.1
2.8
1.3
1.1
0.7

1.4
0.8
2.4
0.9
0.8
0.4

2.8
3.3
2.2

0.6
0.6
0.6

0.5
0.5
0.5

3.0
5.4

B. Volatility of interest rates
1978:09–1992:06

1993:01–2001:03

Standard deviation of

Standard deviation of

Average interest rate

1 month

12 months

Average interest rate

1 month

12 months

13.4
11.1
31.8
15.4
11.5
11.9

11.4
20.3
124.0
14.8
13.6
10.1

3.6
3.6
19.0
6.7
2.7
3.0

5.8
4.9
12.9
7.2
5.8
6.0

2.4
4.6
53.7
5.3
2.6
3.0

1.3
1.7
5.7
2.4
1.6
1.2

6.7
3.7
9.4

4.7
14.1
10.2

2.1
2.0
3.1

4.5
2.5
5.1

1.8
3.9
1.9

0.9
1.1
1.0

Australia
Canada
Chile
New Zealand
Sweden
UK
Germany
Switzerland
US
C. Volatility of output gaps

1978:09-1992:06

1993:01-2001:03

Standard deviation of

Standard deviation of

3 months

Australia
Canada
Chile
New Zealand
Sweden
UK
Germany
Switzerland
US

12 months

3 months

12 months

4.7
9.0
43.1
14.7
10.3
7.7

2.4
4.7
9.5
4.5
3.3
3.2

2.7
4.7
34.5
4.1
11.2
3.2

1.2
2.1
7.3
1.9
3.4
1.2

7.3
2.0
6.0

2.4
1.6
3.3

6.9
1.2
3.2

3.0
0.9
1.4

NOTE: Entries are in percent. For New Zealand, the sample starts in 1982:03; for Germany, it ends in 1998:12; for Switzerland, it starts
in 1980:01.


Table 2
Monthly Taylor Rules
Constant

Gap t –1

R2

STD

Long-run response
to inflation

0.62**

0.78

1.77

0.75

0.77**

0.60

9.68

π t –1

i t –1

0.24**

A. 1978:09–1992:06
Canada

1.99**

0.16**

Chile

7.19**

0.05

–0.01

Sweden

1.49**

0.01

0.07*

0.82**

0.78

1.07

0.41

UK

1.47**

0.05

0.08**

0.83**

0.89

0.79

0.45

Germany

0.26**

0.04*

0.06*

0.94**

0.97

0.38

0.96

Switzerland

0.00

–0.01

0.08*

0.92**

0.81

1.15

0.99

US

0.45*

0.05

0.08*

0.90**

0.93

0.90

0.77

Canada

0.64**

0.10**

–0.12*

0.90**

0.91

0.44

Chile

3.41**

0.04

0.50**

0.45**

0.44

3.79

0.90

Sweden

0.22*

0.03**

0.08**

0.93**

0.99

0.23

1.10

UK

0.47*

0.03

0.10*

0.88**

0.89

0.24

0.81

Germany

0.11*

0.02*

0.09*

0.92**

0.99

0.12

1.12

Switzerland

0.12*

0.06

0.91**

0.95

0.31

0.66

US

0.34**

–0.10

0.94**

0.98

0.14

B. 1993:01–2001:03

–0.00
0.09**

NOTE: Gap is the output gap, π the annual rate of inflation, i the nominal interest rate, R2 the adjusted R-squared value, STD the standard
deviation of the residuals. For Germany, the sample ends in 1998:12; for Switzerland, it ends in 2000:09. * and ** indicate significance
at the 5 and 1 percent confidence levels, respectively.

high levels to the levels observed in the non-IT countries of the reference group. This result suggests
that the IT countries have joined our non-IT group
in their determination to stabilize inflation over the
medium run and to gain credibility in this way. But
note that, with the exception of the United Kingdom,
the volatility of inflation at the 12- and 24-month
frequencies still remains above the level observed
for our reference group of non-IT countries.
Panel B of Table 1 provides similar information
for short-term interest rates. Given the improved
inflation performance in all countries, it is no surprise that between the sample periods the average
levels of interest rates have fallen. The table shows
that the volatility of short-term interest rates has
decreased, too. Similar to what one observes for
the volatility of inflation, the volatility of overnight
rates in IT countries converges to the lower level
observed in our non-IT countries, though it remains
higher for IT than for non-IT countries in the post-IT
period. The United Kingdom again is the exception.
Finally, consider panel C of Table 1, which shows
the volatility of the output gaps. They, too, have
fallen generally between the sample periods in all
countries. Notable exceptions are Germany and
Sweden, where the volatility at the 12-month frequency has increased in the post-IT sample.

Taylor Rules
The inspection of the data suggests that the
behavior of central banks has changed between
the sample periods. To study this in more detail, we
estimate dynamic Taylor rules by combining the
Taylor equation with the assumption of interest-rate
smoothing. We do this for both monthly and quarterly data. Overnight money market rates serve as
the dependent variable. The explanatory variables
are the inflation rate and the output gap, apart from
the lagged overnight rate.
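In generic notation (mine, not the authors'), such a dynamic rule for the policy rate $i_t$ with interest-rate smoothing can be written

\[
i_t = c + \gamma\,\mathrm{gap}_{t-1} + \beta\,\pi_{t-1} + \rho\, i_{t-1} + \epsilon_t ,
\]

so that the long-run response to inflation reported below is $\beta/(1-\rho)$: a short-run coefficient of 0.08 combined with a smoothing coefficient of 0.83, for instance, implies a long-run response of $0.08/(1-0.83) \approx 0.47$.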
Evidence from Monthly Taylor Rules. A first
set of estimates uses monthly data. Based on a

specification search relying on the Akaike information criterion (AIC), we include both inflation and
the output gap with one lag in all equations. Table 2
has the results for the two sample periods. The first
thing to note is that the standard deviations of the
residuals (or standard errors) are considerably lower
in the post-IT period for all countries. A standard
likelihood-ratio test indicates that the differences
are statistically significant.
The table shows that, judged by R2 values, the
estimated Taylor rules fit the data well for all countries except Chile. This result holds for both groups
of countries and confirms findings reported in earlier
literature. The estimated coefficients generally have
the correct signs. Exceptions are the Swiss response
to output in both sample periods, the Chilean
response to inflation in the pre-IT period, and the
United States and the Canadian responses to inflation in the post-IT period. Only in the Canadian case,
however, is the coefficient significantly different
from zero. Overall, the Taylor rules appear to be
reasonable descriptions of central bank behavior.
Table 2 shows the long-run responses of the interest
rates to inflation together with the estimated
coefficients.
According to this table, Canada is the only country in the IT group where the interest rate responded
significantly to the output gap in the first sample
period. Reactions to the output gap are not significantly different from zero in the other three countries.
The reaction to lagged inflation was significantly
positive in Canada, Sweden, and the United Kingdom
during that period, and not significantly different
from zero in Chile. The short-run reaction to inflation
was more than three times larger in Canada than
in Sweden and the United Kingdom. The greater
persistence of the interest rate in the latter two
countries, however, implied that the differences in
the long-run responses are less pronounced.
These patterns have changed somewhat in the
post-IT period. Sweden and Canada now show significant reactions to the output gap. The estimates
for the United Kingdom, Sweden, and Chile indicate
significant, positive reactions to lagged inflation,
while the estimate for Canada shows a negative
sign. Compared with the first period, the long-run
responses to inflation almost doubled for the United
Kingdom and more than doubled for Sweden. Chile’s
long-run reaction to inflation is now similar to that
of the United Kingdom and Sweden. The persistence
in interest rates increased somewhat in the United
Kingdom, Sweden, and Canada, but dropped in

Chile. Overall, the substantial increase in the long-run response to inflation is the strongest indicator
of a change in central bank behavior that we take
from these estimates.
Consider, then, the non-IT countries. In the
first subsample, the estimate for Germany shows
positive and significant reactions of the overnight
rate to both the output gap and inflation. For neither
Switzerland nor the United States do we find a
significant reaction of short-term rates to the output gap, but their reactions to lagged inflation are
in line with Germany’s. The results remain similar
for Germany and Switzerland in the post-IT sample,
although the Swiss reaction to inflation loses statistical significance. The U.S. reaction to inflation even
changes sign and loses significance, while its reaction to the output gap is larger and significant.
Comparing IT and non-IT countries in the pre-IT
sample, we see the starkest differences in their long-run reactions to inflation, which are uniformly much
lower among the countries that later adopted IT
than in Germany and Switzerland. This changed
in the post-IT period, as the long-run reactions to
inflation have increased by more in IT countries
than in Germany. The suggestive result then is that
the move to IT marks a convergence in central bank
behavior of the first group to the Bundesbank and
the Swiss National Bank, the two banks that showed
the strongest determination to keep inflation down
in the 1970s and 1980s. This finding corroborates
the results reported by Cecchetti and Ehrmann
(2000) and Corbo, Landerretche Moreno, and
Schmidt-Hebbel discussed above. Finally, the estimates support the conjecture that under the IT
regime central banks give less weight to stabilizing
the business cycle. With the exception of Sweden,
the reaction of IT countries to the output gap is
lower in the post-IT period than before, though still
stronger than in Germany.
We pursue this analysis further by embedding
our Taylor rules into three-dimensional VARs for
short-term inflation, the output gap, and the interest
rate. All estimates employ a constant and only one
lag. Based on the Cholesky decomposition, we can
use the VARs to study the impulse responses of the
overnight rates. Figure 1 shows results for the United
Kingdom and Germany. For both countries and both
sample periods we observe significantly positive
responses of central bank interest rates to innovations in inflation and output gaps. This replicates
the information from our estimates of Taylor rules.
[Figure 1: Response of the Overnight Rate. Four panels: A. United Kingdom, 1978-92; B. United Kingdom, 1993-01; C. Germany, 1978-92; D. Germany, 1993-98. Each panel plots the response (in percent) of the overnight rate to inflation and to the output gap over a 20-month horizon. NOTE: Responses were calculated to Cholesky one standard deviation innovations ± standard errors.]

[Figure 2: Contribution of Inflation Shocks to the Overnight Rate Variance. Four panels: A. United Kingdom; B. Sweden; C. Germany; D. Switzerland. Each panel plots, in percent over a 20-month horizon, the contribution of inflation shocks in the pre-IT sample (1978-92) and the post-IT sample (1993-2001 for the United Kingdom and Sweden, 1993-98 for Germany, 1993-2000 for Switzerland), each with ±2 standard error bands.]

Beyond that qualitative result, Figure 1, in panels A and B, shows for the United Kingdom that the post-IT
impulse response to a one-standard-deviation shock
to the inflation rate is considerably smaller in magnitude than the pre-IT impulse response. The former
never exceeds 0.12, while the latter goes above 0.2
after six months. Thus, while the long-run response
to inflation has increased strongly (as shown in
Table 2), the impulse response functions suggest
that the short-run response to inflation shocks has
become less aggressive. The impulse response functions for Germany (Figure 1, C and D) convey the
same impression. In the post-IT sample, the impulse
response never exceeds 0.08, but it does climb above
0.1 in the pre-IT sample. Also, these two panels in
the figure show similar reductions in the impulse
responses to output-gap shocks. Thus, the estimates
suggest that both IT and non-IT central banks moved
to a less activist monetary policy in the 1990s.
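A sketch of this exercise with a generic econometrics library follows; the data file, column names, and variable ordering are illustrative assumptions rather than details taken from the paper.

    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Hypothetical monthly data with columns 'inflation', 'gap', and 'rate'.
    data = pd.read_csv("monthly_data.csv", index_col=0, parse_dates=True)

    # Three-variable VAR with a constant and one lag, as described in the text.
    results = VAR(data[["inflation", "gap", "rate"]]).fit(1)

    # Cholesky-orthogonalized impulse responses over 20 months. With this ordering,
    # the interest rate may respond to inflation and output-gap shocks within the month.
    irf = results.irf(20)
    rate_response_to_inflation = irf.orth_irfs[:, 2, 0]  # response of 'rate' to an 'inflation' shock
    print(rate_response_to_inflation)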
Consider next the contributions that the innovations in the inflation rate at various lags made to
the variance of the overnight rates. This is an indicator of the degree to which monetary policy actions
were directed at counteracting inflation shocks.
The question is whether the relative importance of
these shocks to interest rate policy has risen in the
post-IT period. Figure 2 shows the results for two
IT countries, the United Kingdom and Sweden, and
for two non-IT countries, Germany and Switzerland.
We cannot apply this comparison to Canada, Chile,
and the United States because the monthly VAR
estimates for these countries suggest a counterintuitive, negative response of the overnight rate to inflation in at least one of the sample periods.
Each of the panels in Figure 2 provides the percentage of the variance of the interest rate due to
innovations in the inflation rate for both sample
periods. For the United Kingdom, Figure 2A shows
a strong increase in the relative importance of
inflation shocks as a source of interest rate variance
in the post-IT period. At lag 12, their contribution to
the variance of the overnight rate reaches 40 percent,
compared with less than 25 percent in the pre-IT
period. Note that the 1993-2001 line lies outside the
confidence interval around the 1978-92 line for lags
5 to 18. The picture thus suggests that U.K. monetary
policy has become more strongly determined to
fight inflation under the IT regime. Similar findings
emerge from Figure 2B for Sweden. At the annual
lag, the contribution of inflation shocks to the variance of the Swedish money market rate rose from
10 percent in the pre-IT period to 35 percent in the
post-IT period. It is noteworthy that a similar result
holds for Germany. There, too, the contribution of
inflation shocks to the variance of the overnight rate
was higher in the post-IT period. Figure 2C shows
that, at lag 12, inflation shocks contributed less than
10 percent to the variance of Germany’s money
market rate before 1992, but 25 percent thereafter.
Again, the data convey the impression of a convergence in central bank behavior that coincides with
the introduction of inflation targeting in the IT countries. An exception is Switzerland, where the estimated contribution of inflation shocks to the variance
of money market rates appears to have been quite
small in both periods. But note that the underlying
estimate for the post-IT period is poor, which might
reflect the fact that Swiss monetary policy is directed to a larger extent at controlling the exchange rate.
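Continuing the VAR sketch above, the variance contributions plotted in Figure 2 correspond to a forecast error variance decomposition; a minimal illustration (again with assumed variable names and ordering):

    # Share of the interest rate's forecast error variance due to inflation shocks.
    fevd = results.fevd(20)
    # decomp is indexed as [equation, horizon, shock]; equation 2 is 'rate', shock 0 is 'inflation'.
    inflation_share = fevd.decomp[2, :, 0]
    print(inflation_share[11])  # contribution at lag 12, expressed as a fraction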
Evidence from Quarterly Taylor Rules. We
now turn to quarterly estimates of Taylor rules,
which allow us to consider a broader group of
countries. Thus, we can include Australia and New
Zealand as two additional inflation targeters. For
these countries the output gap is estimated from
real GDP and an HP filter. Inflation continues to
be measured in terms of the CPI. Switching from
a higher to a lower frequency may change the
dynamics of the Taylor estimate. Using the AIC
again, we find that the contemporaneous output gap
and the contemporaneous inflation rate fit better
than the first lags in most Taylor models. Exceptions
are Canada and Switzerland in the first sample and
Canada, Sweden, Germany, and Switzerland in the
second sample. In these cases, the first lag of the
output gap gave better estimation results. Table 3
provides the results.
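A sketch of the output-gap construction described here, using the Hodrick-Prescott filter; the smoothing parameter of 1600 for quarterly data is a conventional choice assumed for illustration, not a value stated in the paper.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical quarterly real GDP series.
    gdp = pd.read_csv("real_gdp_quarterly.csv", index_col=0, parse_dates=True)["gdp"]

    # HP filter: lambda = 1600 is the usual setting for quarterly data.
    cycle, trend = sm.tsa.filters.hpfilter(gdp, lamb=1600)

    # Output gap as the percentage deviation of GDP from its HP trend.
    output_gap = 100.0 * (gdp - trend) / trend
    print(output_gap.tail())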
As before, the estimates look reasonable and
fit the data well, with the exception of Chile and
Switzerland. If we disregard the latter, all signs are
as expected except the output gap response of the
United Kingdom in the post-IT sample, as well as
the Canadian inflation response in that period; but
these coefficients are not statistically different
from zero. Note that the quarterly estimates provide
a significant positive reaction to inflation for the
United States in the post-IT sample, in contrast to
the estimate based on monthly data.
As in the case of monthly Taylor rules, the quarterly estimates suggest that the behavior of central
banks has changed between the sample periods.
Among the inflation targeters, Canada is the only
country where the output response of interest rates
increased between the first and the second sample.
The similar pattern observed for Sweden in the monthly data does not appear in the quarterly estimates. Among the non-IT
countries, it is again in the United States where the
reaction to the output gap is stronger in the second
sample period. The short-run reaction to inflation
is larger in the second period for the IT countries
except Canada and New Zealand. More importantly,
the long-run response to inflation increases for all
IT countries. In the United Kingdom, Sweden, and
Australia, it is more than twice as large as in the
first sample period. These changes in long-run inflation responses are in line with the estimates from
monthly Taylor rules for the United Kingdom,
Sweden, and Chile. Turning to the non-IT countries,
we find that the long-run response to inflation goes
up in Germany and decreases slightly in the United
States. Overall, the estimates from quarterly data
confirm the impression from monthly data: The
adoption of IT has produced a convergence of central bank behavior to that of the Bundesbank in
the 1980s and 1990s.
A notable feature of our estimates is that the
estimated long-run response of short-term interest
rates to inflation is below unity in all cases except
for Germany, Sweden, and the United Kingdom in
the post-IT period. This contrasts with the familiar
claim of the literature on Taylor rules that the
response of interest rate policy to inflation should
exceed unity in order to guarantee that monetary
policy is able to stabilize inflation. The fact that we
do not find this for most countries, including the
United States, is puzzling.
One reason for this finding may be that earlier
studies have commonly used GDP deflators instead
of CPIs for computing the rate of inflation. In the
appendix, we show that the long-run response coefficient of the federal funds rate to U.S. inflation is
about 1.5 if the GDP deflator is used but 1.0 or less
if the GDP deflator is replaced by the CPI.

Table 3
Quarterly Taylor Rules

[Table 3 reports quarterly Taylor rule estimates for Australia, Canada, Chile, New Zealand, Sweden, the United Kingdom, Germany, Switzerland, and the United States over the pre-IT sample (panel A, 1978:Q3–1992:Q2) and the post-IT sample (panel B, 1993:Q1–2001:Q1): the constant, the coefficient on the contemporaneous output gap (Gap t) or on its first lag (Gap t–1), the coefficients on lagged inflation (π t–1) and the lagged interest rate (i t–1), the adjusted R², the standard deviation of the residuals (STD), and the implied long-run response to inflation.]

NOTE: Gap is the output gap, π the annual rate of inflation, i the nominal interest rate, R² the adjusted R-squared value, STD the standard deviation of the residuals. For Germany, the sample ends in 1998:Q4; for Switzerland, it ends in 2000:Q1. * and ** indicate significance at the 5 and 1 percent confidence levels, respectively.

From a purely statistical point of view, the difference can
be traced to the markedly higher variance of the
CPI. But the question remains: Which price index
is the more appropriate one? If central banks, in practice, care more about inflation derived from CPIs
than from GDP deflators, our estimates suggest that
central bank interest rates do not respond sufficiently
to inflation in most countries even in the post-IT
period.
To check the contribution of inflation shocks
to the variance of interest rates, we again estimate
three-dimensional VARs for the inflation rate, the
output gap, and the interest rate. All estimates employ
a constant and only one lag. Note that the quarterly
Taylor estimates show contemporaneous reactions
to inflation for all countries and to the output gap

for some countries. This implies that the interest
rate equations of the VARs differ from the estimated
Taylor equations. In Figure 3, we plot the contribution of inflation shocks to the variance of money
market rates for all countries, except Canada, Chile,
and Switzerland, and for both sample periods. For
the United Kingdom, Sweden, and Germany, the
estimates with quarterly data replicate the results
from monthly data (see Figure 2, A through C). For
the other countries, the results are more mixed. In
the cases of New Zealand and the United States, we
find a smaller contribution of inflation shocks to
the variance of money market rates in the post-IT
period. For Australia, finally, the contribution of
the inflation shocks increases, but only slightly so.
[Figure 3: Contribution of Inflation Shocks. Six panels: A. United Kingdom, contribution to overnight rate variance; B. Sweden, contribution to overnight rate variance; C. New Zealand, contribution to variance of the 3-month bank bill rate; D. Germany, contribution to overnight rate variance; E. Australia, contribution to overnight rate variance; F. United States, contribution to variance of the federal funds rate. Each panel plots, in percent over horizons of 1 to 12 quarters, the contribution of inflation shocks in the pre-IT sample (1978-92; 1982-92 for New Zealand) and the post-IT sample (1993-2001; 1993-98 for Germany), each with ±2 standard error bands.]

In sum, we find that the quarterly data support the
results derived from monthly data for the IT countries Sweden and the United Kingdom and for the
non-IT country Germany, while the results for the
other countries remain mixed.

IV. INFLATION TARGETING: AN EVENT STUDY
An important shortcoming of the analysis presented in the previous section and of similar work
in the literature is the assumption that the economic
environment of monetary policy remains basically
unchanged in the period under consideration. In
particular, it is a maintained, though usually only
implicit, hypothesis that monetary policy was
exposed to the same type of shocks in different
periods, so that any observed changes in central
bank performance or in the level and dynamics of
interest rates and inflation rates can be attributed
to changes in the monetary regime. Regression
analysis of central bank reaction functions or inflation dynamics of course allows for exogenous shocks
of different magnitude in different periods of time.
Nevertheless, the analysis necessarily assumes that
all exogenous shocks are drawn from the same distribution and that monetary policymakers interpreted
their environment in this way. This is obviously a
very strong assumption and one that is hard to verify. But if we cannot be sure that monetary policy
responses as described by empirical reaction functions are truly reactions to shocks from the same
distribution, the analysis loses much of its strength.
In this section, we look at the issue in a different
way. We do not ask how the average response of
central banks to many shocks, as described by
regression analysis, changed before and after the
adoption of IT. Rather, we compare central bank
performance and monetary policy outcomes in two
historical episodes in which monetary policy was
faced with very similar, exogenous shocks. By ensuring that the nature and the size of the shock are
truly similar, we can be more confident that we
compare monetary policy under truly comparable
circumstances, yet with one important difference,
namely, the adoption of IT by some central banks
in one of the episodes considered.
The kind of experiment we pursue here demands
that the shocks we look at be truly exogenous to
monetary policy in the countries considered; that
is, we should look at shocks originating outside
these countries. With this in mind, we choose two
periods of rising crude oil prices. From the point of
view of the central banks in our analysis, episodes

Neumann and von Hagen

of rising oil prices present the dilemma of a negative
supply shock. Rising oil prices lead to a slowdown
of economic growth and rising inflation. Monetary
policy can attempt to hold unemployment down,
but only at the cost of even higher inflation rates.
This is the experience of the “stagflations” most
industrialized countries first encountered following
the oil price shock of 1973. While markets may not
have fully understood the macroeconomic consequences of rising crude oil prices immediately after
the first oil price shock, it is plausible to assume
that they did subsequently.
The two episodes we look at are the periods of
rising crude oil prices starting in July 1978 and in
December 1998. During the first episode, the price
per barrel of crude oil increased from $13.15 to
$39.57 (U.S.), a total increase of 201 percent. The
peak was reached in November 1979. During the
second episode, the price increased from $10.41 per
barrel to $29.62, for a total of 185 percent reached
in June 2000. After a temporary drop to $27.93 per
barrel in July 2000, the oil price rose again to $32.68
in September. The price hikes are thus similar in
magnitude, although oil prices rose faster initially in
the second episode. Figure 4A illustrates the similarity of the price developments in the two periods.
We are interested in exploring the differences
in the monetary policy responses to these two oil
price hikes. This would be much easier if we could
safely assume that the economies we look at are
the same in terms of aggregate demand and supply
performance in both episodes. However, the oil price
hikes of the 1970s induced important substitutions
away from the use of oil as a source of energy in
the industrialized world. In many countries, tax
policies have amplified these substitution processes.
This is indicated by the concept of “energy intensity,”
which relates annual energy consumption to annual
economic activity. According to the OECD (2001),
energy intensity of European industries improved
by an average 1.5 percent annually in the European
Union countries during the 1990s, driven by gains
made particularly in Germany and Sweden. Energy
intensity improved by an annual average rate of
1.9 percent in the North American Free Trade
Agreement (NAFTA) region during the 1980s; it was
flat in 1990-93, but improved again thereafter. These
gains were realized mainly by the United States
and Canada. The improvements in energy intensity
suggest that the economies became less vulnerable
to oil price shocks, and the inflationary consequences
of the second episode we consider should be less
dramatic as a result. However, the data also suggest that improvements in energy intensity are not special to IT or non-IT countries. Therefore, using non-IT countries as a benchmark, we can control for the effects of these substitution processes.

[Figure 4: Oil Prices. Panel A, Two Oil Price Hikes, plots the crude oil price as a percent of its initial price over the first 18 months of the 1978 and 1998 episodes. Panel B, First Oil Price Shock: Inflation, plots CPI inflation (percent per annum) for the United Kingdom, the United States, Sweden, Canada, and Germany from 1978 through 1981.]

The Method of Double Differences
We wish to evaluate the effect of a change in
monetary regime, the adoption of IT, on a number
of monetary indicators of a country. The main problem with such an assessment is that the monetary
policy regime may not be the only relevant variable
that changed between the two periods we compare.
A change in energy intensity is just one example
of other relevant developments that might have
occurred. A widespread change in public perceptions
about the role and the goals of monetary policy is
another example.
A standard method for dealing with this kind of
evaluation in a public policy context is the “method
of double differences.” Consider a variable of interest,
y, and assume that this variable is a function of an
exogenous variable, x, and a vector of other, exogenous variables, z, as well as a policy regime. We are
interested in how the response of y to a change in
the exogenous variable x is affected by a change in
the policy regime. We have observations of y for a
group of countries i=1,...,N that underwent a regime
change and a group of countries j=1,...,M where
no regime shift occurred. In both groups of countries, the indicator is affected by the same exogenous variables, and we hypothesize that the effects
of variables z are approximately the same for all
countries.
Consider two time intervals during which we
observe the indicator y. The starting point of the
first period is t1 and the end point is t2; the starting
and the end points of the second period are t3 and
t4, respectively. Let D1i=yi,t2 – yi,t1 be the change in
indicator y over the first period for country i, and
define D1j, D2i, and D2j analogously for the second
group of countries and the second time period. If
no changes in variables z occurred, the difference
D1i – D2i would tell us how the reaction of y to x
changed as a result of the shift in the policy regime.
Because variables z can change, however, we must
compare this change with the same difference for
countries in which no regime shift occurred. Thus,
the double difference

DD = (1/N) Σ_{i=1}^{N} (D1i – D2i) – (1/M) Σ_{j=1}^{M} (D1j – D2j)

gives us a proxy for the impact of the change in policy regime on the response of y to x.
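A minimal sketch of this calculation; the lists below are placeholders the user would fill with country-level changes, not values from Table 4.

    def double_difference(d1_treated, d2_treated, d1_control, d2_control):
        """DD: mean (D1 - D2) across regime-change countries minus the same mean for controls."""
        treated = [a - b for a, b in zip(d1_treated, d2_treated)]
        control = [a - b for a, b in zip(d1_control, d2_control)]
        return sum(treated) / len(treated) - sum(control) / len(control)

    # Purely illustrative numbers.
    print(double_difference([5.8, 4.4], [4.0, 2.5], [4.8, 3.7], [2.0, 2.3]))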

Comparing the 1978 and the 1998 Oil Price Hikes: Empirical Results
We use this method to compare the monetary
policy reactions and consequences of the 1978 and
1998 oil price hikes. Specifically, we look at three
indicators. The first is the annual CPI inflation rate,
our basic indicator of monetary policy outcomes.
The second is the change in long-term government
bond rates. We take this as a measure of monetary
policy credibility, as a large increase in nominal long-term rates indicates rising inflation expectations.
The third is the change in short-term money market
rates. The increase in short-term rates following
an oil price hike indicates the extent of monetary
tightening that the central banks perceived to be
necessary to control inflation after that rise in oil
prices. We use data from six IT countries—Australia,
Canada, Chile, New Zealand, Sweden, and the United
Kingdom—and four non-IT countries—Denmark,
Germany, Switzerland, and the United States. Note
that the second episode of rising oil prices spans
the beginning of the European Monetary Union on
January 1, 1999, which implies a shift of responsibility for monetary policy from the Bundesbank to
the European Central Bank (which does not pursue
IT). Thus, the case of Germany remains a valid observation in the control group. All data except Sweden’s
long-term bond rates, Chile's short-term and long-term interest rates, and New Zealand's inflation rates,
which were provided by the respective central banks,
are taken from the International Monetary Fund’s
IFS. We use monthly series wherever possible.
The use of the double differences method
requires us to choose the dates at which we measure
an indicator to calculate its total change during an
episode of rising oil prices. The simplest choice
would be to take the value of the indicator at the
start and at the end of the oil price hike. This, however, could lead to serious measurement bias. Consider CPI inflation. A first point is that changes in
oil prices take some time to be passed through to
consumer prices. Thus, the CPI inflation rate at the
start of the oil price hike is unlikely to be affected
by the hike. A second point is that CPI inflation at
the start of a period of rising oil prices is affected
by economic policies and developments preceding
the oil price hike. Taking a too-early measurement
of CPI inflation thus runs the risk of using data
tainted by the effects of policies predating the episode of interest. Finally, there is likely to be some
variation across countries in the appropriate dates
for measuring the effect of the oil price hike on
inflation, as the pass-through and preceding policies
will be different across countries. In the case of
long-term rates and short-term rates, there is also
likely to be some cross-country variation in the time
that it took markets to realize that a prolonged oil
price hike was happening and in the time that it
took central banks to realize this and to decide to
take action against the incipient inflationary consequences of those rising oil prices.
In view of these difficulties, we use a common
rule for picking observations for all countries and
data series rather than the same dates for all countries. For each indicator series, we look for the first
valley after the beginning of the episode of rising
oil prices, i.e., the lowest realization followed by a
string of increases. We then look for the next highest
realization followed by a string of declining values
for the same series, i.e., the next peak in the time
series. We use the difference between the latter and
the former to calculate the differences D1i and D1j
and apply the same procedure for the second
episode.
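One way to mechanize the valley-then-peak rule just described is sketched below; the required length of the subsequent run of increases or declines is a judgment call that the paper does not pin down, so the value of three is an assumption.

    def valley_then_peak(series, run=3):
        """Return (valley_index, peak_index) for the first valley followed by `run`
        increases and the next peak followed by `run` declines. A sketch only."""
        def is_valley(i):
            left_ok = i == 0 or series[i] <= series[i - 1]
            return left_ok and all(series[i + k] < series[i + k + 1] for k in range(run))
        def is_peak(i):
            return series[i] >= series[i - 1] and \
                   all(series[i + k] > series[i + k + 1] for k in range(run))
        valley = next(i for i in range(len(series) - run) if is_valley(i))
        peak = next(i for i in range(valley + 1, len(series) - run) if is_peak(i))
        return valley, peak

    # Illustrative series: valley at index 2, peak at index 6.
    print(valley_then_peak([3.0, 2.8, 2.2, 2.5, 3.1, 4.0, 4.4, 4.1, 3.9, 3.5]))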
Consider Figure 4B, which shows the inflation
rates of Germany, Canada, the United Kingdom,
the United States, and Sweden for illustration. The
German inflation rate stood at 2.70 percent in July
1978, the starting month of the oil price hike. It fell
to 2.24 percent in November 1978, which we use
as the valley in this episode. Between November
1978 and May 1980, the inflation rate increased to
a maximum of 5.94 percent. Thus, D1 is 3.70 percent for Germany. The Swedish inflation rate stood
at 8.63 percent in July 1978 and fell to 5.53 percent
in February 1979. We use this value as the valley of
this episode. Swedish inflation then rose to 13.58
percent in April 1980, resulting in a difference (D1)
of 8.05 percent in this episode. Note that, after several months of lower inflation rates, Swedish inflation eventually peaked at 15.57 percent in October
1980. We do not use that value as the peak to calculate D1, however, as the new increase in inflation
might have been due to other influences. If anything,
this biases our procedure in the conservative
direction. Using similar considerations, we chose
September 1978, July 1978, and December 1978
as the valleys for the United Kingdom, the United
States, and Canada in this episode, and May 1980,
March 1980, and July 1981 as the respective peaks.
While the procedure admittedly requires some
judgement in some cases, we try to err on the conservative side. In Table 4, we indicate the length of
time between the valley and the peak for each indicator considered and each episode.
Table 4A shows our results for inflation rates.
The average increase in inflation rates over the first
episode amounted to 8.35 percent, considerably
more than the 5.37 percent average for the non-IT
countries. In the second episode of rising oil prices,
the average increase in inflation is 2.99 percent for
IT countries. Thus, the average difference in the
inflation impact between the two episodes is D1– D2
=5.36 percent. This indicates that the inflation
performance of IT countries facing oil price hikes
has improved substantially. But note that these gains
are distributed quite unevenly. Canada and Australia
realized only relatively small improvements, while
New Zealand and the United Kingdom enjoyed large
ones. The average increase in inflation in the second
episode is 1.97 percent among non-IT countries.
Thus, D1– D2=3.41 percent, indicating that the
non-IT countries realized improvements in their
inflation performance, too. As a result, the double
difference is DD=5.36 – 3.41=1.95.
The result thus shows that the IT countries were
able to achieve greater improvement in their inflation performance than the non-IT countries. We can
conclude, therefore, that the introduction of the new
monetary regime helped these countries to improve
their inflation performance. However, a conventional
t test shows that the difference to the non-IT countries is not statistically different from zero. This is
due primarily to the relatively small improvements
in inflation performance observed in Canada and
Australia.
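The paper does not give the details of this t test; one plausible version, a one-sample test on the country-level double differences from Table 4A, is sketched below.

    from scipy.stats import ttest_1samp

    # Double differences for the six IT countries (Table 4A): Australia, Canada,
    # Chile, New Zealand, Sweden, and the United Kingdom.
    dd_inflation = [-1.65, -1.45, 3.75, 0.19, 2.27, 8.56]

    # Test whether the mean double difference (about 1.95) differs from zero.
    print(ttest_1samp(dd_inflation, popmean=0.0))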
Now consider the evidence for inflation expectations contained in long-term bond rates (see Table
4B). The average increase in long-term interest rates
among the IT group was 5.78 percent. Chile stands
out in this group with the largest increase. On average, among the non-IT countries, long-term rates
went up by 3.27 percent during the first episode of
rising oil prices. The difference between the two
groups suggests that non-IT countries enjoyed better
monetary policy credibility. In the second episode,
the average increase in long-term bond rates was
2.17 percent, signaling a large improvement in credibility. In fact, the average increase in long-term bond
rates among the IT countries was only marginally
higher than the average increase among the non-IT
countries (1.83 percent). The non-IT countries thus
experienced an improvement in monetary policy
credibility, too, though a more modest one.

Table 4
Double Differences

A. CPI inflation rates*
              |        First episode (1978)        |       Second episode (1998)        |       |
Country       | Low   | High  | D1    | Time | Low   | High  | D2   | Time | D1–D2 | DD
Australia     | 5.68  | 11.46 | 5.78  | 15   | 0.42  | 4.46  | 4.02 | 14   | 1.76  | –1.65
Canada        | 8.43  | 12.87 | 4.44  | 32   | 0.55  | 3.03  | 2.48 | 13   | 1.96  | –1.45
Chile         | 29.68 | 39.22 | 9.54  | 11   | 2.31  | 4.69  | 2.38 | 14   | 7.16  | 3.75
New Zealand   | 10.30 | 18.40 | 8.10  | 15   | –0.5  | 4.00  | 4.50 | 15   | 3.60  | 0.19
Sweden        | 5.53  | 13.58 | 8.05  | 14   | –1.12 | 1.33  | 2.35 | 12   | 5.68  | 2.27
UK            | 7.76  | 21.94 | 14.18 | 32   | 1.10  | 3.31  | 2.21 | 13   | 11.97 | 8.56
Switzerland   | 0.4   | 5.16  | 4.76  | 13   | –0.10 | 1.94  | 2.04 | 13   | 2.72  |
Denmark       | 6.73  | 12.80 | 6.07  | 9    | 1.71  | 3.15  | 1.44 | 13   | 4.63  |
Germany       | 2.24  | 5.94  | 3.70  | 22   | 0.19  | 2.47  | 2.28 | 19   | 1.42  |
US            | 7.72  | 14.68 | 6.96  | 20   | 1.61  | 3.76  | 2.10 | 15   | 4.86  |

B. Long-term government bond yields†
Country       | Low   | High  | D1    | Time | Low   | High  | D2   | Time | D1–D2 | DD
Australia     | 8.80  | 16.50 | 7.70  | 33   | 5.01  | 7.16  | 2.15 | 13   | 5.55  | 4.11
Canada        | 9.66  | 13.45 | 3.79  | 11   | 5.08  | 6.38  | 1.30 | 11   | 2.49  | 1.05
Chile         | 54.47 | 67.27 | 12.80 | 9    | 11.62 | 16.77 | 5.15 | 2    | 7.65  | 6.21
New Zealand   | 9.99  | 13.57 | 3.58  | 15   | 5.27  | 7.28  | 2.01 | 13   | 1.57  | 0.13
Sweden        | 9.99  | 13.78 | 3.79  | 31   | 4.02  | 5.92  | 1.90 | 10   | 1.89  | 0.45
UK            | 11.68 | 14.70 | 3.02  | 26   | 4.40  | 4.94  | 0.54 | 9    | 2.48  | 1.04
Switzerland   | 3.03  | 5.10  | 2.07  | 27   | 2.53  | 4.19  | 1.66 | 13   | 0.41  |
Germany       | 5.90  | 9.40  | 3.40  | 20   | 3.53  | 5.35  | 1.82 | 13   | 1.58  |
US            | 8.41  | 12.75 | 4.34  | 9    | 4.65  | 6.66  | 2.01 | 13   | 2.33  |

C. Short-term money market rates
Country       | Low   | High  | D1    | Time | Low   | High  | D2   | Time | D1–D2 | DD
Australia     | 6.88  | 17.05 | 10.17 | 38   | 4.72  | 6.24  | 1.52 | 12   | 8.65  | 4.27
Canada        | 6.61  | 19.36 | 12.75 | 16   | 4.59  | 5.75  | 1.16 | 13   | 11.59 | 7.21
Chile         | 45.59 | 73.13 | 27.54 | 6    | 5.54  | 13.62 | 8.08 | 5    | 19.46 | 15.08
New Zealand   | 10.00 | 16.32 | 6.32  | 16   | 4.30  | 6.88  | 2.58 | 16   | 3.74  | –0.64
Sweden        | 5.40  | 16.99 | 11.59 | 25   | 3.00  | 4.10  | 1.10 | 17   | 10.49 | 6.11
UK            | 8.25  | 17.38 | 9.13  | 21   | 4.56  | 6.00  | 1.44 | 11   | 7.69  | 3.31
Switzerland   | 0.03  | 4.90  | 4.67  | 15   | 0.76  | 3.50  | 2.74 | 16   | 1.93  |
Denmark       | 10.08 | 16.69 | 6.61  | 3    | 3.07  | 5.78  | 2.71 | 10   | 3.90  |
Germany       | 2.67  | 9.02  | 6.35  | 13   | 2.42  | 4.98  | 2.56 | 17   | 3.79  |
US            | 7.81  | 17.61 | 9.80  | 21   | 4.63  | 6.54  | 1.91 | 19   | 7.89  |

NOTE: Time means number of months between low and high.
*Estimates are based on monthly data except those for Australia and New Zealand, which are based on quarterly data.
†Data for Denmark were not available.

As a result, the average double difference, DD, amounts to 2.17. Using a t test indicates that this average is
significantly different from zero. Thus, we conclude
that the introduction of IT has produced significant
gains in terms of the credibility of the monetary
authorities’ commitment to price stability.
Finally, we turn to short-term interest rates (see
Table 4C). During the first episode of rising oil prices,
central banks in the IT group raised short-term rates,
on average, by 12.92 percent. Eliminating Chile from
this group, where the increase was much larger than
in the other countries, still leaves an average increase
in short-term rates in this group of 9.99 percent. In
contrast, the average increase among the non-IT
group was 6.86 percent in the first episode. In the
second episode, IT and non-IT central banks resembled each other much more in the way they tightened monetary policy. Here, the average increase
among IT central banks is 2.65 percent, while the
average increase among non-IT central banks is
2.48 percent. The difference in the interest rate
responses between the first and the second period
is thus substantially larger for the IT central banks.
The average double difference, DD=5.89, is statistically different from zero as indicated by a t test. This
result does not change qualitatively if we remove
Chile and the United Kingdom from the IT group.
Thus, the data suggest that both types of central
banks could get through the second episode of rising
oil prices with substantially reduced interest rate
hikes compared with the first episode. However, IT
central banks managed to reduce their response to
the increase in oil prices significantly more than
non-IT central banks, which reflects the comparatively poor performance of the IT central banks during the first episode.
Pulling these results together, we find that central
banks generally managed to cope with the 1998 oil
price hike with substantially less inflation than with
the price hike starting in 1978. This may be the
result of improved energy intensity and a generally
greater commitment to price stability on the part
of all central banks. While both groups of central
banks enjoyed improvements in credibility, as indicated by the smaller increases in long-term interest
rates, these gains were larger in the case of IT central
banks. The observation that the IT central banks
had experienced much larger increases in long-term
rates during the first episode than non-IT central
banks suggests that the introduction of IT allowed
them to achieve the same level of credibility as the
central banks in our control group. Finally, we note
that better inflation performance and improved
credibility required less action in terms of driving
up short-term rates from all central banks in the
second episode compared with the first episode.
Here, again, the IT central banks’ improvement is
significantly larger, and the data suggest that inflation
targeting has resulted in an assimilation of central
bank responses to those of the central banks in the
control group. Altogether, these findings suggest
that the new monetary policy regime has affected
central bank behavior and credibility more than it
has changed inflation outcomes, which have
improved for both groups.

V. INTERPRETATION AND CONCLUSIONS
In the early 1990s, a number of countries that
had been troubled by high inflation since the 1970s
adopted inflation targeting as a strategy to bring
inflation down to the low levels experienced by
Germany and Switzerland. Since then, the new
regime has been praised in the literature as a superior
concept for monetary policy. In this paper, we have
looked at different types of evidence in order to validate this claim. For six IT countries and three non-IT
countries and for two sample periods—a pre-IT
period (1978-92) and a post-IT period (1993-2001)—
we have investigated (i) the stability record by examining the volatility of inflation, output gaps, and
central banks’ interest rates; (ii) the reaction of
central banks’ interest rate policies to inflation
shocks by estimating Taylor rules and unrestricted
VARs; and (iii) the policy reactions to large supply
shocks by comparing the central banks’ reactions
to the huge oil price hikes of 1978-79 and 1998-99.
Taken together, the evidence confirms the claim
that IT matters. Adopting this policy has permitted
IT countries to reduce inflation to low levels and to
curb the volatility of inflation and interest rates; in
so doing, these banks have been able to approach
the stability achieved by the Bundesbank. Thus, IT
has helped the former high-inflation countries to
achieve a degree of credibility similar to that of the
Bundesbank and the Swiss National Bank. Of all IT
countries it is the United Kingdom that has performed best even though its target rate of inflation
is higher than the inflation targets of most other
countries.
While IT has proven an effective strategy for
monetary policy, our evidence does not support
the claim that it is superior to strategies that focus
on monetary aggregates, such as the Bundesbank's approach to monetary targeting between 1974 and
1998, nor even to the Fed’s strategy in the 1980s and
1990s, which focused neither on monetary nor on
inflation targets. It is interesting to note in this
context that one of the staunchest supporters of
inflation targeting, Svensson (2001), has recently
endorsed a more moderate “flexible inflation targeting” in which the inflation target serves as a yardstick for the conduct of monetary policy in the
medium run. Abstracting from technicalities, the
main idea of “flexible IT” does not differ much from
the Bundesbank’s former monetary policy concept,
in which the inflation objective serves to anchor
medium-run inflation expectations while short-run
operations are guided by an intermediate monetary
target.
Reviews of that strategy have long shown that
monetary targeting must not be misinterpreted
as a rigid rule. Instead, it is well known that the
Bundesbank often tolerated deviations of actual
money growth from target if doing so seemed compatible with the goal of low inflation rates. For the
Bundesbank, monetary targeting fulfilled two important functions (von Hagen, 1999). It served to structure internal monetary policy debates within the
Bundesbank and forced monetary policymakers to
take into account the inflationary consequences of
their actions, especially in times when inflation
risks became a growing concern. Furthermore, the
discussion of monetary developments served as a
framework for an effective dialogue between the
bank and the public, which stabilized long-run inflation expectations and helped the bank maintain a
relatively steady policy course.
Recent models of IT adopt a similar perspective
and stress the importance of the communication
tools developed by IT central banks to improve the
public’s understanding of central bank intentions
and to stabilize inflation expectations over the long
run (Cukierman, 2000; Faust and Svensson, 2000;
Geraats, 2000). The evidence presented in this
paper suggests that the positive impact on inflation
expectations has been the most beneficial effect of
the new regime. In the same vein, the reductions in
short-term volatility of central bank interest rates
in the IT countries are compatible with the view that
IT has helped monetary policymakers to focus less
on transitory, short-term developments and adopt
a steadier course of monetary policy. From this perspective, then, IT matters if used effectively to structure policy debates both within the central bank
and between the central bank and its public. This interpretation means that IT, like other monetary
policy strategies, must be seen in the context of
(economic) culture and traditions. Given the central
bank’s commitment to price stability and its willingness to bind its policy to an intermediate target that
serves as the nominal anchor for monetary policy,
the choice between an inflation target or a monetary
aggregate then is probably more a question of
culture than economic principles.

REFERENCES
Alesina, Alberto F.; Blanchard, Olivier; Gali, Jordi; Giavazzi,
Francesco and Uhlig, Harald. Defining a Macroeconomic
Framework for the Euro Area: Monitoring the European
Central Bank 3. London: Centre for Economic Policy
Research, 2001.
Ammer, John and Freeman, Richard T. “Inflation Targeting
in the 1990s: The Experiences of New Zealand, Canada,
and the United Kingdom.” Journal of Economics and
Business, May 1995, 47(2), pp. 165-92.
Bernanke, Ben; Laubach, Thomas; Mishkin, Frederic and
Posen, Adam. Inflation Targeting: Lessons from the
International Experience. Princeton: Princeton University
Press, 1999.
Cecchetti, Stephen and Ehrmann, Michael. “Does Inflation
Targeting Increase Output Volatility? An International
Comparison of Policymakers’ Preferences and Outcomes.”
Working Paper 69, Central Bank of Chile, 2000.
Chortareas, Georgios; Stasavage, David and Sterne, Gabriel.
“Does It Pay To Be Transparent? International Evidence
from Central Bank Forecasts.” Federal Reserve Bank of
St. Louis Review, July/August 2002, 84(4), pp. 99-118.
Corbo, Vittorio; Landerretche Moreno, Oscar and Schmidt-Hebbel, Klaus. "Assessing Inflation Targeting After a
Decade of World Experience.” Unpublished manuscript,
Central Bank of Chile, 2001.
Cukierman, Alex. “Are Contemporary Central Banks
Transparent About Economic Models and Objectives and
What Difference Does It Make?” Federal Reserve Bank of
St. Louis Review, July/August 2002, 84(4), pp. 15-36.
Debelle, G. “Inflation Targeting in Practice.” Working Paper
97/35, International Monetary Fund, 1997.
European Central Bank. The Monetary Policy of the ECB.
Frankfurt: European Central Bank, 2001.

Faust, J. and Svensson, Lars E.O. “The Equilibrium Degree
of Transparency and Control in Monetary Policy.” Working
paper, 2000.
Freeman, Richard T. and Willis, Jonathan L. “Targeting
Inflation in the 1990s: Recent Challenges.” International
Finance Discussion Paper 1995-525, Board of Governors
of the Federal Reserve System, 1995.
Geraats, Petra. “Why Adopt Transparency? The Publication
of Central Bank Forecasts.” Working Paper 41, European
Central Bank, 2001.
Gramlich, Edward M. Remarks before the Charlotte
Economics Club, 13 January 2001.
Kahn, George A. and Parrish, Klara. “Conducting Monetary
Policy with Inflation Targets.” Federal Reserve Bank of
Kansas City Economic Review, Third Quarter 1998, pp. 5-32.
Kuttner, K.N. and Posen, Adam S. “Does Talk Matter After
All? Inflation Targeting and Central Bank Behavior.” Federal
Reserve Bank of New York Staff Report, October 1999, 88.
Laubach, T. and Posen, Adam S. “Some Comparative
Evidence on the Effectiveness of Inflation Targeting.”
Research Paper 9714, Federal Reserve Bank of New York,
1997.
Leiderman, Leo and Svensson, Lars E.O., eds. Inflation
Targets. London: Centre for Economic Policy Research,
1995.
Mishkin, Frederic S. and Posen, Adam S. "Inflation Targeting: Lessons from Four Countries." Federal Reserve
Bank of New York Economic Policy Review, 1997, pp. 9-117.
Organization for Economic Cooperation and Development.
“Comparison of Energy Efficiency in OECD Countries
(1970s and 1990s).” Unpublished manuscript, 2001.
Schaechter, Andrea; Stone, Mark R. and Zelmer, Mark.
“Adopting Inflation Targeting: Practical Issues for Emerging
Market Countries." Occasional Paper 202, International
Monetary Fund, 2000.
Siklos, Pierre L. “Inflation-Target Design: Changing Inflation
Performance and Persistence in Industrial Countries.”
Federal Reserve Bank of St. Louis Review, March/April
1999, 81(2), pp. 46-58.
Svensson, Lars E.O. “Independent Review of the Operation
of Monetary Policy in New Zealand.” Unpublished
manuscript, 2001.
Taylor, John B. “A Historical Analysis of Monetary Policy
Rules,” in John B. Taylor, ed., Monetary Policy Rules.
Chicago: The University of Chicago Press, 1999, pp. 319-41.
von Hagen, Jürgen. “Inflation and Monetary Targeting in
Germany,” in Leonardo Leiderman and Lars E.O. Svensson,
eds, Inflation Targets. London: Centre for Economic
Policy Research, 1995, pp. 107-21.
___________. “Money Growth Targeting by the Bundesbank.”
Journal of Monetary Economics, 1999, 43, pp. 681-701.


Appendix

TAYLOR RULES FOR THE UNITED
STATES
This appendix serves to show that the long-run
response of the short-term interest rate to the rate
of inflation critically depends on the price index
used for measuring inflation.
We begin by reestimating the static equation
provided by Taylor (1999) for quarterly U.S. data
spanning the period 1987:01–1997:03. The dependent variable is the federal funds rate, the rate of
inflation is the year-over-year rate of change of
the GDP deflator, and the gap is measured as the
percentage deviation of GDP from trend, applying
the Hodrick-Prescott filter.
The first regression, shown in Table A1, is
Taylor’s original estimate, implying the familiar
strong response of the funds rate to inflation of 1.5.
The following regression, variant (1), serves to show
that our data broadly reproduce Taylor’s result
though the estimated response to inflation is some-

what lower. Replacing GDP by industrial production
reduces the estimated response for the output gap
but provides the same inflation response; see variant
(2). Variants (3) and (4), finally, repeat the exercise
but employ the rate of inflation as measured by
the CPI. This reduces the estimated response to
inflation markedly. No longer is it different from
unity for the sample period used by Taylor. Similar
downward shifts of the estimated inflation response
are found for the sample periods used in the text.
Next, we note that the estimates in Table A1
all exhibit very low Durbin-Watson statistics, indicating dynamic misspecification. Table A2 presents
dynamic estimates for our subperiods, employing
the GDP deflator and the CPI alternatively. These
dynamic specifications use the lagged federal funds
rate as an additional regressor. Here, we find that
the estimated short- and long-run response to inflation is smaller when the CPI index is used instead
of the GDP deflator.
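A sketch of the kind of regressions behind Tables A1 and A2; the data file and column names are assumptions made purely for illustration.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical quarterly data: federal funds rate, year-over-year inflation from
    # the GDP deflator and from the CPI, and an output gap measure.
    df = pd.read_csv("us_quarterly.csv", index_col=0, parse_dates=True)

    # Static rules in the spirit of Table A1, one per inflation measure.
    static_deflator = smf.ols("ffr ~ infl_deflator + gap", data=df).fit()
    static_cpi = smf.ols("ffr ~ infl_cpi + gap", data=df).fit()

    # Dynamic rule in the spirit of Table A2: add the lagged funds rate and convert
    # the short-run inflation coefficient into a long-run response.
    df["ffr_lag"] = df["ffr"].shift(1)
    dynamic_cpi = smf.ols("ffr ~ infl_cpi + gap + ffr_lag", data=df.dropna()).fit()
    long_run = dynamic_cpi.params["infl_cpi"] / (1.0 - dynamic_cpi.params["ffr_lag"])
    print(static_deflator.params["infl_deflator"], static_cpi.params["infl_cpi"], long_run)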

Table A1
Taylor's Static Estimate for 1987:01–1997:03

          | Constant | π t (deflator) | π t (CPI) | Gap t (GDP) | Gap t (IP) | Adj. R² | DW
Taylor    | 1.17*    | 1.53**         |           | 0.77**      |            | 0.83    |
Variants
(1)       | 2.03**   | 1.36**         |           | 0.93**      |            | 0.72    | 0.22
(2)       | 2.02**   | 1.37**         |           |             | 0.62**     | 0.81    | 0.23
(3)       | 2.31**   |                | 1.02**    | 0.91**      |            | 0.69    | 0.35
(4)       | 2.28**   |                | 1.02**    |             | 0.61**     | 0.78    | 0.45

NOTE: π is the average inflation rate over four quarters, computed from the GDP deflator or the CPI; Gap is the percentage deviation of output from trend, computed from GDP data or the index of industrial production (IP). * and ** indicate significance at the 5 and 1 percent confidence levels, respectively.


Table A2
Dynamic Estimates for Samples 1978:03–1992:02 and 1993:01–2001:01

         | Constant | π t (deflator) | π t (CPI) | Gap t (IP) | i t–1  | Adj. R² | Long-run response to inflation
1978-92
(1)      | 0.74     |                | 0.16*     | 0.15*      | 0.81** | 0.85    | 0.87
(2)      | 0.82     | 0.35**         |           | 0.15**     | 0.72** | 0.87    | 1.24
1993-01
(1)      | 0.78**   |                | 0.18*     | 0.28**     | 0.76** | 0.95    | 0.74
(2)      | 0.61     | 0.27*          |           | 0.29**     | 0.78** | 0.95    | 1.25

NOTE: π is the average inflation rate over four quarters, computed from the GDP deflator or the CPI; Gap is the percentage deviation of industrial production (IP) from trend. * and ** indicate significance at the 5 and 1 percent confidence levels, respectively.


Commentary
Frederic S. Mishkin

Because inflation targeting is a relatively recent phenomenon, in the past we have had
insufficient data to conduct time-series econometric work to evaluate this important new monetary policy strategy. However, now that inflation
targeting has been around for close to ten years,
we are able to do some preliminary econometric
work on this topic. This is exactly what Neumann
and von Hagen do in their paper, and it is a welcome addition to the literature.
I break my comments up into two parts. The
first part looks at the empirical analysis in the paper,
while the second examines the question of whether
the non-inflation-targeting countries that Neumann
and von Hagen look at are really that different from
the inflation targeters they study.

EMPIRICAL ANALYSIS
Neumann and von Hagen produce several pieces
of evidence quite favorable to inflation targeting.
• After countries adopt inflation targeting, the
volatility of inflation, interest rates, and output
falls to levels that are similar to those in the
successful non-inflation-targeting countries
(the United States, Germany, and Switzerland).
• Taylor rules display a greater focus on the
control of inflation after adoption of inflation
targeting.
• Vector autoregression (VAR) evidence indicates
that the relative importance of inflation shocks
as a source of the variance of interest rates
rises after adoption of inflation targeting, and
this might also suggest a greater focus on
inflation control after adoption of inflation
targeting.
• The response of inflation and output to oil
price shocks is relatively more favorable after
inflation targeting is adopted.
Neumann and von Hagen thus conclude that inflation targeting has improved monetary policy performance in the countries that have adopted it.

Frederic S. Mishkin is the Alfred Lerner Professor of Banking and Financial Institutions at the Graduate School of Business, Columbia University, a research associate at the National Bureau of Economic Research, and a member of the economic advisory panel and academic consultant at the Federal Reserve Bank of New York.

© 2002, The Federal Reserve Bank of St. Louis.
Given that my past research has been quite
favorable to inflation targeting, it is not surprising
that I like the conclusions in this paper. Unfortunately
I am forced to point out that the evidence in the
paper suffers from several problems and so is not
completely convincing.
Although the reduction in volatility after inflation
targeting is adopted is suggestive, there is the potential problem that something else may have produced these declines. Neumann and von Hagen
are aware of this problem, and this is why they turn
to other evidence to evaluate whether inflation targeting has been beneficial.
The Taylor rule evidence also looks quite favorable to inflation targeting because it suggests that
the central bank puts a greater weight on the control
of inflation relative to output stabilization, thus making it more likely that price stability will be achieved.
However, a troubling feature of the Taylor rules estimated in the paper is that, even when the long-run
coefficient on inflation has risen after inflation targeting has been adopted, it still remains less than 1.
Values less than 1 on this coefficient indicate that
the inflation process is unstable: When inflation and
inflation expectations rise, the central bank raises
interest rates by a lesser amount so that the real rate
of interest falls. The lower real interest rate then
stimulates inflation further and is thus likely to lead
to an inflationary spiral. Indeed, as John Taylor
(1993) has pointed out, an estimated Taylor rule for
the United States in the pre-1979 period does have
a coefficient less than 1 on inflation, and this is an
explanation why inflation rose to double-digit levels
by the end of the 1970s.
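A back-of-the-envelope illustration of why a long-run coefficient below 1 is destabilizing (hypothetical numbers):

    # After a permanent 1-point rise in inflation, the nominal rate eventually moves
    # by the long-run coefficient phi, so the real rate moves by phi - 1.
    for phi in (0.7, 1.0, 1.5):
        print(f"phi = {phi}: change in real rate = {phi - 1:+.1f}")
    # phi = 0.7 lowers the real rate by 0.3, stimulating demand and pushing inflation
    # up further; phi = 1.5 raises the real rate and leans against the inflation rise.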
Therefore, although the estimated Taylor rules
in the paper suggest that the weight on the inflation
gap increases after inflation targeting has been
adopted, the central bank is still not doing its job
well enough if the long-run inflation coefficient
remains less than 1, as it does for all inflation-targeting countries other than Sweden in the monthly
estimates and Sweden and the United Kingdom in
the quarterly estimates. The Taylor rule estimates
do not suggest that inflation-targeting countries
have improved monetary policy enough to achieve
the goal of price stability. The fact that inflationtargeting countries have been so successful in inflation control should raise some concern about the
Taylor rule estimates.
Furthermore, the Taylor rule estimates for the
non-inflation-targeting countries also tend to have
long-run coefficients on inflation that are less
than 1. The exception is Germany in the post-1993
period. Especially troubling is that the long-run
coefficient on inflation is less than 1 for the United
States in both the 1978-92 and 1993-2001 periods.
These results appear to be inconsistent with those
of Taylor (1993) who finds that, for the United States
after 1979, the coefficient on inflation rises above
1—which is an important reason why the performance of monetary policy improved so much in
the post-1979 period relative to the pre-1979 period.
The authors point out in an appendix that the
low coefficients on their estimated Taylor rules stem
from using the CPI to measure inflation rather than
the GDP deflator as Taylor does. This is somewhat
troubling because it suggests large differences in
results occur when slightly different inflation measures are utilized. The most serious problem with
the Taylor rule results in the paper may not be that
monetary policy does not respond sufficiently to
changes in (CPI) inflation, but rather that estimated
Taylor rules in the paper are misspecified. From my
experience with central banks, it is quite clear that
they respond to future forecasts of inflation rather
than to current inflation. Indeed, this is exactly what
theorizing on the design of optimal monetary policy
suggests that they should do. Estimating Taylor rules
with actual rather than forecasted inflation thus
results in an errors-in-variables problem for the
long-run inflation coefficient and is thus likely to
bias this coefficient downward. Orphanides (2001)
shows that this is exactly what occurs in estimates
of Taylor rules for the United States. The fit is better
when one-year-ahead inflation forecasts are used
in the Taylor rule equations and the inflation coefficients are much higher and always above 1.
Orphanides (2001) also shows that using revised
data, rather than the data available in real time,
creates a further errors-in-variables problem, as
does possible improper measurement of the output
gap. The bottom line is that, although I am sympathetic to the view that countries adopting inflation
targeting increase their focus on inflation control,
I am highly skeptical of the Taylor rule evidence in
this paper that supports this.
I also am very skeptical of the VAR evidence in
the paper. A basic problem with VARs is that they
appear to yield a lot of useful evidence without
putting a lot of structure in their models. However,
as economists, we always need to be skeptical of
getting something for nothing, because as we always
say, “there is no such thing as a free lunch.” This
applies to econometrics just as much as it does to
filling our stomachs. The paper uses an implicit
identification scheme that inflation and output
react to monetary policy only with a lag. This is a
standard identification scheme, and although not
without its problems, it is not unreasonable. However, a serious problem for the analysis in this paper
arises from the fact that VARs don’t have any structural model of dynamics, and such a structural
model is needed if we are to interpret the response
of monetary policy to inflation. The fact that the
contribution of inflation shocks to the variance of
interest rates rises does not tell us that monetary
policy has an increased focus on the control of inflation. To see this, consider the following example.
Suppose that the monetary authorities greatly
increase their focus on inflation control and are
able to develop a super-credible inflation-targeting
regime. This regime would then change the time-series process of inflation so that, when inflation
rises above its target level, the public and markets
expect it to fall back down to the target level very
quickly. Then the central bank doesn’t need to
respond much to the temporary upward blip in
inflation because inflation expectations will keep
inflation from deviating much from the inflation
target. In this environment, we would expect a
decreased contribution of inflation shocks to the
variance of interest rates. Should the smaller impact
of inflation shocks on interest rates then be interpreted as indicating that the central bank is less
focused on inflation? Of course not, because in this
example the opposite has actually occurred. The
above reasoning suggests that the VAR evidence in
the paper tells us little about the impact of inflation
targeting on the conduct of monetary policy.
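Mishkin's variance-decomposition point can be made concrete with a deliberately simple numerical sketch. The following is my own illustration, not a model from the paper: inflation follows an AR(1) process, the central bank responds to expected inflation, and greater credibility is represented as lower inflation persistence.

    # Illustrative sketch only; a stylized calculation, not the paper's VAR.
    # Inflation: pi_t = phi * pi_{t-1} + u_t, with Var(u_t) = sigma_u**2.
    # Policy rate: i_t = gamma * E_t[pi_{t+1}] + v_t = gamma * phi * pi_t + v_t.
    def inflation_share_of_rate_variance(phi, gamma, sigma_u=1.0, sigma_v=1.0):
        """Share of the policy rate's variance accounted for by inflation shocks."""
        var_pi = sigma_u ** 2 / (1.0 - phi ** 2)          # unconditional variance of inflation
        var_from_inflation = (gamma * phi) ** 2 * var_pi  # variance of the inflation-driven term
        return var_from_inflation / (var_from_inflation + sigma_v ** 2)

    # Low-credibility regime: persistent inflation, moderate response.
    print(inflation_share_of_rate_variance(phi=0.95, gamma=1.5))  # roughly 0.95
    # Super-credible regime: inflation mean-reverts quickly and the response is
    # stronger, yet inflation shocks explain a much smaller share of rate variance.
    print(inflation_share_of_rate_variance(phi=0.30, gamma=2.0))  # roughly 0.28

The share falls because the inflation process has changed, not because policy responds less to inflation, which is precisely why the rising or falling contribution of inflation shocks is hard to interpret on its own.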
The most interesting evidence in the paper
involves the examination of the different responses
before and after inflation targeting to upward spikes
in oil prices in 1978 and 1998. Neumann and von
Hagen look at oil price shocks because it is reasonable to assume that these shocks are exogenous to
most of the countries they are studying. (This might
be less true for the United States in the 1978 episode
because overly expansionary monetary policy might
have driven up oil prices at the time; see Barsky and
Kilian, 2001.) It is also useful to look at the effect
of the oil price shock in 1998 on inflation targeting
because one commonly heard claim is that inflation targeting has not been tested because so many
shocks in the 1990s have been favorable. However, we recently made the point (Mishkin and Schmidt-Hebbel, 2002) that this view is incorrect. To the
contrary, we point out that the oil price shock in
1998 was an adverse shock that was handled very
well by inflation-targeting regimes, which is also
the conclusion that Neumann and von Hagen reach.
Their paper uses the method of double differences to look at the difference in outcomes for
inflation-targeting countries relative to non-inflation-targeting countries. To justify their analysis, they
need several assumptions. First is that the response
to other exogenous shocks is the same for both
inflation-targeting and non-inflation-targeting countries. Second is that, when the oil price shock occurs,
nothing else is occurring that affects inflation-targeting and non-inflation-targeting countries
differently. Third is that the dynamic response to
oil price shocks is the same in all inflation-targeting
countries. It would be easy to cast some doubt
on the first two assumptions, but they are pretty
reasonable relative to other assumptions we often
have to make in doing empirical work. However,
there are more serious concerns about the third
assumption that I think the authors of the paper
share. Under the third assumption, double differencing would choose the same starting date, and this
is what is conventionally done. However, Neumann
and von Hagen instead make use of a nonstandard
dating scheme that chooses the starting date for
each country on the basis of when the trough and
peak of the inflation rate is reached after the oil
price shock. It is appropriate that they choose a date
after the shock because it takes time for commodity
price shocks to affect inflation. However, it is not
clear under what assumption their procedure makes
sense. I think that the reason they chose to use this
procedure is because they have doubts about the
assumption that the dynamic response to oil price
shocks is the same in all the inflation-targeting countries, and this is a little worrisome. I am not sure
how important this is because it is not clear that
their results would be very different if they chose
the same starting date for the double differencing.
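For readers unfamiliar with the method, the double difference here is simply the change in average inflation around the shock for the inflation-targeting group minus the corresponding change for the control group. A minimal sketch with invented numbers follows; the figures are purely hypothetical and are not the paper's data.

    # Hypothetical illustration of the double-difference calculation.
    def double_difference(it_before, it_after, non_it_before, non_it_after):
        """Each argument is a list of country-level inflation rates, in percent."""
        mean = lambda xs: sum(xs) / len(xs)
        change_targeters = mean(it_after) - mean(it_before)
        change_non_targeters = mean(non_it_after) - mean(non_it_before)
        return change_targeters - change_non_targeters

    # Invented post-shock numbers: inflation rises in both groups, but by less
    # among the inflation targeters, so the double difference is negative.
    dd = double_difference(it_before=[1.8, 2.1, 1.5], it_after=[2.2, 2.6, 2.0],
                           non_it_before=[1.6, 2.0], non_it_after=[2.9, 3.3])
    print(round(dd, 2))  # about -0.83

Reading that number as the effect of the targeting regime requires the assumptions listed above, including a common response window; shifting the window country by country, as Neumann and von Hagen do, changes what is being averaged.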
To conclude my discussion of the empirical
work in the paper: Although the research conducted
by Neumann and von Hagen is worth doing, I have
some doubts about the quality of the evidence. Thus
I see the results as suggestive, but not much more
than that. Should the fact that there are doubts
about the evidence in this paper shake our faith in
the benefits of inflation targeting? I think not. The
doubts about the evidence just mean that we have
to look at broader types of evidence. One reason
why some of my recent research on inflation targeting (Mishkin and Posen, 1997; Bernanke et al., 1999;
and Mishkin and Savastano, 2001) has focused on
historical case studies is because of the difficulty of
doing econometric analysis of the type done in this
paper. (Neumann and von Hagen call their double-differencing empirical work a case study approach,
but it really is more like an event study rather than
a case study.) Case studies allow us to see how inflation targeting has worked in practice and so provide
some evidence about the mechanisms through
which inflation targeting has affected the interaction
of the markets, the public, politicians, and central
banks. Then we can see if that interaction has been
likely to improve how monetary policy is conducted
and whether it results in better policy outcomes.
This type of evidence is also not without its faults
because it is necessarily anecdotal. However, I think
that we need to be honest and admit that all evidence,
including econometric evidence, has its faults. This
is why we need to take a broader view on what evidence to examine and try to understand what makes
monetary policy strategies successful from alternative perspectives.

ARE THE SUCCESSFUL NON-INFLATION
TARGETERS VERY DIFFERENT FROM
INFLATION TARGETERS?
I want to address a final issue that is also very
relevant to the interpretation of this paper. It is not
at all clear that the successful non-inflation targeters
that Neumann and von Hagen study (the United
States and especially Germany and Switzerland) are
very different in their monetary policy strategies
from the inflation targeters.
As documented in my work with Ben Bernanke,
Thomas Laubach, and Adam Posen (Bernanke et al.,
1999), the successful non-inflation targeters’ strategies for conducting monetary policy have many of
the same elements as those pursued by inflation
targeters. Indeed, my reading of Neumann and von
Hagen’s paper is that they would agree with the
view that inflation targeters and the successful non-inflation targeters are not all that different. Both do
focus on the long-run goal of price stability and
stress transparency, accountability, and flexibility,
the key elements of inflation-targeting regimes.
Thus, the adoption of inflation targeting should be
seen as a convergence to best practice in the conduct
of monetary policy.
I agree with Neumann and von Hagen that
monetary targeting worked well in Germany and
that the evidence does not suggest that inflation
targeting would have been superior to the monetary
targeting approach used by the Bundesbank from
1974 to 1998. As pointed out by Neumann and von
Hagen and also in my work with Bernanke, Laubach,
and Posen, the Bundesbank’s monetary targeting
strategy was a success because it helped both the
officials inside the central bank and the public and
markets to focus on longer-run issues, particularly
price stability. This view leads the authors to end
their paper by stating that, “Given the central bank’s
commitment to price stability and its willingness to
bind its policy to an intermediate target that serves
as the nominal anchor for monetary policy, the
choice between an inflation target or a monetary
aggregate then is probably more a question of culture than economic principles.” I agree.
However, it is important to point out that the
context (culture) for the conduct of monetary policy
in Germany is quite different from what it is in the
European Monetary Union. Because of its history
in which it experienced horrendous costs from
hyperinflation, the German public is far more sophisticated about monetary policy than other Europeans
and has much greater support for a central bank that
focuses on inflation control. As a result, the complicated explanations provided by the Bundesbank
when it missed its monetary target ranges were
accepted by the public and did not weaken the
support for the Bundesbank’s monetary policy
strategy. This is much less likely to work with the
wider European population.
Some evidence for this view is that the European
Central Bank (ECB) (or, more accurately, the European
System of Central Banks) has received a tremendous
amount of flak since its inception, although its
policies seem to be reasonable and inflation has
remained under control. I believe this has occurred
because the ECB suffers from a “communications
gap” and not a “policy executions gap.” Part of the
problem stems from the two-pillar strategy, which
I believe is confusing to the European public and
hinders effective communication. Given the instability of the money-income relationship, the monetary
reference value requires complicated explanations
that are not fully understood by the European public.
It would be much clearer for the ECB to focus its
explanations of the conduct of monetary policy on
the second pillar, which addresses whether it is
meeting its inflation goal. In other words, one pillar
is better than two. I thus believe that the ECB would
reduce its communications gap if it adopted a flexible inflation-targeting framework akin to that followed by inflation targeters, just as Switzerland has
done recently. It is important to note that dropping
the monetary-reference-value pillar does not rule
out a role for monetary aggregates in the formulation of policy. Many inflation targeters, including
the Bank of England, do follow monetary aggregates
quite closely in thinking about the future path of
inflation, and this could certainly be an element in
an inflation-targeting framework for the ECB.
The Federal Reserve’s monetary policy actions
under Alan Greenspan have probably also been quite
consistent with what would have been done under
an inflation-targeting regime. Furthermore, as I have
pointed out elsewhere (Mishkin, 2000), the United
States has a nominal anchor that has been very
effective in recent years—it is Alan Greenspan. Thus
it is not at all clear that adoption of inflation targeting
in the United States would have improved recent
monetary policy performance. However, there is
still a strong argument for adoption of inflation
targeting by the United States. No matter how good
a nominal anchor Alan Greenspan is, he won’t be
around forever. It is better to depend less on individuals and more on institutions to achieve good
policy results. Thus we need to take steps now that
will institutionalize the desirable features of the
Greenspan Fed with its focus on price stability and
the use of preemptive strikes against either inflationary or deflationary impulses in the economy. This
is exactly what inflation targeting is intended to
achieve.

REFERENCES
Barsky, Robert and Kilian, Lutz. "Do We Really Know That
Oil Caused the Great Stagflation? A Monetary Alternative.”
Working Paper No. 8389, National Bureau of Economic
Research, July 2001.
Bernanke, Ben S.; Laubach, Thomas; Mishkin, Frederic S.
and Posen, Adam S. Inflation Targeting: Lessons from the
International Experience. Princeton: Princeton University
Press, 1999.
Mishkin, Frederic S. and Posen, Adam S. “Inflation Targeting:
Lessons from Four Countries.” Federal Reserve Bank of
New York Economic Policy Review, August 1997, 3(3), pp.
9-110.
Mishkin, Frederic S. "What Should Central Banks Do?" Federal Reserve Bank of St. Louis Review, November/December 2000, 82(6), pp. 1-13.
___________ and Savastano, Miguel. “Monetary Policy
Strategies for Latin America.” Journal of Development
Economics, October 2001, 66, pp. 415-44.
___________ and Schmidt-Hebbel, Klaus. “One Decade of
Inflation Targeting in the World: What Do We Know and
What Do We Need to Know?” in Norman Loayza and
Raimundo Soto, eds., A Decade of Inflation Targeting in
the World. Santiago: Central Bank of Chile, 2002, pp.
117-219.
Orphanides, Athanasios. “Monetary Policy Rules Based on
Real-Time Data.” American Economic Review, September
2001, 91(4), pp. 964-85.
Taylor, John B. “Discretion Versus Policy Rules in Practice.”
Carnegie-Rochester Conference Series on Public Policy,
December 1993, 39, pp. 195-214.

Panel Discussion: Transparency in the Practice
of Monetary Policy
The Value of Transparency in
Conducting Monetary Policy
Charles Freedman
In this paper, I discuss transparency in the conduct of monetary policy from three perspectives.
First, I look at why central banks have chosen
to become more transparent in recent years. I then
set out the measures taken by the Bank of Canada
to increase transparency. The third section of the
paper examines a number of issues that could be
grouped under the heading “Are there limits to
what should be made public?”

WHY HAVE CENTRAL BANKS BECOME
INCREASINGLY TRANSPARENT?
There are two key factors behind the move to
increased transparency on the part of central banks.
The first is the relationship between transparency
and the effectiveness of monetary policy. The second
is the link between transparency and accountability.
Let me examine each of these motivations in turn.
The way in which monetary policy is conducted
by central banks has changed significantly in recent
years. Not too long ago, central banks said relatively
little about their monetary policy and allowed their
actions to speak for themselves. Today, in contrast,
central banks are very explicit in setting out the
objectives of policy, the way in which they view the
operation of the transmission mechanism between
their policy actions and their goal variables, their
outlook for economic activity and inflation, and their
setting of the policy interest rate. It is now generally
believed in the central banking community that this
increased transparency improves the functioning
of monetary policy in a number of dimensions.
Charles Freedman is a deputy governor at the Bank of Canada. The author thanks Paul Jenkins and David Longworth for comments on earlier drafts. The views expressed here are those of the author and should not be attributed to the Bank of Canada.

© 2002, The Federal Reserve Bank of St. Louis.

The first dimension involves the understanding of the general public, both directly and through the
media. Like all public policies, monetary policy
benefits from increased public support and understanding. In particular, monetary policy, which at
times involves the need to take tightening actions
to prevent the economy from overheating, would
find itself the subject of considerable public criticism
if the public did not understand the reason for its
actions. The key point in developing such an understanding is to make clear what monetary policy
can do, as well as what it cannot do. Thus, central
banks should emphasize that the role of monetary
policy is to control inflation in the medium-to-long
run1 and that an environment of low inflation will
help the economy to achieve a higher level or rate
of growth of productivity. Moreover, a monetary
policy aimed at inflation control will tend to moderate the economic cycle, although it cannot eliminate
it. In focusing on these benefits, the central bank
should make clear that the objective of low inflation,
or price stability, is a means to an end, the end being
a well-functioning economy, and not an end in itself.
Examples from postwar economic history that focus
on the poor performance of the economy at times
of high inflation and its better performance at times
of low inflation can be very helpful in this regard.
In addition to generating broad public support
for the goal of low inflation, transparency (along
with the credibility of policy) can contribute to
behavior that will facilitate the achievement of the
goal. Thus, wage and price setting that is done in
the context of an environment of confidently held
expectations of low and stable inflation will make
the task of the central bank easier.
The second dimension of the relationship
between transparency and the functioning of
monetary policy involves the behavior of participants
in financial markets. When financial markets understand and anticipate the actions of the central bank,
the first steps in the transmission mechanism
between policy actions and economic activity and
inflation work more smoothly.
1. This could be done in the context of an explicit inflation target (as in Canada) or a more general commitment to low inflation (as in the United States).
For example, when the central bank and market
participants have a similar interpretation of factors
affecting the economic outlook, data releases will
tend to lead to movements in market interest rates
(and the exchange rate) in advance of, and consistent with, the policy actions that are subsequently
taken by the central bank. Thus, new data indicating
increased pressures on capacity and, hence, an
increased likelihood of higher future inflation will
result in higher interest rates across much of the
yield curve, while signs of weakness in the economy
and an increased likelihood of lower future inflation
will result in lower interest rates.
I would emphasize at this point that central
banks should not and do not simply follow the market. If views differ between the central bank and the
market as to the likely outlook and the appropriate
policy, the central bank must follow its own best
judgment and explain to the market the reasons
for its actions. But the enhanced transparency and
improved communications of recent years reduce
the likelihood of sharply different views as to appropriate policy, although they do not entirely eliminate it. In short, if market expectations are broadly
in line with the direction of policy, there is likely to
be less volatility in financial markets and smoother
incorporation of policy actions into interest rates
and exchange rates.
Communications play an important role in the
transmission of the views of the central bank to the
public and to markets. Hence, a great deal of attention is now paid to the way that central banks present their key messages (see Blinder et al., 2001, and
Jenkins, 2001). Improving the effectiveness of monetary policy through greater transparency requires
proactive and well-planned communications.
The second key factor motivating the trend to
greater transparency is the tendency toward greater
accountability, an important element in the framework supporting the independence of central banks.
On this, I can be brief. Increasingly around the
world, central banks are being given responsibility
for carrying out monetary policy in the context of
objectives that are defined in legislation or treaty
and/or agreed upon by the government and central
bank. As nonelected bodies, central banks are typically held accountable to government or parliament
or the general public for their stewardship of policy.
In order for this accountability to be effective, the
oversight body must have sufficient information to
evaluate the conduct of policy by the central bank.
Such information is provided by central banks in
the context of their overall communications strategy,
and the need to provide this information has played
an important role in the increased transparency of
monetary policy.

HOW HAS THE BANK OF CANADA
BECOME MORE TRANSPARENT?
While I now turn to the ways in which the
Bank of Canada has become more transparent in
recent years, I would note that similar (although not
identical) changes have been put in place in most
central banks. Changes in the direction of increased
transparency can be grouped under a number of
headings—the goal of policy, the transmission
mechanism, the outlook, the policy instrument,
and the means by which the Bank communicates
information.

Goal of Policy
In February 1991, the Bank of Canada and the
government of Canada publicly announced their
jointly agreed inflation-control targets. The initial
targets aimed at a gradual reduction of the target
rate of inflation from 3 percent at the end of 1992
to 2 percent at the end of 1995. Since then the targets
have been renewed three times, each time with a
target range centered on 2 percent. The most recent
agreement, announced earlier this year, extended
the 2 percent target to the end of 2006. The move
to a five-year term for the agreement (from the
three-year term in previous agreements) is aimed
at enhancing the longer-term predictability of the
rate of inflation.
The range for the target has been plus or minus
1 percent throughout. The Bank has also been very
explicit that the horizon for bringing inflation back
to the target midpoint if it moves away from that
level would be six to eight quarters. While the target
has been defined in terms of the 12-month rate of
increase of the total consumer price index (CPI), the
Bank has used a publicly announced measure of
core inflation as a policy guide in assessing future
inflation developments.

Transmission Mechanism
The Bank of Canada has explained in some
detail the way in which it views the transmission
mechanism from its policy actions to market interest
rates and the exchange rate, and then to output and
inflation (see Thiessen, 1995). It has also published
a number of articles on the large macroeconomic
model, the quarterly projection model, or QPM
(Poloz, Rose, and Tetlow, 1994), that currently provides the basis (combined with staff judgment) for
the principal staff projection. An alternative view
of the transmission mechanism focuses on the
way that developments in the monetary aggregates
directly affect the spending behavior of households
and businesses. (See Engert and Selody, 1998, and
Laidler, 1999, for expositions of this approach.) The
various multi-equation and single-equation models
linking monetary aggregates to economic activity
have also been made public. And the Bank has
explained how the staff projection, the monetary-based forecasts, and the information gathered by
the Bank’s regional offices (through formal surveys
and anecdotally) are integrated in the course of
making monetary policy decisions (see Longworth
and Freedman, forthcoming).

Economic and Inflation Outlook
Central banks differ in the degree of detail that
they publish on their economic and inflation outlook. And they also differ in the interest rate and
exchange rate conventions that underlie their
projections.
The Bank of Canada presents a detailed discussion of recent economic and inflation developments
as well as its outlook for the future once per quarter
either in its monetary policy report (in April and
October) or in its update (in January and July). The
outlook typically focuses on expected developments
over the next 6 to 18 months in gross domestic product (GDP), the output gap, total CPI, and core CPI.
A qualitative assessment is given of the risks surrounding the outlook, but there is no attempt to
quantify the risks.
Speeches by the Governor and other members
of the Governing Council of the Bank are used to
sketch out changes in the outlook between publications. As well, a press release is issued on each of
the eight preannounced fixed action dates, whether
or not the policy interest rate is changed, and this
gives the Bank a further opportunity to give some
sense of its views of likely future developments in
the economy and inflation.

Policy Interest Rate
Until a few years ago, markets had to infer a
central bank’s target for the policy interest rate from
its actions, and it was not always immediately clear
from these actions whether or not the policy rate
target had changed. Now, the target for the policy rate
is announced explicitly, normally on preannounced
dates, almost everywhere.
In Canada, there were a number of changes
that made the setting of the policy interest rate
increasingly transparent. In 1994, the Bank established an operational target band of 50 basis points
for the overnight interest rate. Market participants
recognized a change in the rate when the Bank
informed them of its intention to intervene at the
new limits of the band (using repos or reverse repos
to enforce those limits). In early 1996, the Bank
began to issue a press release whenever there was
a change in the band, giving an explanation for the
change. Shortly thereafter, the Bank Rate (the rate
charged by the Bank on advances to participants in
the payments system) was set at the top of the band.2
In 1999, the target rate was explicitly set as the
midpoint of the band. With the movement to fixed
announcement dates in late 2000, a press release
was issued on each date regardless of whether or
not there was a change in the policy rate.

Communications
The Bank now aims at an integrated communications strategy in order to disseminate its key messages to the various target audiences throughout
the year. As noted earlier, each year this involves
two monetary policy reports, two updates to the
report, eight press releases on the fixed announcement dates, and speeches by the Governor and other
members of the Governing Council (in many cases
as part of a regional outreach program). In addition,
there are background briefings, press conferences
with the Governor following the release of the report
and the update, and testimony by the Governor
before the House of Commons Finance Committee
following the publication of each report.
In recent years, the Bank has instituted a media
“lock-up” arrangement in which the media can read
key Bank reports and write their stories prior to the
official publication time, for release at that time.
As well, there are regular media briefings during
the lock-up, where officials can deal with technical questions and clarify other issues (on an unattributed basis) for the media that are present.
The result has been a clear improvement in the
quality of the reporting compared with the period
when the media received the reports at the official
release time and the wire services competed to get out the first headline.

2. It had previously been set equal to the average rate on Treasury bills at the weekly auction plus 25 basis points.
The establishment of the fixed announcement
dates has also had a beneficial effect on the discussion surrounding Canadian monetary policy by both
journalists and market commentators. Whereas previously there had been a tendency for the discussion
to center on whether or not the Bank would follow
the Fed’s movements, the focus has shifted to what
is appropriate for the Canadian economy in its current and prospective economic circumstances.3

ARE THERE LIMITS TO WHAT SHOULD
BE MADE PUBLIC?
On the surface, this seems like an odd question.
Can there ever be too much of a good thing? But
as one reflects on the nature of transparency and
communications, it becomes clear that certain steps
in the direction of increased transparency could
actually be counterproductive. Let me begin with an
admittedly extreme example, turn to the principle
at issue, and then return to some examples.
Should the policymaking body’s deliberations
before its decisions be televised or Web-cast? Even
strong proponents of transparency come to the
conclusion that such an initiative could be harmful
for a number of reasons. First, policymakers could
be inhibited from taking different points of view in
the course of the discussion (i.e., playing devil’s
advocate). Second, it would make it more difficult
for them to change their minds on the appropriate
decision for the policy interest rate as the debate
progressed and as different perspectives on the
issue were discussed, since they would appear to
be “waffling” on the decision. Third, making the
deliberations public would likely lead to participants
making more formal presentations (with perhaps a
more entrenched initial position), replacing the
more informal discussion in which the dynamic
of the debate plays an important role in arriving at
a decision.4 In short, the view that opening the
deliberations to the public could well lead to a
deterioration in the quality of the decisionmaking
process has acted to prevent such a development
even in those central banks that are the most enthusiastic supporters of transparency. (See Blinder et al.,
2001, for a detailed discussion of this issue.)
Let me now examine the question of the limits
of transparency from a broader perspective, drawing
on an interesting and insightful paper by Bernhard
Winkler (2000) of the European Central Bank (ECB).
Winkler argues (p. 18) that “in a world where—
unlike in most standard economic models—cognitive
limits matter, more information and greater detail
does not by itself translate into greater transparency
and better understanding, nor does it necessarily
lead to more efficient decision-making.” Winkler
notes that there are several aspects of transparency,
which may possibly conflict with each other. These
include (i) openness, or the amount and precision
of information provided; (ii) clarity in the presentation and interpretation of information; (iii) common
understanding by the sender and receiver of information; and (iv) honesty, or the correspondence of
the internal framework of analysis with the presentation used for external communication.
As an example of potential conflict, we can compare openness and clarity. Central bank projections
typically produce time paths for dozens or even
hundreds of economic variables. Yet most central
banks communicate to the public their quantitative
outlook only for the broadest economic measures,
such as output and inflation.5 This reflects the view
that increased openness, in the sense of presenting
enormous amounts of detail, would reduce the
clarity of the central bank’s message about future
developments rather than increase it.
In passing, I would note that one issue that all
central banks are struggling with is how to characterize and communicate the risks around their baseline case forecast. Some, such as the Bank of England
and the Riksbank, present a form of probability
distribution that is intended to indicate the variance
around the central forecast. Others, such as the
Federal Reserve and the Bank of Canada, are more
qualitative in their presentation of the balance of
risks. But I do not think that any central bank has
been completely successful thus far in communicating the nature of the risks surrounding its outlook
for the economy and inflation.
3. See Bank of Canada (2000) for a discussion of the benefits anticipated from the movement to fixed announcement dates.

4. Presentations at FOMC meetings by Board members and Reserve Bank presidents appear to have become somewhat more formal since 1993. In the fall of that year, the FOMC was made aware that the transcripts of the tape recordings of the meetings since March 1976 had been retained. The FOMC subsequently decided that lightly edited verbatim transcripts of the meetings would be released with a five-year delay.

5. There is often considerable qualitative discussion of some of the components of these broad measures, but most central banks do not give precise estimates of their projections of these components.

The notion of "honesty" in the correspondence of the internal framework of analysis and external communications also gives rise to some interesting
issues. Economists, whether in universities or
markets, would like central banks to be more explicit
in setting out their reaction function to various
contingencies. But central banks, while they spend
a lot of time considering the appropriate response
to various shocks, do not have an explicit, quantitative pre-agreed reaction function for every type of
shock. To quote John Vickers (1998, pp. 370-71):
In situations of any complexity, there is a
tension between a complete contract (i.e. one
that specifies what is to happen in every
eventuality) and having a good contract
(i.e. one that entails good decisions in every
eventuality). If the same is true for policy
reaction functions, then residual discretion
is sensible and so residual uncertainty is
inevitable.
One reason that it is not possible to develop a
simple reaction function is that there is no model
of the economy that is universally accepted.6 With
model uncertainty, there cannot be a simple reaction
function, especially when different weights are
attached to the projections from the various models
in different circumstances. In this context I would
note that one of the perceived advantages of the
Taylor rule is that it is robust across models. But
while the Taylor rule can be useful as an indicator
of policy in many circumstances, it is not a reaction
function that sets out a monetary policy response
to all contingencies. A second reason that there
cannot be a simple reaction function is that the
information used in coming to a decision involves
more variables than can be incorporated in any
such function. For example, in the early 1990s, the
reluctance of commercial banks to extend loans
(Chairman Greenspan’s “headwinds”) played an
important role in the Fed’s conduct of policy. More
recently, the increased rate of growth of productivity
operated through a number of channels to affect
economic behavior and thereby to influence the
Fed’s decisionmaking. And, currently, the confidence
of firms and households in light of the terrorist
attacks of September 11 is playing an important role.
While a simple relationship such as a Taylor rule
can be a helpful guide to policymaking, it cannot
incorporate all the factors that feed into the decisionmaking process (especially in an open economy).
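As a concrete illustration of the distinction Freedman draws, a Taylor-rule benchmark can be computed in one line, which is exactly why it cannot stand in for a full reaction function. The sketch below uses Taylor's (1993) coefficients; the 2 percent neutral real rate and 2 percent inflation objective are assumptions for illustration, not any central bank's actual parameters.

    # Illustrative benchmark only; not the Bank of Canada's or the Fed's rule.
    def taylor_benchmark(inflation, output_gap, r_star=2.0, pi_star=2.0):
        """Benchmark nominal policy rate, in percent, from a Taylor (1993)-style rule."""
        return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

    # With inflation at 2.5 percent and an output gap of -1 percent, the
    # benchmark is 4.25 percent. Credit "headwinds," a productivity shift, or a
    # confidence shock would leave this number unchanged, which is the point.
    print(taylor_benchmark(inflation=2.5, output_gap=-1.0))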

CONCLUDING REMARKS
Central banks have come a long way in recent
years in the direction of increased transparency.
And this has been very helpful in improving the
effectiveness of monetary policy and enhancing
the accountability of the central bank.
But there continue to be interesting challenges
as to future directions in which central banks should
go. How much detail should be included in the outlook? Whose forecast is being released—that of the
staff (as in the case of the ECB) or that of the policymaking body (as in the case of the Bank of England)?
What convention, if any, should be used for the
interest rate path on which the outlook is based?7
How does the central bank communicate most
effectively that its outlook is conditional on current
information and that the outlook will change as
new information is received? How can it best communicate the risks and uncertainties surrounding
its outlook?
In my view, the central bank’s approach to
answering each of these questions should be based
on an analysis of what would be most effective in
enhancing the understanding of the public, the
markets, and the media. This may be different in
different countries. And it may change over time as
the sophistication of the targeted audiences
changes.

6. See, for example, Bank of England (1999) for the various models that the Bank uses in its policy formulation.

7. See Kohn (2000) and Svensson (2001).

REFERENCES
Bank of Canada. “Bank of Canada to Adopt Fixed Dates for
Announcing Bank Rate Changes.” Ottawa, 19 September
2000.
Bank of England. Economic Models at the Bank of England.
London: Bank of England, 1999.
Blinder, Alan; Goodhart, Charles; Hildebrand, Philipp;
Lipton, David and Wyplosz, Charles. How Do Central
Banks Talk? Geneva Report on the World Economy No. 3.
London: Centre for Economic Policy Research, October
2001.
Engert, Walter and Selody, Jack. “Uncertainty and Multiple
Paradigms of the Transmission Mechanism.” Working
Paper 98-7, Bank of Canada, Ottawa, 1998.
Jenkins, Paul. “Communicating Canadian Monetary Policy:
Towards Greater Transparency.” Remarks to the Ottawa
Economics Association, Ottawa, Canada, 22 May 2001.
Kohn, Donald L. "The Kohn Report on MPC Procedures." Bank of England Quarterly Bulletin, Spring 2001, pp. 35-49.
Laidler, David. "The Quantity of Money and Monetary Policy." Working Paper 99-5, Bank of Canada, Ottawa, 1999.
Longworth, David and Freedman, Charles. "Models, Projections and the Conduct of Policy at the Bank of Canada." Prepared for the conference Stabilization and Monetary Policy: The International Experience, Bank of Mexico, 14-15 November 2000 (forthcoming).
Poloz, Stephen; Rose, David and Tetlow, Robert. "The Bank of Canada's New Quarterly Projection Model (QPM): An Introduction." Bank of Canada Review, Autumn 1994, pp. 23-38.
Svensson, Lars E.O. "Independent Review of the Operation of Monetary Policy in New Zealand: Executive Summary of the Report to the Minister of Finance." Reserve Bank of New Zealand Bulletin, March 2001, pp. 4-11.
Thiessen, Gordon G. "Uncertainty and the Transmission of Monetary Policy in Canada." Bank of Canada Review, Summer 1995, pp. 41-58.
Vickers, John. "Inflation Targeting in Practice: The UK Experience." Bank of England Quarterly Bulletin, November 1998, pp. 368-75.
Winkler, Bernhard. "Which Kind of Transparency? On the Need for Clarity in Monetary Policy-Making." Working Paper No. 26, European Central Bank, 2000.

The Value of Transparency in
Conducting Monetary Policy:
The Czech Experience
Václav Klaus

Václav Klaus, former finance minister and prime minister, is the president of the Chamber of Deputies of the Parliament of the Czech Republic.
My remarks reflect more my political experience
during the last decade, after the fall of communism
in my country and elsewhere, than any well-defined
theoretical position.
As I see it, transparency does not represent the
main and most important issue of monetary policy.
Transparency itself is undoubtedly a positive feature,
but to concentrate on transparency without taking
into consideration other things means missing, if
not hiding, something that is more relevant.
In my understanding, the more relevant issues
or the prior issues are the quality of the monetary
regime and the way in which monetary policy
reflects the preferences of society. An error in either
of them is very costly.
Let me start with the second issue, with the
problem of the independence of the central bank.
I must admit I have a problem with it—as someone who, as minister of finance, introduced it into
my country. I can probably afford to make such a
“politically incorrect” statement here because I have
some justification for it. In the communist era, we
were—among other things—dreaming about rational
monetary policy and we considered the independence of a central bank to be a necessary precondition for it. Now, after 12 years of its absolute
independence in my country, I see this issue in a
more complicated way. I see it as a principal-agent
problem. There are many arguments that the central
bank should be just an agency that operates to meet
policy objectives set by society or its legal representatives. In accordance with this view, the independence of a central bank should be limited to the
independence in choosing instruments, not policy
objectives. This is not, however, the case in my country. Transparency is, therefore, not the main issue.
Looking at the title of this discussion, we are
supposed to speak not about monetary regimes but
about monetary policy. Nevertheless, it seems to me
that there is a difference in transparency between
the regime of discretionary monetary policy and
the regime of policy of rules. Discretionary policy
cannot be—perhaps even should not be—transparent
(as I understand transparency).
My personal experience with pegged exchange
rate policy, which was considered to be the most
suitable policy for transition economies 10 years
ago, is not a good one. I was very much afraid of
accepting it at the end of 1990, but at that time the
International Monetary Fund did not listen to any
arguments in this respect. This policy, however, in
the first half of the 1990s brought about (or at least
made possible) better economic fundamentals in
my country than in other transition economies. It

FEDERAL RESERVE BANK OF ST. LOUIS

Panel Discussion

was, however, undermined by the premature introduction of full convertibility of the Czech crown
and by the resulting (or perhaps parallel, but independent) large inflow of foreign capital into the
country. This coincidence of events led, of course,
to the excessive growth of the money supply.
Our central bank tried not to be passive and
started to interfere with the money supply, which
was an expected error. The combination of two
different rules (or regimes) whether in a transparent
or nontransparent way—pegged exchange rate
and monetary targeting—had very unpleasant
consequences.
To return to our topic, we can say that the policy
of pegged exchange rates was transparent, but in
the world of global massive movements of capital
it contained inherent risks. When investors lose trust
in the currency and start speculative attacks against
it, the pegging must be abandoned, which is not
costless. The transition from one type of monetary
rule to another is connected with instability, which is
especially true for a small, open, transition economy
with weak and shallow markets.
Our country finally moved to inflation targeting
which is, in a favorable interpretation, a more complex policy regime than a simple monetary targeting or pegged exchange rate regime. In another interpretation, it is a resignation from accountable policy.
It requires using the whole mix of central bank
instruments, but no one knows in advance which
of them will be used. In this respect, inflation targeting is not transparent and our experience forces
me to argue that its results (at least its short-term
results) are very dubious.
The Czech experience demonstrates that pegged
exchange rate policy is suitable before deregulation
of capital flows, whereas, after it, floating is inevitable. It shows as well the problems of inflation targeting in a transition economy. Our central bank did
not have sufficient experience with monetary policy
and, in addition, chose an extremely low inflation
target which slowed down the economy too much.
After that we could not get out of deflation.
Inflation targeting can have meaning only on
condition of hitting the inflation target, which in
our case was not done. The miss was enormous; instead of 6 percent inflation we
got deflation. Somebody could argue that it was a
mistake, but I am not so sure.
To conclude, transparency has a meaning and
plays a positive role only when all other preconditions of monetary policy are in place.

Transparency in the Practice
of Monetary Policy

J. Alfred Broaddus Jr.

J. Alfred Broaddus Jr. is the president of the Federal Reserve Bank of Richmond. The author thanks his colleague Marvin Goodfriend for assistance in preparing these remarks. The views expressed here are the author's and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

This has been a very useful conference in my view, and I am honored by this opportunity to be a part of it. As some of you may know, I was the second choice for this slot, but that doesn't bother me at all because the first choice was Don Brash, the Governor of the Reserve Bank of New Zealand and a pathbreaker in bringing both transparency and accountability to central banking in practice. I won't be able to fill Don's shoes completely, but I have a strong interest in this topic, and I am very happy that Bill and Dan saw fit to give me the opportunity to share some thoughts with this distinguished group.

Actually, it is hard to imagine that anyone interested in improving the conduct of monetary policy would not be interested in this topic. There is a growing consensus among monetary economists at this point that the impact of monetary policy on expenditure is transmitted primarily through the effects of policy actions on expectations regarding the future path of short-term interest rates rather than the current level of the overnight rate (see Woodford, 2001, p. 17). Further, the more financial markets know about the reasons for a central bank's current policy actions and its longer-run policy intentions, the more likely it is that market reactions to policy actions will reinforce these actions and increase the effectiveness of stabilization policy. It follows that central banks should be highly transparent regarding both their long-term policy objectives and the shorter-term tactical actions they take with policy instruments.

Against this background, it seems to me that the Fed, along with other central banks, has made considerable progress in increasing transparency
in recent years. When I first joined the Fed back in
1970, to the extent that anyone thought explicitly
about transparency issues at all, the idea seemed
to be that limited transparency—or even no transparency—was best. Central banks in industrial
democracies were thought to work most effectively
behind the scenes, away from the glare of public
scrutiny, at least in part because they could then
quietly take appropriate actions that might be
politically unpopular or, more broadly, difficult to
explain to a public not well versed in the intricacies of finance (see Goodfriend, 1986). There was
also a belief in some quarters that central banks
could enhance the effects of certain policy actions—
most notably foreign exchange market intervention
operations—if they kept market participants uncertain about their intentions.
Attitudes toward transparency appeared to
change in the 1980s, partly reflecting progress made
by economists in understanding the monetary policy
transmission mechanism, and probably partly
because of public demand, particularly in the United
States, for greater openness in government and
public policy in general. (As you may recall, the
most widely read popular book about the Fed and
Fed policy in the 1980s was somewhat derisively
titled Secrets of the Temple.) Further, in the early
1980s Chairman Volcker publicly took responsibility
for reducing inflation from its then high level, and
subsequently took strong and temporarily painful
actions to accomplish the reduction. Some public
explanation of the need for these steps was required,
and this need probably facilitated the transition to
viewing transparency in a more favorable light. In
any case, given the normal resistance to change in
bureaucratic organizations, I believe the Fed has
made remarkable progress over the last decade or
so in opening up its conduct of monetary policy to
market and public scrutiny.
Since the Fed is now quite open regarding many
important aspects of its policy strategy and operations, and in view of the strong performance of the
U.S. economy in recent years, at least up until the
last several quarters, one might reasonably ask
whether still greater transparency is necessary or
even desirable in U.S. monetary policy. I think it is,
and I will try to make this case in the next few minutes. Let me comment briefly on four points: (i) the
transparency of our long-term inflation objective,
(ii) what I'm going to refer to as the "intermediate-term transparency problem," (iii) the transparency
of our policy directive including its “tilt,” and (iv)
the role of testimony, speeches, and other public
statements by Fed officials in providing transparency.

TRANSPARENCY OF THE LONG-TERM
INFLATION OBJECTIVE
Probably the most important thing about Fed
monetary policy that the public wishes to know
and needs to know with some precision is our long-term objective for inflation. Longer-term inflation
expectations are obviously critical to households
and businesses in committing to long-term investments, home purchases, insurance contracts, and
wage and benefit agreements. Conversely, the Fed
needs the public to understand and trust its long-term commitment to low inflation to achieve maximum benefit from this long-term strategy.
How to convey this objective credibly to the
markets and the public has been a major focus of
our policy research at the Richmond Fed for a long
time. For many years I’ve personally been convinced
that controlling inflation should be the Fed’s overriding objective, that this objective should be explicit,
and that it should be supported by a Congressional
mandate. At one level, abstracting, for example, from
political obstacles, this seems obvious. We know
that the Fed has the ability to determine the long-run
inflation rate with monetary policy, and theoretical
analysis and all of our practical experience suggests
we should use that power in the public interest to
maintain low and stable inflation over time.
An explicit long-term inflation objective supported by a Congressional mandate would be a
substantially beneficial step, in my view, even if it
were limited to a verbal statement along the lines
of the language in the proposed Neal Amendment
to the Federal Reserve Act (see Black, 1990, and
Greenspan, 1990). Quantifying the objective in
terms of an explicit numerical rate (say, 2 percent
per annum using the core personal consumption
expenditures [PCE] inflation index) would make
the objective even more transparent and probably
more effective.
Committing to an explicit inflation objective
would achieve at least three things. First, it would
help anchor longer-term inflation expectations and
therefore facilitate the longer-term transactions I
noted earlier. Second, it would help prevent inflation
scares in financial markets, which would allow the
Fed to act more aggressively in response to downside risks in the economy with less concern that
rising long-term interest rates might neutralize the
effect of the action.
Third, and most importantly, an explicit inflation
objective would discipline the Fed to explain and
justify short-run actions designed to stabilize output and employment against our commitment to
protect the purchasing power of the currency over
the long run. An explicit objective would force such
explanations and justifications to be more sharply
focused than in the current regime without such
an objective. Routine, clear explanations of shortterm actions would build confidence in the Fed’s
commitment to price stability and over time help
reinforce credibility for low inflation. If the explanations were made in testimony before Congress,
supplemented perhaps by a written inflation report
along the lines of the Bank of England model,
Congress would be positioned to enforce an accountability for monetary policy that arguably is now
weaker in the United States than in the United
Kingdom and the European Monetary Union.
One final point here: The Fed’s long-term commitment to price stability is now largely embodied
in our current Chairman’s demonstrated commitment to this objective, rather than being institutionally grounded in an explicit objective. It is therefore
inherently tenuous, since its continuance will
depend on the preferences of future chairmen and
their susceptibility to political pressure to pursue
other goals.
For all these reasons, it seems clear to me that
the increased transparency that would be provided
by an explicit long-term inflation objective would
increase the probability that we will attain our goal
over time. Some argue strongly for a dual objective
that refers explicitly to output or employment as
well as inflation. But both theory and experience
indicate that the Fed cannot control real variables
directly with monetary policy, and in my view there
are reasonable grounds to presume that the Fed will
optimize its contribution to the economy’s overall
performance by maintaining credibility for low inflation (see Goodfriend and King, 2001). A unitary goal
focused on low inflation would strengthen credibility by making the Fed’s commitment to this objective
definite and unambiguous.
It is one thing to advocate an explicit inflation
objective; it is another to actually put one in place.
I doubt seriously that an explicit objective set and
announced unilaterally by the Fed would be credible. Any explicit inflation objective would need to
be accepted by the government as a whole through
legislation or some other formal agreement, as such
objectives are in countries that employ them. With its
public standing high, the Fed seems well positioned
currently to make the case for such a mandate.

INTERMEDIATE-TERM ISSUES
Even if the Fed obtains a price stability mandate,
transparency issues are still likely to arise in practice—specifically, when current inflation or near-term inflation projections deviate from the long-term
objective. For example, inflation may rise above its
objective at a time when real output is below potential and unemployment is rising. It would be difficult
or impossible in this situation for the Fed to ignore
the weakness in the real economy and act aggressively to bring inflation quickly back to target.
Some have argued that precisely this possibility
makes an explicit inflation objective for the United
States impractical. I don’t find this objection particularly compelling. Especially if the Fed has previously established credibility, inflation may remain
above its objective for some time without undue
damage to the Fed’s credibility if the Fed is transparent regarding its medium-term strategy for bringing
inflation back to path. Even with established credibility, explaining this strategy clearly and convincingly to market participants and the general public
would be challenging. Strategies and the accompanying explanations will have to be tailored to each
case. In particular, the Fed may anticipate bringing
inflation back to the objective more quickly in some
cases than in others. Consequently, it may be useful
for the Fed to announce intermediate-term inflation
forecasts to assist the public in making financial
and business decisions during the transition back
to the long-term objective.
Beyond this, even if inflation is stable at or near
its long-term objective, unanticipated shocks may
push employment and output growth temporarily
away from their sustainable noninflationary rates.
Here, too, Fed transparency about its intentions will
help the public gauge how production, employment,
and interest rates will evolve in the medium term
as the economy adjusts to the shock. Transparency
is in the Fed’s interest as well since it can help build
confidence in the following: that, first, monetary
policy can be effective in dealing with temporary
departures of real activity from its long-term potential and, second, that the Fed has the competence
to exploit this capability. More generally, I believe
that the Fed’s expertise regarding the functioning
of the U.S. economy—while far from perfect—is
now of high enough quality that transparency of
our thinking about the economy’s medium-term
prospects can build public confidence and trust in
periods of economic stress. To be sure, actual developments may deviate from our announced expectations in particular situations, but trust can be
maintained if the Fed provides reasonable explanations for the deviations.

TRANSPARENCY OF THE FEDERAL
FUNDS RATE TARGET AND THE
DIRECTIVE “TILT”
Having dealt with longer-term and intermediate-term issues, let me now make a few comments
about transparency as it relates to short-term policy
tactics: specifically, transparency regarding the
current federal funds rate target, the “tilt” of the
directive language, and the statement released to
the press after each Federal Open Market Committee
(FOMC) meeting. It is in this area that the greatest
progress has been made in increasing transparency
over the last decade. Since February 1994, the funds
rate target set at a particular FOMC meeting (previously released only after the next FOMC meeting)
has been announced shortly after adjournment of
the meeting where it is set. So markets now know
the current target. And the Committee has released
the tilt (or absence of a tilt) in the directive language
along with the current funds rate target since its
meeting on May 18, 1999. Previously, it too had
been released only after the next FOMC meeting.1
This increased instrument transparency, in my
view, is all to the good. I believe the immediate
release of the tilt language is especially useful. Again,
the effect of monetary policy is transmitted to the
economy not only through the current level of the
funds rate target but also through market expectations about the future level of the target, which are
reflected in the short-term yield curve. Market participants are going to form these expectations in
any event. By announcing the tilt immediately, the
FOMC shares its best current estimate of emerging
economic conditions that might affect the direction
of any near- or intermediate-term change in the
funds rate target, which should increase the efficiency with which markets form their expectations,
help prepare markets and the public for changes
in the target, and reduce short-term disruptions
caused by leaks. In particular, since markets know
the current tilt, they are better positioned to interpret
the likely policy implications of incoming current
economic data. For example, the release of strong
data after disclosure of an upside tilt in the directive
language should increase the probability that long-term rates will be bid upward in response. Consequently, immediate disclosure of the tilt should
enable long-term interest rate adjustments to perform their stabilizing role in the economy more
effectively.
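To make the expectations channel described above concrete, one standard approximation is the expectations hypothesis of the term structure; the notation below is an illustrative sketch and is not part of the original remarks. An n-period interest rate is, roughly, the average of expected future one-period rates plus a term premium:

\[
i_t^{(n)} \;\approx\; \frac{1}{n}\sum_{j=0}^{n-1} E_t\!\left[i_{t+j}^{(1)}\right] \;+\; \phi^{(n)},
\]

where \(i_t^{(n)}\) is the n-period rate at time t, \(E_t[i_{t+j}^{(1)}]\) is the expected one-period (funds) rate j periods ahead, and \(\phi^{(n)}\) is a term premium. On this reading, an announced tilt shifts the expected path \(E_t[i_{t+j}^{(1)}]\) directly, so longer rates can begin their stabilizing adjustment before the funds rate target itself changes.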
While, again, considerable progress has been
made in increasing the transparency of the Fed’s
short-term instrument settings, and its short-term
expectations regarding at least the direction of
future settings, in my view there is room for further
progress. In particular, there may be different views
about the extent to which a tilt in the directive in
one direction or the other commits or obliges the
Fed to a future funds rate change. To the degree
that markets interpret a tilt as committing the Fed
to future action, failure to take action may surprise
or “whipsaw” markets. It should be possible for the
Fed to mitigate this problem by emphasizing publicly
that a tilt only implies a greater likelihood that any
near- or intermediate-term change in the funds rate
will be in a particular direction, and is not a commitment to any action. It might seem tempting to
consider eliminating the tilt in the formulation of
short-term policy to remove any confusion it may
produce. But such a reduction in transparency would
deprive the FOMC of the benefits of announcing
the tilt noted above. Beyond those benefits, abandoning the tilt would also deprive the Committee of a useful gauge of the strength of its internal consensus on policy at any point in time and of a valuable supplementary tool for reaching agreement on a funds rate target when views about the appropriate level diverge significantly.
Finally, it is important to recognize that the
language of the press statement announcing the
funds rate target and any tilt after each meeting also
influences market expectations regarding future
policy actions. This language is widely reported and
interpreted currently in media coverage of FOMC
meetings. In essence, the language in the statement, like the tilt language in the directive, is viewed by market participants as an additional short-term policy instrument.

1. Initially, the FOMC tilt statement referred to the likelihood of a future increase or decrease in the targeted federal funds rate. In January 2000, the Committee announced that it had adopted new language for this portion of the statement. The new language describes the FOMC’s assessment of the “balance of risks” with regard to heightened inflationary pressures or economic weakness in the foreseeable future, without reference to future policy actions. The objective of the change was to avoid potential confusion regarding the implications of the tilt announcement for future policy. In practice, however, financial market participants continue to draw inferences from the announcement regarding the likelihood of possible future policy actions.

TESTIMONY AND SPEECHES
The role of the Fed’s explicit policy announcements in shaping market expectations of future
policy actions is obviously important, but as anyone
even slightly interested in Fed policy is well aware,
public statements by individual FOMC members
(including Reserve Bank presidents who are not
currently voting Committee members) are at times
especially important. This is particularly so in today’s
environment where media coverage of these utterances by cable television financial news channels,
instant e-mail transmission of market analysis, and
the like is much more extensive than even just a
few years ago. Obviously, the Fed Chairman’s
remarks in congressional testimony (including
answers to questions as well as prepared testimony),
his speeches, and his interviews are followed more
intensely than the comments of other FOMC participants, since the Chairman is clearly the most influential Committee member and only he speaks for
the Committee as a whole. At times, however, comments of other participants can affect market expectations, at least in the short run: for example, if a
comment is the Fed’s first public reaction to a new
economic report (particularly if the content of the
report was unanticipated by markets) or if the comment comes at a time when markets are especially
uncertain about near-term policy prospects. Consequently, we also receive our share of media attention.
Bill Poole and I and, I expect, all of our colleagues
at other Reserve Banks can tell stories about being
covered by several reporters even when making
speeches in fairly remote parts of our respective
Districts.
Some argue that this form of Fed transparency
may be counterproductive, at least at times, if the
views expressed in these comments seem inconsistent—particularly if they appear to conflict with
a recent FOMC decision or a public statement by the
Chairman. On occasion I have personally received
criticism and complaints from market professionals
and others when they have found my statements at
variance with other Fed statements or confusing in
some other way, and I will acknowledge that on a
few occasions my remarks may have briefly complicated the formation of market expectations.
Over time, however, speeches and other public statements by individual FOMC participants provide
markets and the public with a more robust and
complete understanding of thinking inside the Fed
about current economic and financial conditions
and near-term prospects than that provided solely
by the policy announcements I just discussed. Also,
it is important to recognize that market analysts
are adept at filtering and appropriately weighting
press reports of individual FOMC participant remarks
in the context of the broad range of Fed public statements from all sources. In short, I believe a convincing case can be made that the public remarks of
individual Reserve Bank presidents and other FOMC
participants increase the efficiency with which markets form short-term policy expectations.
I would offer one other—admittedly speculative—note on this point. It is obvious, again, that the
Fed Chairman speaks with by far the most influential
voice among FOMC participants. It might appear
superficially that comments by other participants
that seem to be “off message” might create confusion about the Fed’s intentions and undermine the
force of the Chairman’s statements. As I just suggested, there might be a little of this from time to
time, but I doubt these instances are of much significance. Again, markets are well aware of the much
greater weight of the Chairman’s statements and
discount the remarks of other FOMC participants
accordingly. Perhaps more importantly, public commentary by other participants reinforces the Chairman’s credibility in the eyes of informed observers
of Fed policy, since it demonstrates that the Chairman leads, builds consensus among, and speaks
for a thoughtful, competent group of policy professionals who naturally have diverse views on specific
policy choices. If the public believed the Chairman
was conducting policy unilaterally, he or she would
be more vulnerable to an abrupt loss of public confidence. This might not be a risk for the current
Chairman, who justifiably enjoys exceptionally
high public respect, but it could be a problem for a
future Chairman.

CONCLUSION
Again, I have enjoyed participating in this panel
discussion. This conference has addressed what is
clearly a crucial topic in understanding how monetary policy affects the economy and how it might be
improved. The subject deserves continued research.
Thanks to this conference, I am confident it will
get it.
