
Vol. 2, No. 6
JUNE 2007

EconomicLetter
Insights from the Federal Reserve Bank of Dallas

Measuring the Taylor Rule’s Performance
by Adriana Z. Fernandez and Alex Nikolsko-Rzhevskyy
In the Full Employment and Balanced Growth Act of 1978, Congress gave the Federal Reserve two goals: Keep inflation low and stable while promoting economic growth.1 Financial markets, businesses, economic analysts and others expend considerable effort trying to fathom how the Fed attempts to meet its dual mandate.

One tool for understanding Fed policy is the Taylor rule, with its many variations. The brainchild of Stanford University’s John B. Taylor, it relates output and inflation to the historical behavior of the federal funds rate — the Fed’s most important policy lever — to show the general way the central bank responds to changing economic circumstances. The Taylor rule recognizes the Fed’s two monetary policy goals, with rates rising to control inflation when it gets too high and falling to stimulate output and employment when the economy turns sluggish.
The Fed doesn’t explicitly follow the Taylor rule or any other formula in making decisions. Instead, the Federal Open Market Committee studies a wide range of information to determine the best course of action. Nonetheless, the rule has proven a reasonable guide to how the federal funds rate adjusts to economic developments.
The Fed has taken steps in recent
years to increase the transparency of
its decisionmaking, hoping that clearer
communication will improve the
public’s comprehension of its actions,
thereby enhancing the economy’s
performance. In a similar way, the
Taylor rule has contributed to better
understanding of monetary policy by
providing a general guide to how the
Fed operates.
What makes a good Taylor rule?
We addressed this issue by using a
recently developed econometric technique to determine how the original
rule and subsequent variations perform using different measures of inflation, output and unemployment. We
found that the rule remains relevant
today, despite the changes wrought by
globalization, financial market innovations and technological advances.

Applying the Taylor Rule
A policy rule is just a predictable pattern of behavior, a characterization of how policy either does, or should, respond to changes in the economy.

The Taylor rule describes how a central bank tries to keep the economy in equilibrium — with inflation at the desired level and output at sustainable potential. If output is below the long-run trend, the rule calls for the Fed to cut interest rates. Cheaper credit would increase investment and purchases of consumer durables, bolstering output and eventually bringing the economy back to equilibrium. Similarly, if inflation rises beyond the desired level, the rule calls for an increase in interest rates, which would reduce investment and purchases of consumer durables. As aggregate demand weakens, inflation would fall, eventually returning the economy to equilibrium.

The Taylor rule operates by focusing on gaps between desired and actual levels of inflation and output. Weights measure the federal funds rate’s sensitivity to changes in each of them (see box).

A Formal Description of the Taylor Rule

The Taylor rule uses inflation and gross domestic product to predict changes in the federal funds rate. It’s typically expressed as

it = r* + pt + d(pt – p*) + ω(yt – y*t),

where it is the federal funds rate at time t, r* is the equilibrium real interest rate (usually treated as a constant 2 percent), pt is the inflation rate, (pt – p*) is the deviation of the inflation rate from its target level p* (also usually 2 percent), and (yt – y*t) is the deviation of output (yt) from its full-employment level, y*t.

The weights d and ω indicate the sensitivity of federal funds rate changes to each of the two gaps — inflation and output.

The Taylor rule predicts that central banks will increase interest rates when inflation rises above the target level or output moves above its full-employment level, and vice versa.

The short-term interest rate it is the sum of three components:

• Nominal interest rate component, r* + pt: The sum of the equilibrium real interest rate and the current inflation rate, this component defines the level at which the federal funds rate would settle were inflation stable at its target rate and output maintaining its full-employment level.

• Inflation gap, d(pt – p*): When inflation rises above its target level, the Fed raises the funds rate by a multiple of the difference. This action slows money growth, which reduces future inflation.

• Output gap, ω(yt – y*t): When output falls short of its full-employment potential, the Fed lowers the funds rate. This action stimulates economic growth, raising output toward its potential.

NOTE: Both yt and y*t are typically converted to natural logs, so that (yt – y*t) represents the percentage by which output deviates from its full-employment level at time t.
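To make the arithmetic concrete, here is a minimal sketch of the rule as written above, in Python. The 2 percent values for r* and p* follow the box; the weights of 0.5 on each gap are the values Taylor used in his original 1993 formulation and are only illustrative here.

```python
def taylor_rule_rate(inflation, output_gap, r_star=2.0, pi_star=2.0, d=0.5, omega=0.5):
    """Funds rate prescribed by it = r* + pt + d*(pt - p*) + omega*(yt - y*t).

    inflation  -- current inflation rate pt, in percent
    output_gap -- percent deviation of output from its full-employment level
    r_star     -- equilibrium real interest rate (usually treated as 2 percent)
    pi_star    -- inflation target (also usually 2 percent)
    d, omega   -- weights on the inflation and output gaps
    """
    return r_star + inflation + d * (inflation - pi_star) + omega * output_gap

# Example: inflation at 3 percent with output 1 percent below potential gives
# 2 + 3 + 0.5*(3 - 2) + 0.5*(-1) = 5.0 percent.
print(taylor_rule_rate(inflation=3.0, output_gap=-1.0))
```

Raising the weight d makes the prescribed rate respond more aggressively to the inflation gap, which is the sense in which the weights measure sensitivity.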

Since Taylor’s initial formulation in 1993, economists have modified the rule in a number of ways, while preserving its essence. The original Taylor rule is backward looking in that it calls for federal funds rate changes to reflect past changes in inflation and output. In recent years, studies have found that the Fed also responds to expected inflation and output, so newer, forward-looking versions of the Taylor rule have emerged.2
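For illustration, a forward-looking variant can be sketched by evaluating the same formula on an inflation forecast rather than realized inflation. The functional form below is an assumption for exposition, not the exact specification of any of the forward-looking models discussed later.

```python
def forward_looking_rate(expected_inflation, output_gap,
                         r_star=2.0, pi_star=2.0, d=0.5, omega=0.5):
    """Taylor-type rule evaluated on an inflation forecast (expected_inflation)
    instead of realized inflation; everything else matches the backward-looking
    sketch above."""
    return (r_star + expected_inflation
            + d * (expected_inflation - pi_star)
            + omega * output_gap)

# A 2.5 percent inflation forecast with a zero output gap:
# 2 + 2.5 + 0.5*(2.5 - 2) + 0 = 4.75 percent.
print(forward_looking_rate(expected_inflation=2.5, output_gap=0.0))
```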
In addition to being backward
looking, the original Taylor rule
implies the Fed immediately adjusts
interest rates to target levels, an
unwarranted assumption. Gradualism
allows the Fed to change rates in a
series of small steps in the same direction, a process called interest rate
smoothing. Some Taylor rule models
account for this gradualism by including lagged values of the federal funds
rate.3
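The lagged-rate idea can be sketched as a partial-adjustment step: each quarter the funds rate closes only part of the gap between where it was and where the rule says it should be. The 0.8 smoothing parameter and the 5.25-to-5.75 percent example come from note 3; the simple weighted-average form below is an illustrative assumption.

```python
def smoothed_rate(previous_rate, target_rate, rho=0.8):
    """One quarter of interest rate smoothing: move only a fraction (1 - rho)
    of the way from the previous rate toward the rule's target."""
    return rho * previous_rate + (1 - rho) * target_rate

# Starting at 5.25 percent with a 5.75 percent target, the first step is
# 0.8*5.25 + 0.2*5.75 = 5.35 percent, as in note 3; later steps keep
# closing the gap gradually.
rate = 5.25
for _ in range(3):
    rate = smoothed_rate(rate, 5.75)
    print(round(rate, 3))   # 5.35, 5.43, 5.494
```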
Gradualism provides a way to
exercise caution in policymaking
because it allows central banks to
assess their tactics and make necessary adjustments. By contrast, wholesale changes in the federal funds
rate—an approach Fed Chairman Ben
S. Bernanke once described as “cold
turkey”—would only add to analytical
and forecasting uncertainties.4
Even after incorporating gradualism into the Taylor rule, decisions
remain about what inflation and output data to use.
Measurement isn’t always straightforward. Different price gauges, for example, sometimes send different signals about how much inflation is heating up. In the second half of 2000 and early 2001, the Consumer Price Index (CPI) ran 3 to 3.5 percent, while the Personal Consumption Expenditures index fell from 3 percent to 2 percent.
A similar imprecision plagues
measures of slack — the gap between
the economy’s actual and potential
output. Quarterly GDP figures are routinely revised, sometimes substantially,
and potential output estimates depend
on occasionally unreliable calculations
about the capital stock, labor supply
and productivity.5
Ambiguity can also be found in
alternative slack measures, such as
the non-accelerating inflation rate of
unemployment, or NAIRU. By one
estimate, we can be 95 percent sure
the NAIRU was between 5.1 and
7.7 percent in 1990, a wide range

that suggests the measure should be
used with caution.6 Economist Robert
M. Solow calculated that in 1995, 1
percentage point of unemployment
corresponded to about 1.25 million
jobs, or about 2 percent of GDP. Small
measurement errors can have serious
policy implications, he concluded.7
Data revisions present a particular
problem for the Taylor rule.8 More
complete information often leads to
changes in inflation and output figures
long after monetary policy actions
have been taken. The new data
may produce a different relationship
between inflation, output and the historical federal funds rate.
Revisions’ shifting signals can
distort models’ explanatory power,
creating problems with evaluating
their performance. For accurate comparisons, it’s essential to use real-time
data — information that would have
been available to policymakers when
they made their decisions.9 Real-time
data might be used more commonly
in Taylor rule models if it weren’t so
difficult to find unrevised data sets.
Modeling the Taylor Rule
To see what makes a good Taylor
rule, we looked at six versions, each
of which uses different data to determine the optimal federal funds rate
(Table 1). The models included four
early efforts — three by Taylor himself
and one by Richard Clarida, Jordi Galí
and Mark Gertler. Two others were of
more recent vintage — a 2004 model
from Dallas Fed economist Evan
Koenig and a 2007 effort by Christian
J. Murray, David H. Papell and Alex
Nikolsko-Rzhevskyy.
Economists have developed other
Taylor rule variations, but these were
chosen as representative of the scope
of the rule’s evolution: from backward
to forward looking, from cold turkey
to gradualism, and from simple measures of inflation, output and unemployment to more complex ones.
Inflation measures used in Taylor
rule models include:
• The GDP deflator, which tracks the quarter-to-quarter percentage change in prices of all new domestic goods and services.
• The Blue Chip forecast, an
average of the inflation forecasts
issued by 52 business economists.
• The CPI, which measures the
monthly percentage change in prices
of a fixed market basket of goods and
services.
The output-gap measures are:
• The percentage by which GDP deviates from a straight-line growth path that’s based on historical data (see the sketch following this list).
• The difference between trend
and actual GDP growth, using estimates from the Blue Chip forecast.
• The percentage by which
industrial production deviates from a
nonlinear growth path that’s based on
historical data.
• The current unemployment rate
minus the natural rate, expressed as
a five-year average of data from the
Philadelphia Fed’s real-time data set.
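As a concrete example of the first measure in the list, the sketch below fits a straight-line trend to log real GDP and reads the gap off the residuals. The toy data and the use of ordinary least squares for the trend are assumptions for illustration; the article’s own gap series come from the real-time data described next.

```python
import numpy as np

def linear_trend_gap(gdp):
    """Percent deviation of output from a straight-line trend fitted to log GDP.

    gdp -- sequence of real GDP levels, one per quarter.
    Returns the gap in percent; positive values mean output above trend.
    """
    y = np.log(np.asarray(gdp, dtype=float))
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)   # least-squares line through log GDP
    return 100 * (y - (intercept + slope * t))

# Toy series: steady growth of 0.75 percent per quarter, with a 1 percent
# shortfall in the final quarter that shows up as a negative gap.
gdp = [100 * 1.0075 ** q for q in range(20)]
gdp[-1] *= 0.99
print(np.round(linear_trend_gap(gdp)[-4:], 2))
```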
We used real-time data in assessing the models. Initial data are released with a one-quarter lag, which means we used the previous quarter’s signals to predict each quarter’s federal funds rate.

Table 1
The Original and Five Variations on the Taylor Rule

Taylor 1993
Input variables: GDP deflator; output deviation from linear trend
Other characteristics: backward looking; no interest rate smoothing; fixed weights

Taylor A
Input variables: GDP deflator; output deviation from linear trend; federal funds rate in previous period
Other characteristics: backward looking; uses interest rate smoothing; fixed weights

Taylor B
Input variables: GDP deflator; output deviation from linear trend
Other characteristics: backward looking; no interest rate smoothing; variable weights

Clarida, Galí and Gertler (CGG)
Input variables: Blue Chip inflation forecast; deviation of log of industrial production from quadratic trend; federal funds rate in previous period; federal funds rate two periods earlier
Other characteristics: forward looking; uses interest rate smoothing; variable weights

Koenig
Input variables: Blue Chip inflation forecast; current minus five-year moving average unemployment rate; difference between trend and actual GDP growth as approximated by the Blue Chip GDP growth forecast; federal funds rate in previous period
Other characteristics: forward looking; uses interest rate smoothing; variable weights

Murray, Papell & Nikolsko-Rzhevskyy (MPNR)
Input variables: GDP deflator; deviation of GDP from quadratic trend; federal funds rate in previous period
Other characteristics: backward looking; uses interest rate smoothing; variable weights

To evaluate the models, we first
conducted a crude goodness-of-fit test
to determine how close each of them
came to predicting the actual federal
funds rate — with 1 denoting a perfect
fit and 0 a total failure.
To focus on recent trends, we
used a recursive test, with quarterly
data from the beginning of 1988 to
the beginning of 2006. The procedure
involved calculating how well each
model depicts actual federal funds rate
behavior for the initial 32 quarters, from
the beginning of 1988 through the end
of 1995 — the first point in the graph
for each model. We added first quarter
1996 to arrive at the second point and
so on. We repeated the procedure until the window spanned first quarter 1988 to first quarter 2006 — the last point.
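A rough sketch of that expanding-window calculation, assuming each model is fit by ordinary least squares and scored with R-squared (the article’s exact goodness-of-fit statistic may differ):

```python
import numpy as np

def recursive_fit_scores(rate, inputs, start=32):
    """Goodness of fit over expanding windows of quarterly data.

    rate   -- actual federal funds rate (length T)
    inputs -- T-by-k array of one model's inputs (gaps, lagged rate, ...)
    start  -- size of the initial window (32 quarters = 1988Q1 through 1995Q4)

    For each window, regress the rate on the inputs and record R-squared,
    where 1 is a perfect fit and 0 a total failure.
    """
    rate, inputs = np.asarray(rate, float), np.asarray(inputs, float)
    scores = []
    for end in range(start, len(rate) + 1):
        X = np.column_stack([np.ones(end), inputs[:end]])   # add an intercept
        beta, *_ = np.linalg.lstsq(X, rate[:end], rcond=None)
        resid = rate[:end] - X @ beta
        scores.append(1 - resid.var() / rate[:end].var())
    return np.array(scores)
```

Each model contributes one such line of scores, with the first point covering 1988 through 1995 and each later point adding one quarter, as in Chart 1.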
Four of the models have goodness-of-fit values greater than 0.9 for

all periods, suggesting they’re reasonable guides to monetary policy (Chart
1). The Taylor 1993 and Taylor B versions perform well at the beginning of
the period but then deteriorate, most
likely because of the absence of a
smoothing factor to allow for gradual
interest rate changes.
Goodness-of-fit tests evaluate
each model on its own but don’t
compare them. Just as important, the
technique implicitly assumes the models accurately specify — on an ongoing
basis — the true relationship between
the federal funds rate and measures of
inflation, output and unemployment.
We used a new econometric tool that
allows us to relax this assumption and
see whether some models perform
better than others.
Side-by-Side Tests
In a 2007 paper, Raffaella
Giacomini and Barbara Rossi outline an innovative analytical technique that
recognizes models may not perfectly
describe the economy and input data
may have weaknesses.10 The procedure eases the traditional requirement
of calculating Taylor rule weights on
inflation and output gaps over the
whole sample. Failing to consider
variations in the weights may prevent
us from seeing relative changes in
models’ performance over time.
The Giacomini–Rossi test resolves
this issue by using results from previous periods to calculate the optimal
weights in each quarter, selecting the
best model at every point in time.11
Allowing the weights to vary might
enhance — or detract from — any
model’s performance relative to the
others. The Giacomini–Rossi test only
allowed us to evaluate two models at
a time.12 For simplicity, we dropped
Taylor’s original 1993 model, the worst
performer on the goodness-of-fit test.

Chart 1
How Well Taylor Rule Models Gauge the Federal Funds Rate
[Line chart: goodness-of-fit values for the CGG, Koenig, MPNR, Taylor 1993, Taylor B and Taylor A models, plotted quarterly from 1996 through 2005.]
NOTES: In the goodness-of-fit test, 1 is a perfect match and 0 is total failure. Data are quarterly.

We conducted recursive Giacomini–Rossi tests to make our comparisons, which showed that Koenig’s
Taylor rule formulation performs best
in all cases (Chart 2).13 For graphical
representation, we designated Koenig
as the base model and compared it
with four others from first quarter
1988 to first quarter 2006.
Following Giacomini and Rossi,
we constructed upper and lower
bands to tell us when we can be 90
percent sure one model outperforms
another. The bands allow us to visually track the models’ relative performance. Values above the upper band
mean Koenig performs better than
the competing model. Values within
the bands mean the models perform
equally well. And values below the
lower band mean the competing
model outperforms Koenig’s version.
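The flavor of that comparison can be conveyed with a much-simplified stand-in: compare two models’ squared prediction errors, standardize the average difference, and read the result against a symmetric 90 percent band. The real Giacomini–Rossi procedure re-estimates the weights each quarter and uses its own critical values, so the statistic and band below are illustrative assumptions only.

```python
import numpy as np

def relative_performance(base_errors, rival_errors, z=1.645):
    """Standardized average loss difference between a rival model and the base
    (Koenig) model; positive values favor the base model.

    Returns the statistic and a symmetric band: above the band, the base model
    does better; inside, the two are statistically indistinguishable; below,
    the rival does better.
    """
    d = np.asarray(rival_errors) ** 2 - np.asarray(base_errors) ** 2
    stat = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
    return stat, (-z, z)

# Hypothetical prediction errors over a few quarters
koenig_err = np.array([0.1, -0.2, 0.15, -0.1, 0.05, 0.2])
rival_err = np.array([0.3, -0.4, 0.25, -0.3, 0.2, 0.35])
print(relative_performance(koenig_err, rival_err))
```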
When it comes to predicting the
federal funds rate, Koenig does a
better job than Taylor A and Taylor
B at every point. The Koenig model
outperforms the Murray, Papell and
Nikolsko-Rzhevskyy model most of
the time, the exception coming in
the second through fourth quarters of

2000, when the models perform equally well. The Clarida, Galí and Gertler
model has the second-best results,
being inferior to Koenig’s through
1999 but performing equally well from
2000 onward.

Koenig’s model incorporates elements that do a better job of capturing
the federal funds rate’s history. But
we can’t be certain his model—or
any other—accurately represents the
behavior of the federal funds rate.
We know that Taylor rule performance can vary dramatically with
different inflation and output gap measures. Comparing various input data
in the same way we evaluated the
six models may further explain why
Koenig’s Taylor rule outperforms the
others.
For inflation, we looked at the
GDP deflator, Blue Chip forecast and
CPI. For a more complete analysis,
we also included M1 and M2 money
growth measures. We tested them
because the Fed can influence interest rates through the money supply.
M1 includes currency and checking
account deposits. M2 broadens M1 by
adding funds in savings, money market and similar accounts.
For this round of Giacomini–Rossi
recursive tests, the base we chose
wasn’t the best performer but the GDP
deflator — a widely used inflation measure.

Chart 2
Comparing Taylor Rules
[Line chart: Giacomini–Rossi test values comparing the Koenig model (the base) with the Taylor B, MPNR, Taylor A and CGG models, plotted quarterly from 1996 through 2005. Chart regions are labeled "Koenig performs better," "Koenig and other models perform equally" and "Other models perform better."]
NOTE: Data are quarterly.

Chart 3
Comparing Inflation Measures
[Line chart: Giacomini–Rossi test values for the Blue Chip forecast, M1, M2 and CPI measured against the GDP deflator, plotted quarterly from 1995 through 2005. Chart regions are labeled "Performs better than GDP deflator," "Performs as well as GDP deflator" and "GDP deflator performs better."]
NOTE: Data are quarterly.

Chart 4
Comparing Output Gap Measures
[Line chart: Giacomini–Rossi test values for five output gap measures (current minus natural unemployment rates; real GDP gap, linear; industrial production gap, quadratic; trend minus actual GDP growth; Hodrick–Prescott filter), plotted quarterly from 1995 through 2005. Chart regions are labeled "Performs better than GDP trend," "Performs as well as GDP trend" and "GDP trend performs better."]
NOTES: The basis for comparison is the percentage by which GDP deviates from trend. Data are quarterly.

The Blue Chip forecast lies below
the 90 percent bands until 2001 and
after 2005, and it does as well as the
other models in the interim (Chart 3).
The results give it an edge over the
other inflation indicators in predicting
the federal funds rate. In many periods, however, its superiority is only
marginally significant.
Which output gap measure
works best? In addition to the four
concepts our selected Taylor rule
models use, we considered two other
approaches—the percentage by which
GDP deviates from a path that varies
over time and GDP growth filtered to
remove large fluctuations.14 The former served as our base (Chart 4).
Three output gap measures perform well on the recursive tests—the
filtered GDP, the difference between
the current and five-year moving
average unemployment rates, and
the spread between trend and actual
growth in real GDP.
Koenig’s Taylor rule model uses
two of the output gaps that did best
on the Giacomini–Rossi tests. It also
employs the Blue Chip forecast, the
superior performer for the inflation
gap. These data no doubt contribute
to its success tracking the federal funds
rate over two decades. We concluded
the Koenig model’s superior performance stems both from its design and
its choice of input variables.
Indeed, this model shows the
power of the Taylor rule (Chart 5).
Three data sets with little apparent
relation to the federal funds rate,
when combined with appropriate
weights, have a remarkably good
record tracking the Fed’s policy decisions.
Our findings shouldn’t be considered an endorsement of rules to
determine monetary policy. The Fed
operates with wide discretion, which
provides greater freedom and flexibility in policymaking. Even so, the
Taylor rule’s predictive value should
allow observers to better understand
the forces that shape Fed actions. As a
general guide, the Taylor rule dimin-

ishes the overall level of uncertainty in
the economy and enhances the transparency of open market operations.
Fernandez is a Houston Branch economist in
the Research Department of the Federal Reserve
Bank of Dallas. Nikolsko-Rzhevskyy, a graduate
student at the University of Houston, was one of
the developers of a Taylor rule variation discussed
in this article.

Chart 5
Koenig’s Taylor Rule
Inflation, Output Measures…
[Line chart: the federal funds rate plotted with the Koenig model’s inputs (inflation relative to the 2% target, the output gap and the unemployment rate relative to its 5-year average), quarterly, 1988 through 2005.]
…Yield Predicted Federal Funds Rate Close to Actual
[Line chart: the actual federal funds rate and the Koenig-predicted rate, quarterly, 1988 through 2005.]
NOTE: Data are quarterly.

Notes
The authors thank Amber Obermeyer for her research assistance. They will publish details on their methods and findings in an upcoming issue of the Dallas Fed’s Staff Papers.
1 While achieving maximum sustainable output over the long run requires keeping prices stable, occasional short-run conflicts between growth and inflation may arise.
2 A number of papers have drawn attention to the importance of including expectations in interest rate rules. Among them are “Inflation Dynamics: A Structural Econometric Analysis,” by Jordi Galí and Mark Gertler, National Bureau of Economic Research Working Paper no. 7551, February 2000; “Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory,” by Richard Clarida, Galí and Gertler, Quarterly Journal of Economics, vol. 115, February 2000, pp. 147–80; and “European Inflation Dynamics,” by Galí, Gertler and J. David López-Salido, European Economic Review, vol. 45, no. 7, 2001, pp. 1237–70.
3 The degree of smoothing indicates how gradually interest rates adjust. If we assume the value of this parameter is 0.8, as evidence in Clarida, Galí and Gertler (1998) suggests, the first move of the Fed to go from a rate of 5.25 percent to a target of 5.75 percent would be 5.35 percent. See “Monetary Policy Rules in Practice: Some International Evidence,” by Clarida, Galí and Gertler, European Economic Review, vol. 42, June 1998, pp. 1033–67.
4 See “Gradualism,” speech by Ben S. Bernanke at a luncheon co-sponsored by the Federal Reserve Bank of San Francisco–Seattle Branch, May 20, 2004, www.federalreserve.gov.
5 “Obstacles to Measuring Global Output Gaps,” by Mark A. Wynne and Genevieve R. Solomon, Federal Reserve Bank of Dallas Economic Letter, vol. 2, March 2007.
6 “The NAIRU, Unemployment and Monetary Policy,” by Douglas Staiger, James H. Stock and Mark W. Watson, Journal of Economic Perspectives, vol. 11, Winter 1997, pp. 33–49.
7 Increases in the unemployment rate are associated with declines in real GDP. This well-studied inverse relationship is called Okun’s law. See Inflation, Unemployment, and Monetary Policy, by Robert M. Solow and John B. Taylor, Boston: MIT Press, 1998.
8 “Through a Glass, Darkly: How Data Revisions Complicate Monetary Policy,” by Evan F. Koenig, Federal Reserve Bank of Dallas Economic Letter, vol. 1, December 2006.
9 For details on using real-time data, see “A Real-Time Data Set for Macroeconomists,” by Dean Croushore and Tom Stark, Journal of Econometrics, vol. 105, November 2001, pp. 111–30; “Monetary Rules Based on Real-Time Data,” by Athanasios Orphanides, American Economic Review, vol. 91, September 2001, pp. 964–85; and Federal Reserve Bank of Philadelphia, “Notes on the Philadelphia Fed’s Real-Time Data Set for Macroeconomists (RTDSM),” www.phil.frb.org/files/forecast/qvqd.pdf.
10 For details on the Giacomini–Rossi test, see “Model Selection and Forecast Comparison in Unstable Environments,” by Raffaella Giacomini and Barbara Rossi, January 2007, http://econ.lse.ac.uk/events/papers/emetrics-110107.pdf.
11 The parameters are allowed to change and are not based on overall averages, as in typical tests.
12 As with the calculations for Chart 1, we do a recursive estimation. We estimate the parameters that minimize error for all the models for the initial 32 quarters, from first quarter 1988 to fourth quarter 1995. Then we calculate them from first quarter 1988 to first quarter 1996 and so on, ending with first quarter 1988 to first quarter 2006.
13 In addition to the recursive test, we employed a rolling Giacomini–Rossi test with eight-year windows that examined the first 32 quarters, then the second through 33rd quarters, the third through 34th and so on. The rolling test, like the recursive one, found no model significantly outperforms the Koenig version in any period.
14 For this measure, we use the Hodrick–Prescott 1600 filter, which is more sensitive to long-term fluctuations than to short-term deviations.

Economic Letter is published monthly by the Federal Reserve Bank of Dallas. The views expressed are those of the authors and should not be attributed to the Federal Reserve Bank of Dallas or the Federal Reserve System. Articles may be reprinted on the condition that the source is credited and a copy is provided to the Research Department of the Federal Reserve Bank of Dallas. Economic Letter is available free of charge by writing the Public Affairs Department, Federal Reserve Bank of Dallas, P.O. Box 655906, Dallas, TX 75265-5906; by fax at 214-922-5268; or by telephone at 214-922-5254. This publication is available on the Dallas Fed web site, www.dallasfed.org.
