
VOL. 1, NO. 12
DECEMBER 2006

EconomicLetter
Insights from the Federal Reserve Bank of Dallas

Through a Glass, Darkly: How Data
Revisions Complicate Monetary Policy
by Evan F. Koenig

Over the course of any year, we receive a veritable tidal wave of numbers on the U.S. economy's performance—readings on output, inflation, employment, productivity and so much more. Policymakers, business operators, investors and the general public look to these data to make economic decisions. Unfortunately, some early statistical releases only imperfectly reflect what's happening. As more complete and accurate data come out, the view they provide improves.

Looking at preliminary data, policymakers and others may misinterpret what they see, leading to mistakes that could harm the economy. A better understanding of the nature of the revisions that regularly alter the data should lessen the chances of acting on information that doesn't accurately reflect economic realities.

Chart 1
The Shifting Inflation Picture
(12-month PCE inflation, excluding food and energy; percent, 1999–2005)
A. Big Changes in 2003… (November 2003 data vs. December 2003 data)
B. …and Again in 2005 (June 2005 data vs. July 2005 data)
SOURCE: Bureau of Economic Analysis.

As an example of the potential
importance of data revisions for monetary policy, consider the behavior of
personal consumption expenditure
(PCE) inflation, excluding food and
energy. Core PCE inflation is policymakers’ preferred measure of trend
price change because of the PCE price
index’s relatively broad coverage and
superior tracking of shifts in household spending patterns.
As of November 2003, government data showed that core PCE inflation had been held to a fairly narrow
1 to 2 percent range for several years
running—a range that several Fed policymakers subsequently identified as
their inflation “comfort zone” (Chart
1A). Citing worries about a possible
unwelcome fall in inflation, the
Federal Open Market Committee had
voted to stimulate the economy by
cutting the target federal funds rate at
its June 2003 meeting.
Unfortunately, the broad coverage
and shifting spending shares that


make PCE inflation so attractive have
a big, practical disadvantage: They
make the measure vulnerable to substantial revision. By December 2003,
new data had significantly altered the
path of core PCE inflation. It was now
apparent that inflation had exceeded 2
percent back in 2001 and—of more
pressing concern—had been running
at 1 percent or below for four months.
Cut to June 2005 (Chart 1B). Core
PCE inflation appeared to have stabilized at about 1.5 percent. Another
month of data, however, brought yet
another major revision. Concerns
about excessively low inflation in
2003 now seemed possibly exaggerated. Just as important, inflation in
much of 2004 and 2005 wasn’t in the
middle of the comfort zone after all,
but above its 2 percent upper limit.
Main Sources of Revisions
Most data revisions fall into one
of three categories.
New estimates of seasonal patterns. Most economic data have a discernible seasonal pattern due to predictable weather and holiday effects.
Statistical agencies try to strip out this
pattern to make it easier to identify
the business-cycle movements of concern to policymakers.
But seasonal patterns shift over
time and have to be reestimated, which
leads to data revisions. Because it generally takes three years of data to estimate seasonal patterns, revisions due to
seasonal factors can extend over several
years. On the other hand, seasonal patterns shift slowly enough that resulting
revisions are usually small.
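
To make the arithmetic concrete, here is a minimal sketch of seasonal-factor reestimation, using made-up numbers and a deliberately simple method (statistical agencies use far more elaborate procedures, such as X-12-ARIMA). Adding a fourth year of data changes the estimated factors, and with them the seasonally adjusted history, but only slightly:

```python
import numpy as np

def seasonal_factors(series, period=12):
    """Estimate additive seasonal factors as each calendar month's
    average deviation from a 12-month moving average."""
    trend = np.convolve(series, np.ones(period) / period, mode="same")
    deviations = series - trend
    # Average each month's deviations, trimming the biased endpoints.
    return np.array([deviations[m::period][1:-1].mean() for m in range(period)])

def seasonally_adjust(series, factors, period=12):
    """Subtract the repeating seasonal factors from the raw series."""
    return series - np.tile(factors, len(series) // period + 1)[: len(series)]

# Made-up raw series: trend + seasonal swing + noise, four years of months.
rng = np.random.default_rng(0)
true_seasonal = 2.0 * np.sin(2 * np.pi * np.arange(12) / 12)
raw = 100 + 0.1 * np.arange(48) + np.tile(true_seasonal, 4) + rng.normal(0, 0.3, 48)

# Adjust the first three years using factors estimated from three years...
early = seasonally_adjust(raw[:36], seasonal_factors(raw[:36]))
# ...then readjust the same history after a fourth year of data arrives.
late = seasonally_adjust(raw[:36], seasonal_factors(raw))

# Every historical month is revised, but the revisions are small.
print("largest revision to history:", np.abs(late - early).max())
```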
More complete survey
responses. Many government data
series are based on survey responses.
As new responses are processed and
old responses are corrected, statisticians are able to improve the accuracy
of earlier estimates of what transpired
in any particular month.
For series updated to capture
late-arriving, more complete data, the
government typically issues one or
two revisions in the months immediately after the initial release. Other
revisions follow later, at regular intervals, as data from annual surveys, censuses or other sources become available.
Chart 2
Revisions to March 2005 Texas Job Growth
(Thousands of jobs)
First release, April 2005: 10.6
First revision, May 2005: 9.8
Benchmark revision, March 2006: 21.7
SOURCE: Texas Workforce Commission.

Revisions due to more complete
data are responsible for most of the
month-to-month and year-to-year
changes in economic data.
As an example, consider the
sequence of official estimates of the
number of nonfarm jobs added in
Texas during March 2005. The initial
estimate, a 10,600-job gain, was
released in April 2005 (Chart 2). It
was based on survey results for a sample of firms that collectively account
for about 40 percent of nonfarm jobs.
A first revision to March job growth
was released a month later, along with
the first estimate of April employment.
It reflected corrections to previously
received survey responses, as well as
late-arriving responses, and showed a
slightly smaller job gain.
Finally, an annual revision to
Texas employment was released in
March 2006. It showed an increase
twice as large as that previously estimated. Data for each of the other 11
months from October 2004 through
September 2005 were revised at the
same time. Annual revisions draw on
tax reports submitted under Texas
unemployment insurance laws. These
records capture about 98 percent of
nonfarm jobs, and the new estimates
are definitive, apart from revisions due
to updated seasonal factors.
Series derived from surveys with
once-and-for-all monthly deadlines
aren’t subject to revisions based on
new information. They include the
unemployment rate, the Conference
Board’s Consumer Confidence Index,
the Institute for Supply Management’s
manufacturing and nonmanufacturing
indexes, and the business-conditions
indexes compiled by various Federal
Reserve Banks.
Another example is the Consumer
Price Index, which is based on retail
prices observed and recorded directly
by Labor Department employees.
Commodity and financial asset prices,
of course, are also not subject to this
type of revision.


Chart 3
The Case of the Missing Recession
(Conference Board Composite Leading Index; index, December 1998 = 100; 1999–2006; June 2005 data vs. July 2005 data)
SOURCE: The Conference Board.


New methods and definitions,
applied retroactively. Finally, revisions occur when new calculation
methods or new definitions are applied
to old data. Significant revisions of this
type are relatively infrequent, and their
timing can be irregular.
A recent change to the construction of the Conference Board’s
Composite Leading Index provides a
good example. Looking at the index
as it appeared in June 2005, we see
that it fell nearly every month between
April 2000 and the start of the 2001
recession 11 months later (Chart 3).
The cumulative decline was 2 percent.
But the index fell by an almost identical amount between May 2004 and
May 2005 without a recession.
The Conference Board concluded
that its leading index was misinterpreting changes in the slope of the yield
curve—changes in the difference
between long-term and short-term
interest rates. In July 2005, the index was
reformulated. With one stroke, the
2005 recession warning was eliminated. The seemingly strong record of the
leading index is at least in part an illusion due to changes to its construction
that have erased its past failures.
Assessing and Enhancing Data
Taking into account these different types of revisions, just how reliable are early government statistical
releases? How close do early releases
come to capturing the movements we
see in the data available to us today?
Let’s start with manufacturing
capacity utilization, which is compiled
by Federal Reserve Board staff in
Washington, D.C. Initial releases capture 87 percent of the variation in
today’s capacity utilization data (Table
1). Revisions over the next three
months raise the fraction of variation
explained only slightly—to 88 percent. After two years of revisions, 6
percent of the movements we observe
today remain unexplained.
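
One natural way to compute such a "percentage of variation explained" is the R-squared from regressing today's data on the early vintage; the article does not spell out the author's exact method, so the following is a minimal sketch with made-up numbers rather than actual data vintages:

```python
import numpy as np

def variation_explained(early_vintage, today):
    """R-squared from regressing today's data on an early vintage:
    the share of today's variation the early release accounts for."""
    x = np.column_stack([np.ones_like(early_vintage), early_vintage])
    beta, *_ = np.linalg.lstsq(x, today, rcond=None)
    residuals = today - x @ beta
    return 1.0 - residuals.var() / today.var()

# Made-up example: a "final" series and a noisy first release of it.
rng = np.random.default_rng(1)
today = rng.normal(2.0, 1.0, 80)
first_release = today + rng.normal(0.0, 0.4, 80)  # noise later revised away

print(f"{100 * variation_explained(first_release, today):.0f}% "
      "of today's variation explained")
```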
The effects of revisions on real
growth as measured by gross domestic product (GDP), industrial production and nonfarm jobs are similar.
Revisions add little to reliability until a
year or more after the initial statistical
release. The same holds for inflation
as measured by the GDP and PCE
price indexes.
For the unemployment rate and
inflation as measured by the
Consumer Price Index, the story is
very different. These series are unrevised, except when seasonal factors
are updated. Because these updates
are small, the initial estimates capture
essentially all the information in
today’s data.
This brief survey suggests that the
most important revisions are those
undertaken to incorporate new data
from surveys and censuses conducted
once a year or even less frequently.
The revisions in the month or two
immediately after the government’s initial releases and revisions due to reestimation of seasonal factors contribute relatively little new information.
In addition to government data,
useful alternatives or supplements
exist that aren’t subject to large revisions. To begin with, formal business
and consumer surveys are published
by the Institute for Supply Management, various regional Federal Reserve
Banks and the Conference Board. If
the results of these surveys are revised
at all, it’s only from reestimation of
seasonal factors. There are also less-structured surveys, like the roundtables held at the Federal Reserve Bank
directors’ meetings and the calls
Reserve Bank presidents and their
staffs make to business contacts in
advance of Federal Open Market
Committee meetings. Studies have
shown that some of these surveys
contain information beyond what’s
available from real-time government
statistical releases.1
A big advantage of many nongovernmental surveys is their timeliness. The Institute for Supply
Management’s manufacturing index,
for example, is published the first
business day of each month—about
two weeks before the Federal Reserve's index of manufacturing output.
Table 1
How Reliable Are Government Data?

                              Percentage of today's variation explained after:
Economic Indicator            First release   One quarter   One year   Two years
Slack
  Capacity utilization              87             88           89         94
  Unemployment rate                100            100          100        100
Growth
  Real GDP                          78             76           90         94
  Industrial production             82             80           82         86
  Nonfarm jobs                      92             91           97         99
Inflation
  GDP prices                        88             87           89         96
  PCE prices                        94             94           89         93
  Consumer Price Index              99            100          100        100

SOURCES: Federal Reserve Board; Bureau of Labor Statistics; Bureau of Economic Analysis; author's calculations.

Reserve’s index of manufacturing output. A drawback is that participants
often aren’t selected scientifically and
may not be representative of the general population. Moreover, anecdotal
accounts, like those in the Fed’s Beige
Book, can be difficult for inexperienced readers to interpret.
Key market prices can also add to
our understanding of the economy.
Commodity prices have historically
provided early signals of emerging
inflation pressures and the strength of
the manufacturing sector. Quality and
maturity spreads based on financial
asset prices provide some of our most
reliable indicators of overall real
growth prospects.
Commodity and financial asset
prices have the advantage of being
available on a daily basis or even
minute by minute. A problem is that
although the indicators themselves
aren’t subject to revision, their interpretation is. For example, as more
manufacturing activity has shifted
overseas, the correlation between
commodity prices and the strength of
the U.S. manufacturing sector has
declined.2 Oil price movements were
once mostly driven by changes in

world supplies. Now, shifts in world
demand are increasingly important.3
In some cases, data can be sharpened ahead of official revisions. The
Dallas Fed has had great success
anticipating revisions to Texas state
employment estimates. Recall that in
March of each year, job growth estimates through the preceding
September are revised using unemployment insurance tax records. These
tax records, however, are available
quarterly. The Dallas Fed’s Frank
Berger takes advantage of this fact to
revise our estimates of Texas jobs on
an accelerated schedule, using procedures he developed in joint work with
colleague Keith Phillips.4
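
As a rough illustration of the idea (not the actual Berger–Phillips procedure, which is more sophisticated), one can rescale the monthly survey-based estimates so that each quarter's average agrees with the unemployment insurance records as they arrive. All numbers below are made up:

```python
import numpy as np

# Monthly survey-based employment estimates (thousands, made-up numbers).
survey = np.array([9500.0, 9510.0, 9515.0, 9520.0, 9530.0, 9540.0])

# Quarterly average employment from unemployment insurance tax records,
# which cover about 98 percent of nonfarm jobs (made-up numbers,
# grossed up to full coverage).
ui_quarterly = np.array([9560.0, 9600.0])

# Simple ratio benchmarking: scale each quarter's three survey months
# so their average matches the UI-based quarterly average.
benchmarked = survey.copy()
for q, target in enumerate(ui_quarterly):
    months = slice(3 * q, 3 * q + 3)
    benchmarked[months] *= target / survey[months].mean()

print(benchmarked)  # an early estimate of what the annual revision will show
```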
As we’ve seen, the official estimates of March 2005 Texas job
growth didn’t reflect the rapid expansion of the state’s economy until a
year had passed. The Dallas Fed’s
procedures, however, allowed us to
anticipate much of the jobs revision
(Chart 4). Our superior estimates of
past job growth are an important reason why our job growth forecasts
consistently outperform those of other
analysts.


A Recipe for Trouble
Seriously misleading conclusions
and subpar forecasting results are likely when analysts and policymakers
treat heavily revised and first-release
data as if they are interchangeable.
Let’s look at an example from the
realm of inflation forecasting.
Lies, damned lies and the
markup. British politician Benjamin
Disraeli famously remarked that “there
are three kinds of lies: lies, damned
lies and statistics.” One statistic with
great potential to mislead is a measure
of profitability called the markup. It
equals the dollar value of the goods
and services firms produce, less the
cost of materials and supplies, all
divided by labor compensation. When
the markup exceeds 1, firms’ revenues
more than cover variable costs.
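
In symbols (notation mine, not the article's), with $PY$ the dollar value of output, $M$ the cost of materials and supplies, and $WL$ labor compensation:

\[
\text{markup} = \frac{PY - M}{WL},
\qquad
\text{labor's share of value added} = \frac{WL}{PY - M} = \frac{1}{\text{markup}}.
\]

The second equality is why, as the next paragraph notes, a near-record-low labor share and a near-record-high markup are one and the same statement.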
The markup is interesting for several reasons. First, it’s the reciprocal of
labor’s share of the value added to production by U.S. firms. When you hear
someone say that labor’s share of aggregate output or aggregate income is
at a near-record low, that's equivalent to the statement that the markup is at a near-record high.

Chart 4
Anticipating Revisions to Texas Job Growth
(Thousands of jobs)
First release, April 2005: 10.6
First revision, May 2005: 9.8
Dallas Fed, August 2005: 17
Benchmark revision, March 2006: 21.7
SOURCES: Texas Workforce Commission; Federal Reserve Bank of Dallas.

Chart 5
The Markup and Inflation
A. Revised Data Seem to Show That the Markup Helps Forecast Inflation…
B. …But the Link Disappears Using Real-Time Markup Data
(Both panels plot forecasted – actual inflation, in percentage points, against the markup, 1.46–1.58.)
SOURCES: Bureau of Economic Analysis; Blue Chip Economic Indicators; author's calculations.

In the same vein,
when you hear that real wage growth
has been lagging behind labor productivity growth, that’s equivalent to
the statement that the markup has
been rising. Finally—and of greatest
importance for monetary policy—
whenever the markup is unusually
high, theory predicts that competition
between firms should gradually drive
it back down. That means a high
markup should act as a restraining
influence on future inflation. Former
Fed Chairman Alan Greenspan gave
prominent attention to this link in his
July 2004 testimony before Congress.5
The markup and inflation. Let’s
compare inflation forecast errors in the
Blue Chip survey of professional forecasters with the markup at the end of
the prior year. We find that from 1984
through 2002, forecasters systematically overpredicted inflation when the
markup was high and underpredicted
inflation when the markup was low
(Chart 5A). Either the Blue Chip forecasters have been ignoring important
information, or there’s something not
quite right in this relationship.
What’s not quite right, of course,
is that the markup estimates available
to us today aren’t the markup estimates that were available to these
forecasters. Sure enough, when we
replace today’s markup estimates with
the first-release estimates available in
real time, the correlation between the
markup and inflation disappears
(Chart 5B). The markup is useful for
understanding inflation after the fact,
but no help in predicting it.6
Poor forecasts from confusing
current with real-time data. Indeed,
the markup is worse than useless for
forecasting if you naively assume the
relationship between markup estimates
and inflation is the same for first-release data and subsequent revisions.
On its own, the Blue Chip survey successfully anticipates 68 percent of the
variation in the next year’s inflation. If
you conduct an after-the-fact exercise
in which you supplement Blue Chip inflation forecasts with today's markup
data, it appears that you can increase
predictive power to 77 percent.
However, this exercise is artificial
because today’s markup data wouldn’t
have been available in real time.
Unfortunately, the fact that only
first-release data are available for
actual forecasting all too often doesn’t
stop analysts from using revised data
to estimate their forecasting equations.
In the case of inflation, if you estimate
using revised markup data and then
forecast by substituting first-release
data as they become available, predictive performance is substantially
worse than if you had ignored the
markup entirely. Only 57 percent of
the variation in next year’s inflation is
successfully anticipated.
The message is clear: If you’re
going to forecast with first-release
data, the correct thing to do is to estimate using first-release data.7
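
A minimal sketch of why, with made-up numbers rather than the article's actual markup and Blue Chip data: when the first release measures the revised series with error, a regression fitted on revised data overweights the predictor, while a regression fitted on first-release data learns the appropriately attenuated coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Made-up world: inflation is related to the *revised* predictor,
# but the first release observes that predictor with heavy noise.
revised = rng.normal(0.0, 1.0, n)
first_release = revised + rng.normal(0.0, 2.0, n)
inflation = 0.8 * revised + rng.normal(0.0, 0.5, n)

def fit(x, y):
    """OLS: returns [slope, intercept] of y regressed on x."""
    return np.polyfit(x, y, 1)

def mse(coef, x, y):
    """Mean squared forecast error of the fitted line applied to (x, y)."""
    return np.mean((np.polyval(coef, x) - y) ** 2)

coef_revised = fit(revised, inflation)         # estimated on revised data
coef_realtime = fit(first_release, inflation)  # estimated on first releases

# In real time, only first-release data are available as forecast inputs.
print("MSE, equation estimated on revised data:      ",
      mse(coef_revised, first_release, inflation))
print("MSE, equation estimated on first-release data:",
      mse(coef_realtime, first_release, inflation))
```

Run as written, the equation estimated on revised data forecasts substantially worse from first-release inputs than the one estimated on first-release data, mirroring the article's 57 percent versus 68 percent comparison in spirit.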
Early estimates of the markup
nearly worthless. The consequences
of confusing revised with first-release
data are especially severe in this case
because real-time markup data are of poor quality. From 1983 to 2002, first-release markup estimates accounted
for only 5 percent of the variation that
we see in today’s markup data. Since
1990, they’ve accounted for only 2
percent. Even with the benefit of a
year’s worth of revisions, markup estimates account for just 21 percent of
today’s markup variation.
So don’t take too seriously claims
that labor’s share of output is at a
record low or arguments that high
profit margins are going to restrain
inflation—at least not until the data
have been through several annual
revisions.
Living with Revisions
Caution is essential in interpreting
early government reports because
many data series are subject to large
after-the-fact revisions. When reading
government statistical releases, it’s
best to keep the following in mind:
• Seasonal and month-to-month revisions generally have little impact
on the information content of government statistical estimates. It’s the less
frequent annual, comprehensive and
benchmark revisions that really matter.
• By supplementing the government’s formal statistical releases with
information from other sources, it’s
sometimes possible to obtain a more
accurate picture of the economy. At
the Dallas Fed, we’ve had success
using unemployment insurance tax
records to make early updates to
Texas jobs data.
• Revised data showing one variable leading another say next to nothing about whether the first variable is
of any practical use in forecasting the
second.
• Forecasting relationships estimated with heavily revised data are
unlikely to perform well when applied
to first-release data available in real
time.
As an example of possible implications for monetary policy, consider
inflation targeting. Data revisions
potentially affect both how tightly one
can realistically expect to control any
particular inflation measure and how
strongly policy ought to react to early
inflation releases. Attempts to target
forecasted inflation will benefit if forecasts are as accurate as possible,
which requires that heavily revised
and early-release data be kept strictly
separate.
Evan F. Koenig is vice president and senior economist in the Research Department of the Federal
Reserve Bank of Dallas.

Notes
The author thanks Nicole Ball for research assistance.
1 "How Well Does the Beige Book Reflect Economic Activity? Evaluating Qualitative Information Quantitatively," by Nathan S. Balke and D'Ann Petersen, Journal of Money, Credit and Banking, vol. 34, February 2002, pp. 114–36; "Using the Purchasing Managers' Index to Assess the Economy's Strength and the Likely Direction of Monetary Policy," by Evan F. Koenig, Federal Reserve Bank of Dallas Economic and Financial Policy Review, Issue 6, 2002.
2 From 1949 through 1996, the correlation between the 12-month growth rates of U.S. manufacturing output and the Journal of Commerce–Economic Cycle Research Institute Industrial Price Index is 0.48. From 1997 to the present, the correlation is only 0.16.
3 "Making Sense of High Oil Prices," by Stephen P. A. Brown, Federal Reserve Bank of Dallas Southwest Economy, July/August 2006.
4 "Solving the Mystery of the Disappearing January Blip in State Employment Data," by Franklin D. Berger and Keith R. Phillips, Federal Reserve Bank of Dallas Economic Review, Second Quarter 1994, pp. 53–62; "Reassessing Texas Employment Growth," by Franklin D. Berger and Keith R. Phillips, Federal Reserve Bank of Dallas Southwest Economy, July/August 1993.
5 Federal Reserve Board's Semiannual Monetary Policy Report to the Congress, by Alan Greenspan, testimony before the Committee on Banking, Housing and Urban Affairs, U.S. Senate, July 20, 2004.
6 "Is the Markup a Useful Real-Time Predictor of Inflation?" by Evan F. Koenig, Economics Letters, vol. 80, August 2003, pp. 261–67; "The Use and Abuse of Real-Time and Anecdotal Information in Monetary Policymaking," by Evan F. Koenig, SourceOECD: General Economics and Future Studies, vol. 2005, no. 35, pp. 241–53.
7 "The Use and Abuse of 'Real-Time' Data in Economic Forecasting," by Evan F. Koenig, Sheila Dolmas and Jeremy Piger, Review of Economics and Statistics, vol. 85, August 2003, pp. 618–28.

SouthwestEconomy
Regional Information You Can Use!
In the November/December 2006 issue: A year before his death in November 2006, Nobel laureate economist Milton Friedman spoke with Dallas Fed President Richard Fisher on a variety of topics. Sample this wide-ranging conversation with the late champion of economic freedom—plus gain insights into Texas port activity, special NAFTA-created visas and the Louisiana metro of Shreveport–Bossier City. To subscribe, call 214-922-5254 or visit our web site at www.dallasfed.org.

EconomicLetter is published monthly by the Federal Reserve Bank of Dallas. The views expressed are those of the authors and should not be attributed to the Federal Reserve Bank of Dallas or the Federal Reserve System.
Articles may be reprinted on the condition that the source is credited and a copy is provided to the Research Department of the Federal Reserve Bank of Dallas.
Economic Letter is available free of charge by writing the Public Affairs Department, Federal Reserve Bank of Dallas, P.O. Box 655906, Dallas, TX 75265-5906; by fax at 214-922-5268; or by telephone at 214-922-5254. This publication is available on the Dallas Fed web site, www.dallasfed.org.

Richard W. Fisher
President and Chief Executive Officer
Helen E. Holcomb
First Vice President and Chief Operating Officer
Harvey Rosenblum
Executive Vice President and Director of Research
W. Michael Cox
Senior Vice President and Chief Economist

Robert D. Hankins
Senior Vice President, Banking Supervision
Executive Editor
W. Michael Cox
Editor
Richard Alm
Associate Editor
Jennifer Afflerbach
Graphic Designer
Gene Autry

FEDERAL RESERVE BANK OF DALLAS
2200 N. PEARL ST.
DALLAS, TX 75201