
Consumer Confidence Surveys:
Can They Help Us Forecast Consumer Spending in Real Time?

by Dean Croushore

In 1993, the Philadelphia Fed undertook a
project to develop a real-time data set for
macroeconomists, who can use these data in
many ways — for example, when analyzing
indexes of consumer confidence. Existing research
indicates that consumer-confidence measures, though
highly correlated with future spending, do not improve
forecasts of future spending. But these studies used revised
data that were not available to forecasters at the time they
made their forecasts. In this article, Dean Croushore uses
the real-time data set to investigate an important question:
Does using data available to forecasters at the time
— that is, real-time data — make measures of consumer
confidence more valuable for forecasting?

The Federal Reserve Bank of
Philadelphia’s real-time data set for
macroeconomists contains information on the data that a researcher
or forecaster would have known at a
date in the past.

Dean Croushore is an associate professor of economics and Rigsby Fellow at the University of Richmond. When he wrote this article, he was a visiting scholar in the Research Department of the Philadelphia Fed. This article is available free of charge at www.philadelphiafed.org/econ/br/index.html.

This data set, which is available on the Philadelphia Fed's website at www.philadelphiafed.org/econ/forecast/reaindex.html, allows us
to investigate a number of interesting
economic and policy questions — one
of which is the subject of this article.
We will use the data set to investigate whether measures of consumer
confidence help improve forecasts of
consumer spending.
For many reasons, people want to
know how the economy is doing. They
would like to answer questions such
as: Are we in an economic expansion? Will the economic expansion
continue? Are interest rates likely to
rise or fall? To find answers to these
questions, people read the newspapers,
which often report on the forecasts of

professional economists. The government and private-sector firms also
report on a variety of economic data,
which may include such items as a
survey of consumer confidence.
Several organizations take surveys
of consumers to investigate what they
say about the economy and their families’ finances. The survey responses
are compiled and used to form an
index of consumer confidence, which
is reported in the news media. The
consumer-confidence measures are
correlated with changes in consumer
spending, so they appear to capture
useful information about consumers’
spending plans. But do they really
help us forecast consumer spending in
real time?
In theory, the indexes should
enable us to predict what consumers
will spend in the future, and a glance
at the data tells us that the consumer-confidence measures are, indeed,
strongly correlated with consumer
spending. But we are interested in seeing whether the consumer-confidence
measures pass a tougher test: Do they
tell us more than we already know
from other economic data? If we look
at the existing research, we see that
the consumer-confidence measures,
though highly correlated with future
spending, do not improve forecasts
of future spending made on the basis
of knowing consumers’ incomes, past
consumer spending, the interest rate,
and the value of the stock market.
	However, that previous research
(which we will discuss in more detail
later) is flawed in one important
respect. The data used in those studies
were not available to forecasters in real
time, that is, at the time their forecasts
were made. Thoughtful researchers have long known that using such
flawed data is not ideal, but they did
not have a data set such as the real-time data set for macroeconomists
until recently.
The failure to use real-time data
may be important because data are
revised. For example, the Bureau
of Economic Analysis (BEA), the
government agency that releases data
on consumer spending, revises the data
many years after the fact. When
the BEA revises the data on
consumer spending and income, it
uses data from tax returns and Social
Security records that no forecaster
could have known earlier. These data
are much more accurate than the
government’s initial data on spending
and income, which come from a very
incomplete survey. If the revisions to
the data on consumer spending and
income are correlated with measures of
consumer confidence, a forecaster in
real time using measures of consumer
confidence could make better forecasts
than a forecaster who did not use
measures of consumer confidence. So
when previous researchers found that
consumer-confidence indexes did not
improve forecasts of consumer spending, they were not using the right data
— no forecaster would have had the
data they used. We will investigate the
following question: If we used the data
a forecaster would have had available
in real time, would the measures of
consumer confidence prove to be more
valuable?
Fortunately, the Philadelphia Fed’s
real-time data set for macroeconomists
allows us to undertake this exercise.
That data set contains information
on the data a researcher or forecaster
would have known at a date in the
past. As such, it contains exactly the
data we need to investigate the real-time predictive power of consumer-confidence indexes.

 Q3 2006 Business Review

DATA ON CONSUMER CONFIDENCE AND REAL-TIME DATA
Consumer Confidence Surveys.
The two most widely known surveys
of consumer confidence are produced
by the University of Michigan and the
Conference Board. Both are similar in
concept but implemented in different
ways, and their use in forecasting models leads to somewhat different results.
The University of Michigan’s
survey contains about 50 questions,
only five of which are part of its index
of consumer sentiment. The survey,
which began in 1946 on an occasional
basis and has been taken monthly
since 1978, is conducted with about
500 people via telephone. Consumers
are asked five questions that reflect
their sentiments about the economy
and their family finances. Two
questions reflect current economic
conditions. The first question asks how
people are getting along financially
these days: Would you say that you
(and your family living there) are better off or worse off financially than you
were a year ago? The second question
asks about the large items people buy,
for example, furniture, appliances, or
cars: Generally speaking, do you think
now is a good or bad time for people to
buy major household items?
Three questions reflect future
conditions: (1) Looking ahead, do you
think that a year from now you (and
your family living there) will be better
off financially, or worse off, or just
about the same as now? (2) Turning to
business conditions in the country as
a whole, do you think that during the
next 12 months, we'll have good times
financially or bad times, or what? (3)
Looking ahead, which would you say
is more likely: that in the country as
a whole we'll have continuous good
times during the next five years or so
or that we will have periods of widespread unemployment or depression,
or what?

From the answers to these questions, the Michigan researchers create
an index. For example, from question
1, they subtract the percentage of people who say they are worse off from the
percentage of people who say they are
better off. They calculate percentages
in the same way for each of the other
four questions. These percentages are
averaged across all five questions and then
compared with the value in a base
year (1966) that has been normalized
to 100, and the result is the index of
consumer sentiment. For our purposes
in this article, we will call that index
Michigan–overall. A separate index is
created from the two questions about
current conditions, which we will call
Michigan–current, and an index is
created from the three questions about
future conditions, which we will call
Michigan–future.
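A rough sketch of this balance-score construction (the response shares and the base-year value below are hypothetical, and the published Michigan formula differs in its details):

```python
# Illustrative sketch of a Michigan-style sentiment index: for each
# question, take the share of favorable answers minus the share of
# unfavorable answers, average across questions, and scale so the
# base year equals 100. All numbers here are made up.

def balance_score(pct_favorable, pct_unfavorable):
    """Net balance for one question, in percentage points."""
    return pct_favorable - pct_unfavorable

def sentiment_index(question_scores, base_year_score):
    """Average the per-question balances and normalize so that the
    base-year value of the average maps to an index of 100."""
    avg = sum(question_scores) / len(question_scores)
    return 100 * avg / base_year_score

# Hypothetical responses to five questions: (percent favorable, percent unfavorable)
responses = [(45, 30), (55, 25), (40, 35), (50, 20), (35, 30)]
scores = [balance_score(f, u) for f, u in responses]
index = sentiment_index(scores, base_year_score=20)  # assumed 1966 base value
print(round(index, 1))
```

An overall, a current-conditions, and a future-conditions index would simply apply the same averaging to different subsets of the questions.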
The Conference Board creates
its index of consumer confidence in
a similar manner except the survey is
mailed to 5,000 households, of which
about 3,500 are returned. The survey
has been conducted monthly since
June 1977. As with the Michigan
survey, the Conference Board’s survey
asks five questions: two about current
conditions and three about future
conditions. Questions about current
conditions are: (1) How would you rate
the present general business conditions
in your area? (2) What would you say
about available jobs in your area right
now? Questions about future conditions are: (1) Six months from now, do
you think general business conditions
will be better, the same, or worse? (2)
Six months from now, do you think
there will be more, the same, or fewer
jobs available in your area? (3) How
would you guess your total family
income will be six months from now
(higher, the same, or lower)?
Again, similar to the University
of Michigan, the Conference Board
creates indexes, which we will call


CB–overall, from all five questions;
CB–current, from the two questions
on current conditions; and CB–future,
from the three questions about future
conditions. Although the Conference
Board creates its index using a process
similar to that used by Michigan, the
base year for the Conference Board’s
index is 1985, not 1966.
Using Consumer Confidence to
Forecast Consumer Spending. Figure
1 shows the values of the Michigan–overall and CB–overall indexes,
plotted from January 1978 to December 2005.1 Gray bars indicate periods in
which the economy was in a recession.
As the figure indicates, the confidence
indexes decline sharply at the start of

recessions. Only for the 2001 recession
did the confidence indexes decline
several months before the recession
began; that was the only time the
indexes would have served as a leading
indicator of a recession.2 Because the
consumer confidence indexes do not
appear to forecast recessions well, we
examine their ability to forecast consumer spending instead.
If measures of consumer confidence are able to forecast consumer
spending, measures of consumer confidence should change before consumer
spending does. The relevant data
series for measuring consumer spending is known as personal consumption
expenditures, which is collected by the
Bureau of Economic Analysis as part

1 Similar plots could be shown for the current and future indexes, but they are not included here to conserve space. For the same reason, Figures 2, 4, and 6 show only the CB–overall index.

2 The same is true for the future indexes, which are not shown, since they follow the same pattern as the overall indexes.

FIGURE 1
Consumer Confidence Indexes, Overall, January 1978 to December 2005
[Line chart of index values (40 to 160) for the CB-Overall and Michigan-Overall indexes, by year from 1978 to 2005.]
Source: The Conference Board and the University of Michigan

of the National Income and Product Accounts. The data we use are
quarterly. Figure 2 plots the growth
rate of consumption spending each
quarter, measured as the amount of
consumer spending within the quarter
compared with the amount of spending in the same quarter of the previous
year, along with the quarterly level of
the CB–overall measure of consumer
confidence.
The graph indicates a fairly strong
correlation between the growth rate of
consumer spending and the measure of
consumer confidence. Broadly speaking, consumer spending growth rises
when consumer confidence rises, and
vice versa. However, there are periods,
such as 1987 to 1989, when the two
variables appear to move in opposite
directions. Nonetheless, it appears that
the correlation is strong enough that
we might be able to use consumer confidence to forecast consumer spending.
Forecasting Model. We will
construct a state-of-the-art forecasting
model that has been used in previous
research, and it is one that a forecaster
could have used to predict consumer
spending. Economic researchers have
used this model in studies that have attempted to test whether consumer confidence indexes are helpful in forecasting. These studies include the paper by
Jason Bram and Sydney Ludvigson and
the one by Christopher Carroll, Jeffrey
Fuhrer, and David Wilcox.3 We adopt
their forecasting model, which treats
the growth rate of consumer spending
today as dependent on the growth of
consumer spending in each of the last
four quarters, the growth in people’s
income in each of the last four quarters
(because changes in income affect
people’s decisions about how much
they can spend), the change in the

3 For a review of these and other studies, see Ludvigson's 2004 paper.

FIGURE 2
Conference Board Overall Index and Consumption Spending, January 1978 to December 2005
[Line chart of the CB-Overall index (left scale) and the growth rate of consumption in percent (right scale), by year from 1978 to 2005.]
Source: The Conference Board and Bureau of Economic Analysis

interest rate (on three-month Treasury
bills) in each of the last four quarters
(higher interest rates induce people
to save more and consume less), and
the change in the value of the stock
market in each of the last four quarters
(increases in wealth induce people to
consume more). Data on consumer
spending, income, and the value of the
stock market are in real terms; that
is, they are adjusted for inflation. We
will use this forecasting model as our
baseline and then add a measure of
consumer confidence to the model to
see if we get improved forecasts.4
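The lag structure this baseline specification implies can be sketched as follows; the series values are hypothetical placeholders, and no estimation is performed here:

```python
# Sketch of the baseline model: consumption growth this quarter depends
# on four lags each of consumption growth, income growth, the change in
# the 3-month T-bill rate, and the change in the real value of the stock
# market. This builds the regressor list for one observation at time t.

def regressors(t, c_growth, y_growth, d_rate, d_stocks, lags=4):
    """Return (target, feature list) for the baseline model at time t."""
    x = [1.0]  # intercept
    for series in (c_growth, y_growth, d_rate, d_stocks):
        x.extend(series[t - k] for k in range(1, lags + 1))
    return c_growth[t], x

# Hypothetical quarterly data (index 0 = earliest quarter).
c  = [2.0, 2.5, 3.0, 2.8, 3.1, 2.9]   # consumption growth
y  = [1.5, 2.0, 2.2, 2.1, 2.4, 2.3]   # income growth
dr = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2] # change in interest rate
ds = [4.0, -1.0, 2.0, 3.0, 0.5, 1.5]  # change in real stock-market value

target, features = regressors(5, c, y, dr, ds)
print(target, len(features))  # intercept plus 4 variables x 4 lags
```

Adding a consumer-confidence measure to the model simply appends its four lags to the feature list before the equation is estimated.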
Data Revisions. One problem
that an economic forecaster faces in
practice is that data are sometimes
incomplete and may be revised over

4 For technical details on the forecasting models, see my 2005 paper, on which this article is based.

time. To compare the models properly,
we need to know what data a forecaster would have in real time. That
is, to forecast what consumer spending
would be during the first quarter of
1982, we must go back and examine
the data a forecaster would have had
available at that time, which may be
quite different from what the data
prior to the first quarter of 1982 look
like today because of data revisions. To
accomplish this task, we use the real-time data set for macroeconomists.5
5 The data set, available on the Federal Reserve Bank of Philadelphia's website at philadelphiafed.org/econ/forecast/reaindex.html, was first described in the Business Review article that I wrote with Tom Stark. See our other papers for further details on the data set and the implications of data revisions for economic research, forecasting, and monetary policy.

Why are data revised? Mostly
because the government makes an estimate of the data before it has complete
information. The government reports
on many macroeconomic data series
with a lag of just one month. For example, gross domestic product (GDP)
for the first quarter of 2005 was first
reported in April 2005. But the initial
data release by the BEA is based on
a very incomplete sample. Over time,
the BEA gathers more information and
revises the data, especially after people
file their income tax returns. By July
2006, the BEA had a much clearer picture of what GDP was in the first quarter of 2005 than it did in April 2005.
Thus, the revised data are significantly
more accurate than the data that were
initially released. But this poses a
quandary for forecasters: Should they
wait until the data have been revised,
a process that takes over a year, or use
what they have? The answer is clear
for most situations: Forecasters need to
forecast in the short run, and even the
government’s initial release of the data
is better than no data at all.
Which variables do we need to
worry about that might have data
revisions? Consumer spending (more
formally, real personal consumption
expenditures) and income are revised
over time by the government. In addition, we use the price index for personal consumption expenditures as our
measure of inflation; so the real value
of the stock market is revised when
that price index is revised. The interest
rate and the measure of consumer confidence are not revised. Thus, we need
real-time data on consumer spending,
income, and the price index, which are
available in the real-time data set.
How large are the revisions to the
data series? Both consumer spending
and income are revised substantially;
however, the real value of the stock
market is not revised very much. Figure 3 shows the revisions to consumer
spending and income from when
the data for each date were initially

FIGURE 3
Revisions to Real Consumption Growth and Real Income Growth, Initial to February 2006 Database
[Line chart of revisions to consumption and income growth (annualized growth rate, percent), by year from 1969 to 2005.]
Source: Author's calculations from the real-time data set for macroeconomists

released to the values recorded in the
government’s database as of February
15, 2006. The numbers shown are the
annualized growth rate6 for the quarter
in the February 15, 2006, database minus the annualized growth rate for the
quarter as reported by the government
when the data were initially released.
You can see that the data revisions can
be large, reaching a magnitude of as
much as 11.4 percentage points, and
that revisions to income have generally
been larger than revisions to consumption.
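The revision series plotted in Figure 3 can be sketched as follows; the vintage values here are hypothetical:

```python
# For each quarter, the revision is the annualized growth rate recorded
# in a later database (here, the February 15, 2006 vintage) minus the
# rate reported when the data were first released. Numbers are made up.

initial_release  = {"1990Q1": 1.8, "1990Q2": 0.6}   # growth as first reported
feb_2006_vintage = {"1990Q1": 3.1, "1990Q2": -0.4}  # growth in the later database

revisions = {q: feb_2006_vintage[q] - initial_release[q] for q in initial_release}
print(revisions)
```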

6 An annualized growth rate is the growth rate from one quarter to the next, expressed at an annual rate so that comparisons with annual data can be easily made. For example, if GDP grew 0.6 percent from one quarter to the next, the annualized growth rate would be 2.4 percent, four times as large, because if GDP kept growing at the same pace for the entire year, it would grow 2.4 percent for the year.
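The footnote's arithmetic can be checked directly; exact compounding, which is commonly used for official annualized rates, gives nearly the same answer at small growth rates:

```python
# Quarterly-to-annual conversion of a 0.6 percent quarterly growth rate.

quarterly_pct = 0.6

# Simple approximation used in the footnote: multiply by four.
simple = 4 * quarterly_pct

# Compounded annualization: apply the quarterly rate four times.
compound = ((1 + quarterly_pct / 100) ** 4 - 1) * 100

print(round(simple, 2), round(compound, 2))
```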


EVALUATING FORECASTS OF
CONSUMER SPENDING
Our model for forecasting consumer spending, as described above,
uses data on past consumer spending, past income, past changes in the
interest rate, and past changes in the
real value of the stock market. At
each date, beginning with the first
quarter of 1982, we will imagine we are
forecasters using the data available to
us at the time. We will estimate our
baseline model and generate a forecast
for consumption spending in the quarter. Then, we will include a measure of
consumer confidence in the model and
generate another forecast.
After following this procedure for
the first quarter of 1982, we imagine
stepping forward one quarter to the
second quarter of 1982, with one additional quarter of data on which to base

our forecasts. We will make forecasts
for that quarter and then keep repeating this process through the fourth
quarter of 2005. After following this
procedure, we can show the forecasts
for consumer spending each quarter,
based on the baseline forecast with no
consumer-confidence measure and the
CB–overall forecast (Figure 4).7 As we
can see in the graph, the forecasts are
similar, but they also differ systematically at times; that is, the forecasts
using the CB–overall index are higher
than the baseline forecasts for many
consecutive periods, such as most of
the quarters from 1987 to 1990, and
are lower than the baseline forecasts
for most of the quarters from 1990 to
1991.
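The rolling procedure can be sketched as follows; `get_vintage`, `estimate`, and `forecast` stand in for routines that are not specified in the article:

```python
# Sketch of the real-time forecasting exercise: at each quarter from
# 1982Q1 through 2005Q4, fit the model on the data vintage a forecaster
# would have seen at that date, then forecast that quarter's consumption
# growth. Each step forward adds one more quarter of data.

def real_time_forecasts(quarters, get_vintage, estimate, forecast):
    results = {}
    for q in quarters:
        vintage = get_vintage(q)       # only data known as of quarter q
        model = estimate(vintage)      # re-estimate with the expanded sample
        results[q] = forecast(model, vintage)
    return results

# Stub example: a placeholder "model" that always predicts 2.5 percent growth.
quarters = ["1982Q1", "1982Q2", "1982Q3"]
preds = real_time_forecasts(quarters, lambda q: q, lambda v: None, lambda m, v: 2.5)
print(preds)
```

Running the loop once for the baseline model and once for each model with a confidence measure yields the pairs of forecasts compared in Figure 4.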
How do we evaluate which forecast is better? To evaluate the forecasts for consumer spending, we will
subtract the forecast made using the
baseline model from the actual value
of consumer spending in each quarter
to calculate the baseline model’s forecast error. Next, we will do the same
for the forecast made using the model
that includes a measure of consumer
confidence. Then, we will compare
the forecast errors to see which model
produces smaller errors.
However, this raises a problem:
What is the actual value of consumer
spending? If we use today’s government
database (in particular, the database as
of February 15, 2006), we will probably find very large forecast errors in
the earlier part of the sample period
because of various changes to the
definitions of the variables, changes in
the base years for real variables, and
so forth. This occurs because about
every five years, the BEA modifies
the methods it uses to construct the

7 Forecasts for the other five measures of consumer confidence were also generated but are not shown here.

FIGURE 4
Comparing Forecasts Over Time, 1982Q1 to 2005Q4
[Line chart of forecast consumption growth rates (percent) from the CB-Overall model and the no-confidence-index baseline, by year from 1982 to 2005.]
Source: Author's calculations

FIGURE 5
Alternative Actuals, 1982Q1 to 2005Q4
[Line chart of consumption growth rates from the Feb. 15, 2006 database and the pre-benchmark data, by year from 1982 to 2005.]
Source: Author's calculations from the real-time data set for macroeconomists

data in a process known as benchmark
revision. It is hard to imagine that a
forecaster working in early 1982 and
making a forecast for consumer spending for the first quarter of 1982 could
have anticipated the methods used by
the government to calculate data on
consumer spending as of early 2006.
For that reason, we will not use
the consumer spending data from the
February 15, 2006, database as our
measure of the actual value of the
data. Instead, we will do the following:
For each date for which a forecast is
made, we will use as the actual value
of the data the last value of the data
before a benchmark revision. Benchmark revisions to the U.S. National
Income and Product Accounts occurred in December 1980, December
1985, November 1991, January 1996,
October 1999, and December 2003.
Using the data just before a benchmark revision gives a better view of
how accurate the forecasts are. How
much does this choice matter? Figure
5 shows the data from the February
15, 2006, database compared with
the data just before each benchmark
release. Though the pattern of the
growth rates of consumption spending is roughly the same, from 1982 to
1990 the pre-benchmark growth rate is
almost always lower than the February 15, 2006, data. We would think
that the forecasting model was making
systematic forecast errors if we based
our analysis on the most recent data
instead of the pre-benchmark data.
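A small helper can illustrate the mapping from a forecast date to the benchmark revision whose immediately preceding vintage supplies the "actual" value; the date format and the helper itself are illustrative assumptions, with the benchmark dates taken from the text:

```python
import bisect

# For each forecast date, the "actual" is the last value of the data
# published before the next benchmark revision, not today's fully
# revised data. Benchmark revision dates are from the article.

BENCHMARKS = ["1980-12", "1985-12", "1991-11", "1996-01", "1999-10", "2003-12"]

def next_benchmark(forecast_date):
    """Return the first benchmark revision after forecast_date (YYYY-MM);
    the data vintage just before that benchmark supplies the actual value.
    Returns None if no later benchmark exists."""
    i = bisect.bisect_right(BENCHMARKS, forecast_date)
    return BENCHMARKS[i] if i < len(BENCHMARKS) else None

print(next_benchmark("1982-03"))
print(next_benchmark("1997-06"))
```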
Figure 6 compares the forecast
errors of the model that includes the
CB–overall index with those of the
baseline forecast. Since the graph
shows there are times when each forecast error is higher or lower than the
other, it is not obvious which forecast
is worse. We need some way to compare the forecast errors over the entire
period from 1982 to 2005.

FIGURE 6
Comparing Forecast Errors Over Time, 1982Q1 to 2005Q4
[Line chart of forecast errors (percentage points) for the CB-Overall model and the no-confidence-index baseline, by year from 1982 to 2005.]
Source: Author's calculations

Economic theory provides a way to compare the forecast errors. We begin with the assumption that bigger forecast errors are substantially worse than smaller ones. A commonly accepted method of comparing forecast errors is to calculate the root-mean-squared forecast error (RMSFE). The RMSFE is found by squaring each forecast error (thus penalizing large errors more than small errors), averaging the squared errors, and then taking the square root. The RMSFE is similar in concept to the standard deviation, which is commonly used in statistical analysis. The higher the RMSFE is, the worse the forecasts are. In addition, economists have developed tests for the statistical significance of differences in RMSFEs. For example, it could be that one forecasting model has a lower RMSFE than another, but the difference between the two is so small that the result could have occurred by chance and, thus, does not mean that the one forecasting model is significantly better than the other. In each case, we will ask: Is the difference between the RMSFEs statistically significant?

We compare the RMSFEs of the different forecasts in Table 1. As you can see, all of the forecasts using a measure of consumer confidence have higher RMSFEs than the baseline except for the forecast using CB–future. For ease of comparison, the table shows the relative RMSFE for each model, which is its RMSFE divided by the RMSFE of the baseline model with no consumer confidence measure. Thus, the baseline model has a relative RMSFE of 1, a model with a higher RMSFE than the baseline model has a relative RMSFE greater than 1, and a model with a lower RMSFE than the baseline model has a relative RMSFE less than 1. If a measure of consumer confidence were helpful in forecasting, its relative RMSFE would be less than 1. Table 1 also indicates whether the difference between the RMSFEs is statistically significant. None of the models has an RMSFE that is statistically significantly different from the baseline model.
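A minimal sketch of the RMSFE and relative-RMSFE calculation (the forecast errors below are made up for illustration):

```python
import math

# Root-mean-squared forecast error: square each error (penalizing large
# errors more than small ones), average the squares, take the square root.

def rmsfe(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

baseline_errors = [1.0, -2.0, 0.5, -1.5]   # hypothetical baseline-model errors
model_errors    = [1.2, -2.1, 0.4, -1.8]   # hypothetical, with confidence index

# Relative RMSFE: the candidate model's RMSFE divided by the baseline's.
relative = rmsfe(model_errors) / rmsfe(baseline_errors)
print(round(rmsfe(baseline_errors), 3), round(relative, 3))
```

A relative RMSFE above 1 means the confidence measure made the forecasts worse; whether such a gap is statistically significant requires a separate test.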

ALTERNATIVE FORECASTING
MODELS
The results in Table 1 are discouraging. They suggest that none of
the measures of consumer confidence
help to significantly improve the forecasts, only one measure improves the
forecasts at all, and the rest make the
forecasts worse (though not significantly worse). However, our baseline
model was based on models that other
researchers in the literature had used.
Those models were not necessarily
designed to produce the best forecasts with real-time data. It might be
possible to find a better forecasting
model and then see if the measures of
consumer confidence help improve the
forecasts using that better model.
One principle of forecasting is
KISS (for example, see the references in Frank Diebold’s textbook on
forecasting), which stands for Keep It
Sophisticatedly Simple. In forecasting,
this means that forecasters should use
sophisticated models that capture the
elements of the data that are essential to the process. But in comparing
different sophisticated models, choose
the simplest model that gets the job
done. If a model is very complicated, it
may suffer from data mining: Variables
are included in the forecasting model
because they help to explain a particular episode in the past, but they are
of no value for forecasting the future
and may, in fact, make such forecasts
worse. Thus, we will try to simplify


TABLE 1
Root-Mean-Squared Forecast Errors (RMSFE)
Original Model, 1982Q1 to 2005Q4

Forecasting Model        RMSFE   Relative RMSFE   Significant Difference?
No confidence measure    2.16    1.000            ---
M-overall                2.28    1.055            no
CB-overall               2.17    1.004            no
M-future                 2.28    1.055            no
CB-future                2.13    0.988            no
M-current                2.40    1.114            no
CB-current               2.26    1.048            no
TABLE 2
Root-Mean-Squared Forecast Errors (RMSFE)
Alternative Model, 1982Q1 to 2005Q4*

Forecasting Model        RMSFE   Relative RMSFE   Significant Difference?
No confidence measure    2.11    1.000            ---
M-overall                2.18    1.033            no
CB-overall               2.19    1.035            no
M-future                 2.22    1.051            no
CB-future                2.18    1.033            no
M-current                2.23    1.053            no
CB-current               2.25    1.065            yes

*Model uses changes in confidence indexes and fewer variables.

the baseline model to see if we can
make our forecasts better.
One way to simplify the model is
to eliminate some variables from the
forecasting model. The only way to
figure out the right variables to eliminate is by trial and error, and doing so
results in slightly lower forecast errors.

Essentially, all the information from
the data on past income is already
reflected in past consumption data,
and the change in interest rates is
simply not a very large factor affecting
consumption. Therefore, we eliminate
those two variables, and our forecasting model performs somewhat better.
A second change that might help
is to consider how the measures of
consumer confidence should enter into
our forecasting model. Following the
previous researchers, we had initially
used the level of consumer confidence
in the forecasting model. But some
people have suggested that what might
be more helpful for forecasting is to
note when there is a large change in
consumer confidence, regardless of its
level. A large increase in consumer
confidence means people are likely to
spend more, while a large decrease in
consumer confidence means people
are likely to spend less. We use only
the change in a measure of consumer
confidence in our model, not its level.
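In other words, the regressor becomes the first difference of the index rather than its level; a sketch with hypothetical index values:

```python
# Enter the change in the confidence index, not its level: a large jump
# up or down in confidence is what the alternative model responds to.
# Index values below are made up.

cb_overall = [100.0, 104.0, 98.0, 101.0, 103.0]

# First differences: the regressor used in the simplified model.
cb_change = [b - a for a, b in zip(cb_overall, cb_overall[1:])]
print(cb_change)
```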
We have simplified our forecasting model somewhat. The result, as
shown in Table 2, is that our forecasts are slightly better (that is, the
models generally have lower RMSFEs
than those in Table 1), except for
CB–overall and CB–future. But the
simplification of the model made the
baseline model with no consumer confidence index slightly better. The result
is that all of the measures of consumer
confidence make the forecast worse,
and one measure (CB–current) makes
the forecasts significantly worse.
The conjecture in the introduction suggested that by using real-time
data, the measures of consumer
confidence were more likely to be of
help in forecasting than if we had used
the revised data, for example, if we had
pulled all the data out of the February 15, 2006, database. In fact, the
use of real-time data did not make an
appreciable difference in the forecasts
that used a consumer confidence index
compared with the baseline model that
did not. It appears that the use of real-time data did not rescue the consumer-confidence measures.

SUMMARY
The conjecture that began this article seemed sensible: The use of real-time data might have a better chance of showing that measures of consumer confidence could prove useful in forecasting. After all, the measures of consumer confidence could reflect what people know that has not yet been captured by government statistical agencies. However, in trying to predict consumer spending, evidently the measures of consumer confidence reflect other events affecting the economy and do not sufficiently tell us what people know that government statistical agencies do not know.

The bottom line: If you are forecasting consumer spending for the next quarter, you should use data on past consumer spending and stock prices and ignore data on consumer confidence. BR

REFERENCES

Bram, Jason, and Sydney Ludvigson. "Does Consumer Confidence Forecast Household Expenditure? A Sentiment Index Horse Race," Federal Reserve Bank of New York Economic Policy Review (June 1998), pp. 59–78.

Carroll, Christopher D., Jeffrey C. Fuhrer, and David W. Wilcox. "Does Consumer Sentiment Forecast Household Spending? If So, Why?" American Economic Review 84 (December 1994), pp. 1397–1408.

Croushore, Dean. "Do Consumer Confidence Indexes Help Forecast Consumer Spending in Real Time?" North American Journal of Economics and Finance 16 (December 2005), pp. 435–50.

Croushore, Dean, and Tom Stark. "A Funny Thing Happened on the Way to the Data Bank: A Real-Time Data Set for Macroeconomists," Federal Reserve Bank of Philadelphia Business Review (Sept./Oct. 2000), pp. 15–27.

Croushore, Dean, and Tom Stark. "A Real-Time Data Set for Macroeconomists," Journal of Econometrics 105 (November 2001), pp. 111–30.

Diebold, Francis X. Elements of Forecasting, 3rd edition. Cincinnati: South-Western, 2003.

Ludvigson, Sydney. "Consumer Confidence and Consumer Spending," Journal of Economic Perspectives 18 (Spring 2004), pp. 29–50.

Stark, Tom, and Dean Croushore. "Forecasting with a Real-Time Data Set for Macroeconomists," Journal of Macroeconomics 24 (December 2002), pp. 507–31.

Business Review Q3 2006 9

A Review of Inflation Targeting in Developed Countries
by Michael Dotsey

In the United States, inflation targeting
has many advocates, but many others are
skeptical about adopting such a policy. Given
this debate and inflation targeting’s growing
adoption around the world, now is a good time to review
the economic performance of some inflation-targeting
countries. In this article, Mike Dotsey examines five
countries that have been targeting inflation for at least
10 years and whose inflation rates, though fairly well
contained before inflation targeting, were nonetheless
considered too high by policymakers. For purposes of
comparison, he also looks at the economic performance of
six noninflation-targeting countries.

Many countries have debated the merits of inflation targeting, and some have adopted inflation targeting as a national policy. In an inflation-targeting framework, a central bank announces quantitative targets for inflation and specifies that controlling inflation is a long-run goal of monetary policy. Another common feature is a specific policy for bringing inflation back to target in circumstances where the target has been missed. Also, inflation-targeting central banks have often adopted a more transparent policy that entails fairly detailed communications with the public.[1]

Mike Dotsey is a vice president and senior economic policy advisor in the Research Department of the Philadelphia Fed. This article is available free of charge at www.philadelphiafed.org/econ/br/index.html.
New Zealand first instituted this
monetary policy framework in early
1990. Since that time, 22 countries
have formally adopted inflation targeting, and no country that has adopted
it has abandoned it. Although inflation targeting’s contribution to overall
economic performance is still being
debated, the general view is that it has
had beneficial effects. Inflation and its

volatility have generally declined in
inflation-targeting countries, and output growth has increased. At the same
time, it appears that the volatility of
output has decreased.
Inflation targeting has many advocates in the United States. Those
in favor base their argument on the
economic benefits that ensue from low
and stable inflation, the possibility that
inflation targeting would enhance the
FOMC’s credibility, and the increased
flexibility that could come from increased credibility.[2]
Others, however, are not quite as
enthusiastic, and their reservations
largely involve concerns over a lack of
flexibility that might result from inflation targeting, especially in situations
where maintaining a tight rein on
inflation could prove damaging to the
economy. Critics also point out that
U.S. monetary policy has performed
quite well over the last 20 years without any formal reliance on an inflation
target and that it may be a bit too early
to fully evaluate the relative performance of inflation-targeting countries.
Inflation targeting’s track record is
rather short, and we may not yet have
seen situations where adhering to an
inflation target would be detrimental
to economic performance.
Given the status of the debate over inflation targeting, the recent interest in this topic in the United States, and its growing adoption by countries around the world, now seems an opportune time to review the policies and economic performance of a number of inflation-targeting countries. To carry out this evaluation, I will examine a set of countries — New Zealand, Canada, Australia, the United Kingdom, and Sweden — that have been targeting inflation for at least 10 years and whose inflation rates were fairly well contained before they adopted inflation targeting. This choice helps avoid issues that arise if we choose countries that went from high inflation to low inflation after targeting, and the experience of the countries chosen is more relevant from the standpoint of the United States. For purposes of comparison, I will examine the economic performance of those five countries along with the performance of six noninflation-targeting countries: the United States, Germany, Japan, France, Italy, and the Netherlands. I will also summarize some recent empirical studies on inflation targeting.

[1] See the book by Bernanke and co-authors for a formal description of inflation targeting.

[2] For a representative viewpoint, see speeches by Anthony Santomero in Federal Reserve Bank of Philadelphia Business Review, Third Quarter 2003 and Fourth Quarter 2004.
The view taken here is that inflation targeting has been modestly beneficial. Inflation has indeed declined in the five countries examined. Inflation volatility has also declined and, perhaps just as important, so has the inertia in the inflation process itself. Furthermore, expectations of inflation seem to be more stable in the inflation-targeting countries, lending credence to the assertion that inflation targeting enhances a central bank's credibility. Also, there seem to be no negative consequences for economic activity. Output growth has tended to be stronger and less volatile over the time that these countries have targeted inflation. Moreover, central bankers in these five countries have publicly expressed enthusiasm for the framework.
KEY FEATURES OF INFLATION TARGETING
Inflation targeting establishes a numerical objective for inflation, and the actual target is stated as either a specific point target or a range. Although the framework explicitly acknowledges that maintaining a low inflation rate is a primary policy objective, inflation need not be the sole objective of monetary policy. When inflation is not the central bank's only concern, the other concerns are often stated in the central bank's explicit mandate, or the central bank may communicate its concerns more informally to the public. Importantly, in an inflation-targeting framework, monetary policy is delegated to an independent central bank.

By establishing numerical objectives that are to be met over specified periods, an inflation-targeting framework embeds accountability. The issue of accountability has led to very open communication between inflation-targeting central banks and the public. This increased openness is called transparency. The combination of transparency and specific numerical objectives makes monitoring inflation targeting easier for the public. This openness also helps central banks establish credibility because it is easier to judge whether they are meeting their commitments.

Adherence to the general framework of inflation targeting requires the monetary authority to pay attention to a number of particular elements. One key element involves picking a particular measure of inflation. There are numerous measures of inflation, ranging from the headline measure of the consumer price index (CPI) to various less volatile measures of inflation, which typically exclude the food and energy components of the headline measure. These less volatile measures are typically referred to as the core inflation measure of a price index. Also, should the target be a point or a range, and over what period should inflation be measured? Should monthly inflation be targeted, or should it be some long-run average of inflation? In addition, if the target is a point target, the central bank must decide how much of a deviation from its target it is willing to tolerate. For example, should the implicit or explicit range be just a few percentage points or wider?

Other equally important considerations involve whether the central bank should have multiple goals, such as a target for output growth or the unemployment rate, and how a central bank can be held accountable for its actions. Communication and transparency become quite important in an inflation-targeting regime. But how transparent should the central bank be?

The various approaches of the five inflation-targeting countries examined in this article are summarized in Table 1. As we can see, approaches to inflation targeting vary.[3] However, although there are a number of differences, there are some key commonalities. Most have a point target but are content to let inflation vary within plus or minus 1 percent of the target. Also, most central banks currently target an annual average of the headline CPI, but they also report the behavior
[3] Excellent summaries of inflation targeting can be found in the book by Edwin Truman and the book by Bernanke and co-authors.
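The point-target-plus-tolerance arithmetic that most of these frameworks share amounts to a simple band check. The sketch below is illustrative, not from the article: the function name is made up, and the sample parameters follow the Canada-style entry in Table 1 (a 2 percent CPI target with a ±1 percent tolerance).

```python
# A point target with a tolerance band, as most of the central banks in
# Table 1 use: inflation is acceptable if it stays within target +/- tolerance.
def in_target_band(inflation: float, target: float, tolerance: float) -> bool:
    """True if measured inflation lies within the target band (illustrative)."""
    return abs(inflation - target) <= tolerance

# Canada-style parameters: a 2% CPI target with a +/-1% tolerance,
# so any reading between 1% and 3% lies within the band.
print(in_target_band(2.7, target=2.0, tolerance=1.0))  # within the band
print(in_target_band(3.4, target=2.0, tolerance=1.0))  # outside the band
```

A range target, such as New Zealand's 1-3 percent over the medium term, is the same check with the target set to the range midpoint.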

TABLE 1
Inflation-Targeting Framework of Five Countries

Country | Date | Type of mandate | Setting | Transition period to reach final target | Time frame to correct deviations | Communication | Independence
New Zealand | Dec. 1989 | Price stability | Range of 1-3% CPI over medium term | Yes | Not explicit | Quarterly monetary policy statement | No: Target set by agreement between government and bank
Canada | Feb. 1991 | Multiple | 2 percentage points of CPI with ±1% tolerance | Yes | 6-8 quarters | Quarterly monetary policy report | No: Target set by government and bank
U.K. | Oct. 1992 | Hierarchy with price stability first | 2 percentage points of CPI with ±1% tolerance | Yes | Not specific, but required to set horizon each instance | Quarterly inflation report | No: Target set by government
Sweden | Jan. 1993 | Price stability | 2 percentage points of annual CPI with ±1% tolerance, 1-2 years ahead | No | Yes, 1-2 years | Quarterly inflation report | Yes
Australia | June 1993 | Multiple | Range of 2-3% CPI over medium term | No | No time frame | Quarterly statement on monetary policy | Yes

Source: Pooled from various materials (see references).

of core measures. Many, but not all, report a time path for bringing inflation back to target if the target is missed. Finally, all of the inflation-targeting central banks are quite transparent and issue frequent and detailed communications concerning policy. The only major difference is with respect to independence. Although all independently set interest rates, only two out of the five have sole responsibility for setting the ultimate goals of policy.

TABLE 2
Inflation and Output Growth in Inflation-Targeting Countries, Before and After*

Pre-inflation targeting (10 years prior to adopting target; for dates see Table 1) and post-inflation targeting (adoption to 2004):

Country | Pre: inflation | Pre: growth | Pre: s.d. inflation | Pre: s.d. growth | Post: inflation | Post: growth | Post: s.d. inflation | Post: s.d. growth
NZ | 11.4 | 1.8 | 2.9 | 2.7 | 2.1 | 3.0 | 1.8 | 2.4
Canada | 5.7 | 2.8 | 2.9 | 2.9 | 2.0 | 2.7 | 1.3 | 2.1
U.K. | 5.5 | 2.5 | 3.0 | 1.9 | 2.5 | 2.9 | 0.8 | 0.7
Australia | 6.0 | 3.2 | 2.9 | 2.7 | 2.6 | 3.8 | 1.6 | 1.1
Sweden | 6.7 | 1.9 | 2.9 | 2.3 | 1.5 | 2.5 | 1.1 | 1.6
Avg. IT | 7.1 | 2.4 | 2.9 | 2.5 | 2.1 | 3.0 | 1.3 | 1.6

Inflation and Output Growth in Noninflation-Targeting Countries, Comparison (pre: 1982-1992; post: 1992-2004):

Country | Pre: inflation | Pre: growth | Pre: s.d. inflation | Pre: s.d. growth | Post: inflation | Post: growth | Post: s.d. inflation | Post: s.d. growth
U.S. | 4.0 | 3.0 | 1.3 | 2.6 | 2.5 | 3.3 | 0.6 | 1.2
Japan | 1.9 | 3.7 | 1.1 | 1.8 | 0.1 | 1.1 | 1.0 | 1.6
Germany | 2.6 | 2.7 | 1.7 | 5.3 | 1.8 | 1.1 | 1.4 | 1.4
France | 5.1 | 2.2 | 3.3 | 1.1 | 1.6 | 1.9 | 0.6 | 1.3
Neth. | 2.6 | 2.5 | 2.3 | 2.2 | 2.4 | 2.4 | 0.8 | 1.6
Italy | 8.3 | 2.2 | 4.6 | 1.3 | 3.0 | 1.5 | 1.3 | 1.4
Avg. NIT | 4.1 | 2.7 | 2.4 | 2.4 | 1.9 | 1.9 | 1.0 | 1.4

* Inflation rates are annualized changes in the headline CPI and growth rates are annualized rates of growth in GDP.

EXPERIENCE UNDER
INFLATION TARGETING
Now let’s compare the experience
of the five inflation-targeting countries
with that of the six noninflation-targeting countries. These six countries
serve as a reference, preventing me
from attributing various economic
outcomes to inflation targeting when,
in fact, these outcomes may be a result
of global economic conditions. For
example, output volatility declined in
all 11 of the countries from 1992 to
2004, the years in the latter half of my
sample. Attributing the entire decline
in the inflation-targeting countries to
inflation targeting would be erroneous.
Inflation targeting should be viewed as
helping to lower output volatility only
if inflation-targeting countries experience a greater decline than noninflation-targeting countries.
First, let’s look at data on inflation
and output growth for both the five
inflation-targeting countries and the
six noninflation-targeting countries.
For the inflation-targeting sample,
we’ll use data for the 10 years before
the adoption of inflation targeting and
from adoption to the end of 2004. For
the noninflation-targeting countries,
the first sample of data covers 1982 to
1992, and the second sample covers
1992 to 2004. This methodology allows
a visual comparison of the data before
adoption of inflation targeting and
after. The data are shown in figures 1
and 2 and summarized in Table 2.
The first thing to notice is that
with the exception of Italy, the inflation-targeting countries had higher
inflation rates in the first part of the
sample, while output growth was fairly
comparable across the two groups.
Therefore, it is evident that the U.S.,
Japan, Germany, and the Netherlands
had less incentive to adopt inflation
targeting, since their inflation rates
were already fairly low. In the second
half of the sample, both sets of countries have similarly low rates of inflation, and high inflation is not deemed to be a problem for any of the countries.[4] Thus, it appears that inflation
targeting is associated with a lowering
of inflation for all five countries that
adopted it but that central banks can
also achieve low inflation without explicitly targeting inflation.
[4] Japan, on the other hand, suffered from deflation and a very sluggish economy in the second half of the sample. It is possible that Japan would have benefited from inflation targeting because it would have forced the country to have a more expansionary monetary policy.

However, to gauge the effectiveness of inflation targeting, we want to examine the comparative differences in behavior of the two groups of countries over the two samples. Some of the noninflation-targeting countries may have specific circumstances that allow them to more easily keep inflation low. For example, the noninflation-targeting countries tend to be larger countries and may be more immune to the effects of changes in international prices. We do not want to conclude that inflation targeting is ineffective when two countries have similarly low inflation, one an inflation-targeting country and the other one not, since that outcome may occur because inflation targeting was helpful in the country that adopted it but was less needed in the country that didn't. To avoid this confusion, I concentrate on differences in inflation and output growth across the two groups of countries and across the two sample periods. Doing so cancels out factors specific to a particular country that may affect the level of inflation because those country-specific factors are assumed to be the same across the two sample periods. By looking at differences across the two sample periods, we can remove that level effect.

In that regard, the graphs show that both sets of countries saw a reduction in inflation, but, on average, the reduction was greater for inflation-targeting countries. In those countries, average annual inflation rates declined 5 percentage points as opposed to 2.2 percentage points for the noninflation-targeting countries. Also, output growth increased by an average annual rate of 0.6 percentage point in the inflation-targeting countries but actually declined 0.6 percentage point in the noninflation-targeting countries.
Next, let’s look at the relative variability of inflation and output over the
two sample periods (Figure 2). Excluding Italy, the variability of inflation, as
measured by the standard deviation of
annualized growth rates in the headline CPI, is greater for the inflation-targeting countries before the adoption
of inflation targeting. After adoption,
the average volatility of inflation fell
a dramatic 1.6 percentage points for
the inflation-targeting countries, but
it also fell 1.4 percentage points for the
noninflation-targeting countries. As in
the case of the decline in the inflation
rate, much of the decline in volatility
in the noninflation-targeting countries
occurred because France and Italy
became part of the European Currency
Union and one of the requirements for
joining the union was a low and stable
inflation rate. Thus, there was institutional pressure for France and Italy

to reduce their inflation rates in the second half of the sample.[5]

FIGURE 1
Inflation Rates, Output Growth, and Inflation Targeting
Four panels of quarterly time series, roughly 1981 to 2004: inflation (percent change in CPI) and output (percent change in GDP), plotted separately for the inflation-targeting countries (Australia, Canada, New Zealand, Sweden, UK) and the noninflation-targeting countries (US, Germany, France, Japan, Italy, the Netherlands).

FIGURE 2
Inflation and Output
Two scatter plots: average percent change in GDP against average percent change in CPI, and the standard deviation of GDP growth against the standard deviation of inflation, each comparing the IT countries (10 years pre-IT and post-IT) with the non-IT countries (pre- and post-1992).

Output volatility also declined by nearly the same amount for both groups of countries, and both groups show a more favorable tradeoff between output and inflation volatility over the later sample (Figure 2). Just looking at the data in this way is informative, but it has limitations. It doesn't control for many features of the economic environment that affect inflation and output and that may be unrelated to inflation targeting. The sample size is still very small, making any definitive conclusion statistically difficult.[6]

For example, countries may have experienced more favorable economic shocks after they adopted inflation targeting, and merely examining economic outcomes before and after adoption could overstate the benefits of inflation targeting.[7] Using a control group of countries helps avoid attributing all of the improvement in inflation and output performance to the inflation-targeting framework, but it also brings its own set of interpretation problems.

[5] One of the convergence criteria of the Maastricht Treaty was for countries to have a rate of inflation that was less than a maximum of 1.5 percent above the average rate in the three EU countries with the lowest inflation.

[6] For example, more sophisticated statistical work analyzing whether inflation targeting reduces the variability of inflation is inconclusive.

[7] An economic shock is a factor that causes unexpected changes in economic variables. Shocks can be unfavorable, such as the devastating economic effects of hurricanes, or favorable, such as an innovation in technology that increases productivity.

In particular, there are some important differences between the inflation-targeting countries and the control group. The inflation-targeting countries are small, open economies, whereas the control group contains

some economically large countries.[8] Thus, the inflation-targeting countries have a greater exposure to external economic disturbances. Also, the control group may have adopted better monetary policy over the most recent period, perhaps implicitly targeting inflation, and by doing so would diminish the benefits attributable to inflation targeting. It could also be the case that the countries that adopted inflation targeting just had relatively bad luck before adoption and now have experienced a more normal set of economic shocks. If that were the case, we might incorrectly attribute the benefits of a change in luck to inflation targeting. It also doesn't allow us to examine another important aspect of inflation targeting, namely, its effect on inflation expectations.

[8] A country is said to have an open economy if it engages in significant trade with other countries.

A MORE DETAILED EXAMINATION
To more sharply assess the potential benefits of inflation targeting, let's review the economic literature regarding the empirical effects associated with inflation targeting.[9] In general, it appears that adopting inflation targeting has reduced the inflation rate and the persistence of inflation, has stabilized long-run expectations of inflation, and has not had any deleterious effects on output. However, inflation targeting has probably not had any significant effect on inflation volatility.

The Inflation Rate. One of the primary reasons for moving to inflation targeting is to reduce inflation. From Figure 1 and Table 2, it is clear that inflation did decline after the adoption of inflation targeting. But it also declined for countries that did not adopt inflation targeting. Thus, distinguishing between the experience of countries that target inflation and those that do not requires more sophisticated statistical techniques. The basic message from such exercises is mixed. A number of studies indicate that inflation targeting was successful in reducing inflation, but that conclusion is sensitive to how the study controlled for the fact that many inflation-targeting countries had relatively high inflation before they introduced inflation targeting. It also depends on the countries in the particular study.

[9] The results are largely taken from these five papers: David Johnson; Andrew Levin, Fabio Natalucci, and Jeremy Piger; Laurence Ball and Niamh Sheridan; Refet Gurkaynak, Andrew Levin, and Eric Swanson; and Marco Vega and Diego Winkelried; as well as evidence presented in the book by Edwin Truman. These studies were chosen because they are some of the most recent and therefore have the longest data sets. They also do a relatively good job of controlling for the experience of noninflation-targeting countries. All of the studies include the five inflation-targeting countries highlighted above, but many include a wider control group and, occasionally, other, more recently industrialized countries, such as Spain, Finland, and Norway, that have adopted inflation targeting more recently.

[10] This is an important but controversial result. Many of the noninflation-targeting countries, such as the United States, placed greater emphasis on controlling inflation during the 1990s, and a number of European countries aligned their monetary policy with Germany's. Although Germany does not meet the strict definition of inflation targeting, it is widely recognized that inflation has always been a key concern of the Bundesbank. Accurately gauging the effects of inflation targeting requires significant independent variability across inflation-targeting countries. It is likely that we have insufficient data for making a sharp distinction between a simple regression toward the mean and the independent effects that inflation targeting has in lowering inflation in this set of countries. For a detailed exposition of this point, see Mark Gertler's discussion of the Ball and Sheridan paper.

An interesting but controversial study by Laurence Ball and Niamh Sheridan attributes all the statistically significant lowering of inflation to the fact that inflation-targeting countries initially had higher inflation, and therefore, all one sees is a regression to the mean.[10] However, other studies

that included nonindustrialized countries have found significant benefits of
inflation targeting in terms of lowering
inflation in both industrialized and
nonindustrialized countries.[11] In his
book, Edwin Truman notes that in this
wider set of countries, there is little
correlation between past inflation and
the adoption of inflation targeting.
The problem from the standpoint of
the United States is that the experience of many of these countries may
not be very relevant for understanding
how inflation targeting would affect
the United States. Fortunately, other
types of data allow for a better understanding of the effects of inflation
targeting.
Expected Inflation. One measure
that appears to behave differently between industrialized inflation-targeting
countries and industrialized noninflation-targeting countries is expected
inflation. A primary motivation for
adopting inflation targeting is to both
reduce and stabilize expectations of
inflation. If inflation targeting can accomplish this, then, in theory, it can
reduce the tradeoff between lowering
inflation and the loss of output.
A large set of economic models
imply that when individuals expect a
higher inflation rate than the rate the
central bank is targeting, employment
falls because higher expectations of
inflation lead to higher wage demands.
When these expectations are unrealistically high, the higher wages cause
firms to hire fewer workers and employment falls. Firms may also set prices too high, thereby reducing the demand for their products. Both factors,
which originate from erroneous views

The work of Marco Vega and Diego Winkelried uses econometric techniques from the
treatment literature as well as a wide sample of
noninflation-targeting countries, while Truman
tries to uncover the effects of inflation targeting
by including a wealth of control variables in his
analysis.
11

Business Review Q3 2006 17

of targeted inflation, reduce output.
Examining the same set of countries as
in this article, Johnson finds that inflation targeting has lowered inflation
expectations. Thus, adopting inflation
targeting helped those central banks
coordinate the public’s expectations of
inflation with the targeted rate.12
Interestingly, he also finds that up
until the fifth year of pursuing inflation targets, the effect of inflation targeting on reducing expected inflation
gets progressively larger with each year
under the inflation-targeting regime.
After adhering to inflation targeting
for five years, the effect gradually dissipates. This result makes sense because
the credibility of a country’s desire to
lower inflation would be expected to
increase over time. The longer a central bank sticks to inflation targeting,
the more confident the public becomes
that the change in policy is permanent.
Further, the lowering of expected
inflation induced by adopting inflation
targets varies from country to country:
New Zealand enjoys the largest decline
and the U.K. the least. It is interesting that New Zealand has the strictest
inflation contract with the Governor
of the Reserve Bank, who is solely responsible for outcomes and subject to
dismissal, while at the onset of inflation targeting, the Bank of England did
not have operational independence.
In complementary work that also
sheds light on the behavior of inflation
expectations, Andrew Levin and his
co-authors examine the behavior of
survey expectations of inflation in our
group of inflation-targeting countries
and those in a control group consisting
of the U.S. and Japan and a European

Johnson used survey measures of inflation.
For all but the United States, they were taken
from Consensus Forecasts. For the U.S., he used
expectations from the Survey of Professional
Forecasters, which is conducted by the Federal
Reserve Bank of Philadelphia.
12

18 Q3 2006 Business Review

average composed of Germany, France,
Italy, and the Netherlands. They find
that expectations of long-run inflation
(five years and 10 years) are not influenced by current inflation (measured
by an average of inflation over the
last three years) in inflation-targeting
countries, whereas long-run inflation
expectations respond to changes in
actual inflation in the control group.
Thus, inflation expectations seem better anchored under inflation targeting.

increase inflation in the near term may
not be offset in the future. In that case,
current economic surprises could affect
expectations of long-run inflation.
Unfortunately, there are only
three countries for which this type of
experiment can be carried out: the
U.S., the U.K., and Sweden. Based on
data from 1999-2005, when all three
countries’ inflation rates had fairly well
stabilized, only in the U.S. are expectations of long-run inflation sensitive to

Some economists prefer measures of inflation
expectations derived from financial market
data to those derived from surveys.
Some economists prefer measures
of inflation expectations derived from
financial market data to those derived
from surveys. Investors in bonds face
serious losses if they misjudge future
inflation. Refet Gurkaynak and colleagues at the Federal Reserve Board
carried out research using an alternative measure of inflation expectations.
They looked at the difference in yields
between long-term government bonds
and long-term government bonds indexed for inflation. The difference in
yields gives a market expectation of
inflation.
They tested to see how sensitive long-run expected inflation is to
unexpected economic developments,
which are measured as the difference
between actual reports of various economic statistics and survey forecasts
of those statistics a few days before
their release. If investors are confident
that the central bank will adhere to its
inflation objectives, the fluctuations
in current economic data should not
influence investors’ beliefs about longrun inflation. However, if investors
believe that the monetary authority is
not committed to controlling long-run
inflation, economic disturbances that

unexpected economic news. Thus, inflation expectations appear to be better anchored in the U.K. and Sweden
than in the U.S.
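The yield-spread measure used in that research (often called breakeven inflation) is simple arithmetic: subtracting the indexed bond's yield from the nominal bond's yield gives a market-implied inflation expectation. A minimal sketch with hypothetical yields; the function name and numbers are illustrative, not taken from the study:

```python
# Breakeven inflation: the spread between a nominal government bond yield and
# the yield on an inflation-indexed bond of the same maturity approximates
# the average inflation investors expect over that horizon.
def breakeven_inflation(nominal_yield: float, indexed_yield: float) -> float:
    """Market-implied expected inflation, in percent per year (illustrative)."""
    return nominal_yield - indexed_yield

# Hypothetical 10-year yields: 4.5% on a nominal bond and 2.0% on an
# inflation-indexed bond imply expected inflation of 2.5% per year.
print(f"{breakeven_inflation(4.5, 2.0):.1f}% per year")
```

The sensitivity test the researchers ran then asks whether day-to-day movements in this spread respond to surprises in economic data releases.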
An interesting result in their study
is that before 1997, when the Bank of
England obtained operational independence from the government, long-run
expectations of inflation in the U.K.
were also sensitive to economic news.
Whether this result is due to operational independence providing more
credibility or the fact that the Bank of
England had established more credibility over time is an open question.
The Persistence of Inflation.
One effect of inflation targeting should
be to reduce the persistence of deviations in inflation from its target because any deviations of inflation from
target are gradually offset, whereas
there is no explicit requirement that a
noninflation-targeting central bank do
so. This potential benefit of inflation
targeting finds support in a couple of
studies.13 However, the extent to which
persistence is diminished varies across
the two studies. One indicates that
The relevant papers are the ones by Vega and
Winkelried and Levin and co-authors.
13

www.philadelphiafed.org

the effect of inflation targeting is quite
large, while the other finds it to be
rather small. The difference in results
could be attributable to the use of different noninflation-targeting countries
in the two studies, but more work and
perhaps better data are needed as well.
Output. While it appears that
inflation targeting in general has had
economically beneficial effects on the
behavior of inflation, it would be difficult to find political support for inflation targeting if, at the same time, it
had deleterious effects on output. Truman finds that industrialized inflation-targeting countries experience both an increase in output growth and a reduction in output volatility relative to the experience of noninflation-targeting countries. His first analysis finds that inflation targeting raises growth and lowers the variance of growth rates. His second experiment directly tests whether the changes in relative growth rates over the two samples (pre- and post-inflation targeting) between inflation-targeting and noninflation-targeting countries are significantly different. He finds that the increase in growth in inflation-targeting countries was significantly higher than the increase in growth in noninflation-targeting countries. Similarly, he finds that the decrease in the volatility of output growth was significantly greater
Conclusions
A number of countries have
implemented inflation targeting, and
it has been in effect in a few of these
countries for more than 10 years. The
exact nature of the inflation-targeting
framework differs across countries, and
in most countries, it has evolved over
time. As expressed in their testimony
and speeches, monetary policymakers
in the five inflation-targeting countries
examined in this article all seem to
be pleased with the results and have
found the framework flexible enough
to allow consideration of economic
performance. There is no indication
that inflation targeting has diminished
economic performance in countries
that have adopted it relative to the
performance of other industrialized
countries. Indeed, there is some evidence that inflation targeting has been
associated with a reduction in inflation and that expectations of inflation
are more stable in countries that have
adopted inflation targeting. Further,
inflation targeting appears to be compatible with robust economic activity.
While the empirical evidence
on the effects of inflation targeting is
encouraging, we must acknowledge

that the data that lend themselves to
this optimistic view are limited. The
experiment of inflation targeting has
proceeded for a fairly short time, and
thus, it has probably not been subject
to all the vagaries that economies can
experience. However, the testimony of
central bankers who have been responsible for guiding monetary policy in the
five inflation-targeting countries has
been overwhelmingly positive.14 Many
cannot envision departing from their
current practices and returning to regimes that were less explicit about underlying inflation goals. They point to
numerous instances where having an
inflation target both focused monetary
policy and made it easier to conduct.

14 Examples of the enthusiasm that inflation-targeting central banks have for inflation targeting can be found in several places: See the comments pertaining to the Canadian experience by the Governor of the Bank of Canada, Gordon Thiessen, and those describing the Australian experience by the Governor of the Australian Reserve Bank, Ian J. Macfarlane. Also, a favorable opinion of inflation targeting can be found in a speech by the Governor of the Reserve Bank of New Zealand, Donald Brash, delivered at the AEA meetings in 2002. Mervyn King, the Governor of the Bank of England, has also eloquently discussed the benefits of inflation targeting. For comments by members of the Riksbank, who have viewed their experience with inflation targeting favorably, see the article by Claes Berg.

Business Review Q3 2006 19

REFERENCES
Archibald, Joanne. “Independent Review
of the Operation of Monetary Policy: Final
Outcomes,” Reserve Bank of New Zealand
Bulletin, 64, 3, pp. 4-14.
Ball, Laurence, and Niamh Sheridan.
“Does Inflation Targeting Matter?” in The
Inflation Targeting Debate. National Bureau
of Economic Research Studies in Business
Cycles, 32, Chicago: University of Chicago
Press, 2005.
Bank of England. Monetary Policy Framework, available at: www.bankofengland.
co.uk/monetarypolicy/framework.htm
Barker, Kate. “Monetary Policy in the
U.K.,” speech, National Association for
Business Economics, Washington, D.C.,
March 21, 2005.
Berg, Claes. “Inflation Forecast Targeting:
The Swedish Experience,” Sveriges Riksbank Quarterly Review, 3, 1999, pp. 44-70.
Berg, Claes. “Experience of Inflation Targeting in 20 Countries,” Sveriges Riksbank
Quarterly Review, 1, 2005, pp. 20-47.
Bernanke, Ben S., Thomas Laubach, Frederic Mishkin, and Adam Posen. Inflation
Targeting: Lessons from the International
Experience. Princeton: Princeton University Press, 1999.
Brash, Donald T. “Inflation Targeting 14
Years On,” Reserve Bank of New Zealand
Bulletin, 65, 1, pp. 58-70 (speech, American
Economic Association, January 5, 2002).
Dodge, David. “Inflation Targeting: A
Canadian Perspective,” speech, National
Association for Business Economics,
March 21, 2005 (www.bankofcanada.
ca/en/speeches/2005/sp05-2.html).
Gertler, Mark. “Comments,” in The Inflation Targeting Debate. National Bureau of
Economic Research Studies in Business
Cycles, 32, Chicago: University of Chicago
Press, 2005.

Gurkaynak, Refet, Andrew T. Levin, and
Eric T. Swanson. “Inflation Targeting and
the Anchoring of Long-Run Expectations: International Evidence from Daily
Bond Yield Data,” manuscript, Board of
Governors of the Federal Reserve System,
June 2005.
Johnson, David R. “The Effect of Inflation
Targeting on the Behavior of Expected
Inflation: Evidence from an 11 Country
Panel,” Journal of Monetary Economics, 49
(November 2002), pp. 1521-38.
King, Mervyn. “The Monetary Policy Committee: Five Years On,” speech to the Society of Business Economists, available at: www.bankofengland.co.uk/publications/speeches/2002/speech172.pdf

Kuttner, Kenneth N. “A Snapshot of Inflation Targeting in Its Adolescence,” paper, Reserve Bank of Australia, available at: www.rba.gov.au/PublicationsAndResearch/Conferences/2004/Kuttner.pdf

Levin, Andrew T., Fabio M. Natalucci, and Jeremy Piger. “The Macroeconomic Effects of Inflation Targeting,” Federal Reserve Bank of St. Louis Review, 86 (July/August 2004), pp. 51-80.
Macfarlane, Ian J. “Australia’s Experience
with Inflation Targeting,” in Stabilization
and Monetary Policy: The International
Experience, Banco de Mexico (November
2000).
Mishkin, Frederic. “From Monetary
Targeting to Inflation Targeting: Lessons
from the Industrialized Countries,” in
Stabilization and Monetary Policy: The
International Experience, Banco de Mexico
(November 2000).
Reserve Bank of New Zealand. “What Is
the Policy Targets Agreement?” Fact Sheet
No.3, RBNZ, available at: www.rbnz.govt.
nz/monpol/pta/0127027.html.

Santomero, Anthony M. “Flexible
Commitment or Inflation Targeting
for the U.S.?,” Federal Reserve Bank
of Philadelphia Business Review, Third
Quarter 2003.
Santomero, Anthony M. “Monetary Policy
and Inflation Targeting in the U.S.,”
Federal Reserve Bank of Philadelphia
Business Review, Fourth Quarter 2004.
Sherwin, Murray. “Institutional Framework
for Inflation Targeting,” speech, Bank
of Thailand symposium on “Practical
Experiences on Inflation Targeting,”
October 20, 2000 (http://www.rbnz.govt.
nz/speeches/0097459.html).
Stevens, Glenn. “Inflation Targeting: A
Decade of Australian Experience,” Reserve
Bank of Australia Bulletin (April 2003),
pp. 17-29.
Svensson, Lars E.O. “Independent Review
of the Operation of Monetary Policy in
New Zealand: Report to the Minister of
Finance,” February 2001.
Thiessen, Gordon. “The Canadian
Experience with Inflation Targeting,”
in Stabilization and Monetary Policy: The
International Experience, Banco de Mexico
(November 2000), pp. 85-90.
Truman, Edwin M. Inflation Targeting in the
World Economy. Institute for International
Economics, Washington, D.C., 2003.
Twaddle, James. “The Reserve Bank of
New Zealand Amendment Act 2003,”
Reserve Bank of New Zealand Bulletin, 67,
1, pp. 14-33.
Vega, Marco, and Diego Winkelried.
“Inflation Targeting and Inflation
Behavior: A Successful Story?,”
manuscript, February 2005.

Residential Mortgage Default

by Ronel Elul

A dramatic expansion of mortgage credit in
recent years, coupled with a rapid run-up in
house prices, has focused the attention of
pundits and policymakers on the risks of home
mortgage lending. In this article, Ronel Elul discusses
the models that economists have developed to help us
understand the default risk inherent in home mortgages
and how default risk and house prices are related. He also
applies these models to show how falling house prices
would affect mortgage default rates today and explores the
impact that rising default rates would have on financial
institutions and other participants in the mortgage market.

Although default rates on residential mortgages have been relatively
low in recent years, policymakers and
economists should still be concerned
about mortgage default for several reasons. First, while the foreclosure rate in
the U.S. has averaged only 1 percent
over the past 20 years, there have been
dramatic swings in regional default
rates over this period. For example, in the early 1990s foreclosure rates in California rose fivefold, from less than 0.4 percent to nearly 2 percent. In addition, this jump in default rates coincided with a 25 percent drop in house prices in California.

Ronel Elul is a senior economist in the Research Department of the Philadelphia Fed. This article is available free of charge at www.philadelphiafed.org/econ/br/index.html.
One reason to be concerned about
mortgage default is the prominent role
that mortgages play in our financial
system. First, home mortgages represent the bulk of credit extended to
consumers. According to data collected by the Federal Reserve, mortgages
make up over $8 trillion of the $10
trillion in consumer debt outstanding. Second, defaults on mortgages
affect not only homeowners but also
the holders of the mortgages. These
obviously include the original lenders,
which are primarily banks and thrifts.
In addition, however, mortgage-backed
securities (MBS) distribute this risk
throughout the entire economy;
indeed, some estimates show that one-quarter of all mortgages are ultimately held by investors in MBS.1
In addition, the risk of default is
currently of particular concern because
of the rapid run-up in house prices in
recent years. Although many scenarios
are feasible, one possible outcome is
a significant decline in many housing markets across the U.S. Both
policymakers and market participants
certainly need to be able to quantify
the effect of falling house prices on
mortgage default rates. Fortunately,
economists have developed option-theoretic models that permit us to
understand the default risks inherent
in home mortgages and how they relate
to house prices.2 According to these
models, homeowners simply compare
their house value to their remaining
debt when deciding whether to default.
While the simplified view of the world
that option-theoretic models present
provides useful insights, in practice,
other considerations also influence a
household’s decision about whether or
not to default on its mortgage.
THE OPTION-THEORETIC
APPROACH TO MORTGAGE
DEFAULT
1 Source: Mortgage Market Statistical Annual (2004).

2 In addition to facing default risk, an investor in mortgages also faces prepayment risk. This is the risk that a borrower will pay a mortgage before its maturity, and the investor will have to find a new place to invest his funds. Since prepayment often occurs through refinancing the mortgage at a lower rate, it is usually disadvantageous for the lender. Not surprisingly, the primary factor that determines prepayment risk is the current level of interest rates relative to the rate when the mortgage was issued.

The Ability to Default on a Mortgage Can Be Viewed as a Put
Option. One way to think about the
risk that a homeowner will default on
his mortgage is to view default as an
option available to the homeowner.
In general, an option is a contract in
which one party obtains the right to
buy or sell some underlying asset to
another party for a prespecified price,
known as the “strike,” or exercise,
price. When the party has the right
to buy the asset at a fixed price, the
contract is known as a call option; if
he has the right to sell the asset, it is a
put option.
The most prominent example is a
stock option (Figure 1). Consider the
case of a put option on IBM stock with
a strike price of $75. If IBM is trading
at $50 per share, exercising such an option would give the holder the right to
sell a share of IBM for $75, for a profit
of $25. When the exercise of the option is profitable, the option is said to
be in the money.3 By contrast, it would
not be profitable to exercise a put option with a strike price of $75 if IBM
were trading at $80, since the strike
price is below the current market price.
In such a case the option is said to be
out of the money. Figure 1 plots the
profit an investor would earn from this
put option as a function of the price
of IBM stock, assuming that a rational
investor would not exercise the option
when it is out of the money.
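The payoff logic in Figure 1 reduces to a one-line rule. The sketch below is not from the article; it simply reproduces the IBM example above in code:

```python
def put_payoff(strike, spot):
    """A put is the right to sell at the strike; a rational holder
    exercises only when the market price is below the strike."""
    return max(strike - spot, 0)

# The IBM example from the text, with a $75 strike price:
print(put_payoff(75, 50))  # in the money: sell at 75 what trades at 50 -> 25
print(put_payoff(75, 80))  # out of the money: left unexercised -> 0
```

The `max(..., 0)` captures the asymmetry Figure 1 plots: losses are never forced on the holder, because an out-of-the-money option is simply not exercised.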
In the case of a mortgage, the
homeowner’s ability to default can
also be viewed as a put option. Should
the homeowner default, he is in effect “selling” the house to the lender
for the current mortgage balance.
When the house value is lower than
the mortgage balance (commonly
termed negative equity), the borrower

gains financially if he stops paying the
mortgage, surrenders the house to the
lender, and buys a similar house for less
than the mortgage balance. This corresponds to “selling” the house to the
lender for the mortgage balance, since
the borrower essentially gains the difference between the mortgage balance
and the value of the house.4
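As a rough illustration of this frictionless default rule, the sketch below compares a house value to the outstanding balance of a fixed-rate loan. The loan terms and dollar figures are hypothetical, and the standard amortization formula stands in for the "current mortgage balance" in the text:

```python
def remaining_balance(principal, annual_rate, years, months_paid):
    """Outstanding balance of a fixed-rate mortgage after a number of
    monthly payments, from the standard amortization formula."""
    i = annual_rate / 12
    n = years * 12
    payment = principal * i / (1 - (1 + i) ** -n)
    return principal * (1 + i) ** months_paid - payment * ((1 + i) ** months_paid - 1) / i

def frictionless_default(house_value, balance):
    """The simple option-theoretic rule: default exactly when equity
    (house value minus remaining balance) turns negative."""
    return house_value < balance

# Hypothetical loan: $200,000 at 6 percent for 30 years, 5 years paid.
balance = remaining_balance(200_000, 0.06, 30, 60)
print(round(balance))                          # roughly $186,000 still owed
print(frictionless_default(180_000, balance))  # negative equity -> True
print(frictionless_default(190_000, balance))  # positive equity -> False
```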
What We Learn from the Option-Theoretic Approach. Setting the
default decision in this sort of framework is very fruitful because economists know a lot about how to value
options. Indeed, the pioneering work
of Fischer Black and Myron Scholes
and that of Robert Merton developed
a methodology that enables us to
calculate a precise numerical value
for very general types of options. One appeal of their approach is that it leads to a formula that depends on only a few variables, which can be measured. In the case of the mortgage default option, these variables are the current loan-to-value (LTV) ratio, the mortgage amortization schedule (that is, the monthly schedule of how the mortgage balance is paid down), the volatility of house prices, and interest rates.

4 Michael Asay was the first to formally model mortgage default as an option. For an overview of more recent literature, see the article by James Kau and Donald Keenan.
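The article does not spell out the formula, but as a simplified illustration of the Black-Scholes-Merton approach, the sketch below prices a European put on the house, with the mortgage balance as the strike. Real mortgage-default models are American-style and track the amortization schedule; all numbers here are hypothetical:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_put(spot, strike, rate, vol, years):
    """Black-Scholes value of a European put option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return strike * exp(-rate * years) * norm_cdf(-d2) - spot * norm_cdf(-d1)

# Hypothetical numbers: $250,000 house, $200,000 balance (80 percent LTV),
# 5 percent interest, 10 percent annual house-price volatility, one year.
value = bs_put(250_000, 200_000, 0.05, 0.10, 1.0)
print(round(value, 2))  # with 20 percent equity the default option is worth little
```

Note how each input maps to the variables the text lists: spot/strike is the inverse of the LTV ratio, `vol` is house-price volatility, and `rate` is the interest rate.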
Lenders can use option-pricing
formulas to determine how high an
interest rate they must charge in order
to compensate them for the risk of
default. Investors in mortgage-backed
securities can also use these formulas
to determine how much these securities are worth. Finally, regulators and
economists interested in mortgage
default can use these formulas to gauge
the risk that a given drop in house
prices might pose to lenders. We will
perform an exercise of this type later.
Viewing the right to default as an
option also gives us qualitative insights

FIGURE 1
Payoff to the Holder of a Put Option
(The figure plots the profit to the holder of a put option with a strike price of $75 against the stock price, from $0 to $125.)

3 Of course, it may be preferable to wait longer to exercise the option in the hope that the stock price falls further and the profit from exercising the option goes even higher before the option expires.

into mortgage default that might not
otherwise be apparent. For example,
options are more valuable when the
underlying asset is more volatile. Consider the case of an investor holding a
put option. Such an option will be in
the money, i.e., profitable to exercise,
when the asset price is below the
strike price. When the asset price is
more volatile, it is more likely to take
both high and low values. This means
that the option is more likely to be in
the money (and by larger amounts).
However, the greater likelihood of a
very high asset price doesn’t lead to a
counteracting loss because the holder
of the put option will choose not to
exercise the option when the asset
price is higher than the strike price.
Thus, viewing the right to default on
one’s mortgage as a put option suggests
that more volatile house prices should
be associated with both a greater incidence and a greater severity of default.
The study by James Kau and Donald
Keenan has confirmed this.
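A crude Monte Carlo sketch (an illustration, not a result from the Kau-Keenan study) makes the volatility point concrete: drawing hypothetical lognormal house values around the mortgage balance, higher volatility raises both how often equity is negative and how deep the shortfall is when it is:

```python
import math
import random

def default_stats(vol, trials=100_000, seed=1):
    """Draw lognormal house values around a mortgage balance and measure
    how often equity is negative (incidence) and the average shortfall
    when it is (severity)."""
    rng = random.Random(seed)
    balance, start = 100.0, 110.0  # balance (strike); house worth 10% more today
    hits, shortfall = 0, 0.0
    for _ in range(trials):
        price = start * math.exp(rng.gauss(-0.5 * vol ** 2, vol))
        if price < balance:
            hits += 1
            shortfall += balance - price
    return hits / trials, shortfall / max(hits, 1)

low_vol = default_stats(0.05)
high_vol = default_stats(0.20)
print(low_vol)   # incidence and severity with calm house prices
print(high_vol)  # both are larger when prices are volatile
```

The high-price draws never generate an offsetting loss, because the option is simply left unexercised there; only the low tail matters, and volatility fattens it.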
Finally, the option-theoretic model
also serves as a useful conceptual
framework for extending our knowledge further. By testing this model,
economists are able to assess the
extent to which it accurately describes
homeowners’ behavior and, when it
does not, to determine ways in which
the model may be improved.
EMPIRICAL TESTS OF THE
OPTION-THEORETIC MODEL
As we have discussed, one appeal
of the simple option-theoretic approach is that it is parsimonious: Only
a few factors play a role, most notably
home equity.5 Empirical testing of the
option-theoretic model has confirmed
the important role played by home

5 Home equity is defined here as the difference between the value of a house and that of all loans secured by the house.

equity.6 It has also provided evidence
that the homeowner’s option is more
complex than the simple model suggests. In addition, empirical work has
uncovered evidence that default decisions also depend on factors outside
the framework of an option-theoretic
model.

6 See, for example, the article by Yongheng Deng, John Quigley, and Robert Van Order.

7 While a perfectly general analysis would take into account other types of mortgage products — most notably adjustable rate mortgages (ARMs) and subprime mortgages (which are loans made to riskier borrowers with poor credit histories) — we can still learn a lot by restricting our attention to prime fixed-rate loans. Despite the recent growth of other types of mortgages (particularly subprime loans), prime fixed-rate loans still represent approximately two-thirds of all outstanding mortgages, and models for subprime loans are in an earlier stage of development. In addition, the main factors affecting default risk in prime fixed-rate mortgages are shared by other types of mortgages as well. For example, the risk from falling prices affects all types of mortgages. Nonetheless, we should be cautious in drawing general conclusions about the mortgage market as a whole from studies of prime fixed-rate mortgages alone because other types of mortgages have additional risk factors. For example, borrowers with ARMs are also exposed to the risk that interest rates will rise in the future, causing their required monthly payment to go up. Subprime borrowers are at greater risk for job loss than prime borrowers, which puts them at greater risk of default in response to a regional downturn that affects both housing prices and labor markets.

For the most part, empirical work has focused on fixed-rate mortgages, in particular, those made to borrowers with good credit histories, known as prime loans. As the name suggests, the payment on these mortgages is fixed (in nominal terms) over the life of the mortgage. In addition, the borrower is typically permitted to refinance (prepay) the mortgage, for example, if interest rates drop.7

Economists Extend the Model in Light of Empirical Findings. One important finding uncovered by testing of the option-theoretic model is that homeowners do not appear to default as soon as their equity becomes negative. In their 1985 study, Chester Foster and Robert Van Order found that even when the LTV rises to as much as 110 percent, only 4.2 percent of borrowers in their data set default.
They suggest that this is evidence
against a simple option-theoretic
model in which homeowners default
as soon as the equity in their house
is negative. Other researchers have
argued, however, that homeowners’
behavior is still well described by the
option pricing model if we extend
the simple model to account for the
panoply of options available to the
homeowner.
In particular, some economists
point out that the mortgage default
option is essentially an “American”
option, which the holder can exercise
at any time up to its maturity. In contrast, a European option can be exercised only at a single prespecified date.
We have already observed that it may
not be optimal to exercise a put option
on a stock as soon as the stock price
dips below the strike price; one may
prefer to wait in case it falls further.
Similarly, if the house price is slightly
below the mortgage balance, a fully
rational homeowner may prefer to wait
to default in order to give house prices
a chance to fall further, making default
even more profitable. Kau, Keenan,
and Taewon Kim construct plausible
numerical examples that show that
it may be optimal to wait to default
until the house price is as much as 15
percent below the mortgage value.
Another reason that a rational
homeowner may not default when it
may appear to be optimal is that he
actually has another option: prepaying
his mortgage (for example, by refinancing).8 This option may be viewed as a
call option on the mortgage, since in
prepaying the mortgage, the homeowner is taking the opportunity to buy
back his outstanding debt by paying
the remaining balance.9 These two options interact. If someone has already
prepaid his mortgage, he obviously
cannot default. Similarly, someone
who anticipates that he will refinance
his mortgage shortly might decide that
it is not worthwhile to default, since
he does not plan to pay on the current
mortgage for much longer.
A recent paper by Yongheng
Deng, John Quigley, and Robert Van
Order tests the extent to which mortgage default is driven solely by negative equity. They find that although
negative equity is indeed an important
determinant of default behavior, the
existence of a prepayment option
does have a statistically significant
impact on the default decision. That
is, a homeowner who is very likely to
prepay his mortgage (for example, if his
mortgage interest rate is much higher
than current rates) is also less likely to
default. Similarly, they also find that
the default option has a significant
impact on the exercise of the prepayment option; that is, households likely
to default tend to prepay less often.
Empirical Work Also Points to Factors Outside the Option-Theoretic Framework. Other economists argue, however, that the reason homeowners do not default as soon as their equity turns negative is that defaulting involves significant transaction costs. For example, defaulting on a mortgage entails moving and losing one’s home.10 The impact that a default has on a borrower’s reputation (for example, his credit score) may also be viewed as a form of transaction cost, since the defaulter sends a negative signal to potential lenders, a situation that makes any future borrowing more costly and difficult.11 Finally, some borrowers may also have moral qualms that make them more reluctant to default. All of these may be viewed as factors outside the option-theoretic framework, which assumes that homeowners optimize in a perfectly frictionless manner or, at least, that transaction costs are small enough to be ignored.

Researchers have also found evidence that variables that capture crisis or “trigger” events for households, such as unemployment rates and divorce rates, all seem to lead to defaults. Similarly, personal characteristics of the homeowner associated with greater income risk, such as whether the borrower is self-employed, also help explain default behavior.12 By contrast, recall that in the option-theoretic model, only variables directly related to the mortgage or house value should matter.13

These findings are consistent with the plausible hypothesis that at least some homeowners are liquidity constrained; that is, a borrower cannot borrow freely against his expected future income or wealth.14 Consider the example of a homeowner who loses his job but knows he is likely to find a new one in the near future. Suppose that he would like to continue paying his mortgage so as to retain his home but that he has no equity in the house against which to borrow. If he could find a lender willing to lend on his assurances that he will find a new job, and if he could commit to repay the loan from this as yet unrealized future income, he would be able to borrow enough to continue making his mortgage payments during this temporary spell of unemployment. In practice, however, it is likely to be difficult to find a lender willing to lend under these circumstances, and the homeowner may well be forced to default.

8 Note that while the prepayment option is nearly universal for prime mortgages, this is not necessarily the case for subprime loans.

9 In addition, he would also have to pay any costs associated with prepaying, for example, closing costs, if he were to refinance his mortgage.

10 In their 1984 study, Foster and Van Order were the first to find evidence that these costs have an impact on the default decision.

11 This would imply that borrowers with lower credit scores, who thus have less of a reputation to protect, would be likelier to default. This has been confirmed by several studies, for example, the one by Anthony Pennington-Cross. But note that low credit scores are also associated with less access to credit and riskier income; so this evidence is also consistent with theories (discussed below) that relate default to credit constraints.

12 See the article by Kerry Vandell and Thomas Thibodeau.

13 While it is fairly straightforward to test for the impact of trigger events empirically, incorporating them into a theoretical model requires a framework that focuses on consumer decisions, rather than a simple modification of the option pricing approach. See the paper by Peter J. Elmer and Steven A. Seelig for an example.

14 Many studies find evidence of liquidity constraints in other arenas; see, for example, the article by Tullio Jappelli.
Further support for the existence
of liquidity constraints can be found in
the paper by Deng, Quigley, and Van
Order. First, these authors confirm
that high state unemployment and divorce rates are associated with a higher
incidence of default. Second, they find
that higher initial loan-to-value ratios
are associated with greater default risk.
This finding is also consistent with the
existence of liquidity constraints, since
borrowers who have less wealth available for a down payment are likelier
to be constrained. Last, these authors
also find support for the existence
of transaction costs that discourage
homeowners from defaulting.
Finally, in addition to transaction costs and liquidity constraints,
state laws may also affect homeowners’
default behavior. (See State Laws and
Mortgage Default.)
EMPIRICAL MODELING OF
MORTGAGE DEFAULT
Competing Risks Models: An
Empirical Framework for Modeling
Mortgage Default. One framework
researchers use to test the option-theoretic model of mortgage default and
to assess the significance of additional
variables is the proportional hazard
model. D.R. Cox first applied this model in the biomedical sciences,15 where it
was used to study the effect of various
treatments on patients’ survival. The
proportional hazard model explains
the likelihood of exiting the sample in
the next instant of time, given that the
patient has survived up to this time.
For example, it has been used to explain mortality from cancer, given the
patient’s age, gender, treatment history,
and whether the patient is a smoker.
Proportional hazard models have also
been applied extensively to explain
mortgage default.
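The proportional hazard idea can be sketched in a few lines: a baseline hazard is scaled multiplicatively by exp(beta . x). The covariates and coefficients below are hypothetical, chosen only to show the mechanics:

```python
import math

def hazard(base_hazard, coeffs, covariates):
    """Proportional hazard: the baseline hazard scaled by exp(beta . x),
    so each covariate shifts risk multiplicatively."""
    score = sum(b * x for b, x in zip(coeffs, covariates))
    return base_hazard * math.exp(score)

# Hypothetical coefficients: a negative-equity flag and a job-loss flag,
# both assumed to raise the instantaneous chance of default.
beta = [2.0, 0.8]
print(hazard(0.001, beta, [0, 0]))  # baseline borrower
print(hazard(0.001, beta, [1, 0]))  # negative equity only
print(hazard(0.001, beta, [1, 1]))  # both risk factors present
```

The "proportional" in the name is visible here: changing a covariate multiplies the hazard by the same factor at every point in time, regardless of the baseline.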
15 See the book by D.R. Cox and D. Oakes.

State Laws and Mortgage Default

In principle, the existence of state laws governing mortgage default (in particular, those laws that govern deficiency judgments) may also impede the free exercise of homeowners’ default option. Some states prohibit lenders from pursuing deficiency judgments, which means that they cannot try to collect any deficiency between the value of the house and the mortgage balance from the homeowner’s other assets. In principle, this would make defaulting on a mortgage more attractive for a homeowner with negative equity. Despite considerable effort, economists have uncovered little evidence that laws that prohibit deficiency judgments make homeowners more likely to default. The reason may be that deficiency judgments are rare even when they are permitted (a) because the defaulting homeowner is unlikely to have many assets aside from his house and because even in states where deficiency judgments are permitted, the homeowner may often protect himself against them by filing for bankruptcy (b).

a See the article by Charles Capone.

b For more on the empirical significance of these laws, see the article by Karen Pence and the one by Terrence Clauretie and Thomas Herzog.

As we have discussed above,
however, the homeowner typically has
another option as well, which is to
prepay his mortgage. In light of this,
the model by Deng, Quigley, and Van
Order uses an extension of the proportional hazard model with two “competing risks”: default and prepayment. In
this case, the mortgage will terminate
when the borrower either prepays or
defaults, whichever occurs first. This
extension allows them to study the
interaction between default and prepayment and to estimate the relative
significance of trigger events such as
unemployment and divorce rates.16
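A minimal simulation of the competing-risks setup (with hypothetical monthly hazards, not estimates from Deng, Quigley, and Van Order): each month the loan can default or prepay, and the mortgage terminates with whichever event comes first:

```python
import random

def simulate_mortgage(default_hazard, prepay_hazard, months=360, rng=random):
    """Competing risks: each month the loan may default or prepay;
    it terminates with whichever event happens first."""
    for t in range(1, months + 1):
        if rng.random() < default_hazard:
            return t, "default"
        if rng.random() < prepay_hazard:
            return t, "prepay"
    return months, "matured"

rng = random.Random(42)
outcomes = [simulate_mortgage(0.001, 0.01, rng=rng)[1] for _ in range(10_000)]
# Prepayment is assumed ten times likelier per month, so most loans
# prepay before they ever get a chance to default.
print(outcomes.count("prepay"), outcomes.count("default"))
```

This is the interaction the text describes: a loan that prepays is censored out of the default risk pool, so ignoring prepayment would understate how many borrowers would eventually have defaulted.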
Predicting Default Rates in a Hypothetical Housing Market Downturn. One immediate application of the models we have presented is to forecast default rates in a hypothetical downturn in the housing market. This is obviously of interest to policymakers.

16 This model is also used in other areas of economics. For example, someone may leave unemployment either because he finds a job or because he drops out of the labor force altogether.
The scenario we consider is motivated by the work of Joshua Gallin.
He argues that, based on an analysis of
historical rent-price ratios, housing is
currently overvalued by more than 20
percent. One way to understand this is
to note that given today’s house prices
and rents, a savvy homeowner could
profit by selling his house, investing
the money in a relatively safe asset
such as long-term Treasury bonds, and
using the interest income to rent a
comparable house.17 He would profit
because at today’s inflated prices, his
interest income would exceed his rent.

17 This process may not necessarily be as straightforward as we describe. In particular, it is not always easy to find comparable rental accommodation. Indeed, Gary Smith and Margaret H. Smith argue that if one carefully matches owner-occupied and rental housing, prices do not appear to be out of line relative to rents in most cities.

Such selling pressure would tend to lower house prices, and the increased demand for rental units might also raise rents. This process would continue until all such opportunities for easy profits are exhausted. At this point, the market would be in equilibrium.18

Gallin finds that when prices are high relative to rents — as in the past few years — there has indeed been a tendency for this equilibrium relationship to be re-established. Figure 2 shows the rent-price ratio since 1970.19 Observe first that in late 2005 this index was at its lowest level since 1970; in addition, periods in which this ratio moved away from its long-run mean (roughly 100) appear to be followed by reversals. Gallin also shows that this adjustment process generally involves both rents rising more rapidly than usual and prices rising more slowly (or even falling). In particular, assuming that housing is overvalued by 20 percent, Gallin’s work predicts that over the next three years, real rents20 should rise about 1.2 percent per year faster than usual, and real house prices should rise 3.4 percent per year more slowly than usual.

Gallin’s argument that the housing market is out of equilibrium is statistical; that is, he compares the rent-price ratio to its historical average. He makes no conjectures as to why the market moves out of equilibrium in the first place. Furthermore, although Gallin finds evidence that this adjustment has taken place in the past, this does not necessarily mean that it is certain to occur in the future because the equilibrium house-price–rent ratio may have permanently changed for various reasons. In particular, an argument often made is that the current high level of prices (relative to rents) can be justified because financial innovations have made borrowing easier and cheaper. For example, increased subprime lending allows households to buy homes when they would previously have been forced to rent. This increased demand for owner-occupied housing should raise house prices relative to rents.21

To examine the potential impact of price declines on mortgage default, we will consider a more extreme trajectory for house prices than the one suggested by Gallin. We will begin with a benchmark case in which prices increase at a steady 4 percent a year.22 However, rather than stagnation in prices, as Gallin suggests, we then consider the impact of an immediate 20 percent drop in prices (followed by 4 percent growth thereafter). While such a scenario is admittedly extreme,23 it nevertheless provides useful insights by establishing bounds on the possible impact of mortgage default. We also consider a more conservative scenario.

18 More precisely, according to this argument, the equilibrium price of a house should be roughly equal to the present value of the expected future income one could earn by renting out the house, after adjusting for taxes and maintenance.

19 The rent-price ratio is constructed by dividing the rent index from the CPI-W (reported by the Bureau of Labor Statistics) by Freddie Mac’s conventional mortgage home price index; we make several minor adjustments as suggested by Gallin.

20 That is, after adjusting for inflation.

21 Indeed, the homeownership rate in 2005 was at a historical high of 69 percent. This view was articulated by Janet Yellen, president of the San Francisco Fed, in a speech on October 21, 2005.

22 This is consistent with the average real rate of increase in house prices over the past 30 years. That is, adjusting for inflation, the average rate of increase has been 1.5 percent a year. Given a current inflation rate of roughly 2.5 percent, we arrive at a 4 percent nominal rate of increase.

23 However, there were drops of roughly this magnitude in New England and California in the early 1990s.

FIGURE 2
Rent-Price Ratio: 1970-2005
Rent-price ratio (1996:Q1 = 100), plotted from 1970 to 2005 on a vertical scale of 70 to 120.
Source: Bureau of Labor Statistics and Freddie Mac

We use the empirical model of Yongheng Deng and John Quigley to generate forecasts of mortgage default rates under these scenarios. We
consider a representative homeowner
who has just taken out a mortgage at
an interest rate of 6 percent (which
we assume is also the current market
interest rate) and who has an initial
LTV of 80 percent. According to data
from the 2004 Survey of Consumer
Finances, the fraction of homeowners with LTVs at or below 80 percent
is 80 percent.24 Further detail on the
distribution of LTVs is presented in
Figure 3.
Aside from the contemporaneous loan-to-value ratio, which we can
calculate from the initial LTV and the
interest rate, the other variables used
in the model are the volatility of house
prices, state unemployment rates, and
state divorce rates. We also assume
that interest rates are constant; so
given that the mortgage is taken out
at the market interest rate, there is no
reason for homeowners to prepay their
mortgages.25
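The simulation procedure the article describes (generate many house-price paths, then compute the probability of negative equity in each period) can be sketched as follows. The lognormal price process and the amortization arithmetic are standard assumptions of mine; the final step, mapping negative-equity probabilities into default rates through the estimated Deng-Quigley model, is not reproduced here:

```python
import math
import random

def negative_equity_prob(initial_ltv=0.8, drift=0.04, vol=0.115,
                         rate=0.06, years=5, n_paths=20_000, seed=0):
    """Monte Carlo probability that home equity is negative after `years`.

    House prices follow lognormal growth with the given drift and
    volatility (11.5 percent, as in the text); the mortgage amortizes
    over 30 years at the given rate. Values are per dollar of the
    initial house price.
    """
    rng = random.Random(seed)
    m = rate / 12
    n, k = 360, years * 12
    # Remaining balance on a 30-year fixed-rate loan after k payments
    balance = initial_ltv * ((1 + m) ** n - (1 + m) ** k) / ((1 + m) ** n - 1)
    hits = 0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        log_price = (drift - 0.5 * vol ** 2) * years + vol * math.sqrt(years) * z
        if math.exp(log_price) < balance:
            hits += 1
    return hits / n_paths

print(f"P(negative equity after 5 years), no price drop: {negative_equity_prob():.1%}")
print(f"same, after an immediate 20% price drop (LTV 100%): "
      f"{negative_equity_prob(initial_ltv=1.0):.1%}")
```

Even this stripped-down version shows the key mechanism: starting from an LTV of 100 percent rather than 80 percent raises the chance of negative equity several-fold, which is what drives the higher default forecasts.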
For the benchmark scenario of an
80 percent LTV mortgage, the risk of
default over the 360-month life of the
mortgage is about 1.8 percent.26 Figure 4 plots the cumulative default rate as a function of time (in months).
Now consider an instantaneous
drop in house prices of 20 percent just
after the mortgage has been taken
out, so that this mortgage now has
an LTV of 100 percent. Over the life
of the mortgage, the default rate, at 6
percent, is over three times as high as
the benchmark scenario because even
a small drop in house prices in the
future will lead to negative equity. As
can be seen in the figure, most of the
acceleration in default rates comes in
the early years of the mortgage, before
amortization lowers the LTV significantly. Once the LTV has fallen, the
value of the option to default declines
substantially.
It is also useful to explore what
happens for less dramatic scenarios. If
house prices decline only 10 percent,
for example, lifetime default rates
increase from 1.8 percent to 3 percent.
So a decline in prices that is twice as

large (20 percent as compared to 10
percent) results in default rates that
are three times as large. In other words,
drops in housing prices have a nonlinear effect on default rates, with large
declines increasing default rates more
than proportionally. This nonlinearity
can also be seen in Figure 4; observe
that the default rates corresponding
to a 10 percent drop are much closer
to those with no drop than they are to
those when prices drop 20 percent.
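This convexity can be checked directly from the three lifetime default rates just cited:

```python
# Lifetime default rates reported in the text, keyed by the size of the drop
default_rate = {0.0: 0.018, 0.1: 0.030, 0.2: 0.060}

first_step = default_rate[0.1] - default_rate[0.0]   # effect of the first 10% drop
second_step = default_rate[0.2] - default_rate[0.1]  # effect of the next 10% drop
print(f"first 10-point drop adds {first_step:.3f} to the lifetime default rate,")
print(f"the next 10-point drop adds {second_step:.3f}: the effect is convex")
```

The second 10-point drop adds more than twice as much to the default rate as the first, which is exactly the nonlinearity visible in Figure 4.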
Since we saw in Figure 1 that the
price of an option does not have a
linear relationship to the price of the
underlying asset (because of the option
holder’s right to not exercise the option
when prices fall), it is not surprising
that drops in house prices have a similarly nonlinear effect on default rates.
The reason is that if the value of a house falls only slightly below the outstanding mortgage balance, the homeowner will be unlikely to default on his mortgage, since there is a significant likelihood that the house’s value will rise above the mortgage balance in the near future. By contrast, for large drops in prices, default will be much likelier, since equity will still be negative even if prices go up in the future.27

24 These figures for LTVs include first mortgages as well as home equity loans.

25 We assume house price volatility of 11.5 percent, following the study by John Campbell and Joao Cocco. Unemployment and divorce rates are set at roughly current U.S. levels: 5 percent and 4.8 percent, respectively. We calculate the default rates by simulating many paths for house prices under our assumptions, use these paths to calculate the probability of negative equity in every period, and then apply the model in Deng and Quigley to these simulated probabilities. Deng and Quigley’s model is closely related to that in the published paper by Deng, Quigley, and Van Order; it has the advantage for us that only publicly available data are required to generate predictions.

26 This is higher than the 1 percent foreclosure rate we reported at the start of the paper. The difference may be attributed to the fact that actual prices have risen more rapidly in the past than our scenario specifies, as well as the fact that we impose assumptions that rule out prepayment.

FIGURE 3
LTV Distribution for Those with Mortgages
Fraction of borrowers: LTV < 80%: 0.80; 80% < LTV < 90%: 0.11; 90% < LTV < 100%: 0.08; 100% < LTV: 0.01
Source: 2004 Federal Reserve Board Survey of Consumer Finances
Although homeowners gain financially when they exercise their option
to default in the face of falling house
prices, this gain obviously comes at
the expense of other market participants. The incidence of losses is also of
interest to economists and regulators.
(See Who Is Hurt When Homeowners
Default?)
SUMMARY
One of the risks to mortgage lending is that the homeowner will default
on his promise to continue making
payments. One of the primary drivers
of mortgage default is a decline in house prices. Economists have developed
option-theoretic models that can quantify the impact that falling prices have
on mortgage default. These models
have had some success in explaining
homeowners’ defaults; however, there
is evidence that they fail on three
dimensions. First, they do not recognize that default is costly, which makes
homeowners more reluctant to stop

paying. Second, they do not account
for the fact that some homeowners are
credit constrained, so that if they experience a “trigger event,” such as a job
loss, they may not be able to continue
paying on their mortgage even if they
expect to find new employment in the
near future; this increases the risk of
default. Finally, homeowners may be
less reluctant to default than is suggested by the option-theoretic models
because they also have another option:
prepaying their mortgage.
As a result, economists have developed empirical models that seek to

account for mortgage default through a
combination of explanatory variables,
both ones related to home equity and
ones that account for transaction
costs, trigger events, and the prepayment option. We have seen that such
models can be used to predict the
effect that falling prices would have
on mortgage default rates. Further research is needed on the determinants
of default for newer mortgage products,
such as subprime loans, as well as the
impact of default on other market
participants, particularly investors in
MBS. BR

27 Default rates are similarly nonlinear in LTVs. For example, a 20 percent drop in prices would have a negligible effect on a borrower with an initial LTV of 60 percent, raising his lifetime default rate from 1.1 percent to 1.3 percent. By contrast, for a borrower with an initial LTV of 100 percent, the default rate would rise from 23 percent to nearly 100 percent.

FIGURE 4
Mortgage Default Rates for Three Scenarios
Cumulative default rate (vertical scale 0 to 0.07) plotted against months since origination; curves labeled 100%, 90%, and 80%.

Who Is Hurt When Homeowners Default?

There are four main parties that are exposed to the risk of homeowners defaulting on their mortgages.
Banks and thrifts hold approximately 30 percent of all home mortgages.
Although banks would obviously take
significant losses if prices fell dramatically, and some
might even find themselves under severe stress, the
banking sector as a whole is currently well capitalized
and could sustain a drop of the magnitude we considered
in the text. In particular, depository institutions have
approximately $850 billion in capital, against liabilities
of $9.6 trillion. Of these liabilities, no more than $2.75
trillion are nonguaranteed mortgage loans of some sort
(first mortgages, home equity loans, and private mortgage-backed securities).
To determine the impact of falling prices on banks,
we need information on the LTVs of the mortgages in
their portfolios; we will make the simple assumption that
the distribution of LTVs for those loans held by banks is
roughly the same as that for the population of mortgages
as a whole (see Figure 3 in the text). In this case, an application of our model allows us to conclude that the default
rates that banks experience on their mortgage portfolios
would rise roughly 2 percent (over and above the current
U.S. foreclosure rate of 1 percent) within one year of a
20 percent price decline. Given the currently sound state

of banking institutions, this would not appear to pose a
dramatic risk to the stability of this sector.a
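A rough bounding exercise using the figures above suggests why. The 50 percent loss severity is a hypothetical assumption of this sketch (the sidebar does not report one); even under it, implied losses are small relative to bank capital:

```python
# Figures from the sidebar; the 50 percent loss severity is a hypothetical
# assumption of this sketch, not a number reported in the article.
capital = 850e9              # depository institutions' capital
mortgage_exposure = 2.75e12  # upper bound on nonguaranteed mortgage loans
extra_default_rate = 0.02    # rise in default rates after a 20% price decline
loss_severity = 0.50         # assumed loss per dollar of defaulted loans

losses = mortgage_exposure * extra_default_rate * loss_severity
print(f"implied losses: ${losses / 1e9:.1f} billion, "
      f"or {losses / capital:.1%} of bank capital")
```

Even treating every input as an upper bound, the implied hit is on the order of $30 billion against $850 billion of capital, consistent with the sidebar's conclusion.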
Of those mortgages not held by depository institutions, the vast majority are packaged into mortgage-backed securities (MBS). Most are “agency MBS”: They
are backed by a government-sponsored enterprise (GSE),
most notably Fannie Mae and Freddie Mac. So investors in these securities are protected against default. The
GSEs themselves bear very little credit risk, however,
because they require private mortgage insurance (PMI)
for borrowers with LTVs above 80 percent. Thus, the vast
majority of the default risk on agency MBS falls on the
PMI industry, which insured approximately 13 percent of
all conventional mortgages issued in 2004.b
In addition, approximately one-quarter of all MBS
are “private-label MBS,” which are not backed by any
agency. Although these securities feature some sort of
credit enhancement to mitigate the risk of default, this
protection is typically incomplete, so that investors generally end up bearing some default risk. These investors
include hedge funds, life insurance companies, pension
funds, and private individuals. The extent to which these
participants are exposed to mortgage credit risk and the
degree to which this risk is concentrated in a few entities
are unknown, and further research on this issue would be
instrumental for policymakers.

a This has not always been the case. In particular, some people have suggested that declines in the value of banks’ real estate portfolios led to a “credit crunch” that aggravated the recession of the early 1990s. See the article by Joe Peek and Eric Rosengren.

b Source: micanews.com and HMDA data. This represents a trend down from earlier years, since financial innovations such as “piggyback loans” have reduced the importance of PMI.


REFERENCES

Asay, Michael. “Rational Mortgage
Pricing,” Board of Governors of the
Federal Reserve System, Research Papers
in Banking and Financial Economics, 30
(1978).
Black, Fischer, and Myron Scholes. “The Pricing of Options and Corporate Liabilities,” Journal of Political Economy, 81 (May/June 1973), pp. 637-54.
Campbell, John Y., and Joao F. Cocco. “Household Risk Management and Optimal Mortgage Choice,” Quarterly Journal of Economics, 118 (November 2003), pp. 1449-94.
Capone, Jr., Charles A. “Providing
Alternatives to Foreclosure: A Report to
Congress,” U.S. Department of Housing
and Urban Development (March 1996).
Capozza, Dennis R., and Thomas A.
Thomson. “Optimal Stopping and Losses
on Subprime Mortgages,” Journal of Real
Estate Finance and Economics, 30 (March
2005), pp. 115-31.
Clauretie, Terrence M., and Thomas
Herzog. “The Effect of State Foreclosure
Laws on Loan Losses: Evidence from the
Mortgage Insurance Industry,” Journal of
Money, Credit and Banking, 22 (May 1990),
pp. 221-33.
Cox, D. R., and D. Oakes. Analysis of Survival Data. London: Chapman and Hall, 1984.
Deng, Yongheng, John Quigley, and Robert
Van Order. “Mortgage Terminations,
Heterogeneity, and the Exercise of
Mortgage Options,” Econometrica, 68
(March 2000), pp. 275-307.


Deng, Yongheng, and John Quigley.
“Woodhead Behavior and the Pricing
of Residential Mortgages,” University of
Southern California Lusk Center for Real
Estate, Working Paper 2003-1005 (revised
February 2004).
Elmer, Peter J., and Steven A. Seelig.
“Insolvency, Trigger Events, and Consumer
Risk Posture in the Theory of Single-Family Mortgage Default,” Journal of
Housing Research, 10 (1999), pp. 1-25.
Foster, Chester, and Robert Van Order.
“An Option-Based Model of Mortgage
Default,” Housing Finance Review, 3
(October 1984), pp. 351-72.
Foster, Charles, and Robert Van Order.
“FHA Terminations: A Prelude to
Rational Mortgage Pricing,” AREUEA
Journal, 13 (1985), pp. 273-91.
Gallin, Joshua. “The Long-Run
Relationship Between House Prices and
Rents,” Board of Governors of the Federal
Reserve System, Finance and Economics
Discussion Series 2004-50 (2004).
Jappelli, Tullio. “Who Is Credit
Constrained in the U.S. Economy?”
Quarterly Journal of Economics, 105
(February 1990), pp. 219-34.
Kau, James B., Donald C. Keenan, and
Taewon Kim. “Default Probabilities for
Mortgages,” Journal of Urban Economics, 35
(May 1994), pp. 278-96.

Merton, Robert C. “Theory of Rational
Option Pricing,” Bell Journal of Economics
and Management Science, 4 (1973), pp.
41-83.
Peek, Joe, and Eric Rosengren. “Capital Crunch: Neither a Borrower Nor a Lender Be,” Journal of Money, Credit and Banking, 27 (1995), pp. 625-38.
Pence, Karen M. “Foreclosing on
Opportunity: State Laws and Mortgage
Credit,” Review of Economics and Statistics,
88 (February 2006), pp. 177-82.
Pennington-Cross, Anthony. “Credit
History and the Performance of Prime
and Nonprime Mortgages,” Journal of
Real Estate Finance and Economics, 27
(November 2003), pp. 279-301.
Smith, Gary, and Margaret H. Smith.
“Bubble, Bubble, Where’s the Housing
Bubble?” Preliminary draft prepared for the
Brookings Panel on Economic Activity,
March 30-31, 2006.
Vandell, Kerry D., and Thomas
Thibodeau. “Estimation of Mortgage
Defaults Using Disaggregate Loan History
Data,” AREUEA Journal, 13 (1985).
Yellen, Janet L. “Housing Bubbles and
Monetary Policy,” presentation to the
Fourth Annual Haas Gala, San Francisco,
October 21, 2005.

Kau, James B., and Donald C. Keenan.
“Patterns of Rational Default,” Regional
Science and Urban Economics, 29
(November 1999), p. 765.


Fiscal Imbalance:
Problems, Solutions, and Implications
A Summary of the 2005 Philadelphia Fed Policy Forum

by Loretta J. Mester

“Fiscal Imbalance: Problems, Solutions, and Implications” was the topic of our fifth annual Philadelphia Fed Policy Forum held on December 2, 2005. This event,
sponsored by the Bank’s Research Department, brought
together economic scholars, policymakers, and market
economists to discuss and debate the implications of
fiscal imbalance for the U.S. economy. Our hope is that
the 2005 Policy Forum will serve as a catalyst for both
greater understanding and further research on the fiscal
challenges facing the U.S. economy.

At the current pace of spending
and revenue generation, the U.S. faces
a worsening budget position over the
coming years. While the problems
with the Social Security program have
garnered most of the headlines, financing health care and the Medicare
system poses the greatest challenge.
The size of the problem, longer-term
implications of fiscal imbalance, and
potential solutions were the focus of
the 2005 Philadelphia Fed Policy Forum.

Loretta J. Mester is a senior vice president and director of research at the Federal Reserve Bank of Philadelphia. This article is available free of charge at www.philadelphiafed.org/econ/br/index.html.

While there is general agreement that budget imbalance is one of the important challenges facing the U.S. economy over the medium and longer run, there is considerably less agreement on what should be done to meet those challenges.
Alan Greenspan, then Chairman
of the Federal Reserve Board, opened
the conference. In his view, the deficit-reducing actions necessary to stem
the worsening budget position will be
difficult to implement unless procedural restraints on the budget-making
process, like limits on discretionary
spending and the PAYGO requirements, are restored. He said that
reinstating the structure in the Budget
Enforcement Act of 1990 and coupling
it with provisions for dealing with
unexpected budget outcomes would be
beneficial. But it would not be enough
to solve the problem. The fundamental issue is making choices among
budget priorities, especially since the

number of retirees is increasing.
Greenspan pointed out that currently 3.25 workers contribute to the
Social Security system for each beneficiary. By 2030, the number of beneficiaries will have doubled and the ratio
of covered workers to beneficiaries will
have fallen to 2. At the same time,
spending per Medicare beneficiary
is expected to increase as the cost of
medical care rises. In fiscal year 2005,
federal outlays for Social Security,
Medicare, and Medicaid totaled about
8 percent of gross domestic product
(GDP). Office of Management and
Budget projections suggest this share
will rise to 9.5 percent by 2015 and to
13 percent by 2030. While productivity growth can help alleviate some of
the strain on the budget, it won’t be
the whole answer. Growing budget
deficits could drain resources from
private investment and thereby hurt
the growth of living standards.
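Greenspan's two ratios also pin down how much the covered workforce is projected to grow. A quick back-of-the-envelope check, taking the doubling of beneficiaries and the fall in the worker-beneficiary ratio as given:

```python
ratio_now = 3.25          # workers per Social Security beneficiary today
ratio_2030 = 2.0          # projected workers per beneficiary in 2030
beneficiary_growth = 2.0  # beneficiaries are projected to double by 2030

worker_growth = ratio_2030 * beneficiary_growth / ratio_now
print(f"covered workers in 2030 relative to today: {worker_growth:.2f}x")
```

That is, covered employment would need to be only about 23 percent higher in 2030 for both projections to hold simultaneously, while the beneficiary rolls double.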
As Greenspan noted, some of the
parameters needed to scale the problem are known. For example, the size
of the adult population in 2030 is fairly
easy to estimate since most of that
population has already been born. But
other parameters, such as the amount
of future medical spending, are very
difficult to estimate. Medical technological innovations can improve the
quality of health care and lower the
cost of existing treatments, but they
can also expand treatment possibilities
and life expectancy, and both of these
can mean higher spending.
Greenspan said that he fears the
U.S. may have already committed
more resources to the baby boomers than it can deliver. If so, making
changes to those promises should come

sooner rather than later – a theme
echoed by other speakers at the Policy
Forum – so people can plan their work,
savings, and retirement spending accordingly. Although he believes closing the budget gap depends on changes
to both the spending and tax sides,
he thinks that most of the change
should come on the outlay side, and
he suspects that we may need to make
significant structural changes to U.S.
retirement and health programs. Solving the Medicare problem is more difficult than solving Social Security’s because of
the difficulties in estimating the trend
in medical expenditures. Greenspan
concluded by saying that doing nothing to solve the budget imbalance
could have severe consequences for the
U.S. economy, but addressing the issue
in a timely and sound fashion could
produce lasting benefits.
SOCIAL SECURITY AND MEDICARE: SCALING THE PROBLEM
AND PROPOSED SOLUTIONS*
This session took up the problem
of how to scale the fiscal deficit problem and what can be done to solve it.
Our first speaker, Robert Shiller, of
Yale University, spoke on the underlying life-cycle issues involved in the
Social Security and Medicare deficits.
He expanded on many of Chairman
Greenspan’s themes but from the
perspective of behavioral economics.
One of the fundamentals underlying
the government budget deficit and the
low personal saving rate is the problem
people and society have in planning
for the distant future. In Shiller’s view,
these behavioral considerations justify government interventions in a broader set of circumstances than those suggested by the traditional economic theory of public goods or externalities.

* Many of the presentations reviewed here are available on our website at www.philadelphiafed.org/econ/conf/forum2005/program.html.
Shiller listed several of the
concepts from behavioral economics
that are important to understanding
how we think about the future. One
of these is hyperbolic discounting,
which refers to the tendency to behave
inconsistently over time: We tend to
be impulsive and put more value on
today than tomorrow. Psychologists
are documenting that people think
about the present in concrete terms
but the future in more abstract terms,
and this may underlie why people place
more importance on the present than
the future. Another concept is that of
framing. People may behave inconsistently, depending on how a situation
is described to them; they react to
the names things are given and the
context. Psychological research has
also shown that some of the biggest
errors people make are errors of attention: Something else has caught their
attention, and they don’t get around
to thinking about saving. In addition,

what people tend to think about is
what other people think about (a kind
of herding). There is also a wishful
thinking bias: People believe what they
want to believe. This makes people
tend to underestimate risk. Indeed,
psychologists have hypothesized that
people have certain pathways in their
brains to deal with risk, but they are
not suited to the modern world. For
example, being in a crowded room
when a wild animal escapes would
cause your scared reflexes to engage,
but being told you are not saving
enough for the future doesn’t. The last
behavioral concept Shiller discussed
was the instinct for people to believe
those in authority: People have high
expectations for government authorities and tend to believe them.
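Hyperbolic discounting is often approximated with the quasi-hyperbolic (beta-delta) form, in which every future payoff is scaled down by an extra factor beta. The sketch below, with illustrative parameters of my choosing rather than anything from the talk, shows the time inconsistency Shiller described: the same 30-day trade-off flips depending on whether it starts today or a year from now.

```python
def present_value(amount, delay_days, beta=0.7, delta=0.999):
    """Quasi-hyperbolic (beta-delta) discounting: an immediate payoff is
    undiscounted, while every future payoff carries an extra factor beta.
    Parameters are illustrative, not taken from the talk."""
    if delay_days == 0:
        return amount
    return beta * (delta ** delay_days) * amount

# Today vs. one month from now: the impatient (immediate) choice wins.
assert present_value(100, 0) > present_value(110, 30)
# The same trade-off pushed a year out: now the larger, later payoff wins.
assert present_value(110, 390) > present_value(100, 360)
print("preferences reverse as the trade-off recedes into the future")
```

The reversal is the signature of impulsiveness: we plan to be patient next year but choose the immediate payoff when the moment arrives.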
In Shiller’s view, we need to incorporate these concepts of behavioral
economics into our thinking about
ways to solve the government budget
problems, and he believes this is beginning to happen. This approach need not imply a “big government” solution.

Robert Shiller, Yale University (left), and Peter Diamond, Massachusetts Institute of Technology.
People are in a better position to know
what they need and should be allowed to express it, but they will make
mistakes that have to be dealt with.
He concluded his talk by discussing
some of the potential solutions to the
Social Security and health-care problems. Shiller has been critical of the
Bush Social Security plan, although
he acknowledges it had some creative
elements. In particular, Shiller thinks
the life-cycle part of the plan was
unique in that it would automatically
put people who chose the account plan
into a life-cycle portfolio at the age of
47. Having this as a default option is
in accord with some of the recent principles of behavioral economics – i.e.,
you cannot expect people to make
active choices. On the other hand, he
also criticizes the Bush plan, likening
it to a “margin loan” whereby people
could borrow against their Social Security benefits and put the money into
stock portfolios. For people already
saving with a diversified portfolio, it is
not of much use. For people who are
not saving, the plan is risky, since the
stock market is volatile.
Shiller said that the Medicare
Part D prescription and health savings
accounts have had some implementation problems that can be viewed
through a behavioral economics lens.
The prescription plans afford people
so many options that it is a daunting
task to make an optimal choice. In
Shiller’s view, there is a creative idea
behind the health savings accounts,
namely, insure people for catastrophic
events but have them manage a budget
to cover their other health spending.
Unfortunately, not many people have
signed up for these plans, suggesting
they don’t know how to prepare for
health risks. (Later Forum speakers
pointed out that consumers may not
have the information they need on
prices and quality to make optimal


choices regarding health care.) Shiller
thinks this should give government
and private initiatives motivation to
help people deal with these complicated issues, and he pointed to a few
examples of private initiatives.
The “save more tomorrow” plan of
Richard Thaler and Shlomo Benartzi
offers employees the choice to funnel
future pay raises automatically into
a savings account. People have a
tendency not to save for today, since

it means taking away some of today’s
spending. But they are willing to
sign up to save more tomorrow, and
the plan has been shown to increase
savings. Firms are also beginning to
change their 401K plans so that the
default is that the employee is in the
plan rather than out of it – a simple
change that takes into account human
behavior. Shiller views this as a time
of experimentation in which our way
of thinking about basic economic
problems is changing. Our solutions
to the savings and health-spending
problem can be more creative, since
they will be based on new discoveries
about the ways people make decisions.
Peter Diamond, of the Massachusetts Institute of Technology, continued the discussion with a summary
of the size of the Social Security and
Medicare deficit problems and a critique of proposed solutions. According to the 2005 Annual Report of the
Trustees of Social Security, the Social
Security trust fund will be exhausted
in 2041. At that point, benefits would

be cut by about 25 percent to match
revenues. Using a 75-year horizon, the
unfunded portion of promised benefits
is 1.8 percent of taxable payrolls as of
January 1, 2005. Comparing this to
the current Social Security payroll tax
of 12.4 percent shows that there is a
problem, but it is not an enormous one
when compared with the Medicare
problem.
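Diamond's two numbers imply how large a payroll-tax increase would be needed, by itself, to close the 75-year gap:

```python
shortfall = 1.8     # unfunded benefits over 75 years, percent of taxable payroll
payroll_tax = 12.4  # current Social Security payroll tax, percent

print(f"payroll tax that would close the gap: {payroll_tax + shortfall:.1f} percent")
print(f"that is a {shortfall / payroll_tax:.0%} increase over today's rate")
```

Roughly speaking, a tax of about 14.2 percent of taxable payroll, some 15 percent above today's rate, would close the gap, which is why Diamond calls the problem real but not enormous next to Medicare's.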
Certain groups rely on Social Security for a larger part of their retirement income than other groups do. For
example, a fifth of the elderly get all
of their income from Social Security,
and two-thirds get 50 percent or more.
Particularly vulnerable groups include
long-career low earners, widows and
widowers with low benefits, disabled
workers, and surviving children.
While poverty among the elderly has
fallen, it is still at fairly high levels,
especially among divorced women.
In Diamond’s view, Social Security
is a part of addressing the country’s
poverty issue.
Diamond was critical of the Bush
Social Security plan and some of the
others on the table. Diamond agreed with Shiller that, to the extent that people’s private retirement plans are moving from defined benefit to defined contribution, with some investment in the stock market, the individual accounts in the Bush plan are not that valuable. Moreover, the
private accounts would exhaust the
trust fund about a decade earlier than
under the current program. Some
of the other plans actually go in the
wrong direction and make the 75-year
Social Security trust fund shortfall
larger rather than smaller. Provisions such as price indexing, which
in Diamond’s view is a misnomer for
real-wage deflating, and raising the
age at which full benefits start result in
large reductions in benefits. Regarding
Medicare, the issue in Diamond’s view
is how to combine universal coverage

with quality and cost containment.

Kent Smetters, Wharton School, University of Pennsylvania
Diamond said that without universal
coverage, cost containment would
have some unintended consequences.
The final speaker in the first session, Kent Smetters, of the Wharton
School, University of Pennsylvania,
addressed some of the measurement
issues in budget accounting, arguing
that the traditional budget approach
worked fine when government
programs were more of a bricks and
mortar type; it works less well for programs with long-term liabilities, such
as Medicare and Social Security. The
federal budget substantially underestimates the government’s liabilities by
ignoring long-term liabilities. It tracks
them separately, off budget. So the
budget gives an incomplete picture of
the country’s fiscal imbalance. The
traditional budget accounting also
makes it hard to evaluate the impact of
program reforms. If the benefit of the
reform is off budget but the cost is on
budget, the reform will look like it increases the fiscal imbalance. Smetters
proposes a new budgetary framework
that includes two integrated components: a fiscal imbalance component

that equals debt held by the public
plus the present value of all future
outlays minus the present value of
all future revenues, and a generational imbalance component that
measures the proportion of the fiscal
imbalance due to spending by past
and current generations relative to
what they have paid into the system.
Different reform proposals for Social
Security will have different effects
on the generational imbalance,
depending on how they affect taxes
and benefits now and in the future.
Under the assumptions made by the
Office of Management and Budget,
the Department of the Treasury, and
the Council of Economic Advisers,
Smetters estimates that the total fiscal
imbalance in Social Security in 2004
was $8 trillion. Past and living generations have gotten about $9.5 trillion
more from Social Security than they
paid into it, and under current law,
future generations will pay $1.5 trillion
more into the program than they will
get out of it, for a net total imbalance
of $8 trillion. Medicare has a much
larger imbalance of $61 trillion, with
$24 trillion due to past and living generations and $37 trillion due to future
generations. The rest of the federal
government is in a surplus. Thus, the
total fiscal imbalance was $63 trillion
in 2004, and it is growing significantly
each year. This represents 18 percent
of all future payrolls and is a very large
problem. For example, Social Security
and Medicare benefits would have to
be cut by over half to close the imbalance. Alternatively, the combined
employer-employee payroll tax would
have to rise from 15.3 percent to over
32 percent and the payroll tax ceiling
would have to be removed.
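Smetters' accounting is easy to sanity-check against the figures quoted above. The sketch below is a back-of-the-envelope check of my own, not part of the talk; the function and variable names are mine, and all amounts are in trillions of 2004 dollars.

```python
# Net fiscal imbalance of a program = excess benefits received by past and
# living generations, plus the excess received by future generations
# (negative when future generations pay in more than they will get out).
def fiscal_imbalance(past_and_living, future):
    return past_and_living + future

social_security = fiscal_imbalance(9.5, -1.5)  # $8 trillion, as quoted
medicare = fiscal_imbalance(24.0, 37.0)        # $61 trillion, as quoted

# The quoted total of $63 trillion implies the rest of the federal
# government runs a surplus (a negative imbalance) of about $6 trillion,
# a figure inferred here rather than stated in the talk.
rest_of_government = 63.0 - (social_security + medicare)

print(social_security, medicare, rest_of_government)  # 8.0 61.0 -6.0
```

The $63 trillion total is also quoted as 18 percent of all future payrolls, which is roughly consistent with raising the 15.3 percent combined payroll tax to over 32 percent.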
Given these dire numbers, why
haven’t the capital markets reacted?
Smetters says it could be behavioral,
along the lines Shiller discussed; that
is, they don’t understand the magni-

tudes. But it also could be that capital
markets believe the government is
going to solve the problem mainly by
cutting benefits rather than raising
taxes. Smetters thinks this is a somewhat irrational view, given the aging
of the median voter. He ended on an
optimistic note by pointing out that 50
percent of U.S. households don’t hold
any equities either directly or indirectly
in employer-sponsored defined contribution plans. Thus, the component of
the Bush plan that puts people into a
life-cycle portfolio plan automatically
by default is an important innovation
in Smetters’ view. He is less concerned
than Shiller and Diamond that people
will make wrong choices.
The Policy Forum’s keynote
luncheon speaker was Katherine
Baicker, a member of the Council of
Economic Advisers, who spoke about
the important fiscal challenges the
U.S. faces over the coming years on
both the spending and revenue sides of
the federal balance sheet and her views
of what steps should be taken to meet
those challenges. Baicker pointed out
that while over the last 40 years spending and revenues have been relatively
stable, there have been important
changes in the composition of both
that will help determine future stability if nothing is done to entitlement
programs, the largest of which are Social Security, Medicare, and Medicaid.
Without changes in those entitlement
programs, Baicker says that a decade
from now, government spending as a
share of GDP will begin to rise swiftly,
with potentially dire consequences for
the U.S. economy.
On the expenditure side, federal
spending as a fraction of GDP since
1962 has been relatively stable at about
20.4 percent, but the share of GDP
devoted to entitlement spending has
tripled, while the share of spending
going to defense and other government
spending, such as highways, education, and national parks, has fallen. In
1962, entitlement spending was primarily Social Security, and it was 2.5
percent of GDP and 13 percent of the
federal budget. Medicare and Medicaid were introduced in the 1960s, and
in 2005, the three programs together
accounted for 8 percent of GDP and
made up 40 percent of the federal
budget (not including the substantial
contributions to Medicaid made by the
states).
The revenue side of the federal
budget also shows stability, with total
federal revenues averaging 18.2 percent
of GDP since 1962. Payroll taxes,
which are used to fund Social Security and Medicare, have doubled over
the period, from about 3 percent of
GDP to a bit over 6 percent. Personal
income tax collections have been
relatively stable, while excise tax and
corporate income tax collections have
declined. Comparing the revenue
and expenditure sides shows that the
federal government has been running
a deficit of about 2.2 percent of GDP a
year. In 2005, the deficit was somewhat higher at 2.6 percent of GDP.
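Baicker's averages line up. The sketch below is my own arithmetic check, not a calculation from her talk, confirming that the quoted spending and revenue shares imply the quoted deficit.

```python
# Averages since 1962, in percent of GDP, as quoted above.
spending_share = 20.4
revenue_share = 18.2

# Average deficit = spending share minus revenue share.
deficit_share = spending_share - revenue_share
print(round(deficit_share, 1))  # 2.2, matching the roughly 2.2 percent cited
```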
But Baicker pointed out that
the stability of the fiscal situation in
the U.S. over the past 40 years is in
jeopardy, since the first part of the
baby boomers will reach retirement age
in 2008. Over the next 40 years, the
costs of the three entitlement programs
will rise from about 8 percent of GDP
today to over 15 percent of GDP in
2045. This trend suggests that without
a change in the programs, either taxes
must increase substantially or spending outside of entitlements must be
nearly eliminated – both poor choices
in Baicker’s view. Baicker agreed
with the earlier speakers that solving
the Medicare/Medicaid problem was
more challenging than solving Social
Security, because she was optimistic
that the President’s plan of progressive
indexing of benefits of higher-earning
workers to prices would be
an important step toward
permanent solvency.
To control the cost of
government-financed health
care, Baicker said we need to
address the costs of health
care in the private sector
as well. In her view, much
of the spending on health
care – both publicly and
privately financed – is not
being efficiently allocated.
To alleviate this, she said it
is most important to create
incentives for high-value
care. For example, Baicker said that
the current tax code subsidizes employer-provided health insurance relative to
other forms of compensation and to
individually purchased health insurance. This leads to insurance coverage
of routine and predictable health-care
expenditures rather than paying for
those out-of-pocket and insuring
against catastrophic and unexpected
expenditures. Baicker says capping the
employer exclusion of health insurance premiums is one step that could
be taken to increase the sensitivity of
the use of health care to its cost. She
is also in favor of expanding health
savings accounts, which allow people
to pay for health care with pre-tax
dollars as long as their health insurance policy includes a sufficiently high
deductible and catastrophic coverage.
She believes steps like these would
help ensure that health-care resources
were allocated to uses with higher
value, and she thinks this could also
increase competitiveness in these
markets, leading to lower prices and
improved quality. At the same time,
Baicker acknowledged in the question
and answer period that several difficulties would need to be solved before
moving to what she calls “consumer-driven health care.” One of these is a
lack of transparency. For example, it
is difficult to decipher the pricing of
services from the bills you receive from
health-care providers and to obtain information on the quality of providers.
Without price and quality information,
rational health-care decisions are severely hampered (even aside from the
behavioral aspects of decision-making
Shiller spoke about).

Katherine Baicker, Council of Economic Advisers
Our second session turned to two
budget experts for their views on the
current budget deficit and prognosis.
Doug Holtz-Eakin, then director of
the Congressional Budget Office, said

he viewed the U.S. fiscal situation to
be the single most important economic
policy challenge we face if the current programs are not reformed. In
his view, adhering to the promises
to spend as under current law will
fundamentally impair the economic
success of the U.S. It will result in a
larger federal government, higher tax
rates, and more reliance on mandates
and regulations to achieve policy aims
rather than on the budgetary process.
In the CBO’s summer update
to the budget outlook, the federal
budget was projected to move back to
baseline trends and come into better
balance over the next five years. But
there were several risks to that projection, for example, the path of defense
spending and possible changes to the
alternative minimum tax. Moreover,
the hurricanes, which occurred after
that update, affected the budget in
three ways. They changed the cost
of ongoing programs, but not by large
amounts. They led to direct appropriations for relief and recovery, but the
spending associated with those generally takes place over time. They might
also lead to permanent changes in the
law; for example, 12 pieces of legislation with hurricane relief provisions
passed quickly. Holtz-Eakin explained
that the spending and tax reconciliation is now an important part of the
budget process. For the first time in
eight years, Congress has used these
procedures to cut spending in mandatory programs that are relevant to the
long-term budget outlook. In Holtz-Eakin's view the important thing is not
the amounts but the fact that Congress
now understands that each year the
mandatory programs need to be on the
table and that the process of reconciliation will be part and parcel of the
process of legislating.
While he suggested there are some
things the government could do to
improve the formulation of the budget

– such as incorporating an average
level of funds in anticipation of
recurring natural disasters like
hurricanes, wildfires, and droughts
– Holtz-Eakin said this is not the key

to solving our budget problems. Rather, the key is addressing the long-term
cost of our mandatory spending problems. It is important that the relatively
benign near-term budget outlook not
seduce us into ignoring the long-term
problems. In Holtz-Eakin’s view, policy
decisions rather than the course of the
economy are central to the long-term
budget outlook.
Alice Rivlin, of the
Brookings Institution,
continued the discussion by
pointing out that it has been
decades since the U.S. has
seen as rapid a reduction
in revenues as a percent of
GDP as has occurred in the
past five years. The CBO
projections are based on current law. Thus, they assume
the tax cuts of 2001 and 2003
expire. If instead
they continue, the budget
imbalance is much worse.
Rivlin said we experienced
a similar situation in the
1990s. Back then there
was bipartisan agreement that

something needed to be done about
it. There wasn’t bipartisan agreement
about what should be done, but rules
were put in place to control spending, control entitlement spending,
and control tax cuts, and the strong
economy operated to reduce the deficit
and turn it into a surplus. Rivlin said
this time we do not have consensus
that there is a problem, even though
the future budget imbalance is very
large as spending on Social Security,
and especially on Medicare and Medicaid, increases rapidly. Echoing the
earlier Policy Forum speakers, Rivlin
believes the Social Security problem is
manageable; Medicare and Medicaid
are far larger problems. She thinks per
capita health spending, both nationally
and in these programs, will continue
to rise 2.5 percent faster than GDP as
it has over the last four decades; she
is skeptical of the Medicare trustees’
assumption that it will decelerate to 1
percent faster than GDP growth and
eventually to the same pace as GDP
growth.
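Rivlin's 2.5-percentage-point wedge compounds powerfully. The sketch below is my own illustration of that assumption, not a calculation from the talk: a share of GDP growing 2.5 percent a year faster than GDP doubles in about 28 years.

```python
import math

excess_growth = 0.025  # health spending grows 2.5% per year faster than GDP

# The health share of GDP scales by (1 + excess_growth)**t, so its doubling
# time is ln(2) / ln(1 + excess_growth).
doubling_years = math.log(2) / math.log(1 + excess_growth)
print(round(doubling_years, 1))   # about 28 years

# Over the four decades Rivlin references, the share multiplies by:
print(round((1 + excess_growth) ** 40, 2))  # about 2.7x
```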
Alice Rivlin, Brookings Institution

Under Rivlin's assumption, there
is no tax rate that will bring back fiscal
balance, and if the deficit problem isn't
solved, interest expenditures will rise
to over 20 percent of GDP by 2050, so
borrowing is not a sustainable option
either.
According to Rivlin, to solve the
budget imbalance problem, we need to
slow the rise in health-care spending
in the federal budget. And we need
to do that in a way that will slow the
per capita spending on health care not
only by the government but also by the
private sector, because otherwise it is
just shifting the expenditures. Rivlin
pointed out that the U.S. has a very
expensive health-care system compared with other countries; while the
rates of growth in per capita spending
are similar in developed countries,
our level of spending is higher. While
there have been cost-saving innovations in providing health care, these
innovations also tend to increase
demand for the service. But there are
ways to increase cost effectiveness.
For example, the practice of medicine
continues to be paper-based; improvements in information systems could
probably reduce costs and might also
result in fewer treatment errors.
Because the Medicare system is an
almost universal system for the over-65
population, it holds the potential for
learning about which treatments are
cost-effective – provided its data can
be analyzed. The next issue would
be what to do with this information.
Rivlin pointed out that one strategy
was suggested by Baicker: to give
consumers the information and let
them make the choices through, for
example, health savings accounts.
The other strategy is to change the
reimbursement system to reward effective medical care and not pay for
ineffective and excessive medical care
– although Rivlin admitted we don’t
know how to do that yet. She suggested a companion step is to use federal
government research dollars to push
for innovations likely to be cost saving,

especially for diseases like
cancer where the innovations are unlikely to lead
to expanded treatment,
since these diseases already
always get treated at some
stage. There are political
obstacles that would need
to be overcome. In Rivlin’s
view, these include the
power of insurers, pharmaceutical companies,
and providers, who have
been fairly negative on
change. Rivlin concluded
by pointing out that the U.S.
is not alone in this problem,
which she says is a problem
of prosperity. In the U.S. and in other
successful economies people are living
longer and better, and part of that living better is better medical care.

Richard Fisher, President, Federal Reserve Bank of Dallas

The Policy Forum's final session
took up the broader implications of
fiscal imbalance for the macroeconomy. Richard Fisher, president of the
Federal Reserve Bank of Dallas, agreed
that the magnitude of the projected
budget deficits is of great concern
and said that, left unchecked, they
have the potential of harming U.S.
economic prosperity and undermining
the progress we have made on inflation. He believes it is very important
to consider how the forces of globalization affect U.S. fiscal deficits. Globalization means a nation's economic
potential is no longer defined by its
geographic boundaries. In a global
economy, goods, services, capital, and
labor can migrate to where they can be
most efficiently used and where there
are fewest obstacles to putting them
to efficient use. So countries need
to compete for these resources. In
Fisher's view, businesses have come to
grips with globalization, and globalization has helped discipline central
bankers around the world to focus on
keeping inflation low. Fisher believes
that globalization is also exerting some
discipline on fiscal policymakers, and
the U.S. is in better shape than most
of its competitors. One of the ways
globalization has a beneficial effect
on fiscal decision-making is via tax
competition. Fisher pointed out that
average tax rates are falling in the
world's most open economies. Also, to
the extent that young people can move
to escape high Social Security taxes,
it is more difficult to sustain a system
based on intergenerational transfers.
In theory, globalization should exert a similar discipline on the spending

side, but Fisher says we have yet to see
such deficit-reduction pressures. Nonetheless, when investors are considering
where to allocate their capital, it is
the relative position of one country vs.
another that matters. In Fisher’s view
the U.S. has been able to finance its
spending via foreign capital because
we are doing better in terms of fiscal
policy compared with other countries. Fisher provided some numbers:
According to OECD data, the public-sector deficit (including federal, state,
and local governments) was
projected to be 3.7 percent of GDP
in the U.S. in 2005, compared with
6.5 percent in Japan, 4.3 percent in
Italy, and 3.9 percent in Germany. He
thinks that the demographic challenges regarding Social Security and
Medicare in the U.S. are not as severe
as those facing Japan and Germany.
But while the U.S. may be better off than other countries, Fisher
believes following “least-bad” policy
is risky, since it is never clear whether
our advantages will last, especially if
a rising deficit erodes U.S. economic
performance. He believes that to secure our advantages we should put our
fiscal house in order before
our competitors put theirs
in order. Fisher pointed
out that monetary policymakers cannot be indifferent to the thrust of fiscal
policy because poor fiscal
policies create pressure for
poor monetary policies,
e.g., monetizing the debt
and fueling inflation. But
he emphasized that the
solution to the U.S. fiscal
imbalance rests with fiscal
policymakers and not the
central bank.
Robert Barro, of
Harvard University, took
up the theme of monetary
policy touched on by

Fisher. In Barro’s view, in the last 25
years there has been a major triumph
in terms of central banks around the
world achieving low and stable rates
of inflation. He said he is not certain
why monetary policy has worked as
well as it has in the U.S. and abroad.
His analysis indicates that Fed policy
under former Chairman Greenspan
could be characterized as a reaction
function, with the federal funds rate
reacting to the inflation rate and the
real economy as embodied in employment growth and the unemployment
rate. The analysis suggests that the
Fed does not respond to changes in
real GDP that are due to productivity growth. The Fed’s policy is also
characterized by gradualism: It moves
interest rates gradually. Barro said it
was not clear that the Fed’s reacting
to the real economy and gradualism
are beneficial. Nonetheless, in Barro’s
view the Fed’s triumph over high inflation is a remarkable achievement.
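A reaction function of the kind Barro describes is often written as a partial-adjustment (interest-rate smoothing) rule. The sketch below is a generic Taylor-type illustration of that idea; the coefficients, function names, and the unemployment-gap term are my own assumptions, not Barro's estimated equation.

```python
def funds_rate_target(inflation, unemployment_gap, r_star=2.0, pi_star=2.0,
                      a=0.5, b=0.5):
    """Stylized target rate: neutral real rate plus inflation, plus responses
    to the inflation gap and to the real economy (an unemployment gap here)."""
    return r_star + inflation + a * (inflation - pi_star) - b * unemployment_gap

def next_funds_rate(current_rate, target, smoothing=0.8):
    """Gradualism: move only partway toward the target each period."""
    return smoothing * current_rate + (1 - smoothing) * target

rate = 1.0
target = funds_rate_target(inflation=3.0, unemployment_gap=-0.5)
for _ in range(4):  # the rate drifts gradually toward the target
    rate = next_funds_rate(rate, target)
print(round(target, 2), round(rate, 2))  # 5.75 3.8
```

The smoothing parameter is what makes the simulated path move in small steps, the gradualism Barro notes in the Fed's observed behavior.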
Alan Auerbach, of the University

of California at Berkeley, formulated
his talk around the policy changes we
should expect in response to the fiscal
situation we face and the economic
effects we should expect as people anticipate these policies. He agreed with
earlier speakers that we face a rising
imbalance that gets much larger with
every year something isn’t done to solve
it. Auerbach said the problems policymakers need to address (in order of importance) are health-care spending and
the federal contributions to Medicare
and Medicaid; general revenue taxes,
i.e., taxes not associated with entitlement programs and not payroll taxes;
and Social Security. In Auerbach’s
view, most discussions of the macroeconomic effects of fiscal imbalance
have focused mainly on the effect of
current fiscal policy on the economy.
These would include possible crowding
out of private investment by government spending, higher interest rates,
and current account deficits. There
has been little discussion of the effects

of the necessary policy changes on the
economy.

(left to right): Tony Santomero, former President, Federal Reserve Bank of Philadelphia;
Robert Barro, Harvard University; Alan Auerbach, University of California at Berkeley;
and Doug Holtz-Eakin, former Director, Congressional Budget Office.
Given the size of the future fiscal
imbalance and the fact that federal
taxes as a share of GDP are lower now
than at any time since the 1960s, an
eventual tax increase of 4 percent
of GDP, through a combination of
broadening the tax base and increasing
marginal tax rates, would not be implausible in Auerbach’s view. Economic models suggest that higher future
tax burdens should induce people to
increase effort today to be able to pay
future taxes and to save. Thus, we’d
expect higher labor-force participation,
higher employment, and higher private
saving to pay for future taxes. The
higher marginal tax rates might also
encourage more work today if people
plan to retire earlier than otherwise as
a result of the tax change and decide
to work harder now to save enough to
retire. However, higher marginal tax
rates would also induce lower private
saving, since those savings would be
taxed at a higher rate.
Auerbach doesn’t think much
progress on the Social Security and
Medicare problems will be made until
there is a crisis. At that point, the
problem will be too large to be solved
by increased payroll taxes alone, but
politically it will be nearly impossible to
make sizable benefit cuts for less affluent retirees. Thus, Auerbach believes
there will be means testing of entitlements in the future. Means testing has
mixed effects on incentives to accumulate wealth. If you are so wealthy that
you know you are going to be hit by
the means test, you’ll have an incentive to accumulate even more wealth,
since your retirement and health-care
benefits have just been reduced. But
if your wealth is near the level where
benefits are phased out by the means

test, you could have a strong incentive
to save less so that you would qualify
for benefits. And since you are saving
less, you'll work less as well.
Auerbach pointed out two other
potential macroeconomic effects as the
economy adjusts to changes in fiscal
policy. Trade deficits will shrink and
turn into trade surpluses in the future.
As that occurs, the composition of
U.S. GDP will change toward more
trade-sensitive industries. Until we
know how the fiscal imbalance will be
handled – how much taxes increase,
how much marginal tax rates increase,
how much the tax base broadens, how
much benefits are cut – there will be
substantial uncertainty in financial
markets. Until that uncertainty is
resolved, the equity premium should
be higher. At least, this should occur
when people realize the current fiscal
situation is not sustainable. A resolution of policy uncertainty would make
us better off, and Auerbach suggested
that the costs of adjustment that we
know must come at some point would
be lower if we adopted more gradual

systemic plans to address the fiscal
imbalance. However, Auerbach said
he was not encouraged by recent policy
actions.
SUMMARY
The 2005 Policy Forum generated
lively discussion among the program
speakers and audience on the challenges facing the U.S. in dealing
with its increasing fiscal imbalance.
Although there was no agreement on
particular solutions, there was agreement that difficult policy choices will
have to be made and that the time for
making them is now, not later, if we
want to reduce the impact of the fiscal
imbalance on the U.S. economy. BR

The Philadelphia Fed
POLICY FORUM

December 1, 2006

We will hold our sixth annual Philadelphia Fed Policy Forum on Friday, December 1, 2006.
This year’s topic is “Economic Growth and Development: Perspectives for Policymakers.” At
right is the program. The Policy Forum brings together a group of distinguished economists
and policymakers for a rousing discussion and debate of the issues. For further information,
please contact us at PHIL.Forum@phil.frb.org.

Photo by B. Krist for GPTMC

FEDERAL RESERVE BANK
OF PHILADELPHIA

The Philadelphia Fed Policy Forum

Economic Growth and Development: Perspectives for Policymakers
December 1, 2006
Federal Reserve Bank of Philadelphia, 6th and Arch Streets
Continental Breakfast
Opening Remarks
Charles I. Plosser, Federal Reserve Bank of Philadelphia
Economic Growth and Development: An Overview of Issues and Evidence
Moderator: Michael Dotsey, Federal Reserve Bank of Philadelphia
		
	Roberto Zagha, The World Bank
Xavier Sala-i-Martin, Columbia University
Discussion and Audience Participation
Policy Responses: Trade and Foreign Aid
Moderator: Kei-Mu Yi, Federal Reserve Bank of Philadelphia
	Elhanan Helpman, Harvard University
William Easterly, New York University
Discussion and Audience Participation
Lunch
Financial Markets and Growth
Moderator: Loretta J. Mester, Federal Reserve Bank of Philadelphia
Jeffrey M. Lacker, President, Federal Reserve Bank of Richmond
	Robert M. Townsend, University of Chicago
Discussion and Audience Participation
Institutional Arrangements and Economic Growth and Development
Moderator: George Alessandria, Federal Reserve Bank of Philadelphia
Dani Rodrik, Kennedy School of Government, Harvard University
	Ross Levine, Brown University
Daron Acemoglu, Massachusetts Institute of Technology
Discussion and Audience Participation
Reception and Informal Discussion


Research Rap

Abstracts of
research papers
produced by the
economists at
the Philadelphia
Fed

You can find more Research Rap abstracts on our website at: www.philadelphiafed.org/econ/resrap/index.
html. Or view our Working Papers at: www.philadelphiafed.org/econ/wps/index.html.

Nontraded Goods and the
Behavior of Exchange Rates
Empirical evidence suggests that movements in international relative prices (such
as the real exchange rate) are large and persistent. Nontraded goods, both in the form
of final consumption goods and as an input
into the production of final tradable goods,
are an important aspect behind international relative price movements. In this paper,
the authors show that nontraded goods
have important implications for exchange
rate behavior, even though fluctuations in
the relative price of nontraded goods account for a relatively small fraction of real
exchange rate movements. In their quantitative study, nontraded goods magnify the
volatility of exchange rates when compared
to the model without nontraded goods.
Cross-country correlations and the correlation of exchange rates with other macro
variables are more closely in line with the data.
In addition, contrary to a large literature,
standard alternative assumptions about the
currency in which firms price their goods
are virtually inconsequential for the properties of aggregate variables in the authors’
model, other than the terms of trade.
Working Paper 06-9, “Nontraded Goods,
Market Segmentation, and Exchange Rates,”
Michael Dotsey, Federal Reserve Bank of
Philadelphia, and Margarida Duarte, Federal
Reserve Bank of Richmond
Interpreting the Link
Between Technology and
Human Capital
The positive correlations found between
computer use and human capital are often
interpreted as evidence that the adoption of
computers has raised the relative demand
for skilled labor, the widely touted hypothesis of skill-biased technological change.
However, several models argue that the
skill intensity of technology is endogenously
determined by the relative supply of skilled
labor. The authors use instruments for the
supply of human capital coupled with a rich
data set on computer usage by businesses to
show that the supply of human capital is an
important determinant of the adoption of
personal computers. Their results suggest
that great caution must be exercised in placing economic interpretations on the correlations often found between technology and
human capital.
Working Paper 06-10, “Labor Supply and
Personal Computer Adoption,” Mark Doms,
Federal Reserve Bank of San Francisco, and
Ethan Lewis, Federal Reserve Bank of Philadelphia

Using the National Income Accounts
to Quantify Economic Activity
This article presents a brief overview of the national income accounts. It summarizes the main parts
of accounts and situates them within the efforts of
economists to quantify economic activity and economic
well-being. The author argues that these statistics are
necessarily provisional and imperfect but nevertheless
extremely useful. Some current directions for economic
research seeking to extend the accounts are also discussed.
Working Paper 06-11, “National Income Accounts,”
Leonard Nakamura, Federal Reserve Bank of Philadelphia
Understanding the Great Depression
What caused the worldwide collapse in output from
1929 to 1933? Why was the recovery from the trough of
1933 so protracted for the U.S.? How costly was the decline in terms of welfare? Was the decline preventable?
These are some of the questions that have motivated
economists to study the Great Depression. In this paper,

the authors review some of the economic literature that
attempts to answer these questions.
Working Paper 06-12, “Monetary and Financial Forces
in the Great Depression,” Satyajit Chatterjee, Federal Reserve Bank of Philadelphia, and Dean Corbae, University
of Texas at Austin
Extending the Job Matching Model
In the U.S. labor market, the vacancy-unemployment ratio and employment react sluggishly to productivity shocks. The authors show that the job matching
model in its standard form cannot reproduce these
patterns due to excessively rapid vacancy responses. Extending the model to incorporate sunk costs for vacancy
creation yields highly realistic dynamics. Creation costs
induce entrant firms to smooth the adjustment of new
openings following a shock, leading the stock of vacancies to react sluggishly.
Working Paper 06-13, “Job Matching and Propagation,” Shigeru Fujita, Federal Reserve Bank of Philadelphia,
and Garey Ramey, University of California, San Diego

ANNOUNCEMENT AND CALL FOR PAPERS

The Federal Reserve Bank of Philadelphia, Rutgers University, and the University of Richmond

Real-Time Data Analysis and Methods in Economics
April 19-20, 2007
Philadelphia, Pennsylvania

The Research Department of the Federal Reserve Bank of Philadelphia, the Economics Department at Rutgers University, and the Robins School of Business at the University of Richmond
are sponsoring a conference on Real-Time Data Analysis and Methods in Economics to be held
at the Federal Reserve Bank of Philadelphia on April 19-20, 2007. The purpose of the conference is
to bring together leading researchers interested in all areas of real-time data analysis, including but not
limited to topics such as real-time macroeconometrics, finance, forecasting, and monetary policy.
Those interested in presenting a paper at the conference are encouraged to send a completed paper or detailed abstract by November 1, 2006, to Tom Stark at tom.stark@phil.frb.org. Discussions are
underway with a number of journals, including the Journal of Business and Economic Statistics, about
the possibility of publishing a special conference volume (though authors would not be compelled to
publish their paper in such a volume), and a variety of leading researchers in the area have expressed
interest in taking part. Additionally, a summary of the conference will be published in the Philadelphia Fed’s Business Review. We will provide some travel expenses for paper presenters and discussants,
following Federal Reserve guidelines. Conference details will be posted in due course on the websites
of the conference organizers.
Questions or comments should be directed to one of the conference organizers:
Dean Croushore: Economics, University of Richmond............................dcrousho@richmond.edu
Tom Stark: Federal Reserve Bank of Philadelphia....................................... tom.stark@phil.frb.org
Norman R. Swanson: Economics, Rutgers University........................ nswanson@econ.rutgers.edu
