
The Complexities of Monetary Policy
The following is a speech President Santomero delivered to the Downtown Economists Club, New York City, on March 26, 2001.

BY ANTHONY M. SANTOMERO

As a former academic researcher who is now president of the Federal Reserve Bank of Philadelphia, I’ve encountered several places where macroeconomic theory intersects with real world economics.

As a long-time research economist, I
derive great enjoyment from spending
time with fellow economists. Some call
us practitioners of the “dismal science,”
but all of us in this room know better.
After all, this is a meeting of the
Downtown Economists Club. “Club”
certainly has a festive, friendly ring to it,
so I’m confident our time together will
be anything but dismal.
As you know, I came to the
Federal Reserve Bank of Philadelphia
after spending many years on the
faculty of the Wharton School. So I
thought I would spend my time with
you today talking about the interplay
between my long experience as an
academic researcher and my new
responsibilities as a central banker.
Since I joined the Fed last
summer, I’ve encountered several
conundrums. I suppose you could also
call them “points of tension” — places
where macroeconomic theory intersects with real world economics.
Whatever terminology one
uses, these conundrums illustrate the
challenges that one confronts in
analyzing economic conditions,
forecasting their likely future course,
and using information that is often
imperfect to map out appropriate
monetary policy.
As it happens, my tenure at
the Fed has partly coincided with
events that illustrate some of the
fundamental issues that I would like to
talk about today. Not long ago there
was concern about an overheating
economy. Then, in little more than the
blink of an eye, there was concern
about a possible recession. How
quickly things change and how
suddenly pressure for policy response
shifts direction! I once viewed this
from the relatively safe haven of the
academy. I now view it from the
trenches as a policymaker. It’s been an
interesting time.
Today I’ll talk about four
conundrums I’ve come upon in
making monetary policy decisions.
Let’s take them one at a time.
ON THE SUPPLY SIDE
The first we might call the
“supply side” conundrum. The key
challenge here lies in resolving the
fundamental issue of how rapidly the
economy can grow on a sustained
basis. There has been much discussion
about the U.S. economy’s long-run

capacity for growth in light of the
remarkable gains in productivity in the
latter half of the 1990s. The strength of
the economy over that period, accompanied by a remarkably low inflation rate,
was due, in no small measure, to more
rapid productivity growth, which
stemmed largely from technology
investments made during the decade.
With the technology sector undergoing
substantial change and reevaluation, it
might be interesting to examine this
relationship as the first area of focus.
Let me begin with what we
know: productivity growth has
improved because of technology. But
this statement is not as useful as it
might be, because one does not know
exactly what this foretells about the
future pace of productivity growth.
Put another way, we don’t
have the equation that describes how
technology affects productivity. Nor do
we have the equation that describes
how technology evolves. In the end,
we do not even have a satisfying measure of the variable we call
technology. So when we ask ourselves
how fast the economy can grow going
forward, we must acknowledge that
there is a substantial degree of uncertainty about the answer because of our
limited knowledge of the processes
underlying future productivity enhancement.
As an economist I can accept
this. But as a policymaker, I have to
take the next step — the one that
makes me uncomfortable as an
economist. That is, in spite of our
uncertainties, indeed our ignorance, I
have to make some assessment of the
rate at which the application of
technological innovations raises
potential output going forward.
Making that “supply side” assessment
is essential to laying out the path of
long-term, sustainable economic
growth that monetary policy aims to
match from the demand side.
Well, what is my estimate? I
expect annual productivity growth to
average 2 to 3 percent for the foreseeable future. Why? Because I believe
that “new economy” technologies have
yet to fully infuse the “old economy”
with the productivity gains they offer.
When I talk with business people
around our District, they tend to
agree. It seems that advances in
information technology and information management are still in the
process of revolutionizing the way
businesses design, produce, and deliver
their products and services. This
process takes time and often lags the
purchase of technology, but the
benefits accruing to real sector
productivity are real and sustained.
So if information technology
continues to revolutionize industry,
then the economy can sustain real
GDP growth of 3 to 4 percent without
accelerating inflationary pressures for
the foreseeable future. But let me
stress that this figure is a long-run
average. One must allow for some
margin of error around this number and
expect it to exhibit some cyclical
variability. Only simple equations are
straight lines; real economies tend to
move less linearly.
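
A rough way to see the arithmetic connecting those two ranges (my own back-of-the-envelope growth accounting, not a calculation from the speech) is to write potential output growth as trend growth in labor input plus trend growth in output per hour:

$$
\Delta \ln Y^{*} \;\approx\; \Delta \ln L \;+\; \Delta \ln (Y/L) \;\approx\; 1\% \;+\; (2\% \text{ to } 3\%) \;\approx\; 3\% \text{ to } 4\%,
$$

where the roughly 1 percent figure for trend labor input growth is an illustrative assumption of mine.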

ON THE DEMAND SIDE
The second of my conundrums is what might be called the
“demand side” conundrum. As
economists, we know there is some
interest rate that induces investors to
invest just the right amount, and
savers to save just the right amount, to
bring the economy to its potential
output. The key questions here are:
What is that interest rate? And how
does it evolve over time? One needs
to answer these questions in order to
assess whether monetary policy is
properly positioned to foster the
economy’s achieving its full growth
potential.
As an economist I am
comfortable with the idea that a
myriad of factors affect both saving
and investment decisions.
Some are identifiable and
measurable – like income on the
saving side, or depreciation rates on
the investment side. Others are
identifiable, if not so easily measured –
like expected returns to savings or
wealth targets on the saving side, and
technological breakthroughs, capacity
utilization, or acceptable hurdle rates
on the investment side.
I also know that some of these factors are subject to high-frequency fluctuations – like changes in wealth due to stock price variation – and some are subject to low-frequency trends – like changing demographics.

As an economist I am comfortable with this complexity because theory tells me that the marketplace will weigh them all and consistently drive the real rate of interest to its proper equilibrium, whatever it may be – even as that equilibrium shifts over time.
As a monetary policymaker, I
cannot be quite so comfortable. That
is because, whether we like it or not,
monetary policy today is an interest
rate policy. And so gauging the stance
of monetary policy – determining
whether the Fed is being stimulative,
contractionary, or neutral – is essentially an exercise in assessing where we
have set the real rate relative to that
long-run equilibrium path.
Let me give you a very
practical example of how this problem
plays out. If trend productivity growth
is higher now than it was 10 years ago,
then, everything else constant,
businesses now have a stronger
demand for funds to invest in new
projects and consumers save less
because they expect their incomes to
rise faster. So the equilibrium real rate
of interest should now be higher. How
much higher? And, of course, since
everything else is never constant, how
much higher on net?
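
One standard way to formalize that link (my own gloss, using the textbook consumption Euler equation rather than anything stated in the speech) ties the equilibrium real rate to trend growth:

$$
r^{*} \;\approx\; \rho \;+\; \gamma\, g ,
$$

where $\rho$ is the household discount rate, $\gamma$ is the coefficient of relative risk aversion (the inverse of the intertemporal elasticity of substitution), and $g$ is trend per-capita consumption growth. On this reading, a one-percentage-point rise in trend growth raises $r^{*}$ by roughly $\gamma$ percentage points, everything else held constant.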
My own point of view is that
the average equilibrium real rate
probably is higher now than it was 10
years ago. But again, I would allow for
a wide margin of error around any
estimate. Short-term and cyclical
variations alter the appropriate
momentary natural rate of interest,
making it of considerably less use in
determining the stance of monetary
policy. Actually, gauging monetary
policy at any point in time presents other
problems as well. This brings me to my
next conundrum.
ABOUT THE DYNAMICS
OF POLICY
My third conundrum I would
label the “policy dynamics” conundrum. This one is certainly nothing
new. Milton Friedman summarized the
problem years ago, coining one of the
most famous phrases in modern
economics, when he said that the
impact of monetary policy is subject to
“long and variable lags.” Consequently,
at any point in time, monetary
policymakers cannot tell whether what
they see going on in the economy is
the reflection of changing market
conditions or, alternatively, the lagged
effect of their own past actions. And
so an activist monetary policy intended
to fine-tune the economy’s performance could, in fact, destabilize it.
Friedman argued that the best
approach for monetary policymakers to
take would be to fix the growth rate of
the money supply at some constant
amount. Following this rule would
allow the economy to achieve its peak
efficiency, recognizing that this would
inevitably include some cyclical ups
and downs.
As an economist, I respect
Friedman’s analysis. But as a policymaker, I am left with the dilemma of
how one would put his prescription
into practice. Today there is no
monetary aggregate reliably linked to
spending growth, and so monetary
policy is, as I said a moment ago, an
interest rate policy. Obviously, fixing
an interest rate is not the same thing
as fixing the money growth rate.
Indeed, holding short-term interest
rates constant – not allowing them to
move as market conditions change – is
a sure-fire prescription for destabilizing
the economy.
So how does one balance the
need to move short-term interest rates in
response to shifting economic conditions
with the need to provide the marketplace with a stable and reliable monetary policy? I think there are two
answers. One answer is to move beyond
a commitment to stable money growth
and make a credible commitment to low
and stable inflation.
I believe that over the past 10
years, the Fed has successfully made
that transition. Whatever the
subtleties of particular monetary policy
actions, it is clear that the Fed’s
ultimate goal is to help create the
financial conditions that foster
maximum sustainable economic
growth. In the long run, the most
important contribution the Fed can
make toward this goal is to maintain a
low inflation environment. To a
considerable extent, the public’s
expectations about long-run inflation
are measures of its confidence in the
Fed’s commitment to that mission.
As many of you know, our
own Reserve Bank conducts a quarterly survey of professional economic
forecasters. Results of that survey show
that long-run inflation expectations
remain low and stable and have been
for the last several years. I consider
that an important signal that the Fed
has established its commitment.
The second way to solve the
policy puzzle of preserving flexibility in
setting interest rates while also
providing stability in monetary policy is
more tactical. Fed policymakers must
stand apart from the incessant demand
for instant reaction and the expectation of instant results. There is a
tendency among observers to focus on
the Fed’s next interest rate move, with
the implication that the Fed can and
should fine-tune the economy’s
performance. But the fact is that it
takes time for a policy action’s impact
to play out, and we are frequently
waiting for past actions to reach
fruition and achieve their desired
effect on the economy.

ABOUT CONFIDENCE
But before we get too
comfortable with the wisdom of a
“wait and see” approach, let me
describe the fourth and final conundrum I want to discuss with you today.
This is one that I personally have
found particularly perplexing since
joining the Fed. It is also one that has
gotten a lot of “ink” recently. I’ll call it
“the confidence conundrum,” because
it centers on how confidence plays a
role in macroeconomic dynamics.
The issue is this: when waves
of confidence – or doubt – wash over
the economy, how should monetary
policymakers respond to them? This is
a conundrum because there is ample
evidence that expectations about the
future are rational in the long run, and
the marketplace validates them on
average. But in the short run, the
marketplace is beset by waves of
confidence that move expectations
and thus may significantly affect
spending in ways that may or may not
be either sustainable or desirable.
What to do in the face of
variations in consumer or business
confidence is not an easy issue to
resolve. Macroeconomists usually
assume that the economy behaves as if
consumers and businesses form their
expectations rationally, and they
forecast the future based on observations of stable historical economic and
financial patterns. This is a convenient assumption because it obviates
the need to model people’s decision-making explicitly, and it keeps changes
in expectations from playing an
independent role in the performance
of the economy. But we know that
reality is not that simple.
While measures of consumer
confidence usually track historical
movements in economic variables –
income, wealth, indebtedness,
unemployment, and the like – there are
occasions when confidence moves
beyond what the incoming economic
data might warrant. These exogenous
shifts in confidence may not be rational.
Consumers and investors are capable of
over- and underreaction. After all, we
are only human.
Nonetheless, these shifts in
confidence can cause changes in
expectations that affect spending
decisions and so can become self-fulfilling, or at least self-sustaining,
processes for a considerable period of
time. Consequently, the role played by
expectations can be at once more
significant and more complicated than
our standard macroeconomic models
allow.
We should not lose sight of
how important expectations are to
people’s decision-making and how far-reaching the impact of changes in
expectations can be. Expectations can
change quickly and can dramatically
alter aggregate demand.
As a former finance professor,
I am intimately familiar with the
investment decision process. It is, to a
large extent, a process of expectations.
Businesses routinely try to project the
future gains to be derived from
investments made today. This is
fundamental to capital budgeting, a
subject that I taught too many MBA
students over the course of too many
years!
Likewise when individuals
make consumption and savings
decisions, expectations play an
important role. The appropriate
amount to save for retirement, for
example, depends in large part on
expectations of future rates of return.
In short, when it comes to
making economic decisions, expectations matter. And I would add that
shifts in that intangible we call
confidence affect those expectations.
I believe that we are in the midst of
dealing with one of these shifts in

confidence right now. The key issue
that we must address is the extent to
which it will have a significant impact
on the aggregate economy going
forward.
So how should monetary
policy respond? I do not think the Fed
should routinely take policy actions for
the sole purpose of boosting expectations or merely to affect confidence.
This would ultimately be a dangerous
and destabilizing game. However, I
believe that if a decline in confidence
is viewed as having a substantial
dampening effect on overall real sector

demand growth, then monetary policy
can and should respond – with the aim
of restoring overall demand growth to a
pace consistent with potential supply
growth.
I believe the Fed’s recent
policy actions are consistent with this
approach. It responded to a variety of
indications that aggregate demand
growth had been weakening, including
a deterioration in confidence that was
more severe than the underlying data
seemed to indicate. And the Fed
remains vigilant by continually
monitoring the behavior of the real
economy.
The lesson I take away from

this experience is that sometimes
monetary policy decisions have to be
based on something more than well-constructed theory and overwhelming
evidence from the data. Sometimes
they must be based on our sense of the
situation. Such situations do not arise
very often, but when they do, it is
important, given the lags in the impact
of monetary policy, that the Fed move
expeditiously.
Well, I have shared with you
some of the musings of a professor
turned policymaker. At the end of the
day, where do all of these conundrums
leave me?
By their nature, conundrums
are not easily resolved, and so I will
continue to consider them in the
months and years ahead. Even at this
stage, however, I think they suggest a
useful approach to monetary
policymaking. To deal prudently with
the uncertainties on both the supply
side and the demand side of the
economy, as well as the dynamics of
monetary policy, monetary policy
ought to move in careful increments
and at a measured pace.
Overlaying this is the fact
that expectations matter and we must
deal with the real impact of sharp
shifts in public confidence in a more
expeditious manner. Doing so requires
a sensitivity to nuance and timing that
I expect policymakers will always find
challenging.
For me personally, the
transition from academic life to the
world of central banking is proving to
be an invigorating challenge. In my
new role I’ve learned that I can be the
proverbial two-handed economist only
up to a point. In the end, decision-making requires a one-handed
economist who must take action, even
if issues remain open and questions
remain unanswered.


Why Does Countercyclical
Monetary Policy Matter?
BY SATYAJIT CHATTERJEE

Satyajit Chatterjee is an economic advisor in the Research Department of the Philadelphia Fed.

Modern capitalistic economies use stabilization policies to minimize fluctuations in the unemployment and inflation rates. In
the United States, the Federal Open Market Committee (FOMC) lowers the target interest rate for
interbank loans as economic activity slows or when
a financial crisis looms (as in the fall of 1998) and
raises it when inflation threatens to accelerate (as
in late 1999 and early 2000).

Such countercyclical monetary policy
is one example of a stabilization policy.
Other examples of U.S. stabilization
policies include the federal insurance
of bank deposits (and the concomitant
supervision and regulation of banking)
and income-maintenance programs,
such as unemployment insurance.
Macroeconomists have
devoted much effort to understanding
how countercyclical monetary policy
affects the volatility of the unemployment and inflation rates. In contrast,
macroeconomists have directed much
less effort to understanding why
countercyclical monetary policy is
beneficial. This neglect reflects the
fact that, until recently, macroeconomists of very different persuasions agreed that policies aimed at
reducing the volatility of unemployment and inflation are desirable. Of
course, economists disagreed about
what form those policies should take,
but no one questioned the premise
that a less volatile macroeconomic
environment was a desirable policy
goal.
That is no longer the case.
During the last dozen years or so, an
influential minority of macroeconomists has questioned the supposed
benefits of reducing volatility and, by
implication, the supposed benefits of

countercyclical monetary policy.
The source of this development is the same as that which
underlies most major developments in
macroeconomics in the last half-century, namely, the desire to ground
macroeconomics in sound theoretical
foundations. As in the other sciences,
“sound theoretical foundations” means
explaining macro-level phenomena in
terms of micro-level phenomena; for
example, using theories of household
and business behavior to explain the
behavior of, say, aggregate consumer
spending or aggregate business
investment.
The desire for microfoundations also means that macro-level policies (such as countercyclical
monetary policy) need to be justified in
terms of micro-level effects — how
such policies ultimately benefit
households. Surprisingly, the link
between less macroeconomic volatility
and improved household well-being
has proven weaker than many
macroeconomists might have supposed.
Concerns about the benefits
of countercyclical monetary policy
(and of stabilization policies in
general) are obviously of great importance to the Federal Reserve System.
My purpose in this article is to
accomplish two tasks: to state clearly
the mainstream view of the supposed
benefits of countercyclical monetary
policy and the challenge posed to it by
recent microfoundations-oriented
research; and to consider how this
challenge may alter our views about
the benefits of countercyclical monetary policy.

A PRIMER ON MAINSTREAM
MACROECONOMICS AND
ITS POLICY IMPLICATIONS
Let’s begin with a brief
account of how mainstream macroeconomics makes sense of countercyclical
monetary policy.1 In the mainstream
view, the actual unemployment rate
can deviate from the natural, or long-term, unemployment rate. This natural
rate is determined by factors that
change slowly, such as demographics,
technology, laws and regulations, and
social mores. Because markets don’t
work perfectly, there can be extended
periods when the actual unemployment rate exceeds the natural rate.
During such times, mainstream
macroeconomic theory predicts that
the inflation rate will fall because
aggregate demand for goods and
services will tend to fall short of
aggregate supply. At other times, the
unemployment rate can fall below the
natural rate, and during those times,
theory predicts that the inflation rate
will rise because aggregate demand will
tend to exceed aggregate supply. According to mainstream macroeconomics, business cycles are a manifestation
of these deviations between the actual
and natural unemployment rates.
This mainstream view of
business cycles provides the rationale
for countercyclical monetary policy.
Suppose that the monetary authority
uses monetary policy to eliminate the
gap between the actual and the natural
unemployment rates. In practice, the
monetary authority would lower short-term interest rates whenever the

1 Some textbooks call this theory the New

Keynesian or IS-LM approach to macroeconomics. But labels can be misleading; for instance,
Bradford De Long calls the same theory a
subspecies of monetarism. To avoid confusion I
call it the “mainstream view” because it is the
view that characterizes a broad swath of
academic macroeconomics and virtually all of
policy-oriented macroeconomics.

actual unemployment rate threatened
to exceed the natural rate and raise
them whenever the opposite happened. If this policy were successful,
the actual unemployment rate would
track the natural unemployment rate
closely. Since the natural unemployment rate changes only gradually over
time, the result would be a less volatile
actual unemployment rate. Without
persistent gaps between the actual and
natural unemployment rates, the
inflation rate would also be less
volatile. Generally speaking, households and businesses do not care for
volatility in the unemployment rate or
inflation rate, so such a policy would
enhance public well-being.
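
To make the rule just described concrete, here is a deliberately stylized sketch of my own (it is not the FOMC's actual reaction function, and the numbers are hypothetical):

```python
# A stylized countercyclical rule (illustrative only): cut the policy rate when
# unemployment threatens to exceed the natural rate, raise it when it falls below.

def policy_rate(neutral_rate, unemployment, natural_rate, response=1.5):
    """Move the short-term rate against the unemployment gap."""
    gap = unemployment - natural_rate      # positive gap = slack in the economy
    return neutral_rate - response * gap   # ease when gap > 0, tighten when gap < 0

# Hypothetical numbers: a 4 percent neutral rate and a 5 percent natural rate.
print(policy_rate(4.0, unemployment=6.0, natural_rate=5.0))  # 2.5 -> ease
print(policy_rate(4.0, unemployment=4.0, natural_rate=5.0))  # 5.5 -> tighten
```

The caveats discussed in the text apply with full force: the natural rate in this sketch is not observable in practice, and policy works with long and variable lags.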
However, the mainstream
view acknowledges some important
limits on the scope of countercyclical
monetary policy. First, countercyclical
monetary policy cannot change the
level of the natural unemployment rate
directly. As noted earlier, the natural
unemployment rate is determined by
factors such as technology, demographics, laws and regulations, and social
mores. Effective countercyclical
monetary policy may provide an
environment that is conducive to
innovation (and therefore the advance
of technology), but it does not have a
direct effect on the natural unemployment rate.
Second, the natural unemployment rate is not directly observable; it can only be inferred from long-term trends in the economy. Thus,
policymakers will sometimes judge a
change in the unemployment rate to
be a deviation from the natural rate
when, in fact, it reflects a change in
the natural rate itself, or vice versa. In
such situations, countercyclical
monetary policy will make the inflation
rate more volatile, not less. For
instance, a persistent attempt to
reverse a decrease in the natural
unemployment rate will lead to
deflation, and a persistent attempt to
reverse an increase in the natural
unemployment rate will lead to
inflation — both of which reduce
public well-being. Thus, misperceptions concerning the natural rate
may lead to policy errors.
Third, mainstream macroeconomics recognizes that the effects of
monetary policy actions are felt with
long and variable lags. Uncertainty
about the length of time it takes for
policy to have an effect on the
economy is another potential source of
policy errors.
STANDARD OF LIVING AS A
CRITERION FOR EVALUATING
MACROECONOMIC POLICY
The fact that countercyclical
monetary policy has both benefits and
costs suggests that it’s important to
find out whether the benefits exceed
the costs to determine if such policies
are worth pursuing. University of
Chicago economist and Nobel laureate
Robert E. Lucas, Jr. was the first to
explore this issue in the context of the
U.S. economy. Lucas observed that
cyclical volatility in the unemployment
and inflation rates per se is not
important to people. What really
matters is the resulting cyclical
volatility in people’s standards of
living. Since consumer spending is one
of the most commonly used indexes of
living standards, Lucas posed the
question: “How much would an
average person in the U.S. pay to avoid
all cyclical volatility in aggregate U.S.
consumer spending?” From the perspective of mainstream macroeconomics, an
answer to this question provides an
estimate of the maximum potential
benefit from the Fed’s pursuit of
countercyclical monetary policy.2
As one would expect, the
answer depends on how much
households dislike random fluctuations
in their standard of living (i.e., on their
degree of risk aversion), and Lucas
experimented with a variety of
estimates, some more plausible than
others. What he found was that a
person would be willing to pay rather
small amounts to avoid all fluctuations
in the aggregate standard of living.
One estimate, based on a plausible
amount of risk aversion, implies that a
person would pay no more than $23
per year for such a benefit! Such a
paltry sum makes it hard to build a
case for countercyclical monetary
policy.
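
To show the shape of the calculation (a sketch of my own with illustrative parameters; these are not the exact inputs behind the $23 figure), Lucas's approach prices volatility with constant-relative-risk-aversion utility:

```python
# A minimal sketch of a Lucas-style welfare-cost calculation. The parameter
# values below are my own illustrative choices, not the inputs behind the $23
# figure reported in the text.

def welfare_cost_of_volatility(risk_aversion, sigma_c, annual_consumption):
    """Approximate fraction of consumption a household would give up to remove
    lognormal fluctuations with standard deviation sigma_c, using the
    second-order approximation 0.5 * gamma * sigma**2."""
    fraction = 0.5 * risk_aversion * sigma_c ** 2
    return fraction, fraction * annual_consumption

frac, dollars = welfare_cost_of_volatility(risk_aversion=2.5,
                                           sigma_c=0.02,        # assumed ~2% std of detrended log consumption
                                           annual_consumption=20_000)  # assumed per-person consumption
print(f"{frac:.4%} of consumption, about ${dollars:.0f} per year")
```

For plausible degrees of risk aversion the answer comes out in the tens of dollars per year, which is why the estimates discussed here look so small.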
Of course, Lucas’s finding
that cyclical volatility is not very
painful was (and remains) controversial. For one thing, economists were
quick to note that the degree of risk
aversion can be judged in a variety of
ways, and some of these alternative
ways suggest that the gains from
eliminating all cyclical volatility in
consumer spending are several
hundred-fold larger than Lucas
estimated. Also, as Lucas himself
noted, his calculations assumed that all
households share the burden of
business cycles equally. In reality, the

2 The answer provides only an estimate of the

maximum potential benefit for two reasons.
First, it ignores the costs of policy errors.
Second, it ignores the fact that some portion of
the volatility in consumer spending should be
excluded from the benefit calculation because it
stems from fluctuations in the natural
unemployment rate and cannot be eliminated
by countercyclical monetary policy.


burden falls disproportionately on
people who become unemployed
during recessions. Taking this fact into
account is likely to raise estimates of
the maximum potential benefit of
countercyclical monetary policy.
However, such criticisms miss
a deeper point: Lucas’s insistence that
the benefits of countercyclical monetary policy be judged from the effect
such policies have on the welfare of
individual households. As he put it:
“[A]n economic system is a collection
of people and serious evaluation of
economic policy involves tracing the
consequences of policies back to the
welfare of the individuals they affect.”
This quote succinctly expresses one of
the core principles of microfoundations-oriented research: volatility in

the unemployment and inflation rates
should concern policymakers only if it
results in unacceptable volatility in the
standard of living. As I explain in the
remainder of this article, evaluating
policies based on the standard of living
has surprising implications for the
benefits of countercyclical monetary
policy.
SELF-INSURANCE AS A
SUBSTITUTE FOR
COUNTERCYCLICAL
MONETARY POLICY
Let’s examine how reducing
the volatility of the unemployment
rate affects the volatility of consumer
spending. Fluctuations in the unemployment rate affect members of a household in two ways: the probability of job loss for employed members and the probability of job gain for unemployed members. For instance, during a recession, when the unemployment rate is relatively high, the probability of job loss for employed workers is also relatively high, and the probability of job gain for unemployed individuals is relatively low. Thus, all individuals face a higher risk of lost earnings. Conversely, during an economic expansion, the probability of job loss for employed workers is relatively low, and the probability of job gain for unemployed workers is relatively high. Thus, all individuals face a lower risk of lost earnings. If countercyclical monetary policy successfully keeps the unemployment rate equal to the
natural rate over time, the probability
of job loss for employed individuals and
the probability of job gain for unemployed individuals would be less
variable. So an effective countercyclical monetary policy reduces the
volatility of household earnings by
reducing fluctuations in the risk of
unemployment.
How does a reduction in the
volatility of earnings affect fluctuations
in a household’s standard of living?
Suppose that a household always
spends the full amount of its monthly
earnings and does not borrow or save.
In this case, fluctuations in consumer
spending will exactly match fluctuations in household earnings, and a
policy-induced reduction in the
volatility of household earnings will have
a direct and equal effect on the
volatility of consumer spending.
But what if households save
or borrow? Then, consumer spending
may not fluctuate as much as earnings.
If a member of the household becomes
temporarily unemployed, the household may draw on a pool of savings
(built up over the years for such an
eventuality) to protect its standard of
living. So, consumer spending will not
fall as much as earnings. When the
member regains employment, household spending will not rise as much as
earnings because a portion of the
earnings will be used to replenish the
savings drawn down during unemployment. Building up and maintaining a
stock of savings to protect oneself from
temporary spells of unemployment or
unanticipated expenses is called self-insurance.3
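
To make the mechanism concrete, here is a toy simulation of my own devising (a sketch, not Imrohoroglu's model): the household targets a buffer of a few months of income, draws it down during unemployment spells, and rebuilds it when re-employed.

```python
# A toy self-insurance simulation with hypothetical parameters of my own.
import random, statistics

random.seed(0)
wage = 3000.0                              # hypothetical monthly earnings when employed
p_lose_job, p_find_job = 0.03, 0.40        # hypothetical monthly transition probabilities
target_buffer = 4 * wage                   # "four to six months of income" rule of thumb

employed, savings = True, target_buffer
earnings_path, consumption_path = [], []
for month in range(240):
    employed = (random.random() > p_lose_job) if employed else (random.random() < p_find_job)
    earnings = wage if employed else 0.0
    # Try to consume at the normal level, drawing on the buffer when unemployed;
    # when employed and below target, divert 20% of the wage to rebuild savings.
    consumption = min(wage, earnings + savings)
    if employed and savings < target_buffer:
        rebuild = min(0.2 * wage, target_buffer - savings)
        consumption -= rebuild
    savings += earnings - consumption
    earnings_path.append(earnings)
    consumption_path.append(consumption)

print("std of earnings:   ", round(statistics.pstdev(earnings_path)))
print("std of consumption:", round(statistics.pstdev(consumption_path)))
```

Even with this crude rule, measured consumption volatility comes out well below earnings volatility, which is the sense in which self-insurance substitutes for countercyclical policy.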
A surprising implication of
self-insurance is that it weakens the
ability of countercyclical monetary
policy to improve public well-being
because, from a household’s point of
view, self-insurance is a partial substitute for countercyclical monetary
policy. To see this, suppose the
monetary authority introduces a new
countercyclical policy that lowers the
volatility of household earnings. Faced
with lower volatility of earnings, a
household will have an incentive to
lower its stock of savings. Recall that
these savings were accumulated, in
part, to protect living standards from
shortfalls in earnings; however, lower
earnings volatility means that such
situations arise less often.
Thus, improved countercyclical policy will have two effects: it
will reduce the volatility of a
household’s earnings, and it will

3 Building up savings includes the case of paying

off debt to keep open the option of borrowing
more in the future.

induce households to reduce the savings
built up to protect against such volatility.
These two effects have opposing
consequences for the volatility of
consumer spending. The first effect
lowers the volatility of consumer
spending while the second raises it.4
What will the combined
effect be? Theory predicts that the first
effect will dominate and the volatility
of consumer spending will decline.
But theory also suggests that this
decline will be minor. In other words,
private stocks of savings are a partial
substitute for the beneficial effects of
countercyclical policies: an improved
countercyclical policy partly substitutes
for actions that a household takes to
deal with the ill effects of earnings
volatility.5
The significance of self-insurance for assessing the benefits of
countercyclical policy was first
recognized in an article published in
1989 by Ayse Imrohoroglu. Imrohoroglu simulated an economy in
which individuals could borrow and
save to protect their living standards in
the face of temporary spells of unemployment. Her simulations showed
that even if countercyclical policies
made the unemployment rate constant
and ensured that each individual faced
a constant (rather than fluctuating)

4 It will raise the volatility of spending because,

all else remaining the same, a lower stock of
savings means that a household is less able to
protect living standards in case of a loss in
earnings.
5 That being said, it’s important to recognize

that some households may not be in a position
to self-insure. For instance, a poor household
living hand-to-mouth is not going to be able to
self-insure and will benefit substantially from a
less volatile macroeconomic environment. But
such households do not constitute the majority.
Furthermore, there are social programs in place
that attempt to deal directly with the many
causes and consequences of poverty. Given
these programs, the appropriate goal of
monetary policy is to concentrate on improving
the well-being of the typical household.

probability of job loss, the gain in well-being would be around $69 per person per year.6 Although larger than Lucas’s
estimate, the gain was still quite
small.7 As Imrohoroglu noted in her
article, her findings reflected the fact
that individuals in her artificial
economy self-insured themselves pretty
well against temporary spells of
unemployment. As a result, although
effective countercyclical policy did
reduce the volatility of consumer
spending, the resulting gain in well-being was minor.8
What about volatility in the
inflation rate? From a household’s
point of view, inflation volatility could
be important because it affects the
volatility of the real return on financial
assets, the assets that households use
to self-insure against temporary loss of
earnings. If the expected real return on
these assets is poor, it will blunt the

6 Even if countercyclical monetary policy

manages to keep the unemployment rate
constant, an individual’s earnings may still
fluctuate over time because of the possibility
that an individual may lose his or her job. Thus,
even when monetary policy is perfect,
households have to self-insure against temporary
spells of unemployment.
7 In her article, Imrohoroglu presented results

from several different simulations. The result
reported here is for the simulation where
individuals borrow at an annual real interest
rate of 8 percent and save at a real interest rate
of 0 percent. Like Lucas’s, Imrohoroglu’s
calculations provide an estimate of the
maximum potential benefit from countercyclical
monetary policy. She ignores the potential costs
of countercyclical monetary policy, and she
assumes that a fully effective countercyclical
policy corresponds to no fluctuations in the
unemployment rate.
8 Improved countercyclical policy permits

households to lower savings. The additional
one-time increase in consumer spending
permitted by the decline in savings is another
benefit of improved countercyclical policy. But a
one-time increase in consumer spending cannot
permanently improve well-being. For permanent
improvements, one must look at how improved
countercyclical policy affects the volatility of
consumer spending. But that effect, as already
noted, is minor.

incentive to self-insure. In a sequel to
her first article, Ayse Imrohoroglu and
Edward Prescott used simulation
techniques to investigate the impact of
inflation volatility on public well-being.
Assuming that fluctuations in the
expected inflation rate led to opposite
fluctuations in the expected real return
on assets, they found that inflation
volatility had virtually no adverse
effect on well-being.9 As they noted in
their article, what mattered most to
people in their model was the average
expected real return on financial
assets, not the volatility of the expected real return.
In short, both theory and
simulation results suggest that self-insurance acts as a partial substitute
for effective countercyclical policies.
Households can protect their standard
of living from temporarily low earnings
by drawing on a pool of savings built
up for such eventualities. If they do
not have savings, they can borrow,
then repay the debt when earnings go
back to normal. In such a situation,
improvements in countercyclical
policy partly substitute for private
actions that people take to contain
volatility in their standard of living.
Consequently, the net effect on public
well-being is not as large as one might
otherwise suppose.
Of course, the decline in
household income due to loss of

9 The real return on financial assets depends on

the difference between the yield (or interest
rate) on these assets and the inflation rate.
According to mainstream macroeconomics, the
real return on financial assets is countercyclical.
The yield on financial assets does not rise as
much as the inflation rate when the unemployment rate falls below the natural rate, and it
does not fall as much as the inflation rate when
the unemployment rate rises above the natural
rate. The article by Imrohoroglu and Prescott
examined the extreme case in which the
interest rate on financial assets stayed constant,
so that any change in the expected inflation rate
led to an equal and opposite change in the
expected real return.

employment is often mitigated by state
unemployment insurance programs and
by the progressive nature of the federal
tax code (tax liabilities fall faster than
earned income). From a household’s
perspective, self-insurance is also a
substitute for social insurance programs
and so raises troubling questions about
the net benefits of these programs as
well. However, Ayse Imrohoroglu and
Gary Hansen have shown that even if
households self-insure, unemployment
insurance programs are generally quite
beneficial, at least as long as the
programs don’t adversely affect people’s
desire to seek work.

In short, both theory
and simulation results
suggest that selfinsurance acts as a
partial substitute for
effective countercyclical policies.
Before we take the policy
implications of self-insurance seriously,
we must ask if, theory and simulations
aside, households really do self-insure.
Fortunately, a body of evidence now
speaks to that question.10 First, self-insurance accords with common sense.
For instance, one financial planning
guide recommends that households
accumulate a stock of savings to deal
with uncertainty: “It is generally held
that your liquid assets should roughly
equal four to six months’ employment
income. If you are in an unstable
employment situation…the amount

should probably be greater” (Touche
Ross, 1989, p.10). Perhaps because of this
commonsense aspect, surveys of
household finances show that saving
for emergencies is the most important
reason cited for saving. These surveys
also find that a household’s stock of
financial wealth is very volatile, even
over short periods. Furthermore,
studies show that households that face
greater uncertainty about earnings
tend to accumulate more financial
wealth.
All these findings are
consistent with households’ using
financial wealth as a buffer against
random shocks to income and expenses. In addition, self-insurance
accounts for several puzzling patterns
in consumer spending. It would take us
too far afield to discuss all of these
here, but one is worth mentioning.
Researchers have known for some time
that a typical household does not begin
to save for retirement until fairly late
in life. This late start in providing for
retirement has puzzled economists
because it seems inconsistent with
forward-looking behavior. However,
simulations have now shown that self-insurance may dominate other motives
for saving until an individual reaches
his or her late 40s. It’s only in late
middle age that retirement-related
considerations surface as the main
determinant of savings behavior. Thus,
self-insurance may go a long way
toward accounting for the puzzling
delay in providing for retirement.11
From a theoretical point of
view, self-insurance is a basic outcome
of forward-looking behavior, and the
idea played a key role in Milton

10 This discussion draws heavily on Christopher Carroll’s 1997 article on the subject.

11 This result emerges because self-insurance requires that households save in safe financial assets, the return on which is usually low. The low return discourages saving for retirement until late middle age.

Friedman’s Nobel Prize-winning work
on the theory of consumer spending.12
It’s remarkable that although macroeconomists have been aware of self-insurance since the 1950s, its significance for countercyclical monetary
policy remained unappreciated until
the late 1980s. In all likelihood, the
reason for this lies in the fact that for a
long time, the criteria for evaluating
countercyclical policies made no direct
reference to living standards. When
Lucas insisted that macroeconomists
use living standards as a criterion for
policy evaluation, the significance of
self-insurance quickly became apparent.
SO WHY DOES COUNTERCYCLICAL MONETARY
POLICY MATTER?
Self-insurance raises doubts
about the goals of countercyclical
monetary policy as conceived by
mainstream macroeconomics. Since
households can self-insure against the
adverse effects of earnings and
inflation volatility, and the evidence
suggests that they do, policy-induced
reductions in earnings and inflation
volatility are predicted to yield only a
minor improvement in public well-being. One could conclude from these
findings that improving countercyclical
monetary policy is not worth the cost;
monetary authorities should deemphasize reducing volatility and
concentrate on other monetary policy
goals, such as maintaining a low rate of
inflation. However, such a conclusion
overlooks a potentially devastating

12 The idea also attracted the attention of

economic theorists, most notably Truman
Bewley of Yale University. In a series of articles
published in the 1970s, Bewley provided a wide-ranging discussion of the implications of self-insurance. More recently, macroeconomists
have picked up where the theorists left off.
Influential articles by macroeconomists include
those by Mark Hugget and S. Rao Aiyagari.

side-effect of self-insurance: unbridled
self-insurance can be a source of
macroeconomic instability. A simple
example illustrates how this can
happen.
Imagine a small community
served by a single bank. The bank
accepts deposits from local households
and uses those deposits to make loans
to local businesses. Imagine also that
there is no federal insurance of bank
deposits or unemployment insurance.
A bank deposit is one financial asset
that households use to self-insure
themselves; another is cash. Under
normal circumstances, a bank deposit
is the preferred financial asset for self-insurance, since it accrues interest and
cash does not.
Now imagine that some
shock adversely affects many businesses in this community. Some
businesses close; some people become
unemployed; and those that still have
jobs face a higher probability of job
loss. The logic of self-insurance says
that the employed will increase their
savings to offset the heightened
probability of job loss. As households
reduce their spending, businesses in
the community will experience a
further fall in sales. The decline in
sales will send more firms out of
business, causing more unemployment
and making households even more
eager to self-insure.
If business failures continue,
households will begin to think that the
next business to fail will be the bank,
and they’ll rush to convert their
savings into cash. The bank may well
be sound, but the large-scale withdrawals of deposits will cause it to fail.
The bank failure will deprive local
businesses of a source of credit and,
thus, force even more businesses to
close. This cycle of falling demand,
rising unemployment, more hoarding,
and further decline in demand is an
economic crisis. The sequence of events
in this hypothetical community can,

and does, happen on a larger scale.
Indeed, it’s what happened to many
U.S. communities during the Great
Depression.
This example highlights the
point that actions that are beneficial
from an individual’s point of view can
be self-defeating when taken simultaneously by many. The effect of a single
household’s increasing its savings to
self-insure against a heightened
possibility of job loss is quite different
from the effects of all households doing
the same. A simultaneous increase in
the desire to self-insure may be self-defeating because it can make the
event against which insurance is
sought more probable. John Maynard
Keynes observed long ago that an
economy in which saving and investment decisions are carried out by
different sets of people is susceptible to
the paradox of thrift: if all individuals
attempt to save more cash (so that the
additional savings do not lead to a
corresponding increase in business
investment), aggregate demand will
fall and so will income and savings.13
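
The simplest Keynesian-cross arithmetic makes the point starkly (this is my own illustrative algebra, not Keynes's): with income $Y = C + \bar{I}$, consumption $C = (1-s)Y$, and investment fixed at $\bar{I}$,

$$
Y \;=\; \frac{\bar{I}}{s}, \qquad \text{so that aggregate saving } sY = \bar{I} \text{ no matter how high } s \text{ is set.}
$$

A general attempt to raise the saving rate $s$ therefore lowers income rather than raising total saving.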
Once we recognize that a
simultaneous increase in the desire to
self-insure could destabilize the
economy, the current U.S. policy
arrangement begins to make more
sense: self-insurance is only part of the
solution to reducing earnings volatility.
Some of the burden of providing
insurance against loss of earnings is
borne by the government through the
other two prominent stabilization
policies mentioned in the introduction: federal insurance of bank deposits
and state-run unemployment insurance programs. Deposit insurance
eliminates the need for households to

13 The possibility that individually rational

actions can have bad social consequences is a
recurring theme in economics. For a wide-ranging and very readable discussion of this
theme, see Thomas Schelling’s book.

self-insure in the form of cash, and
unemployment insurance permits
households to face a higher probability
of job loss with greater equanimity.
Both programs attenuate the potentially destabilizing effects of households’ response to heightened economic insecurity.
The benefit of countercyclical
monetary policy can also be understood in these terms. By attempting to
reduce the volatility of the unemployment
rate, countercyclical monetary policy
makes it less likely that households will
face a large simultaneous increase in the
probability of job loss. In other words,
countercyclical monetary policy helps
to nip the problem of macroeconomic
instability in the bud. One might think
that with two other stabilization
policies in place, it’s unnecessary for
monetary policy to attempt to reduce
fluctuations in the unemployment
rate. However, the two insurance
programs provide partial, not complete, protection. The federal guarantee of bank deposits protects each
individual account up to $100,000, so
large accounts are not fully protected.
Most state unemployment insurance
programs replace somewhere between
one-half and two-thirds of a worker’s
most recent weekly pay, but only for a
maximum of 28 weeks. Because neither deposit insurance nor unemployment insurance is complete, the possibility
remains that a large enough increase in
the unemployment rate may lead to
enough of an increase in the desire to
self-insure so as to destabilize the
economy.
That stabilization policies
exist to protect against instability
should not come as a surprise. What is
somewhat odd is that mainstream
macroeconomics does not really accept
the point that stabilization policies are
necessary to prevent instability. The
mainstream view is that market
economies are self-regulating: if a
shock moves the unemployment rate
away from the natural rate, market
forces eventually bring the unemployment rate back to the natural rate.
The cycle of rising unemployment,
more hoarding, and more unemployment that I highlighted earlier is
assumed to be impossible.14 But there
is no theoretical presumption that
market economies are necessarily self-regulating. It’s possible to construct
macroeconomic models in which the
forces of self-regulation are weak
enough that adverse shocks precipitate
economic crises.15 Whether the forces of self-regulation can be relied on to avoid crises in actual economies is a controversial issue.
If the true benefit of countercyclical monetary policy lies in preventing economic crises, how much of an effect does it have on living standards? The answer depends on how likely it is that economic crises will occur in the absence of countercyclical monetary policy. Although there is no accepted estimate of that likelihood, some of my research

suggests that if countercyclical policy
eliminates even a very small likelihood
of a Great Depression-like event, the
resulting gain in living standards can
be quite significant. We estimate that a
person would pay as much as $1,380
per year to eliminate a once-in-83-years chance of living through a
Depression-like event.16 Thus, if
countercyclical monetary policy does
nothing other than prevent economic
crises, that benefit alone may provide
an adequate justification for pursuing
it.
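
To show the shape of such a calculation (a back-of-the-envelope sketch of my own; the $1,380 figure comes from the richer model in the working paper with Dean Corbae, and these toy inputs will not reproduce it), one can ask what sure payment leaves a risk-averse household indifferent to a small annual probability of a large consumption drop:

```python
# A toy certainty-equivalent calculation with CRRA utility; all numbers are
# hypothetical and are not the working paper's estimates.

def willingness_to_pay(consumption, crisis_drop, crisis_prob, risk_aversion):
    """Sure payment that leaves a CRRA household as well off as facing a
    probability crisis_prob of consumption falling by crisis_drop."""
    g = 1.0 - risk_aversion
    u = lambda c: c ** g / g
    expected_u = (1 - crisis_prob) * u(consumption) + crisis_prob * u(consumption * (1 - crisis_drop))
    certainty_equivalent = (g * expected_u) ** (1.0 / g)
    return consumption - certainty_equivalent

# Assumed inputs: $20,000 of consumption, a once-in-83-years chance of a
# 40 percent Depression-sized drop, and moderate risk aversion.
print(round(willingness_to_pay(20_000, 0.40, 1 / 83, 4.0)))
```

The point of the sketch is only that a small probability of a very large loss can be worth far more to avoid than the routine cyclical wiggles priced in the Lucas calculation; the dollar magnitude depends on how persistent and severe the crisis is assumed to be.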

14 The textbook New Keynesian or IS-LM

model does not allow for the possibility of
economic crises. While a decline in aggregate
demand may cause a temporary rise in
unemployment, the model predicts that market
forces (in the absence of any further shocks to
aggregate demand) will eventually bring the
unemployment rate back to the natural rate.

SUMMARY
Macroeconomists typically
view reducing the cyclical volatility of
the unemployment and inflation rates
as the proper goal of countercyclical
monetary policy. Generally speaking,
macroeconomists and policymakers
have not been very explicit about why
such reductions enhance well-being.
This article discussed some research
that bears on this question. In particular, it laid out the implications of the
view that the benefits of
countercyclical monetary policy
ultimately derive from the effect such
a policy has on people’s standard of
living.
The standard-of-living
criterion has unexpected implications
for assessing the benefits of countercyclical monetary policy. If the goal of
countercyclical monetary policy is to
reduce volatility in the standard of
living, such a policy is unlikely to be

15 MIT professor Peter Diamond demonstrated

this possibility in a series of influential articles in
the early 1980s. His views are summarized in his
1982 book.

16 See my working paper with Dean Corbae for

details.


very beneficial. The problem is that if
monetary authorities succeed in
reducing the volatility of the unemployment and inflation rates, this
success will partly substitute for private
actions taken to safeguard living
standards (self-insurance). Thus,
because of self-insurance, the overall
reduction in the volatility of the
standard of living will not be as great

as one might otherwise suppose.
On the other hand, self-insurance may be a mixed blessing.
Sudden increases in the desire to self-insure can be a source of macroeconomic instability. Taking this possibility
into account suggests that an important benefit of countercyclical monetary policy (along with deposit and
unemployment insurance) is to reduce

the likelihood of a sudden upward jump
in the unemployment rate. Such a jump
could trigger a destabilizing rise in the
desire to self-insure and cause an
economic crisis. If countercyclical
monetary policy eliminates even a
small likelihood of economic crisis, the
gain in the average person’s living
standard may be large enough to justify
the potential costs of such a policy.

REFERENCES

Aiyagari, S. Rao. “Uninsured Idiosyncratic Risk and Aggregate Saving,” Quarterly Journal of Economics 109 (1994), pp. 659-84.

Bewley, Truman. “Permanent Income Hypothesis: A Theoretical Formulation,” Journal of Economic Theory 16 (1977), pp. 252-92.

Carroll, Christopher. “Buffer Stock Saving and the Life Cycle/Permanent Income Hypothesis,” Quarterly Journal of Economics 112 (1997), pp. 1-55.

Chatterjee, Satyajit, and Dean Corbae. “On the Welfare Gains of Reducing the Likelihood of Economic Crises,” Working Paper No. 00-14, Federal Reserve Bank of Philadelphia.

De Long, J. Bradford. “The Triumph of Monetarism?” Journal of Economic Perspectives, Winter 2000, pp. 83-94.

Diamond, P. Search-Equilibrium Approach to the Micro Foundations of Macroeconomics. Cambridge, MA: MIT Press, 1982.

Friedman, M. A Theory of the Consumption Function. Princeton: Princeton University Press, 1957.

Hugget, Mark. “The Risk-Free Rate in Heterogeneous-Agent, Incomplete Insurance Economies,” Journal of Economic Dynamics and Control 17 (1993), pp. 953-69.

Imrohoroglu, Ayse. “Cost of Business Cycles with Indivisibilities and Liquidity Constraints,” Journal of Political Economy 97 (1989), pp. 1364-83.

Imrohoroglu, Ayse, and Gary D. Hansen. “The Role of Unemployment Insurance in an Economy with Liquidity Constraints and Moral Hazard,” Journal of Political Economy 100 (1992), pp. 118-42.

Imrohoroglu, Ayse, and Edward C. Prescott. “Evaluating the Welfare Effects of Alternative Monetary Arrangements,” Journal of Money, Credit and Banking 23 (1991), pp. 462-75.

Keynes, John M. The General Theory of Employment, Interest and Money. London: Macmillan, reprinted 1967.

Lucas, Robert E., Jr. Models of Business Cycles. Oxford: Basil Blackwell, 1987.

Schelling, Thomas C. Micromotives and Macrobehavior. New York: W.W. Norton & Company, 1978.

Touche Ross. The Touche Ross Personal Financial Management and Investment Workbook, 1989.


Understanding Changes
In Aggregate Business Fixed Investment
BY AUBHIK KHAN

Aubhik Khan is an economist in the Research Department of the Philadelphia Fed.

When economists talk about business fixed investment, they mean the expenditures by firms on equipment and structures.1 Business fixed investment is commonly held to be an important determinant of an economy’s long-run growth.

On average, higher levels of such
investments raise production by
increasing the productivity of the labor
force. While the significance of short-term changes in business investment is
less widely recognized, the importance
of such changes for the business cycle
has been known to economists since
the beginning of the last century. For
example, many believe that the
current record expansion has been
driven, at least in part, by strong

1 The definition of business fixed investment

used throughout this paper does not include
software expenditures by firms because these
data are not available.


investment in computers and related
equipment. In this article, I attempt
to explain some of what economists
have learned about how investment
changes over the business cycle.
INVESTMENT AT THE
PLANT LEVEL
For an individual plant, investment is simply the expenditure required to adjust its stock of capital.
Capital includes all equipment and
structures the plant uses. The plant
combines capital with other inputs,
such as labor and energy, to produce
goods or services. When a mining
company acquires diesel engines, it is
investing in equipment. When an
automobile manufacturer builds a new
warehouse, it is investing in structures.
Because it takes time to manufacture,
deliver, and install new capital goods,
investment expenditures today do not
immediately raise the level of a plant’s
capital. So investment involves a
planning decision that trades off
present against future earnings.

Investment expenditures today reduce
current profit but increase a plant’s
future possible production and, as a
result, future profit.
Since investment spending
raises future capital and thus the
quantity of goods and services that
may be produced in the future, plants
will tend to adjust their investment
levels in response to forecasted
changes in the market’s demand for
their own output. Changes in productivity — the efficiency with which
inputs may be combined to produce
output — will also tend to increase
investment. For example, if productivity increases, the firm may be able to
sell more of its product, since it can
offer it at a more attractive price. The
firm may then expand and more
workers may be hired. These workers
will need equipment, and, as a result,
investment will rise.
AGGREGATE INVESTMENT
OVER THE BUSINESS CYCLE
When plants anticipate
increased demand for their output or
higher productivity, they will generally
raise their investment spending. For
most sectors of the economy such
increases in investment occur when
GDP rises, for example, during
economic expansions. In contrast, if
plants expect a decline in demand,
such as occurs for most plants when
GDP falls, investment spending will
fall. As a result, aggregate investment —
the sum of all investments by all plants
in the economy — is procyclical: it rises
when output rises and falls when
output falls over the business cycle.
Even a casual glance at the
data will reveal that investment is
much more volatile than output
(Figure 1).2 During periods of above-trend growth, aggregate investment
experiences a much larger percentage
rise. Moreover, when growth rates are
below trend, such as during recessions,
aggregate investment falls far more
sharply than does aggregate output.
Indeed, if we use a standard measure of variability, quarterly investment is 3.4 times more volatile than quarterly output over 1956 – 1994.3

2 Episodes of negative growth rates in Figure 1 do not imply recessions, at least as they are commonly understood. For example, if the output trend is 3 percent, and actual output grows at 2 percent, Figure 1 will report –1 percent. Of course, actual recessions will be recorded when growth is negative.

3 The percentage standard deviation of output is 1.4 while that of investment is 4.9, hence the ratio of 3.4.
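
These summary statistics are straightforward to compute once the detrended series are in hand. The sketch below is a minimal illustration using numpy; the two arrays are placeholders standing in for the actual detrended quarterly series, which are not reproduced in this article, so the printed numbers will not match the volatility ratio of about 3.4 or the correlation of 0.92 discussed here and in the next paragraph.

import numpy as np

# Placeholder detrended series (percent deviations from trend).
# Substitute the actual detrended quarterly GDP and business fixed
# investment series for 1956-1994 to reproduce the article's figures.
output_dev = np.array([1.0, -0.5, 0.3, -1.2, 0.8, 1.5, -0.9, 0.2])
investment_dev = np.array([3.2, -1.8, 1.1, -4.0, 2.5, 5.1, -3.0, 0.7])

# Relative volatility: the ratio of the two standard deviations
# (roughly 4.9 / 1.4 in the actual data, per footnote 3).
relative_volatility = investment_dev.std() / output_dev.std()

# Co-movement: the correlation coefficient between the detrended series
# (about 0.92 in the actual data, per footnote 4).
comovement = np.corrcoef(output_dev, investment_dev)[0, 1]

print(f"investment/output volatility ratio: {relative_volatility:.2f}")
print(f"correlation of detrended series:    {comovement:.2f}")
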
The reader will likely note
another striking regularity between the
two series: investment and output
almost always move in the same
direction. For example, in the sharp
recession of the early 1980s, detrended
output fell almost 5 percent, and
concurrently, investment fell more
than 10 percent.4 Plants adjust
investment in anticipation of changes
in output demand, and consequently,
investment moves similarly to output.
It follows that to understand the business cycle, we must understand why aggregate investment changes over time. As Harvard economist Robert J. Barro has stated, “As a first approximation, explaining recessions amounts to explaining the sharp contractions in the investment components” (p. 245).

4 This co-movement in investment and output is captured by a correlation coefficient of 0.92 between the two series.

FIGURE 1
Investment and Output over the Business Cycle

Figure 1 displays detrended quarterly total real business fixed investment and GDP in the United States in each quarter over the years 1956 – 1994. Since we want to concentrate on how these series move over the business cycle, we have detrended them. That is, the figure shows changes in output and investment from their longer-term trends. These trends were computed using the Band-Pass Filter developed by Marianne Baxter and Robert G. King in their 1999 paper. Note that the use of this filter eliminates several years of data at the beginning and end of our series.
But to understand why
aggregate investment fluctuates,
economists are learning that they must
understand the decisions of individual
plants. Such emphasis on the role of
an individual entity characterizes
recent progress in many areas of
macroeconomics.
THE PARTIAL
ADJUSTMENT MODEL
As with all other forms of
scientific progress, progress in economics relies on the development of
theories. The success of these theories
is determined by their ability to
contribute to an explanation of
observed phenomena. In the study of
investment, this has led to a theory of
how firms choose their levels of
investment.
Traditionally, economists tried
to understand aggregate investment
using an approach that ignored
possible differences across individual
firms. This approach led to a theory
that relied on the fiction of a representative firm that undertook all investment that actually occurred in the
economy. In reality, many firms own
and operate several plants, and
investment decisions are made at both
the firm and the plant level. But as
long as we examine representative
firms, there is no meaningful distinction between a firm (an ownership
unit) and a plant (a production unit).
Let’s consider a representative firm, BIGCAP. Even if BIGCAP
sees no reason to change the level of
its capital stock, it will nonetheless
have to undertake some maintenance
to sustain capital stock at current
levels, since capital depreciates over
time. Investment beyond the level
needed to offset depreciation will raise
the stock of capital BIGCAP will have
in the future. This higher level of
capital will allow BIGCAP to raise
production. So investment today will
affect future earnings and, thus, future
profits. Therefore, by undertaking
investment today and building capital,
BIGCAP can influence its future
profits. The fundamental assumption
of the standard theory of investment is
entirely reasonable: A firm chooses its
stock of capital in order to maximize
its shareholder value. This is the firm’s
target level of capital.
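
As a concrete, hypothetical illustration of a target capital stock, the sketch below assumes a simple profit function that is not in the article: revenue A·K^alpha less a user cost (r + delta)·K. Under that assumption the value-maximizing capital stock has a closed form; all parameter values are made up.

# Illustrative only: assumes revenue A * K**alpha and user cost (r + delta) * K.
# The target maximizes A * K**alpha - (r + delta) * K, which gives
# K* = (alpha * A / (r + delta)) ** (1 / (1 - alpha)).

def target_capital(A: float, alpha: float, r: float, delta: float) -> float:
    """Capital stock that maximizes value in this simple, assumed example."""
    return (alpha * A / (r + delta)) ** (1.0 / (1.0 - alpha))

# Hypothetical numbers: productivity A = 2, capital share alpha = 0.3,
# real interest rate r = 5 percent, depreciation delta = 10 percent.
print(target_capital(A=2.0, alpha=0.3, r=0.05, delta=0.10))
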
Adjustment Costs. However, modern variants of this theory
make another important assumption:
there is a cost associated with changing a firm’s capital, the cost of adjustment itself. In their 1996 review
paper, Daniel Hammermesh and
Gerard Pfann discuss some of the
sources of these adjustment costs.
Adding a new machine takes time.
During installation, the firm must
reallocate production across its other
machines, a move that may overburden these other machines and may
present machine operators with
unfamiliar working conditions. As a
result, production will fall during this
first adjustment period. Next, after
the new machine has been installed,
workers must be trained to use it.
Again, the firm will be operating at
temporarily reduced levels of productivity during this second adjustment
period.
Overall, when a firm installs
new capital goods it incurs internal
costs over and above the cost of the
equipment itself. These costs reduce
the firm’s profits over the adjustment
period.
Consider what happens if
BIGCAP purchases a new computer to
add to its existing stock. In addition to
the price of the computer equipment,
BIGCAP will incur additional costs of
integrating the machine into its
network and setting it up with the
required software. The nature of these
costs — how they change with respect
to the quantity of investment undertaken by BIGCAP — is critical in
determining their effect. Traditional
investment theory assumes that it costs
more, per unit, to install more capital.
Thus, BIGCAP’s cost of installing two
new computers would be more than
twice the cost of installing a single
machine.
Rising costs of adjustment
imply that adjusting capital rapidly
would cost more than doing it gradually. So traditional theory said that
firms adjusted to their target capital
stock — that which maximized
shareholder value — slowly in an
effort to reduce adjustment costs. So
this theory was called the partial
adjustment model.
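
A minimal sketch of the partial adjustment idea follows: each period the firm closes only a fraction of the gap between its current and target capital. The adjustment speed lam and the numbers are illustrative assumptions, not estimates from the literature.

def partial_adjustment_path(k0, k_target, lam=0.25, periods=12):
    """Capital path when only a fraction `lam` of the gap to the target
    is closed each period (a smaller lam stands in for costlier rapid adjustment)."""
    path = [k0]
    for _ in range(periods):
        k = path[-1]
        path.append(k + lam * (k_target - k))
    return path

# Example: the target jumps from 100 to 120; with lam = 0.25 the firm covers a
# quarter of the remaining gap each period, so investment is spread out over
# many periods rather than occurring all at once.
print([round(k, 1) for k in partial_adjustment_path(100.0, 120.0)])
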
It is not at all obvious why
the costs of adjustment should rise
with the level of investment. We
might well think that competent

computer staff, learning from setting
up the first computer, would install the
second in much less time. However,
when rising adjustment costs were
ignored, the model performed very
poorly, since it predicted too much
volatility in aggregate investment. So
by including rising adjustment costs,
the model better matched the data for
the economy as a whole.
Figure 2 shows how assuming
rising adjustment costs leads to
smoother aggregate investment. It
displays two possible models of a firm’s
investment over time.5 For each one,
the vertical axis displays the firm’s
current level of investment over time,
as a percentage of its long-run average
level. Suppose the firm experiences a rise in productivity, lowering its costs, or, instead, a rise in expected demand for its product.

5 Figures 2 and 3 were generated by solving economic models of a firm's behavior under different assumptions about the costs of capital adjustment.

FIGURE 2
Investment With and Without Adjustment Costs

As a result, it chooses
to increase its capital stock so that it
can produce more. The blue line
indicates the investment the firm will
make if it faces no adjustment costs:
there is a sharp rise in investment as
the firm immediately adjusts its capital
stock to allow it to efficiently increase
production. Subsequently, when
productivity or demand eventually
returns to normal, there is an equally
dramatic disinvestment episode, as the
firm sells off its excess capital stock.
In contrast, if adjustment costs rise
with the level of investment, the
change in capital is much more
protracted. Capital partially adjusts in
each period as investment slowly raises
it toward its target value. As a result,
when the change in productivity or
demand ends, the plant has much less
disinvestment to do. Investment is
much more gradual under partial
adjustment.
Since we are looking at a
representative firm, total investment
for the economy is the same as this
firm’s investment. Hence, more
gradual investment at the firm level
means that aggregate investment, that
is, the total investment of all firms,
shares the same properties.
When we compare Figures 1
and 2, we see that the model without
adjustment costs generates an investment series that is too volatile when
compared with the data. For example,
in the model without adjustment costs,
the largest deviation of investment
from its trend is 25 percentage points,
but in the data over 1956 – 94, the
largest deviation was 10 percentage
points. But when we examine the
model with adjustment costs, we see
that it exhibits much less variability in
investment. As a result, the introduction of adjustment costs allows for a far
better match with the aggregate data.6

6 It should be noted, however, that the match is still imperfect. Adjustment costs reduce variability too much (the largest deviation from trend in the model with adjustment costs is about 5 percentage points).
Adjustment Costs Revisited.
The partial adjustment model means
gradual change in investment at the
aggregate level, which matches the
data, but it also means gradual
adjustment in investment at each
individual firm — and this does not
seem to match the data! When
researchers at the Bureau of the
Census undertook an extensive study
of how manufacturing plants adjusted
their stock of capital, the story they
uncovered was inconsistent with the
predictions of the partial adjustment
model. Instead of changing capital
slowly and gradually, plants made
capital adjustments that were lumpy,
that is, they would invest a lot at one
time, then refrain from investing for a

while, then invest a lot again, and so
on. Typically, plant capital remains
roughly constant for long periods of
time, with low levels of associated
investment. These long episodes of
relative inactivity are interrupted by
sudden bursts of investment spending
that drive large increases in plants’
capital stock over short periods of
time. The partial adjustment model
with rising adjustment costs predicted
plant-level investment that was too
smooth. Given the limited success of
the partial adjustment model,
macroeconomists began to reconsider
the plausibility of the assumption
about rising adjustment costs. Indeed,
much of the recent progress in our
understanding of investment has arisen
from replacing the unrealistic assumption of rising costs of adjustment with
a better one. (See How Do Plants
Adjust Their Capital?)

How Do Plants Adjust Their Capital?

In their 1998 paper, Mark Doms and Timothy Dunne examined
capital adjustment at the plant level. Using the Longitudinal Research Datafile collected by the U.S. Bureau of the Census, they
studied changes in the capital stock of 13,700 large U.S. manufacturing plants over 1972 – 1988. In terms of the total number of
manufacturing plants, the sample is small: over this period, between 312,000 and
360,000 plants were operating in the manufacturing sector. However, the sample
accounts for approximately 50 percent of total manufacturing production and 40
percent of employment. In addition to including relatively large plants, the sample
is also unusual because all the plants present in the sample in 1972 were still in it
through 1988.
In a typical year, over 80 percent of all plants in the sample undertook
very little capital adjustment: their capital stocks changed less than 10 percent.
But approximately 8 percent of plants adjusted capital by more than 30 percent,
and more than half of the sample experienced capital growth of more than 37
percent in at least one year.
The partial adjustment model, which predicts gradual changes in investment due to the rising costs of undertaking too much capital adjustment at one
time, cannot explain these sharp, sudden investment episodes followed by long
periods of low adjustment.
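
Calculations of the kind summarized in the box are easy to illustrate. The sketch below tabulates adjustment bands like those Doms and Dunne report, but it uses a made-up array of plant-level capital growth rates, since the Longitudinal Research Datafile itself is not publicly available.

import numpy as np

# Hypothetical annual capital growth rates (percent) for a panel of plants.
# In the Doms-Dunne study these come from the Census Bureau's confidential
# Longitudinal Research Datafile.
growth_rates = np.array([2.0, -1.5, 0.5, 45.0, 3.0, 1.0, 38.0, -4.0, 0.0, 7.0])

small = np.mean(np.abs(growth_rates) < 10)   # little adjustment (under 10 percent)
large = np.mean(np.abs(growth_rates) > 30)   # spike years (over 30 percent)

print(f"share of plant-years with |growth| < 10%: {small:.0%}")
print(f"share of plant-years with |growth| > 30%: {large:.0%}")
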

As the mathematical sophistication of researchers in the field
increased, they understood how to
adapt the existing theory of investment
to account for the new observations.
The new theory assumes that the costs
of capital adjustment are unrelated to
the scale of the adjustment. Much of
the adjustment cost borne by a plant
would now be the same whether it was
adding one, two, or even 10 computers
to its network.
Such fixed costs (fixed because
they are the same regardless of the
amount of investment) lead to lumpy
investment over time at the plant.
Let’s consider BIGCAP once again,
assuming that BIGCAP is a firm that
owns only one plant. BIGCAP
determines its target level of capital,
the level that maximizes shareholder
value in the absence of adjustment
costs. However, BIGCAP will adjust to
this capital stock only if the rise in
shareholder value from doing so is
greater than the fixed cost associated
with the capital adjustment. As
explained by Ricardo Caballero in his
1999 paper, what this means is that a
plant like BIGCAP will adjust its
capital only when the current level of
its capital stock is far enough away
from its target level of capital stock.
If current and target capital
levels are close, there’s not much gain
in shareholder value from adjustment;
the fixed adjustment cost outweighs
the benefits of adopting the target
level of capital. But once it decides to
adjust its capital stock, BIGCAP has
no incentive to move gradually, since
the adjustment cost is independent of
the size of the adjustment. Notice that
a simple modification of existing
theory has led to a dramatic change in
the model’s predictions. Investment at
the plant level is no longer slow and
gradual but rather erratic and lumpy.
Plants don’t change their actual capital
in response to small changes in their
target capital. So there are typically
long periods when plants don’t
undertake much investment. However,
when target capital is sufficiently
different from actual capital, there is
sudden, sharp adjustment.
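
The decision rule just described can be sketched in a few lines. The quadratic "gain" from closing the capital gap and the numbers below are illustrative assumptions rather than the model in Caballero's paper; the point is only that a small imbalance leaves capital unchanged, while a large one triggers a jump straight to the target.

def adjust_capital(k, k_target, fixed_cost, gap_penalty=1.0):
    """Next period's capital under a fixed adjustment cost.

    The gain from closing the gap is approximated as gap_penalty * gap**2
    (an illustrative assumption). The plant adjusts all the way to the target
    only if that gain exceeds the fixed cost; otherwise capital is unchanged.
    """
    gap = k_target - k
    gain = gap_penalty * gap ** 2
    return k_target if gain > fixed_cost else k

# Small imbalance: not worth paying the fixed cost, so no investment.
print(adjust_capital(k=100.0, k_target=102.0, fixed_cost=10.0))   # 100.0
# Large imbalance: the plant jumps to its target in one burst.
print(adjust_capital(k=100.0, k_target=120.0, fixed_cost=10.0))   # 120.0
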
THE SUM OF INDIVIDUALS:
THE IMPLICATION OF FIXED
ADJUSTMENT COSTS FOR
AGGREGATE INVESTMENT
Fixed adjustment costs seem
to fit the plant-level data quite well.
But how well do they match the
aggregate data? Before we add up
individual plants’ behavior, we must
understand how these fixed adjustment costs vary across plants and
across time. Once this is accomplished, we will see that the new
model actually fits the aggregate data
better than the partial adjustment
model.
How can a model of lumpy
plant-level investment match the
aggregate investment data, which
show gradual changes in investment?
The answer is that the fixed costs that
are the foundation of the new theory
are assumed to vary both across plants
and over time, that is, fixed costs
behave randomly.
Recall our example of
installing new computers. Now, let’s
consider the installation of two new
machines, on two separate occasions,
at our hypothetical plant. For the first
installation, managers may have
available a very competent senior
technician. He or she may be able to
efficiently integrate the new machine
into the plant’s network. The cost of
capital adjustment will be relatively
small. However, at a later date, the
senior technician may be unavailable,
and managers may have to rely on a
novice. This technician, new to the
plant and unfamiliar with its computer
systems, is likely to take far longer to
install the new computer and will
therefore incur a much larger adjustment cost. A simple way to introduce

such variations into models of investment is to assume that adjustment
costs are random.
What Do Random Adjustment Costs Mean for Aggregate
Investment? If these costs differ
randomly across plants and over time,
then even two similar plants are likely
to behave differently because they’ll
have different adjustment costs.
Consider a world full of plants that all
start out with the same level of capital.
Over time, they’ll face different
adjustment costs, and thus, their
capital adjustment behavior will differ.
The difference between a

plant’s actual and target capital stock
will also differ across plants. Plants
that had small fixed costs will have
adjusted their capital stocks and be
close to their targets. Plants that were
less lucky and experienced several
large adjustment costs in a row will
have much larger capital imbalances.
Generally, plant actions will not be
synchronized. Plants with larger
capital imbalances will see higher gains
from adjusting capital, no matter what
the adjustment cost; hence, they’ll be
more likely to undertake adjustments.
Plants with low capital imbalances will
not be willing to absorb even moderate
adjustment costs and will be unlikely
to adjust capital. At any time,
someone studying the entire population of plants will find that some
actively adjust their capital while
others do not.
Changes in aggregate
investment will arise for two reasons:
changes in the level of investment
undertaken by plants actively investing
and changes in the number of these
active plants. When there are many
plants, small increases in productivity
or demand that affect most plants will,
generally, induce only small changes in the number of plants actually investing: by raising target capital a little, they prompt a few more plants to become active and adjust capital. As a
result, while individual plants may
exhibit lumpy investment, the number
of plants investing will evolve more
gradually, leading to slower changes in
aggregate investment.
We see that the fixed cost
model is able to preserve the success of
the partial adjustment model in
explaining changes in aggregate
investment, while it improves the
match with the microeconomic
evidence on plant-level investment.7
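
A toy simulation makes the aggregation argument concrete. Everything below is an illustrative assumption of my own (the uniform distribution of fixed costs, the quadratic gain, the slow drift in target capital): each plant invests in occasional bursts, yet the economy-wide total evolves gradually because the number of adjusting plants changes smoothly.

import numpy as np

rng = np.random.default_rng(0)

n_plants, periods = 1000, 40
capital = np.full(n_plants, 100.0)
target = 100.0
aggregate_investment = []

for _ in range(periods):
    target += 0.5                                   # small, economy-wide rise in target capital
    gap = target - capital
    gain = gap ** 2                                 # illustrative gain from closing the gap
    fixed_cost = rng.uniform(0.0, 25.0, n_plants)   # each plant draws its own cost this period
    adjust = gain > fixed_cost                      # only some plants find adjustment worthwhile
    investment = np.where(adjust, gap, 0.0)
    capital += investment
    aggregate_investment.append(investment.sum())

# Individual plants invest in occasional bursts, but the economy-wide total
# changes gradually because the set of adjusting plants evolves smoothly.
print([round(x, 1) for x in aggregate_investment[:10]])
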
SYNCHRONIZATION AND
BUSINESS CYCLES
The fixed adjustment cost
model and the partial adjustment
model make different predictions
about how investment should behave
over the business cycle. While plants
will typically not act together in the
fixed adjustment cost model, at other
times, plants will behave in a dramatically more synchronized manner in the
model, mainly whenever there is a
sharp change in some factor that
affects all plants.

7 In fact, the fixed cost model is actually better

able to explain aggregate investment than the
partial adjustment model because the partial
adjustment model reduced the variability of
investment too much. And while the fixed cost
model typically behaves like the partial
adjustment model, at other times it allows for
much sharper changes in investment. This
undoes much of the excess smoothness of the
partial adjustment model.
Economists agree that plants are subject to unforeseen events that can either increase or decrease their productivity. For example, a bank might be subject to new regulation, a farm might experience a drought, or a firm might adopt a new type of technology, for example, newer, faster computers.
Consider a large unforeseen rise in future productivity for all plants — what macroeconomists refer to as a large shock. Such a productivity shock, which might occur at the end of a recession, will yield a large change in the target capital of all plants. As a result, there will be few plants left with low capital imbalances, and most plants will adjust their capital. Their actions will, to a large extent, be synchronized. In their 1999 paper, Ricardo Caballero and Eduardo Engel show that such synchronization can lead to a sharp, unusual rise in aggregate investment.
The black series in Figure 3 represents the total investment of a group of plants when an extraordinary change in productivity results in a sudden synchronization of their investment. The blue series presents a hypothetical alternative case in which the number of plants allowed to adjust their capital is constrained to remain at ordinary levels. Notice the increased response in total investment due to the synchronization effect. Over the first 11 years, investment initially rises by a total of 17 percentage points more in the synchronized case. While this is partly offset by a steeper decline during years 12 – 14, the overall impact of synchronization is to raise investment spending by 4 percentage points.
This is the principal achievement of the new theory of investment. By allowing differences in capital imbalances across plants to evolve over the business cycle, the new investment theory allows the synchronization of investment activities during episodes involving large changes in the macroeconomy. It is through such episodes that the fixed cost model we have been examining overcomes the excessively low variability of investment in the partial adjustment model. The fixed cost model provides a considerably better match with both the aggregate and the plant-level data (these models are compared in Figure 4) and can explain the sharp increase in aggregate investment that follows a recession.8

8 This is shown in the 1995 paper of Ricardo Caballero, Eduardo Engel, and John Haltiwanger and the 1999 paper of Russell Cooper, John Haltiwanger, and Laura Power.
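
The same kind of toy model, again with entirely made-up numbers, illustrates synchronization: when the common change in target capital is small, only a few plants find adjustment worth the fixed cost, but a large shock pushes nearly every plant past its adjustment trigger at once.

import numpy as np

rng = np.random.default_rng(1)

n_plants = 1000
capital = rng.normal(100.0, 2.0, n_plants)   # plants start with scattered imbalances
target = 100.0

def adjusting_fraction(capital, target, rng):
    """Fraction of plants whose (assumed quadratic) gain beats a random fixed cost."""
    gap = target - capital
    adjust = gap ** 2 > rng.uniform(0.0, 25.0, capital.size)
    return adjust.mean()

# Ordinary times: a small rise in target capital activates only a few plants.
print(adjusting_fraction(capital, target + 0.5, rng))
# A large common shock: most plants' gains now exceed their fixed costs,
# so adjustment is synchronized and aggregate investment spikes.
print(adjusting_fraction(capital, target + 10.0, rng))
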
CONCLUSION
The theory of investment has
evolved into one that’s now better able
to explain the facts about investment
at both the macro and micro levels.
Traditional theory, known as the
partial adjustment model, ignored
differences across plants and firms. As
a result, while it was reasonably

successful at explaining aggregate
investment, it did poorly at explaining
lumpy plant-level investment. Newer
theories that explicitly address plantlevel investment resolve the problems
not addressed by traditional theory.
These new theories of investment
emphasize the role of fixed costs of
capital adjustment in inducing large
but occasional plant-level investment.
Moreover, once it was understood that
these costs were likely to vary across
plants and over time, the fixed cost
theory has been able to explain not
only plant-level investment but also
aggregate investment. Indeed, by
allowing for unusual synchronization of
investment across plants, fixed cost
theory is able to explain brisk recoveries following recessions, something
traditional theory could not do.
Of course, even the new
theory leaves something out. For
example, recent work suggests that
changes in interest rates, ignored in
the new theory, may have powerful
effects on firms’ investment decisions.9
Nevertheless, the new theory certainly
represents progress — it provides an
explanation of changes in aggregate
investment that, in contrast to
traditional theory, is consistent with
our observations of plants’ investment
behavior.

FIGURE 3
Investment With and Without Synchronization

FIGURE 4
Investment Under Partial Adjustment and
With Fixed Costs

9 See the 2000 paper by Julia Thomas and the

2000 paper by both Julia Thomas and me.

REFERENCES
Barro, Robert J. Macroeconomics. John
Wiley & Sons, 1984.
Baxter, M., and R. G. King. “Measuring Business Cycles: Approximate
Band-Pass Filters for Economic Time
Series,” Review of Economics and
Statistics 81, 1999, pp. 575-93.
Caballero, R. J. “Aggregate Investment,” in M. Woodford and J. Taylor,
eds., Handbook of Macroeconomics.
Elsevier Science, 1999.
Caballero, R. J., and E. M. R. A. Engel.
“Explaining Investment Dynamics in
U.S. Manufacturing: A Generalized (S,
s) Approach,” Econometrica 67, 1999,
pp. 783-826.

Caballero, R. J., E. M. R. A. Engel, and
J. C. Haltiwanger. “Plant-Level
Adjustment and Aggregate Investment
Dynamics,” Brookings Papers on
Economic Activity 2, 1995, pp. 1-54.
Cooper, R., J. Haltiwanger, and L.
Power. “Machine Replacement and the
Business Cycle: Lumps and Bumps,”
American Economic Review 89, 1999,
pp. 921–46.
Doms, M. and T. Dunne. “Capital
Adjustment Patterns in Manufacturing
Plants,” Review of Economic Dynamics
1, 1998, pp. 409-30.

Hammermesh, D., and G. A. Pfann.
“Adjustment Costs in Factor Demand,”
Journal of Economic Literature 34, 1996,
pp. 1264-92.
Khan, A., and J. Thomas. “Nonconvex
Factor Adjustments in Equilibrium
Business Cycle Models: Do
Nonlinearities Matter?” GSIA Working Paper No. 2000-E33, 2000.
Thomas, J. “Lumpy Investment, Partial
Adjustment and the Business Cycle: A
Reconciliation,” GSIA Working Paper
No. 1999-E250, 1999.

The Interplay Between Home Production
And Business Activity
BY JEFFREY M. WRASE

Households, like businesses, combine inputs, such as labor and capital, to produce output. This “home production” includes dishwashing, lawn mowing, home-improvement projects, and other chores that households do without pay.
Although decisions households make
about the amount of time and resources to devote to home production
versus working in the marketplace
influence official measures of economic conditions, these measures
don’t take home production into
account.
Models of economic activity
usually ignore home production as
well. In fact, research on home
production shows that the variability
of many key macroeconomic variables,
as well as how those variables respond
to changes in the economic environment, may be skewed because the
typical macroeconomic model does not
fully account for how people allocate
time and resources between market

activity and other activities. Analysts, forecasters, and policymakers could benefit from economic models that incorporate such decisions.

Jeff Wrase is an economist in the Research Department of the Philadelphia Fed.
This article explores how
home production influences official
measures of the economy.1 It also
discusses the potential gains from
incorporating household decisions
about allocating resources to home
production into models used to
forecast and to account for changes in
economic conditions.
HOME PRODUCTION AND
ECONOMIC MEASUREMENT
Most people are familiar with
headlines describing how fast or slow
the economy’s growth is, and many

measures of growth are based on
government statistics that gauge the
total value of output produced in the
economy. A substantial amount of the
output captured by those statistics is
devoted to goods and services used by
households. However, some output,
such as that produced by households, is
not counted in official measures of
economic activity.
How Much Home Production Takes Place in the Economy?
Because official measures of economic
activity do not explicitly include home
production, it is not easy to say how
much takes place at any point in time
or how such production changes over
time. Some economists have, however,
attempted to measure home production and other nonmarket activity.2
Others have studied how households
divide their available time across
alternative activities, such as market
work and home production.3
We can gauge the magnitude
of home production in two ways. One
involves looking at the amount of time
people devote to unpaid work at
home. Thomas Juster, Frank Stafford,
and Martha Hill have produced
extensive research studies of how
households use their time. Their
studies use a number of sources,
including an extensive database
compiled by the Institute for Social
Research at the University of Michigan, called the Michigan Time Use Survey. This survey contains data on individuals' allocations of time to various activities during each day, based on extremely detailed diaries kept by respondents for one year.

1 Official measures refer to economic data produced by various statistical agencies, including the Bureau of Economic Analysis, Bureau of Labor Statistics, and the U.S. Department of Commerce. The data are available on the agencies' web sites or in various documents, such as the Federal Reserve Bulletin, the Survey of Current Business, or the Economic Report of the President.

2 See the articles by Robert Eisner and the article by William Nordhaus and James Tobin.

3 See the article by Thomas Juster and Frank Stafford and the one by Martha Hill.
According to these time-use
surveys, a married couple, on average,
devotes 25 percent of discretionary
time to unpaid — and not officially
measured — home production such as
child care, cooking, and cleaning, and
33 percent of discretionary time to
work in the marketplace for pay.4 By
this measure, home production is
indeed significant.
Another way to gauge home
production is to look at inputs and
outputs. On the input side, economists
Jeremy Greenwood, Richard Rogerson,
and Randall Wright (1995) examined
data from the U.S. national income
and product accounts. They found
that household capital investment,
defined as purchases of residential
structures and consumer durable
goods, exceeds business capital investment, defined as purchases of nonresidential structures and producer
durable goods.
On the output side, another
economist, Robert Eisner (1988),
reported that the value of home
production could range between 20
percent and 50 percent of the value of
the U.S. economy’s output, officially
measured as gross domestic product
(GDP).5 And with GDP currently
around $10 trillion, 20 to 50 percent is
a lot of unmeasured output.
Thus, all the measures above
indicate that home production
amounts to a significant portion of
activity that is not explicitly picked up in official measures of the economy's performance. Furthermore, most macroeconomic models have little to say about it.6 To better explain movements in official measures of economic variables, models need to account for the way inputs into home production and the resulting output change from period to period. We each have a fixed amount of time available each day, and we divide it among market production, home production, and leisure.7 Accordingly, changes in home production over time will lead to changes in time allocated to economic activity picked up by official measures.

4 Discretionary time refers to time not spent sleeping or on personal maintenance.

5 GDP is the current market value of all final goods and services produced in a period by domestically owned factors of production.

6 The idea of incorporating home production into economic models — or, more particularly, households' time-allocation decisions between activities other than simply leisure or work in the marketplace — is not new. Labor economists have included home production in models of the labor market for decades — at least as early as 1965 (see Gary Becker's article). But the relevance of home production and attention to households' time-allocation decisions across a variety of possible activities have only recently been considered in research into factors contributing to fluctuations in economic activity.

7 Leisure includes time spent sleeping and on personal maintenance.

WHY ARE THERE FLUCTUATIONS IN MACROECONOMIC ACTIVITY?
What fundamental forces drive business cycles?8 And why do households devote more time to working in the marketplace during expansions and less time during recessions? Many possible answers to these basic macroeconomic questions have been put forth.

8 Broadly defined, business cycles, also called macroeconomic fluctuations, are alternating periods of expansion and contraction of economic activity.

One answer emphasizes that macroeconomic fluctuations arise from the sometimes unintended consequences of economic policies, including changes in taxes or government expenditures, changes in regulations imposed on firms, or changes in the money supply. For example, an increase in taxes may slow the demand for goods by households and firms. The slowdown in demand could then lead to layoffs and, consequently, reduced time devoted by households to working in the marketplace. A major difficulty with this explanation is that it is hard to establish statistically a causal link between changes in economic policies and macroeconomic fluctuations.
Another answer proposes that
the economy fluctuates between
periods of expansion and contraction
because of inexplicable shifts in
consumers’ preferences, in the
preferences of firms that invest in
goods to use in producing other goods,
or in the preferences of savers, who
supply funding for consumers and
firms. Such shifts in preferences,
sometimes called changes in consumer
or investor optimism, or “animal
spirits,” could also lead to changes in
overall demand for goods and, consequently, to changes in the amount of
time that people devote to working in
the market. The problem with this
explanation of business cycles lies in
the difficulty of obtaining convincing
measures of preference shifts.
In the past 20 years, another
answer has gained a lot of attention:
fluctuations arise as a consequence of
random shifts in technologies used by
firms to produce goods and services.
Specifically, these random shifts alter
the effectiveness of inputs in producing output. For example, a technological change may mean that a given
amount of labor, when combined with
other inputs, can produce more output

than before. Such changes in the
productivity of labor, in turn, could
lead to changes in the amount of labor
that firms want to hire and perhaps the
amount of time that households wish
to supply to firms in the marketplace.
So, random shifts in productivity —
also called productivity shocks or
technology shocks — can spark
changes in employment, GDP, and
other key variables. (See How Important Are Technology Shocks for Growth and Fluctuations in Macroeconomic Activity?)
How Important Are Technology Shocks for Growth
and Fluctuations in Macroeconomic Activity?

Nobel laureate Robert Solow calculated the sources of long-term economic growth, using what is known as the neoclassical growth model of the economy, based on data for the period 1909 to 1949.* His
estimates revealed that changes in
productivity accounted for 87.5 percent of the growth of
output per worker and increased capital per worker
accounted for 12.5 percent. Thus, during the period
Solow studied, most of the growth of output per worker
was due to improvements in productivity.
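
The decomposition behind these percentages is standard growth accounting: growth of output per worker equals productivity growth plus the capital share times growth of capital per worker. The sketch below uses placeholder numbers, not Solow's actual series.

# Growth accounting sketch: g_y = g_A + capital_share * g_k, so the share of
# output-per-worker growth attributed to productivity is g_A / g_y.
# The inputs below are placeholders, not Solow's 1909-1949 data.

def productivity_share(g_y: float, g_k: float, capital_share: float) -> float:
    """Fraction of per-worker output growth explained by productivity growth."""
    g_A = g_y - capital_share * g_k       # the Solow residual
    return g_A / g_y

# Example: 2.0 percent growth of output per worker, 0.8 percent growth of
# capital per worker, and a capital share of about one-third.
print(f"{productivity_share(g_y=0.020, g_k=0.008, capital_share=1/3):.1%}")
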
Updating Solow’s estimates of the sources of
growth using data from the mid 1950s to the early 1990s,
Thomas Cooley and Edward Prescott obtained results
similar to Solow’s: the majority of growth in output per
worker in the U.S. economy stems from improvements in
productivity.
While the findings of Solow and others point to
productivity improvements as the primary contributors to
average growth in the economy, we are also interested in
business cycles. Again, using the neoclassical growth
model, Solow, followed by Cooley and Prescott, found
that sources of business-cycle fluctuations seem to be
different from sources of average growth. Because
capital, such as machines and structures used by firms, is
an input that does not change very much over business
cycles, most fluctuations in GDP over business cycles

stem from fluctuations in labor inputs. According to Cooley and Prescott, around two-thirds of fluctuations in output per worker stem from fluctuations in labor input; the remainder comes primarily from fluctuations in productivity.

* For a complete description of how researchers use data drawn from the economy to measure technology shocks, see Satyajit Chatterjee's 1995 article.
A key lesson from the findings of Solow and
those of Cooley and Prescott is that a model capable of
accounting for both average growth in GDP and for
fluctuations in key macroeconomic variables has two
requirements. First, because changes in productivity are
important contributors to business cycles, the model
needs a way for productivity to change over business
cycles. Second, because a majority of fluctuations in GDP
stem from changes in labor inputs, the model must
include incentives for both households and firms to make
large changes in the time devoted to market work over
the course of a business cycle.
Finn Kydland and Edward Prescott’s influential
theoretical and empirical work advanced the idea that
fluctuations in macroeconomic conditions are driven
largely by random changes in productivity. The work of
Kydland and Prescott, as well as subsequent research by
other economists, uses a typical macroeconomic model
that includes measures of productivity shocks to account
for fluctuations in key macroeconomic variables. Using
data on labor inputs and capital inputs such as machinery
and equipment, along with data on output produced in
the economy, Kydland and Prescott, following Solow’s
earlier work, provide measures that represent the
technology of a typical firm in the economy. These
measures can be used to gauge how that technology
changes over time.
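
In essence, these productivity measures are Solow residuals computed period by period. The sketch below assumes a Cobb-Douglas technology with a capital share of roughly one-third and uses made-up series for output, capital, and labor; the actual measurement involves many data and detrending choices not shown here.

import numpy as np

# Made-up quarterly series for aggregate output, capital, and labor input.
y = np.array([100.0, 101.2, 102.5, 101.8, 103.4])
k = np.array([300.0, 301.0, 302.2, 303.1, 304.0])
n = np.array([90.0, 90.5, 91.2, 90.1, 91.8])

capital_share = 1 / 3

# Solow residual: log technology implied by a Cobb-Douglas production function,
# z_t = ln y_t - capital_share * ln k_t - (1 - capital_share) * ln n_t.
log_z = np.log(y) - capital_share * np.log(k) - (1 - capital_share) * np.log(n)

# Period-to-period changes in log_z are one common measure of technology shocks.
print(np.diff(log_z))
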
TYPICAL MACROECONOMIC MODEL
Most macroeconomic models attempt to account for changes in the amount of time devoted to market work over a business cycle by assuming that households devote time either to market work or to leisure. Time
devoted to home production is typically
not incorporated into the models. But,
as we have seen, time and other
resources devoted to home production
are quantitatively significant.
To see why accounting for
time devoted to home production can
help a macroeconomic model account
for business cycles, we first need a
description of what constitutes a
typical model, including the choices
available to households and business
firms in the model.9 The typical
macroeconomic model allows households and firms to make choices within
a period, such as a quarter, as well as
across periods. Market prices determine
how goods are allocated across possible
alternative uses — consumption by
households, capital accumulation by
firms to facilitate the production of still
more goods, and, perhaps, the
government’s use of goods.
Allowing for choices within a
period enables households and firms to
allocate time available in that period to
either work or leisure or other activities
and to allocate available resources
across possible alternative uses, such as
consumption by households or capital
accumulation by firms. Choices made
within a period about how much time
to devote to work in the marketplace
and how much to leave for leisure or
other activities are important because a
majority of short-run fluctuations in
GDP stem from variations in labor
inputs.
Allowing for choices across
periods enables the model to include
important dynamic responses of
households and firms to changes in the
economy. For example, if labor
productivity randomly increases,
households will respond by determining

how much capital to accumulate today to be able to produce and consume more in the future.

9 A technical exposition of the typical macroeconomic model, in which consumption and investment goods are assumed to be identical, can be found in Gary Hansen's article.
WHY DO HOUSEHOLDS
CHANGE TIME SPENT
WORKING IN THE MARKET?
During recessions, fewer
hours are devoted to market work, and
during expansions, market work
usually rises. But what do people who
are laid off during recessions, or who
have their regular work hours cut
back, do with their remaining time?
During expansions, what draws more
people into market work or into
devoting even more time than before
to market work? Also, if some people
put more time into market work, what

happens to the rest of their available
time? After all, we have only 24 hours
each day.
Sleeping eight hours a day
leaves 16 hours for market work,
leisure, and home production. So, if
during an expansion you decide to
work 10 hours at market work, rather
than your usual eight, you are left with
six hours of nonsleep time. Do you cut
back on leisure? Or do you cut back on
home production?
The typical business-cycle
model has had difficulty answering
these basic questions because it
postulates a simple choice for households’ time allocations: working in the

marketplace or enjoying leisure. For
example, suppose that, for some
reason, the technology used by firms
changes so that labor inputs, when
combined with other inputs, become
less productive, leading to a decline in
the demand for labor. This decline in
labor demand reduces wages paid to
the households that supply labor in the
marketplace. Reduced wages produce
two effects on households’ timeallocation decisions. One is that lower
wages make leisure a less costly
alternative to market work because
households that forgo market work for
leisure sacrifice a lower amount of
wage income. The lower cost of
choosing leisure gives households an
incentive to switch from devoting time
to market work to devoting more time
to leisure. A second effect, though, is
that for a given amount of time
supplied as market work, lower wages
make households less wealthy. This
reduction in wealth would lead a
typical household to give up some
leisure time and spend more time on
market work to fend off the loss of
wealth.
It is normally assumed that
the first effect of declining wages
dominates the second. That is, in the
face of reduced labor demand and
lower wages, households, on balance,
choose to switch from time spent at
market work to more time spent at
leisure. Therefore, a technology shock
that lowers labor productivity, reduces
labor demand, and lowers wages also
leads to a decline in the amount of
labor supplied to the market. As a
result, employment and output
decline, as do wages, and the economy
could slip into a recession.10
The typical macroeconomic
model implies that in times of recession, households facing lower wages
voluntarily reduce market work in
order to engage in more leisure. Many
people find this claim dubious.
The typical model also
doesn’t allow for home production.
Furthermore, in this model, households cannot accumulate or vary the
use of household-capital goods, such as
lawnmowers or vacuum cleaners,
employed in home production. But, as
we saw from estimates of home-production inputs and output, home
production involves significant
amounts of a household’s time and
capital goods.
The typical macroeconomic
model has been used to describe
movements in key macroeconomic
variables in the U.S. economy, even
though it ignores home production.11
Let’s see how adding home production
enriches the model’s ability to describe
economic activity. Toward that end,
we will first describe a standard way of
evaluating the typical model’s ability to
account for key features of the U.S.
economy. We will then show how
adding households’ home-production
choices can enhance the typical
model’s ability to account for the data.
CAN THE TYPICAL MODEL
EXPLAIN THE DATA?
The standard way to evaluate
a typical macroeconomic model’s
ability to explain fluctuations in
economic variables is to run simulations using the model. Properties of the

“artificial,” or model-generated, data
for key variables are then compared to
properties of their counterparts drawn
from national income accounts for the
U.S. economy. Economists Jess
Benhabib, Richard Rogerson, and
Randall Wright, among others, have
done just that for the typical model
without home production. These
authors point to a number of the
model’s shortcomings relative to actual
data, shortcomings that including
home-production decisions could
potentially overcome.12 Figure 1
shows the variability of GDP and the
variability of consumption, investment,
and hours worked relative to GDP. As
the figure shows, compared with U.S. data: (1) GDP itself
fluctuates too little in the model; (2)
consumption and hours worked
fluctuate too little relative to GDP in
the model; and (3) investment
fluctuates too much relative to GDP in
the model.

12 The typical model refers to a variant of the

model in Gary Hansen’s article.

Why Do the Shortcomings
of the Typical Model Arise? The fact
that consumption is not variable
enough and investment is too variable
relative to output in the typical model
can be easily understood. In the
typical model, when labor productivity
is high, for example, relatively fewer
labor inputs are devoted to market
production of goods for consumption,
such as clothing or furniture, and
relatively more labor inputs are
devoted to production of investment
goods, such as machines used by firms
to produce output. This switch occurs
because people don’t want their
consumption to fluctuate over time as
much as output does. Channeling
resources to the production of investment goods facilitates relatively
smooth consumption over time
because such goods can be accumulated, added to the economy’s capital
stock, and used to help provide goods
for consumption in times when labor
productivity is relatively low. Consequently, over time, consumption in the
typical model does not vary relative to
output as much as we see in actual
data, and investment varies more

relative to output than we see in the data.

10 The typical macroeconomic model views business cycles as ups and downs in economic activity relative to the underlying long-run trends in economic activity. The description of employment changes in response to a technology shock is a description of deviations from the long-run trend in employment. Over the long run, hours worked in the marketplace have been fairly constant in the post-war U.S., even though there has been productivity growth and accompanying increases in wages. However, in the short run, at business-cycle frequencies, hours worked in the marketplace tend to vary relative to the trend in hours worked.

11 The key features of the economy come from data on variables such as GDP, households' consumption of goods and services, firms' purchases of goods as investments to be used in producing more goods in the future, exports and imports, major price indexes, and interest rates.

FIGURE 1
Variability of Important Economic Indicators

The variability of GDP is measured by its standard deviation, which is a statistical measure of how GDP fluctuates relative to its average value. Relative variability is the standard deviation of either consumption, investment, or hours worked relative to that of GDP. The data are from the article by Greenwood, Rogerson, and Wright.
Similarly, the fact that total
hours devoted to market work do not
vary much relative to output in the
typical model is easy to understand. In
the typical model, households switch
between hours devoted to producing
consumption goods and hours devoted
to producing investment goods in
response to changes in productivity.
When labor productivity is high, for
example, fewer hours are spent
producing consumption goods, and
more hours are spent producing
investment goods, but total hours
devoted to market work don’t vary
much in the typical model. So, the
sum of total hours over a business
cycle ends up far less variable relative
to output than we observe in the data.
If there were a mechanism in
the model to allow more hours to be
devoted to producing more consumption goods in the market as well as
more investment goods during good
times, the typical model would benefit.
Not only would hours become more
variable than in the typical model, so,
too, would total output become more
variable. As we’ll see, adding a homeproduction sector to the model
provides the needed mechanism.
Does the Addition of Home
Production Improve the Typical
Model? On many dimensions, adding
home production to the typical model
improves its ability to account for what
we observe in the economy because
households now have more choices.
Including home production in the
model allows households to allocate
time among leisure, market work, and
home production. The typical model
allows a choice only between leisure
and market work. Adding home
production also means that output
must be divided among consumption,
investment in business capital, and
investment in household capital.
The enriched set of choices
results in a model that allows more
switching between using time and
goods for market activity or for
alternative activities in response to the
state of the economy. For example, during recessions the household becomes a relatively more productive place than the marketplace. Hours of market
work and household purchases of
market goods both decline because
households increase the time devoted
to producing goods at home, an
avenue of substitution ignored in the
typical model.
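
To make this substitution margin concrete, the sketch below solves a deliberately stripped-down allocation problem that is not the article's model: a household splits a fixed block of work time between market work and home production and values a CES combination of the two outputs. All functional forms and parameters are illustrative assumptions. When market productivity rises relative to home productivity, the best split shifts toward market hours.

import numpy as np

def best_market_hours(w, z, total_hours=16.0, share=0.6, rho=0.5):
    """Grid-search the market/home time split that maximizes a CES bundle
    of market goods (w * n_market) and home goods (z * n_home).
    Everything here, including ignoring leisure, is an illustrative assumption."""
    n_market = np.linspace(0.0, total_hours, 1001)
    n_home = total_hours - n_market
    bundle = (share * (w * n_market) ** rho
              + (1 - share) * (z * n_home) ** rho) ** (1 / rho)
    return n_market[np.argmax(bundle)]

# When the market becomes relatively more productive, time shifts toward
# market work and away from home production (lawn mowing gets hired out).
print(best_market_hours(w=1.0, z=1.0))   # baseline split of work time
print(best_market_hours(w=1.5, z=1.0))   # higher market productivity -> more market hours
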
The shortcomings of the
typical model can be remedied by
including home-production decisions,
and we can demonstrate this by
comparing U.S. data with data from a
typical macroeconomic model and
from a model that allows for home-production decisions (Figure 2). To
see how home-production decisions
improve the typical model, suppose,
for example, that there is a period of
economic good times with high labor
productivity in the marketplace.14 As in the typical model without home-production decisions, a home-production model will allocate some

resources, such as time, to produce
additional investment goods that will
facilitate production of more consumption goods in future periods.
But now that the model
allows for home-production decisions,
imagine that in the face of relatively
high marketplace productivity, more
people have their lawns mowed by
landscaping companies — resources
shift away from home production
(homeowners previously mowed their
own lawns) to market production
(homeowners now hire landscaping
services). Such shifts in resources in a
home-production model reflect the
fact that people are devoting less of
their time to home production
(mowing the lawn) while, at the same
time, purchasing more consumption
goods in the marketplace (landscaping
services). Thus, in a model with home
production, market production of
consumption goods increases, as does

production of investment goods. So, not only will production of investment goods vary with increasing labor productivity in the marketplace, as in the typical model, but market production of consumption goods will vary as well and will fluctuate more than in the typical model.15

14 Good times here means periods during which, perhaps because of shocks to technology used in the marketplace or to home-production technology, the marketplace is a relatively more productive place in which to devote resources.

15 How well a model with home production explains the data depends critically on the incentives households have and their willingness to substitute between home and market production. The model's implications also depend on the form assumed for the home technology with which households combine time and capital, perhaps subject to random shocks to the technology. Unfortunately, to date there is not much evidence on how shocks to home-production technologies compare with shocks to technologies used by firms in the marketplace. The relative variability of market and home production depends, of course, on the variability of shocks to market productivity relative to home productivity.

FIGURE 2
Variability of Important Economic Indicators

The variability of GDP is measured by its standard deviation, which is a statistical measure of how GDP fluctuates relative to its average value. Relative variability is the standard deviation of either consumption, investment, or hours worked relative to that of GDP. The data are from the article by Greenwood, Rogerson, and Wright.

Finally, another shortcoming of the typical model — that output fluctuates too little — can be overcome by a model that includes home
production. In the typical model, the
size of output variation driven by
marketplace productivity shocks
reflects only the degree to which people
are willing to substitute time and
resources across periods in response to a
change in marketplace productivity. For
example, a household could give up
some leisure today when productivity is
high and devote that time to market
work to allow for more leisure in the
future. But when people can switch
between market production and home
production over time, the variability of
market production overall — measured
output — increases because of relative
differences in productivity in the two
types of production. The size of the
variations in measured output resulting
from relative productivity shocks in a
model with home production depends
on households’ willingness to switch
between home production and market
production at a given time as well as
over time.16

SUMMARY
Typical modern macroeconomic models do not account for some
important features of U.S. economic
data, in part because they ignore a
substantial amount of unmeasured
economic activity associated with the
use of time and resources in the
production of goods and services at
home. Including home production in
modern dynamic models of business
cycles seems to be a promising way to
help account for movements in key
economic variables, especially when
we consider the undeniably large
amount of time and resources that go
into home production.

16 Jeremy Greenwood, Richard Rogerson, and Randall Wright present a formal model with home production in their 1993 article. This article also offers a comparison of quantitative implications of the model with properties of U.S. data on key macroeconomic variables.

REFERENCES
Becker, Gary. “A Theory of the Allocation of Time,” Economic Journal 75, September 1965, pp. 493-517.

Benhabib, Jess, Richard Rogerson, and Randall Wright. “Homework in Macroeconomics: Household Production and Aggregate Fluctuations,” Journal of Political Economy 99, December 1991, pp. 1166-87.

Chatterjee, Satyajit. “Productivity Growth and the American Business Cycle,” Federal Reserve Bank of Philadelphia Business Review, September/October 1995.

Cooley, Thomas, and Edward Prescott. “Economic Growth and Business Cycles,” in Thomas Cooley, ed., Frontiers of Business Cycle Research. Princeton: Princeton University Press, 1995, pp. 1-38.

Eisner, Robert. “The Total Income System of Accounts,” Survey of Current Business, January 1985, pp. 24-48.

Eisner, Robert. “Extended Accounts for National Income and Product,” Journal of Economic Literature 26, December 1988, pp. 1611-84.

Greenwood, Jeremy, Richard Rogerson, and Randall Wright. “Putting Home Economics into Macroeconomics,” Federal Reserve Bank of Minneapolis Quarterly Review, Summer 1993.

Greenwood, Jeremy, Richard Rogerson, and Randall Wright. “Household Production in Real Business Cycle Theory,” in Thomas Cooley, ed., Frontiers of Business Cycle Research. Princeton: Princeton University Press, 1995, pp. 157-74.

Hansen, Gary D. “Indivisible Labor and the Business Cycle,” Journal of Monetary Economics 16, 1985, pp. 309-27.

Hill, Martha. “Patterns of Time Use,” in F. Thomas Juster and Frank Stafford, eds., Time, Goods, and Well Being. Ann Arbor: University of Michigan Press, 1984.

Juster, F. Thomas, and Frank Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement,” Journal of Economic Literature 29, June 1991, pp. 471-522.

Kydland, Finn, and Edward Prescott. “Time to Build and Aggregate Fluctuations,” Econometrica 50, November 1982, pp. 1345-70.

Nordhaus, William, and James Tobin. “Is Growth Obsolete?” Economic Growth, Fiftieth Anniversary Colloquium, Vol. 5, New York: National Bureau of Economic Research, 1972.

Solow, Robert. “Technical Change and the Aggregate Production Function,” Review of Economics and Statistics 39, 1957, pp. 312-20.