A Theory of Asset Price Booms and Busts
and the Uncertain Return to Innovation*

by Satyajit Chatterjee

Many observers believe that turbulence in asset
prices results from bouts of optimism and
pessimism among investors that have little to
do with economic reality. While psychology
and emotions are no doubt important motivators of
human actions, an explanation for asset price booms
and busts that ignores the fact that humans are also
thinking animals does not seem entirely satisfactory or
plausible. In this article, Satyajit Chatterjee presents a
counterpoint to the view that “it’s all psychology.” He
reports on a theory of asset price booms and busts that
is based entirely on rational decision-making and devoid
of psychological elements. The explanation suggests
that asset price booms and crashes are most likely to
occur when the value of the asset in question depends
on an innovation whose full profit potential is initially
unknown to investors.

Asset prices, such as the price of
company stock, the price of houses
in a particular location, or the price
of a foreign currency, can often
rise strongly for many periods and
then crash spectacularly. Does such
turbulence in asset prices result from
irrational behavior on the part of
market participants, or does it have a
basis in rational behavior?

(Satyajit Chatterjee is a senior economic advisor and economist in the Philadelphia Fed's Research Department. This article is available free of charge at www.philadelphiafed.org/research-and-data/publications.)
Many observers believe that the
turbulence in asset prices results from
bouts of optimism and pessimism
among investors that have little to
do with economic reality. More than
60 years ago, John Maynard Keynes
attributed these highs and lows in the
stock market to the “animal spirits”
that motivate humans to collectively
take on or shun financial risk. Given
the recent history of booms and crashes in the industrialized world, the influence of mass psychology on asset prices has once again come to the fore. People wonder how much of the frenetic buying and selling in capital markets around the world serves any useful social purpose.

*The views expressed here are those of the author and do not necessarily represent the views of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.
While psychology and emotions
are no doubt important motivators
of human actions, an explanation
for asset price booms and busts that
ignores the fact that humans are also
thinking animals does not seem entirely
satisfactory or plausible. Why would
investors believe that an asset will rise
strongly in value unless there is, at
some level, a good reason for such a
belief? As a counterpoint to the view
that “it’s all psychology,” this article
reports on a theory of asset price
booms and busts that is based entirely
on rational decision-making and
devoid of psychological elements. The
explanation suggests that asset price
booms and crashes are most likely to
occur when the value of the asset in
question depends on an innovation
whose full profit potential is initially
unknown to investors. As investors
learn over time about what that
earnings potential is, the price of the
asset can rise strongly for a while and
then crash. As an example, think of
the advent of the World Wide Web in
1990, an innovation that opened the
door to the commercialization of the
Internet.1 Initially, it was not evident

The concept of the World Wide Web (or simply the web) was proposed by the English computer scientist Tim Berners-Lee and the Belgian
computer scientist Robert Cailliau in 1990.
The originators conceived of the web as a vast
information repository that anyone anywhere in
the world could access via the Internet.

1

Business Review Q4 2011 1

how to make money using the web,
but many new ideas were tried and
investors and entrepreneurs learned
over time what worked and what did
not.
PRIMER ON THE
DETERMINATION OF ASSET
PRICES
What theory do economists use
to discuss the determination of asset
prices? The most basic and simplest
of such theories asserts that the price
an investor will pay to buy an asset
today is related to the dividend the
investor expects to receive on the asset
in the future and the price at which
he expects to sell the asset at a future
date. An example will make this clear.
Suppose that a single share in the
stock of company X promises to pay $5
in dividends one year from today. Also
suppose that investors expect the price
of this single stock to be $100 a year
from today. Ignoring taxes, an investor
who can put his money in the bank
and earn a 5 percent interest rate will
not be willing to pay more than $100
for the stock today. If he paid $100, he
will earn $5 in dividends and then sell
the asset for $100. Therefore, he will
have $105 from his investment a year
from today. He can get the same dollar
amount by saving $100 in the bank
and earning a 5 percent return on it.
Therefore, the market price of the
asset cannot exceed $100. The market
price of the asset cannot fall below
$100 either because, if it did, then all
investors who currently have their
money in the bank would be better off
removing their funds from the bank
and buying the asset. They would earn
a higher rate of return on the stock
than on their bank accounts.
A bit more formally, the theory
asserts that the current price of the
asset, call it P, is simply the present
discounted value of the dividend to be
given out next period, call it D, plus


the expected price of the asset next
period, call it Pe. As we just saw, it
must be the case that the amount one
can earn by keeping the money in the
bank, namely, P(1+r) (where r is the
interest rate on the bank deposit), must
equal the amount one can earn from
the stock, namely, [D+Pe]. Therefore,
P(1+r) must equal [D+Pe], so P must
equal [D+Pe]÷(1+r). The essence of
the economic theory of asset price
determination is the idea that the
rate of return on different but equally
risky assets should be equalized. In
the above example, we assumed that
the return from holding the stock
for one year was perfectly certain so
that the rate of return on the stock
had to equal the interest rate on bank
deposits. If the return on the stock
is uncertain, the theory takes into
account that investors would demand
a higher rate of return on the risky
asset as compensation for bearing that
risk and the price of the stock will be
correspondingly lower, resulting in an
expected capital gain.
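To make the arithmetic concrete, here is a minimal sketch in Python (an illustration added here, not part of the original article) of the one-period pricing rule just described; the function and values simply restate the $5 dividend, $100 expected resale price, and 5 percent interest rate from the example above.

# Pricing rule from the text: P * (1 + r) = D + Pe, so P = (D + Pe) / (1 + r).
def asset_price(D, Pe, r):
    """Price today, given next period's dividend D, the expected resale price Pe,
    and the interest rate r available on an equally safe bank deposit."""
    return (D + Pe) / (1 + r)

# The article's example: $5 dividend, $100 expected future price, 5 percent interest rate.
P = asset_price(D=5.0, Pe=100.0, r=0.05)
print(P)  # 100.0 -- at this price, $100 in the stock and $100 in the bank both grow to $105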
DIVIDEND GROWTH AND
GROWTH IN ASSET PRICES
This simple theory of asset price
determination, when coupled with a
theory of how expectations about the
next period’s asset price are formed,
makes predictions about the level and
growth of asset prices that depend
only on fundamentals, in this case the
dividend flow from the asset and the
interest rate on bank accounts. This
connection between fundamentals and

asset prices can be somewhat subtle,
and we will approach it through some
simple examples.
Imagine that the dividend from
the stock is the same each period and
the interest rate on bank deposits is
constant over time. In this situation,
an investor might reason that whatever
the price of the asset is today, it will
be the same in the next period. After
all, if neither the dividend nor the
interest rate changes, why should
the price of the asset change? This
kind of reasoning — which is at the
heart of the theory of expectation
formation that economists call rational
expectations — leads to the prediction
that the price of the asset will be the
(constant) dividend flow D divided by
the (constant) interest rate r.2
However, if dividends are growing
over time at some constant rate and
the interest rate is constant over time,
the same investor might now reason
that since the asset is becoming
more profitable over time, its price
should increase over time at the same
constant rate as that of dividends.
With this guess about the behavior of
future asset prices, the theory predicts
that the price of the asset in period
t will be the dividend to be given
out next period, D, divided by the
difference between the interest rate,
r, and the growth rate of dividends,
g. That is, the current asset price will
simply be D divided by (r-g). Since
the dividend given out each period
is growing over time at rate g, this formula confirms the investor's guess that the asset price will grow at the same constant rate as dividends.3

2 This formula can be obtained by solving the equation P = [D+P]/[1+r] for P (in terms of D and r). The investor's guess that if the dividend flow and the interest rate are both constant over time then the price of the asset will be constant over time is employed to replace Pe (the future price) with P (the current price). Notice that the investor's guess that the future price of the asset will be the same as it is today is indeed verified by the resulting formula for P: the formula depends only on D and r, both of which are constant over time.
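As a quick check on these two formulas, the following sketch (again an illustration added here, not from the article; the function names and the 2 percent growth rate are my own choices) computes the constant-dividend price D/r and the growing-dividend price D/(r-g), and verifies that the latter satisfies the no-arbitrage condition P(1+r) = D + Pe with a price that grows at the same rate as dividends.

# Constant-dividend price P = D / r and growing-dividend price P = D / (r - g),
# where D is next period's dividend, r the interest rate, and g the dividend growth rate.

def price_constant_dividend(D, r):
    return D / r

def price_growing_dividend(D_next, r, g):
    assert g < r, "this formula requires r > g (see footnote 3)"
    return D_next / (r - g)

D, r, g = 5.0, 0.05, 0.02
P = price_growing_dividend(D, r, g)                 # price today
Pe = price_growing_dividend(D * (1 + g), r, g)      # price next period, one dividend later
assert abs(P * (1 + r) - (D + Pe)) < 1e-9           # the no-arbitrage condition holds
print(round(P, 2), round(Pe, 2))                    # 166.67 170.0 -- the price grows at rate g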
Thus, the simple theory of asset
price determination links the growth
in asset prices to the growth in
dividends. But this simple theory does
not come to grips with the behavior of
asset prices during a boom. During a
boom, asset prices seem to grow faster
than the growth rate of dividends.
As an example of this phenomenon,
Figure 1 displays the time paths of the
logarithm of the S&P 500 index and
of the logarithm of earnings per share
for the index for the period around the
tech boom.4 On a logarithmic scale,
steeper lines imply faster growth, and
we can see that between 1995 and 2001 the index grew at a faster rate, while earnings growth showed no comparable acceleration.
One can see the increase in
the growth rate of stock prices even
more clearly in the time path of the
NASDAQ composite index.5 Figure 2
plots the logarithm of the NASDAQ
index for the same time period as in
Figure 1. Between 1990 and 1995, the
time path is more or less a straight
line, which implies that the index grew
at a roughly constant rate. Following

3 It is perhaps worth pointing out that the interest rate available on a bank account will typically depend on the dividend flow from other investments available in the economy. So, r and g will not be independent of each other. Indeed, the dependence of the interest rate on the dividend flow available in the economy is what guarantees that the interest rate, r, will always be greater than the growth rate, g. Without this ordering, the formula gives nonsensical results.

FIGURE 1
Earnings and Stock Prices: S&P 500
[Figure: the logarithm of the S&P 500 index and the logarithm of S&P 500 earnings per share, 1990 to 2006.]

FIGURE 2
NASDAQ Index: Boom and Crash
[Figure: the logarithm of the NASDAQ composite index, 1990 to 2006.]

4 The S&P 500 index is proportional to the average stock price of 500 large U.S.-based corporations whose shares are traded on U.S. stock markets. The theory outlined in the text applies equally well to such averages.

5 The NASDAQ index is the average stock price of over 3,000 corporations (not necessarily U.S. based) whose shares are traded on U.S. stock markets and that are oriented toward high-technology areas.

1995, however, the angle of the path
tilts up, implying faster growth in asset
prices. This continues until the market

crash we associate with the end of the
dot-com boom. Unfortunately, there is
no easily available series on earnings


growth for the NASDAQ index, but
all anecdotal evidence suggests that
there was no corresponding speed-up
in the growth rate of earnings.
The apparent disconnect between
the growth rate of fundamentals (in
this case, earnings) and the growth
rate of asset prices makes observers
think that something other than
fundamentals (“animal spirits” or mass
psychology) is at work. While mass
psychology may well influence asset
prices, it turns out that the simple
theory of asset price determination
outlined above can shed considerable
light on the origin and mechanics of
asset price booms and crashes.
The key insight is that market
participants’ beliefs regarding how
long dividend growth will continue
may play a crucial role in generating
an asset price boom and crash.6
When there is an innovation, such
as the World Wide Web, investors
may be uncertain about the full profit
potential of the innovation — that
is, they do not know in advance how
far, or in what ways, the World Wide
Web can be used for commerce. This
creates uncertainty about the duration
of earnings growth. As the innovation
continues to diffuse through the
economy and earnings continue
to grow, investors revise up their
estimate of the profit potential of the
innovation. This upward revision may
temporarily make the asset price rise
faster than earnings. When earnings
growth comes to a halt and investors
learn the limits of the innovation, the
asset price crashes. Thus, a boom can
happen without a speed-up in earnings
growth, while the cessation of earnings
growth can result in a crash.7 These
ideas are fleshed out in the next two
sections.

6 This discussion draws on the 1999 article by Joseph Zeira.

Cessation of Dividend Growth
Can Induce an Asset Price Crash.
As we have seen already, growth in
dividends increases the price of the
asset because the asset becomes more
profitable for investors. Therefore, in
order to value the asset today investors
have to form beliefs about future
dividend growth. In this situation,
uncertainty about whether growth in
dividends will continue or stop can
have surprising consequences for the
price of the asset.

Imagine that investors put a 50 percent probability on dividend growth coming to a stop next period and a 50 percent probability that dividends will continue to grow at the same rate as in the past. Then, if the growth in dividends does stop next period, the theory of asset price determination predicts that the price of the asset will fall. At first sight this might seem puzzling because the profitability of the asset hasn't fallen: The asset is generating the same dividend flow as it did in the previous period. However, investors yesterday had put an equal chance on dividends continuing to grow today and the price of the asset yesterday reflected that expectation. If dividends fail to grow today, the asset becomes less valuable to investors today compared with yesterday. Thus, the mere cessation of dividend growth will cause the asset price to fall.

7 From the point of view of valuing an asset, the main quantity of interest is the growth rate of earnings. But to assess the validity of an earnings-growth forecast, investors will examine many sources of information. For instance, they may track the increase in the number of visitors to a website as an indicator of commercial interest. During the tech boom, investor interest in various measures of Internet use (such as the number of websites and the number of "hits" per website) was quite intense, and these measures were used to justify very optimistic earnings forecasts for Internet-related businesses. The point, however, is that such optimism could be sustained because investors were truly uncertain about the profit potential of this new way of conducting commerce.

Can uncertainty about the duration of dividend growth explain asset price booms and crashes? That is, can it provide an explanation for the phenomena displayed in Figures
1 and 2? To explore this question,
we will work with a simple example.
The interest rate available on bank
accounts is taken to be 1 percent per
quarter. Suppose that there is an asset
whose dividend flow is currently $100.
Next quarter, there is a ¾ probability
that the asset’s dividend flow will
increase by 5 percent (i.e., rise to $105)
and there is a ¼ probability that its
dividend flow will stop growing and
stay at $100 forever. If the dividend
flow increases next quarter, the
situation next quarter will be the same
as in the current quarter: namely,
there will be a ¾ probability that
the dividend flow will increase by 5
percent again in the following quarter
(to $110.25) and there will be a ¼
probability that the dividend flow will
stabilize forever at $105. Thus, as long
as dividends continue to grow, there
is a constant probability that this
growth will continue next period and a
(complementary) constant probability
that growth in dividends will come to
a stop forever.
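To see what these assumptions imply for the price, the sketch below (an added illustration, not from the article) prices the asset while dividends are still growing. It guesses that the price is proportional to the current dividend and solves the no-arbitrage condition for the constant of proportionality, using the example's values: a 1 percent interest rate, 5 percent dividend growth, and a ¾ continuation probability.

# While growth continues, guess P(D) = k * D and solve
#   P(D) * (1 + r) = p * [(1+g)*D + P((1+g)*D)] + (1 - p) * [D + D/r],
# where the second branch is the case in which the dividend stays at D forever
# (an asset worth D/r from then on).  Solving gives
#   k = (p*(1+g) + (1-p)*(1 + 1/r)) / (1 + r - p*(1+g)),  valid when p*(1+g) < 1 + r.

def price_while_growing(D, r=0.01, g=0.05, p=0.75):
    assert p * (1 + g) < 1 + r, "requires p*(1+g) < 1+r (the condition noted in footnote 8)"
    k = (p * (1 + g) + (1 - p) * (1 + 1 / r)) / (1 + r - p * (1 + g))
    return k * D

P = price_while_growing(100.0)
# Direct check of the no-arbitrage condition for the $100 example.
lhs = P * 1.01
rhs = 0.75 * (105.0 + price_while_growing(105.0)) + 0.25 * (100.0 + 100.0 / 0.01)
assert abs(lhs - rhs) < 1e-6
print(round(P, 2))  # about 11,702: the price capitalizes the expected future growth in dividends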
Figure 3 displays a snapshot of
the time paths of the logarithms of


dividends and asset prices predicted
by the simple theory of asset price
determination. The theory predicts
that as long as dividends continue to
grow, the price of the asset will grow
at the same rate as the growth in
dividends. In the figure, this is what
happens for the periods preceding
period 45: The time plot of the
logarithm of asset prices and dividends
rises at the same rate. At period 45,
however, dividends stop growing,
and the time plot of the dividend
path flattens out. As displayed, the
cessation of dividend growth causes
a crash in the asset price. Following
the crash, the time path of the asset
price flattens out as well: Recall that
the theory of asset price determination
predicts that if dividends are constant
over time, so will be the price of the
asset.8
The crash in the asset price
reflects investors’ re-assessment of
the profitability of the asset. Prior
to the cessation of dividend growth,
investors placed a three in four chance
on dividend growth continuing into
period 45, a nine in 16 chance of
dividend growth continuing into
period 46, a 27 in 64 chance of growth
continuing into period 47 and so on.9
Consequently, the price of the asset
in period 44 incorporated investors’

belief that dividends will continue to
rise in period 45 and beyond with high
probability. When these beliefs are
belied by events, the price of the asset
tumbles.
It appears, then, that the simple
theory of asset price determination
predicts sudden drops in asset prices
that stem simply from a downward
re-assessment of the growth potential of
the earnings flow underlying the asset.
Because the bad news that leads to the
crash concerns diminished prospects
for future growth, the asset price may
fall even if the current dividend flow
does not fall.
Learning About the Likely
Duration of Dividend Growth Can
Induce an Asset Price Boom and
Crash. But how can this simple model
of asset price determination account
for the boom in the price of assets? As
noted earlier, we cannot attempt to
account for the tech boom in terms of
faster dividend growth because there is
no evidence of a speed-up in earnings

growth during the boom phase.
It turns out that the model can
account for the boom and the crash
if we allow for the realistic possibility
that investors’ beliefs concerning
the duration of dividend growth
may evolve over time. Instead of
imagining that investors assign a
constant probability to dividend
growth continuing (or, equivalently,
a constant probability of it coming to
an end), imagine that investors start
off believing that dividend growth
will last somewhere between eight and
15 years. That is, they believe that
dividend growth will continue for sure
until period 32 (since each period is
a quarter, eight years amount to 32
quarters) and stop for sure by period
60. But they are uncertain about the
duration of the expansion between
these two dates.
FIGURE 3
Asset Price Effects of a Cessation in Dividend Growth
[Figure: time paths of the logarithm of dividends and the logarithm of the asset price (periods 20 to 80); both rise together until dividends stop growing in period 45, at which point the asset price drops sharply and then flattens.]

8 It is worth pointing out that in this example, the growth rate of dividends exceeds the interest rate on bank accounts (5 percent versus 1 percent). Nevertheless, the simple theory of asset price determination applies because investors recognize that dividend growth will not continue forever. According to the theory, the growth rate of dividends can be higher than the interest rate as long as the product of the probability of growth continuing and (1+g) is less than (1+r).

9 The nine in 16 chance comes from recognizing that the probability that dividends will grow for two consecutive periods is simply the product of ¾ and ¾, or (¾)². Similarly, the probability that dividends will grow for three consecutive periods is (¾)³, or 27 in 64. More generally, the probability of n consecutive periods of growth is (¾)ⁿ.

FIGURE 4
Boom and Crash Effects of a Cessation in Dividend Growth
[Figure: time path of the logarithm of the asset price (periods 0 to 80); the price grows at a roughly constant rate until period 32, grows faster between periods 32 and 45, and falls sharply when dividend growth stops in period 45.]

Figure 4 displays the time plot of the logarithm of the asset price implied by these beliefs when dividend growth stops in period 45 (as before,
we assume that the interest rate is 1
percent per quarter). Notice that the
time plot of the logarithm of asset
price grows at more or less a constant
rate until period 32. But after period
32 and until the crash in period 45, the
growth rate of prices is faster, although
there is no change in the growth rate
of dividends.
This surprising outcome is the
result of the evolution of investors’
beliefs regarding the likelihood of
the different dates at which the
expansion might stop. To understand
this point, notice that in period 32,
an investor assigns a 1/28 chance that the expansion will stop in period 33, a 1/28 chance that it will stop in period 34, and so forth, because
there are 28 possible dates (33 to 60)
at which the expansion might stop
and the investor is equally uncertain
about at which date the expansion will
stop. But once this investor learns that


the expansion has, in fact, continued
into period 33, he will assign a higher
chance to the expansion’s continuing
to period 34 and beyond. This is
because there are now only 27 possible
dates left, and investors will assign
each date a 1/27 chance. Thus, as the
expansion continues, the investor will
assign a higher and higher probability
to the expansion’s continuing to the
fewer remaining dates.
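The updating described in this paragraph is mechanical enough to write down directly; the short sketch below (an added illustration, not from the article) recomputes the investor's uniform beliefs over the stop dates that remain possible as the expansion survives another period.

# Investors initially regard every stop date from 33 to 60 as equally likely.
# Each period the expansion survives eliminates one candidate date, and the
# remaining dates again share the probability equally (1/28, then 1/27, ...).

def stop_date_beliefs(current_period, first_stop=33, last_stop=60):
    """Probability of each remaining stop date, given that dividend growth is
    still under way in current_period."""
    remaining = [t for t in range(first_stop, last_stop + 1) if t > current_period]
    return {t: 1 / len(remaining) for t in remaining}

print(len(stop_date_beliefs(32)), round(stop_date_beliefs(32)[33], 4))  # 28 dates, each 1/28
print(len(stop_date_beliefs(35)), round(stop_date_beliefs(35)[36], 4))  # 25 dates, each 1/25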
What all this amounts to is that
as the expansion continues beyond
period 32, investors successively
eliminate the possibility of relatively
unfavorable outcomes in favor of an
increase in the likelihood of relatively
favorable ones. For instance, if the
expansion continues on to period 35,
investors know that the expansion
will go on until some date that lies
between periods 36 and 60. This is
a more favorable assessment of the
asset’s earning potential than what

investors believed in any earlier
period. Of course, once the expansion
stops, all of the remaining favorable
outcomes to which investors had
previously assigned a positive chance
are eliminated, and that elimination
results in a sharp fall in the price of
the asset.10
There are some additional points
worth making. First, the boom and
crash scenario depends on the timing
of the cessation of dividend growth. If
the expansion in dividends continues
all the way to period 60, there will
be a boom but no crash: The price
of the asset will simply stabilize at
its peak value and stay at that level
forever. At the other extreme, if the
dividend expansion comes to a stop
in period 33, there will be a crash
but no boom. To get a boom-bust
scenario, the expansion in dividends
must last longer than the minimum
period of expansion but less than the
maximum period of expansion. Of
course, in reality, investors cannot be
completely certain about the minimum
and maximum periods of expansion.
But the explanation will work as long
as the duration of the expansion falls
somewhere near the “middle regions”
of the set of possible outcomes.
Second, Figure 1 indicates that
there was also a crash in operating
earnings when the tech boom ended,
something that is not true of the
explanation given above. But this is
not an important deviation between
theory and fact. There was a crash
in earnings because learning also
affected corporate decisions. High-tech corporations discovered that
they had invested “too much” in
information and communications
technology capacity because they too believed there was some chance that the expansion in profit opportunities would continue beyond 2001.11 The write-offs related to this "excess investment" contributed to corporate bankruptcies and a drop in operating earnings. Consistent with this situation, there was also a crash in information and communication technology (ICT) investment, which, in turn, led to the brief recession of 2001-02. The recession contributed to the drop in corporate earnings as well.

10 For the example shown in Figure 4, the average annual growth in asset prices prior to period 33 is 3.13 percent, the annual growth between periods 33 and 44 is 9.80 percent, and the drop in asset value at the time of the crash is 31 percent.
Third, Figure 1 also shows that
following the crash in prices and
operating earnings, growth in earnings
recovered quickly, which seems
inconsistent with the theory outlined
above. However, we have to recognize
that an index as broad as the S&P
500 is affected by more than just the
high-technology sector. As we are
all too well aware now, the high-tech
boom was followed closely by a boom
in housing and construction. Although
a variety of factors contributed to the
housing boom and subsequent bust, at
the center of the boom and crash was
yet another innovation — this time in
financial markets in the form of the
securitized subprime mortgage.12
INNOVATIONS AND ASSET
PRICE BOOMS AND CRASHES
The above explanation of a
boom-bust scenario is special. It
assumes that the uncertainty regarding
dividend growth is of a particular kind
(uncertainty regarding the duration
of expansion) and that investors put

See Robert Gordon’s article on how ICT
capacity outstripped ICT demand and led to
corporate bankruptcies and a slowdown in ICT
investment in early 2000.

11

12
See the book by Gary Gorton for a discussion
of the nature of the financial innovation in
mortgage markets that, in part, contributed to
the housing boom and, ultimately, to the current mortgage crisis.

www.philadelphiafed.org

an equal probability weight on the
expansion’s stopping between two
fixed future time periods. However, it
is also true that boom-bust scenarios
do not happen all the time, which
suggests that their occurrence requires
a particular confluence of events.
The important question to ask is:
Under what circumstances are the
assumptions of the theory likely to be
met?
Imagine a situation in which there
is a new discovery or innovation that
is truly novel. For such an innovation,

the past is a poor guide for judging the innovation's profit potential. Investors understand that the innovation will create new opportunities, but no one is certain about the innovation's ultimate profit potential. In this situation, the basic assumptions of the simple model outlined above seem plausible. Investors know that the innovation will generate new business opportunities over time (increasing profits or dividends) until, at some point in the future, the innovation's profit potential will stabilize and profits will stop growing (or will grow at the rate of growth of the overall economy). But no one knows when this stage of "normal" profits (or profit growth) will arrive, and past experience is of no help in making a guess. In this situation, the principle of indifference suggests that investors may well put an equal probability weight on the expansion's stopping any time between the two future dates.13 This will be the case, for instance, if investors currently expect the expansion to last somewhere between five and 10 years.

Historically, booms in asset prices have, in fact, followed truly novel innovations or events. In describing the genesis of financial crises in Western Europe, the financial historian Charles Kindleberger summarizes the historical record thus: "The macroeconomic system receives a shock…a 'displacement'. This displacement can be monetary or real.
What is significant is that it changes
expectations in financial markets
with respect to the profitability of
some range of investments. New
profit opportunities are opened up,
and people move to take advantage
of them.”14 Again, in another work,
Kindleberger states: “The nature of
the displacement varies from one
speculative boom to another. It may
be the outbreak or end of war, a
bumper harvest or crop failure, the
widespread adoption of an invention
with pervasive effects — canals,
railroads, the automobile — some
political event or surprising financial

success, or a debt conversion that precipitously lowers interest rates. But whatever the source of the displacement, if it is sufficiently large and pervasive, it will alter the economic outlook by changing profit opportunities in at least one important sector of the economy. Displacement brings opportunities for profit in some new or existing lines, and closes out others. As a result, business firms and individuals with savings or credit seek to take advantage of the former and retreat from the latter. If the new opportunities dominate those that lose, investment and production pick up. A boom is under way."15

13 The "principle of indifference" asserts that if there is no knowledge indicating that any one outcome among N possible outcomes is more likely than another, each outcome should be assigned an equal chance of occurring, namely, a chance of 1/N.

14 Kindleberger, 1993, p. 524.
The boom in house prices in the
mid to late 2000s can, in part, be
traced to a financial innovation — the
securitized subprime mortgage —
whose true profit potential was initially
unknown. The tech boom of the 1990s
was a direct consequence of the spread
of ICT and the rise of the World Wide
Web. The boom of the 1920s could
arguably be traced to the revolutionary
effects of the automobile. The boom of
the 1850s (in the U.S.) could be traced
to the revolutionary effects of railroads.
Arguably, each of these booms ended
in a crash when investors came to a
more precise understanding of the
innovation’s profit potential.
15 Kindleberger, 1978, p. 18.

The explanation for the boom-bust scenario described in this article
is based on the fact that investors
learn about the asset’s profit potential
over time. And what they learn
can cause them to strongly revise
their perception of the asset’s value.
The basic idea regarding the role of
learning is present in other studies that
go beyond the simple model discussed
above. For instance, researchers
have shown that the transaction
costs of trading in financial markets
coupled with learning about an asset’s
profitability over time can lead to
abrupt and sharp movements in asset
prices, so that asset prices may appear
to be much more volatile than the
flow of dividends.16 This finding is
important because the low variability
of dividend flow compared with the
high variability of asset prices is often
taken as evidence that fundamentals
(i.e., dividend flow) have little to do
with asset price fluctuations.

16 See the article by In Ho Lee for a discussion of this point. As the author explains, transaction costs can keep an investor from immediately trading on new information that becomes available to him. Thus, information relevant to the value of the asset can remain hidden until some shock (which could be relatively minor) forces all investors who had refrained from trading to trade. At that point, information that was hitherto dispersed and hidden among investors gets reflected in the price, which can cause the price to change abruptly.

SUMMARY
There is considerable
circumstantial evidence supporting
the notion that asset price booms
and busts follow the advent of novel
innovations that are expected to have
pervasive effects on the economy. If
this is accepted as a starting point
for further analysis, the problem
becomes one of understanding why
and how innovation and novelty
generate asset booms and busts. The
simple model outlined above provides
one explanation. It stresses the fact
that truly novel innovations create
uncertainty in the mind of investors
regarding the innovation’s ultimate
profit potential, and the resolution
of this uncertainty can first lead to a
boom and then a crash.
The informational theory of
booms and busts suggests that such
episodes are inevitable, since they arise
from deep-seated forces governing the
evolution of industrial economies. It
implies that there is more than a grain
of truth to the notion that boom-bust
scenarios are unique (“this time it’s
different”) in that these episodes result
from circumstances that are truly
novel, such as the advent of railroads,
the automobile, the personal computer,
and the Internet. BR

REFERENCES
Gordon, Robert J. "Hi-Tech Innovation and Productivity Growth: Does Supply Create Its Own Demand?" NBER Working Paper 9437 (2003).

Gorton, Gary B. Slapped by the Invisible Hand: The Panic of 2007. Oxford: Oxford University Press, 2010.

Kindleberger, Charles P. Manias, Panics and Crashes. New York: Basic Books, 1978.

Kindleberger, Charles P. A Financial History of Western Europe, Second Edition. Oxford: Oxford University Press, 1993.

Lee, In Ho. "Market Crashes and Informational Avalanches," Review of Economic Studies, 65:4 (1998), pp. 741-59.

Zeira, Joseph. "Informational Overshooting, Booms and Crashes," Journal of Monetary Economics, 43 (1999), pp. 237-57.

How Do Businesses Recruit?*

by R. Jason Faberman

Most economic theories of hiring and job
seeking assume that businesses post
vacancies when they demand more labor.
Workers then apply for the job, and the most
qualified candidate is hired. However, as those who have
ever recruited or applied for a job know, the recruiting
process is considerably more complex. In this article,
Jason Faberman discusses some recent research on how
employers recruit. It shows that the extent to which a
business uses various recruiting channels depends on the
characteristics of the employer, how fast the employer
is growing (or contracting), and the overall state of the
economy.

One question that has been on
the minds of workers and policymakers
alike over the past year is: when will a
strong pickup in hiring take hold? The
hiring of workers by businesses is a key
component of the labor market. It is a
common occurrence in both recessions
and booms, and most individuals
have been on one or both sides of the

hiring process. In fact, according to the Bureau of Labor Statistics (BLS), nearly 5 million people, on average, are hired each month. Even at its lowest point during the last recession, total hiring in the U.S. totaled 3.9 million workers per month. Given how often hiring occurs, much of the economic evidence in this article will likely sound familiar to most readers. Nevertheless, the complexities and informalities associated with the hiring process have made it a difficult concept for economists to fully formalize in a theoretical framework, and consequently, these same elements have made it difficult to predict how aggregate hiring will behave over time.

(Jason Faberman is a senior economist at the Federal Reserve Bank of Chicago. When he wrote this article, he was a senior economist in the Philadelphia Fed's Research Department. This article is available free of charge at www.philadelphiafed.org/research-and-data/publications/.)

*The views expressed here are those of the
author and do not necessarily represent
the views of the Federal Reserve Bank of
Philadelphia or the Federal Reserve System.

Most economic theories of hiring
and job seeking assume that businesses
post vacancies when they demand
more labor. Workers then apply for the
job, and the most qualified candidate
is hired. As those who have ever
recruited or applied for a job know,
however, the recruiting process is
considerably more complex. First, it
takes time for businesses to find a
suitable candidate and for workers to
find acceptable employment. Economic
theories characterizing these “search
frictions” have become commonplace
in economic research. In addition,
businesses have multiple options for
increasing their chances of hiring
a qualified employee, for example,
engaging in informal networking,
increasing their recruiting efforts,
or offering relatively generous pay or
benefits. These channels make the
recruiting process more complex, and
economic theories on how businesses
recruit have yet to fully capture these
complexities.
In this article, I present some
recent research that documents
that the extent to which a business
uses these other recruiting channels
depends on its characteristics, such
as its industry and the type of job it
is recruiting for. It also depends on
how fast the business is growing (or
contracting). Last, it depends on the
state of the economy. Recessions are
periods when individuals find it hard
to find work, and consequently, they
are also times when businesses find it
relatively easy to fill open positions.
ECONOMIC THEORIES OF
HIRING AND RECRUITING
There are many economic models

of recruiting and hiring.1 These models
are generally based on theories of labor
market search and matching that were
recently recognized in the awarding
of the 2010 Nobel Prize in economics.
The models evaluate how workers
find new jobs and how firms find new
workers, given that there are frictions
in matching the two. That is, it takes
time for workers to figure out what
jobs are available, and it takes time for
employers to evaluate candidates for
jobs. These frictions cause unemployed
workers and vacant jobs to exist in
the labor market simultaneously. Over
the years, such models have proven
valuable in evaluating the behavior
of hiring, wages, and unemployment,
most often over the business cycle,
and in evaluating various labor
market policies, such as employment
protection and unemployment
insurance benefits.
Central to many of these
models is the notion of a vacancy
or, more generally, that the frictions
involved in matching workers to
firms make recruiting a worker costly.
Consequently, firms must weigh the
expected cost of hiring a new worker,
which consists of not only the wage
they must pay but also the time and
resources they must devote to the
search process, against the expected
benefit, which is generally how
productive a firm expects its new hire
to be.
Starting from this basic premise,
different theories of labor market
search and matching diverge widely
in how the recruiting process occurs.
For example, some theories implicitly
model a link between wages and recruiting behavior. These models of "directed search," such as the one presented by Espen Moen, postulate that workers observe the wages offered by firms before they decide where to apply. The implication from these models is that firms can reduce the time it takes to find a worker by offering a wage higher than what their competitors offer (and thereby increase their number of applicants). Similarly, in his book, Christopher Pissarides presents a model in which firms vary in how much effort they put into recruiting rather than the wages they offer in trying to fill their vacancies.

In another example, Boyan Jovanovic addresses the uncertainty often associated with the hiring process by constructing a model in which workers are hired by (matched with) firms and both must learn about the match's "quality" over time. That is, they both learn whether or not each is happy with the employment relationship. This type of model implies that recruiting efforts are just one cost in a longer process to figure out whether a worker is a good fit with that firm.

There are also theories that ignore the search and matching aspect of recruiting and focus instead on its other complexities. For example, Michael Rothschild and Joseph Stiglitz present a model in which firms design contracts to screen their applicants to improve their chances of finding a suitable match.2 James Montgomery develops a model in which the social networks of the existing workforce provide an alternative recruiting channel for firms.

Together, these lines of research underscore the need to understand exactly how firms recruit in the real world. The different types of models provide for very different characterizations of how firms hire workers and thus provide differing views on which channels are most important for recruiting, on how much recruiting differences affect the behavior of the labor market, and on what policies may best spur hiring. Only empirical evidence on employers' recruiting practices can shed light on which aspects of these models best describe what happens in the real world. In the remainder of this article, I summarize the existing evidence on these recruiting practices. A central theme that stands out is that no one theory captures what goes on in the data. This is partly because the different types of recruiting practices that firms use often depend on the characteristics of the position they are trying to fill. It is also because certain practices, such as informal recruiting methods, are not well captured at all by the existing theories.

1 Seminal work on this topic includes the 1985 study by Christopher Pissarides and the 1994 work by Dale Mortensen and Pissarides. Their work spawned a large literature on the issue, much of which is summarized in the survey piece by Richard Rogerson, Robert Shimer, and Randall Wright. Mortensen and Pissarides, along with Peter Diamond, shared the 2010 Nobel Prize in economics.

2 The Rothschild-Stiglitz model is explicitly about contracts in insurance markets, but it has been extended to an understanding of labor markets.

EMPIRICAL ECONOMIC
RESEARCH ON RECRUITING
Perhaps surprisingly, economic
research on how firms recruit is
relatively thin. This contrasts with
the amount of research that exists
on how individuals (both employed
and unemployed) find new work
(i.e., the labor supply counterpart to
recruiting).3 A major reason for this
is a severe lack of data on recruiting.
There are few surveys that capture the
data needed for a complete study of
recruiting behavior, and these surveys
usually have relatively few observations
and are often outdated.
Another major reason for the
paucity of research on recruiting is
that informal recruiting has proven to
be an important channel. This point
has been stressed in research dating
back to work in 1966 by Albert Rees.
Formal recruiting methods generally
refer to explicit efforts by a business
to find and hire a worker. These
methods include posting a help wanted
sign in the window or an ad in the
newspaper or on the Internet, posting
an opening at a job center (a common
practice in European labor markets),
and posting a vacancy announcement
with an employment agency. While
data on these recruiting methods are
sparse, the methods themselves employ
tangible measures of recruiting that
an economist could study. Informal
recruiting methods refer to hires made
through channels such as referrals
from acquaintances or existing
employees, informal contacts made
through networking, and the hiring
of walk-in applicants who inquired
about work without the existence
of a formal job opening. Given their
informal nature, these practices prove
difficult to accurately measure even when surveys on recruiting explicitly try to account for them. Other actions related to recruiting have also proven difficult to accurately measure. These include the number of applicants and interviews for a particular position and the efforts a business undertook to hire someone.

3 For example, see the 1999 review article by Henry Farber and the studies by Robert Hall, Shigeru Fujita and Gary Ramey, and Michael Elsby, Ryan Michaels, and Gary Solon, to name a few.

Nevertheless, research by Rees and more recent work by Jed DeVaro provide some useful insights on how firms recruit. For example, Rees finds that informal recruiting is an important part of hiring, primarily because it allows businesses to gather more information about a potential hire in a less costly way than more formal methods. Using a survey of employers in the Chicago area, Rees is able to document a variety of informal channels that firms use, such as relaxed hiring standards, and finds that the benefits these channels afford often made them preferable to the more formal methods provided by placement agencies that specialized in recruiting workers. DeVaro shows that the type of recruiting method used is closely related to the starting wage of the position. He finds that informal recruitment methods (such as referrals) have longer vacancy durations but lead to higher wage hires. The findings of both researchers underscore the importance of recruiting channels outside of the standard method of posting a vacancy.

EXISTING EVIDENCE ON VACANCIES AND HIRING
Other research has also shed light on how firms recruit. The existing evidence can be grouped into three categories: recruiting based on the characteristics of the business and the job, recruiting based on how much a business is growing (or contracting), and recruiting behavior over the business cycle.
Recruiting Behavior Varies with
Business Characteristics. From an
economist’s point of view, one of the
most important metrics for analyzing
recruiting is the cost of recruiting, in
terms of time, money, and resources.
A big part of this cost is how long it
takes to fill a vacant position. An open
vacancy represents an unfilled job,
meaning that a business has profitable
work to be done, but there is no one
currently doing it. Thus, one aspect
of the cost of a vacancy that remains
open is the opportunity cost of the
unfilled position. A vacancy also
signifies that there is some form of
active recruiting undertaken by firms.
This implies that the firm is devoting
resources — in terms of the time
and effort of its existing workers, as
well as potential direct costs, such as
advertising expenses — to recruiting
a new worker. These costs and their
effects on the recruiting behavior of
individual firms can vary widely by the
firm’s industry and the characteristics
of both the job and the firm.


In my research with Steven Davis
and John Haltiwanger, we show that
one useful metric of how successful
firms are in recruiting workers is the
vacancy yield. The vacancy yield is the
number of hires per vacancy posted
(i.e., the success, in terms of a hire, of
an employer’s recruiting efforts). It is
a simplified measure of the job-filling
rate, which is the speed at which
employers fill their vacancies.4 When
analyzed alongside the rates of hiring
and vacancy posting, the vacancy yield
can provide a more complete picture of
the recruiting behavior of firms.
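As a concrete illustration of these measures (the numbers below are made up for illustration, not JOLTS data), the following Python sketch computes the three rates for a single establishment, using the definitions that also appear in the note to Table 1.

# Hypothetical monthly figures for one employer, used only to illustrate the definitions:
# hiring rate = hires as a percent of employment,
# vacancy rate = vacancies as a percent of total jobs (employment plus vacancies),
# vacancy yield = hires during the month per vacancy open at the start of the month.

def recruiting_measures(hires, employment, vacancies_start_of_month):
    hiring_rate = 100 * hires / employment
    vacancy_rate = 100 * vacancies_start_of_month / (employment + vacancies_start_of_month)
    vacancy_yield = hires / vacancies_start_of_month
    return hiring_rate, vacancy_rate, vacancy_yield

# An establishment with 500 employees, 15 vacancies open at the start of the month,
# and 20 hires over the month (some made through informal channels or same-month postings).
print(recruiting_measures(hires=20, employment=500, vacancies_start_of_month=15))
# roughly (4.0, 2.9, 1.33): the yield exceeds one, as in Table 1, because hires are a
# monthly flow while vacancies are a point-in-time stock.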
Table 1 shows how the number of
hires as a percent of employment (the
hiring rate), the number of vacancies
as a percent of total jobs (employment
plus vacancies), and vacancy yields
vary across industries and across
the major U.S. regions. The data
come from published statistics from
the BLS’s Job Openings and Labor
Turnover Survey (JOLTS).
On average, the hiring rate is 3.8
percent of nonfarm employment and
the vacancy rate is 2.9 percent of total
jobs (employment plus vacancies, i.e.,
filled plus unfilled jobs). The vacancy
yield averages 1.3 hires over the month
per vacancy open at the beginning
of the month. In theory, the vacancy
yield would take a value between zero
and one. In practice, however, the
yield can be greater than one, as is
the case in Table 1. This is because
data on hiring are often measured as
a total amount over a period, while
vacancies are usually measured as a
stock at a specific point in time, in this
case, at the beginning of the month.
Consequently, the vacancy yield will
capture the hires from vacancies that

4 The main difference between the vacancy yield and the job-filling rate is that the latter accounts for the fact that some vacancies can be both posted and filled within a period, and therefore not show up in the data that are used to calculate the vacancy yield.

TABLE 1
Summary Statistics on Hiring and Vacancies

Category                            Hiring Rate   Vacancy Rate   Vacancy Yield   Employment Growth Rate

Total Nonfarm                           3.8            2.9            1.32              -0.02
Total Private                           4.2            3.0            1.41              -0.04

Selected Industries
Construction                            6.0            1.9            3.24              -0.17
Manufacturing                           2.5            1.9            1.35              -0.38
Retail Trade                            4.8            2.5            1.93              -0.07
Transportation & Utilities              3.2            2.2            1.45              -0.07
Information                             2.7            3.2            0.83              -0.30
Finance & Insurance                     2.5            3.3            0.74              -0.01
Real Estate                             4.0            2.5            1.55              -0.02
Professional & Business Services        5.4            3.8            1.41              -0.01
Education                               2.5            2.0            1.24               0.21
Health Services                         3.0            4.1            0.73               0.21
Leisure & Hospitality                   6.8            3.6            1.88               0.08
Government                              1.6            1.9            0.83               0.06

Region
Midwest                                 3.7            2.5            1.44              -0.08
Northeast                               3.3            2.7            1.23               0.02
South                                   4.0            2.9            1.34               0.01
West                                    3.9            2.9            1.34              -0.06

Source: Author's calculations from published JOLTS statistics from January 2001-May 2010. Hiring rates are percentages of employment. Vacancy rates are percentages of employment plus vacancies (i.e., total jobs). The vacancy yield is the number of hires during the month per vacancy open at the beginning of the month. The employment growth rate is the difference between total hires and total separations as a percent of employment. It is comparable to the growth rate obtained from calculating the change in payroll employment.

are posted and filled within the period
but not from the vacancies that open
during the period.5 In addition, hiring
done through informal channels may
never use a vacancy, which could also
push the average amount of hires per
vacancy above one if these channels
are prevalent enough. There is a
large variation in these rates and in
hires per vacancy across industries
and across regions. Industries with
high worker turnover (and thus high
hiring rates), such as construction,
retail, and leisure and hospitality, have
relatively high vacancy yields. The
high vacancy yield, in part, reflects the
high turnover in these industries, but it
also reflects the fact that many of their
hires come from recruiting channels
other than posting a formal vacancy.
The converse is true for industries
such as government, which has both
low turnover and a low vacancy yield,

the latter partly reflecting the fact that government agencies tend to have more formal recruiting practices than the private sector. The differences across regions generally reflect differences in the mix of jobs across areas, but they also reflect differences in growth, which generally coincides with a greater churning of workers (through greater migration, job-hopping, etc.).

5 My research with Davis and Haltiwanger, as well as several other studies (e.g., the study by Kenneth Burdett and Elizabeth Cunningham), finds that vacancy durations are relatively short, with the average vacancy remaining open for about three weeks.
Table 1 also shows that there is
considerable variation across regions.
The generally faster-growing South
and West tend to have higher hiring
rates (and, consequently, higher
turnover), while the Midwest has the
lowest growth but the highest vacancy
yield. The Northeast, which tends
to have a disproportionate share of
industries and occupations that are low
turnover and high wage, has both low
hiring rates and low vacancy yields.
Research has also found that
recruiting efforts and recruiting
outcomes tend to be highly related to
the starting wage offered. For example,
Table 2, which is replicated from
research by John Barron, John Bishop,

and William Dunkelberg, shows that
larger firms tend to pay higher wages,
interview more workers, and invest
more time in recruiting. This occurs
primarily because high-wage jobs
tend to require high or specialized
skills. Finding workers with such skills
often proves difficult. In addition, the
opportunity cost of getting a poorly
matched worker is relatively higher for
these positions.
As some of my research with
Guido Menzio shows, high-wage jobs
also tend to have longer vacancy
durations (Table 3). This is especially
true for managerial and professional
and technical jobs. Again, the skills
required for the job strongly affect how
much firms are willing to invest in the
search process. Table 3 also shows that
a sizable fraction (20 percent) of hiring
occurs without any recruiting, as
reported by the firms surveyed.6 This
is some of the most striking evidence
in support of the informal channels

6
The survey asks how long it took for firms to
fill their last vacancy, allowing for the special
case where “no recruiting” took place.

TABLE 2
Characteristics of Recruiting by Firm Size, 1980

Name                    Starting Wage   Number of People   Number of     Hours Spent Recruiting,
                          (2009 $)        Interviewed      Offers Made   Screening & Interviewing

All Firms                   10.73             6.3              1.3                 8.0

Size of Firm
1-9 workers                 10.10             5.2              1.2                 6.2
10-25 workers               10.31             6.3              1.3                 7.1
26-250 workers              11.09             7.0              1.4                 9.4
251 or more workers         13.00             8.3              1.3                12.7

Source: Author's calculations and replication of estimates from Barron, Bishop, and Dunkelberg. The original estimates come from the 1980 Employment Opportunities Pilot Project.

TABLE 3
Characteristics of Recruiting by Occupation, 1980 and 1982

Name                        Starting Wage   Avg. Vacancy      Pct. with No   Number of      Number of
                              (2009 $)      Duration (days)   Recruiting     Applications   Interviews

All Hires                      11.42             22.0             20.1           12.6           7.0

Selected Occupations
Professional & Technical       14.71             37.1             22.0            9.3           8.0
Management                     16.12             49.1             29.4           11.0           5.3
Clerical                        9.32             17.7             15.1           16.4           8.7
Sales                          10.64             29.7             16.9           13.0           7.2
Personal & Other Services       8.08              9.9             18.7            9.6           4.8
Processing & Machinery         11.36             19.3             25.4            9.3           7.2
Structural Work                15.58             23.4             27.8            8.3           6.3

Source: Author's work with Guido Menzio. Estimates come from the 1980 and 1982 waves of the Employment Opportunities Pilot Project. The fraction of hires with "no recruiting" refers to positions that were reported to have a vacancy open for zero days.

stressed by Montgomery, Rees, and
DeVaro as an important recruiting
tool.
Recruiting Behavior Varies with
Business Growth. In my research with
Davis and Haltiwanger, we find that
how fast a business is growing affects
how it recruits. Namely, we find that
the hiring rate rises nearly one-for-one with a business's employment
growth rate but the vacancy rate rises
much less than one-for-one with the
growth rate (Figure 1). This implies
that the vacancy yield (which is
measured as hires per vacancy) also
rises with the growth rate (Figure 2).
The relationship of these variables
to business growth is predominantly
limited to when businesses expand.
Contracting businesses have similar
hiring rates, vacancy rates, and


vacancy yields regardless of the size of
the contraction.
The behavior of hires is mostly
mechanical (the dashed line in Figure
1 represents the minimum hiring rate
needed to grow by a certain percent),
but there is no mechanical reason
why the vacancy rate or vacancy yield
should exhibit such behavior. In fact,
most economic models of labor market
search and matching imply a vacancy
yield that is unrelated to business
growth. In our research, however,
we find that the vacancy yield rises
even after controlling for the fact
that fast-growing businesses may just
post and fill vacancies very quickly.
There are several reasons for this to
be the case, although more research is
needed to determine its exact causes.
One hypothesis is that firms relax

their hiring standards when trying
to expand rapidly, making it easier to
fill their vacant positions. Another
hypothesis is that there are scale
economies in recruiting, meaning that
firms are able to benefit from added
efficiencies when trying to hire many
people at once. Yet another hypothesis
is that firms rely more heavily on
informal recruiting channels when
trying to expand quickly, implying that
hiring per (formal) vacancy would rise
with growth.
Recruiting Behavior Varies over
the Business Cycle. Finally, and
perhaps most important, recruiting
behavior varies over the business
cycle. Obviously, when times are good,
businesses are more likely to post
vacancies and hire. Less obvious is
the fact that a business’s success rate


FIGURE 1
Hiring and Vacancy Rates by Business-Level Growth

[Figure omitted: the hiring rate (percent of employment, left axis) and the vacancy rate (percent of total jobs, right axis) plotted against the business-level employment growth rate, in percent.]

Source: Estimates from my study with Steven Davis and John Haltiwanger, which uses establishment micro-data from JOLTS pooled over 2001-2006. The dashed line represents a 45-degree line emanating from the origin, representing the minimum amount of hiring to achieve a given growth rate.

FIGURE 2
Vacancy Yield by Business-Level Growth

[Figure omitted: hires per vacancy plotted against the business-level employment growth rate, in percent.]

Source: Estimates from my study with Steven Davis and John Haltiwanger, which uses establishment micro-data from JOLTS pooled over 2001-2006.


in recruiting and its potential use of
alternative recruiting channels vary
over the business cycle as well.
Figure 3 shows the behavior of
the hiring rate, the vacancy rate, and
the vacancy yield over the past 10
years, again from published JOLTS
statistics. Recessions are indicated by
the shaded bars. Hiring and vacancies
are procyclical. They both increase
during expansions and fall during
recessions. Two things stand out for
the hiring and vacancy rates in Figure
3. First, relative to the earlier recession,
the 2008-09 period was a time of very
steep declines in the rates of hiring
and vacancy posting. Second, over the
full period, the vacancy rate is more
volatile than the hiring rate (that is, it
rises relatively more during expansions
and falls relatively more during
recessions).
The vacancy yield is
countercyclical. It rises during
recessions and falls during booms and
thus moves opposite to both hires and
vacancies primarily because it is easier
to fill openings during recessions when
there are more unemployed workers
applying for relatively fewer positions.
Figure 4 shows the movements
of the daily job-filling rate and the
monthly escape rate from unemployment over a longer time series.7 The
job-filling rate (the day-by-day rate
at which vacancies are filled) is an
estimate that comes from my research
with Davis and Haltiwanger. As noted
earlier, it is similar in concept to the
vacancy yield. The main exception is
that the job-filling rate accounts for
the fact that some hires come from
vacancies that are posted and filled
within a month (such vacancies never
appear as part of the monthly vacancy

7 The time series in Figure 4 ends earlier (December 2009) than the series in Figure 3 (July 2010), which is why the job-filling rate does not exhibit the same decline observed with the vacancy yield.


FIGURE 3
Hiring, Vacancies, and the Vacancy Yield over Time

[Figure omitted: the monthly hiring rate and vacancy rate (left axis) and the vacancy yield (right axis), 2001-2010, with recessions shaded.]

Source: Author’s calculations from published JOLTS data for nonfarm employment, January 2001-May 2010. Rates are expressed as percentages of employment. The vacancy yield is measured as the number of hires during the month per vacancy open at the start of the month. Shaded areas represent NBER-dated recessions.

FIGURE 4
Unemployment Escape Rate and Job-Filling Rate over Time

[Figure omitted: the monthly unemployment escape rate (fraction of unemployment, left axis) and the daily job-filling rate (fraction of vacancies, right axis), 1976-2009, with recessions shaded.]

Source: Author’s calculations from published CPS unemployment data, and vacancy rate estimates from the study by Regis Barnichon. Shaded areas represent NBER-dated recessions.


data). Its main limitation is that its calculation is more involved than that of
the vacancy yield, so it is not as easily
obtained from published statistics and,
consequently, not as current as the vacancy yield series in Figure 3. The job-filling rate in Figure 4 is at the daily
frequency, so it implies that businesses
fill, on average, about 5.7 percent of
their open vacancies on a given day.
The monthly escape rate from unemployment is the percent of unemployed
individuals from the previous month
who are no longer unemployed in the
current month. One shortcoming is
that the measure does not distinguish
between individuals who found new
work and those who dropped out of the
labor force, although research suggests
that the escape rate closely tracks the
rate at which the unemployed actually
find new jobs.8
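A rough way to interpret these rates is to convert them into durations. The arithmetic below is only a back-of-the-envelope sketch under the simplifying assumption of a constant exit rate; the 5.7 percent daily job-filling rate is the figure cited above, while the monthly escape rate used here is a hypothetical value chosen purely for illustration.

# Back-of-the-envelope: with a constant per-period exit rate r, the implied
# average duration is roughly 1/r periods. Not a calculation from the article.
def implied_duration(rate_per_period):
    return 1.0 / rate_per_period

daily_job_filling_rate = 0.057  # about 5.7 percent of open vacancies filled per day
monthly_escape_rate = 0.30      # hypothetical escape rate, for illustration only

print(f"implied vacancy duration: {implied_duration(daily_job_filling_rate):.0f} days")
print(f"implied unemployment spell: {implied_duration(monthly_escape_rate):.1f} months")
# A 5.7 percent daily filling rate implies vacancies last roughly 18 days on
# average; a 30 percent monthly escape rate would imply spells of about 3.3 months.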
Despite the differences in measurement, Figure 4 shows that the
job-filling rate, like its counterpart the
vacancy yield, is strongly countercyclical. It exhibited its largest spike at the
height of the 1982 recession, rising to
over 11 percent of vacancies per day.
The spike at the height of the most recent recession, at 8.6 percent, was the
second highest on record. Businesses
found it hardest to fill their vacancies
during the boom times of the 1998-2000 period. The movements in the
unemployment escape rate are almost
a mirror image of the movements in
the job-filling rate. The contrasting
behavior of the two series over time is
intuitive: recessions are periods when
it is hard for workers to find a job but
easy for firms to fill their vacancies.
The opposite is true of expansions. It
is worth noting that during the last
recovery, the rate at which individuals
escaped unemployment has remained
well below the next lowest trough on

8 See, for example, an earlier Business Review article by Shigeru Fujita.


record. This is a primary reason why
the unemployment rate has remained
persistently high during this period.
The divergence currently remains a puzzle to economists. A rise in structural unemployment (perhaps due to the downturn in the housing market), changes in the industry composition of the economy, and changes in government policies (such as extensions of unemployment insurance benefits) have all been suggested as potential causes, although much work remains to be done on the issue.

CONCLUSION
Hiring and recruiting are
key features of the labor market.
While these features are common experiences for most individuals, many economic models of the labor market still grapple with their complexities. The
models do well in capturing the notion

that many costs and frictions exist
in the matching of workers to firms,
but they have yet to fully characterize
the fact that businesses use multiple
channels, both formal and informal,
to attract and recruit workers. Existing
evidence on these channels shows that
the extent to which firms use these
channels, and their success with them,
varies with the type of firm, the type
of job, how much the firm is looking to
expand, and economic conditions. BR

REFERENCES

Barnichon, Regis. “Building a Composite Help Wanted Index,” Economics Letters, 109:3 (December 2010), pp. 175-78.

Barron, John M., John Bishop, and William C. Dunkelberg. “Employer Search: The Interviewing and Hiring of New Employees,” Review of Economics and Statistics, 67:1 (1985), pp. 43-52.

Burdett, Kenneth, and Elizabeth J. Cunningham. “Toward a Theory of Vacancies,” Journal of Labor Economics, 16:3 (1998), pp. 445-78.

Davis, Steven J., R. Jason Faberman, and John C. Haltiwanger. “The Establishment-Level Behavior of Vacancies and Hiring,” NBER Working Paper 16265 (August 2010).

DeVaro, Jed. “Employer Recruitment Strategies and the Labor Market Outcomes of New Hires,” Economic Inquiry, 43:2 (2005), pp. 263-82.

Elsby, Michael, Ryan Michaels, and Gary Solon. “The Ins and Outs of Cyclical Unemployment,” American Economic Journal: Macroeconomics, 1:1 (2009), pp. 84-110.

Faberman, R. Jason, and Guido Menzio. “Evidence on the Relationship Between Recruitment and the Starting Wage,” unpublished paper, 2010.

Farber, Henry. “Mobility and Stability: The Dynamics of Job Change in Labor Markets,” in Orley E. Ashenfelter and David Card, eds., Handbook of Labor Economics, Vol. 3B, 1999, pp. 2439-83.

Fujita, Shigeru. “What Do Worker Flows Tell Us About Cyclical Fluctuations in Employment?” Federal Reserve Bank of Philadelphia Business Review (Second Quarter 2007), pp. 1-10.

Fujita, Shigeru, and Gary Ramey. “The Cyclicality of Separation and Job Finding Rates,” International Economic Review, 50:2 (2009), pp. 415-30.

Hall, Robert E. “Job Loss, Job Finding, and Unemployment in the U.S. Economy over the Past Fifty Years,” NBER Macroeconomics Annual, Vol. 20 (2005), pp. 101-37.

Jovanovic, Boyan. “Job Matching and the Theory of Turnover,” Journal of Political Economy, 87:5 (1979), pp. 972-90.

Moen, Espen. “Competitive Search Equilibrium,” Journal of Political Economy, 105:2 (1997), pp. 385-411.

Montgomery, James D. “Social Networks and Labor Market Outcomes: Towards an Economic Analysis,” American Economic Review, 81:5 (1991), pp. 1408-18.

Mortensen, Dale T., and Christopher A. Pissarides. “Job Creation and Job Destruction in the Theory of Unemployment,” Review of Economic Studies, 61:3 (1994), pp. 397-415.

Pissarides, Christopher A. “Short-Run Equilibrium Dynamics of Unemployment, Vacancies and Real Wages,” American Economic Review, 75:4 (1985), pp. 676-90.

Pissarides, Christopher. Equilibrium Unemployment Theory, Second Edition. Cambridge, MA: MIT Press (2000).

Rees, Albert. “Information Networks in Labor Markets,” American Economic Review, 56:1/2 (1966), pp. 559-66.

Rogerson, Richard, Robert Shimer, and Randall Wright. “Search-Theoretic Models of the Labor Market: A Survey,” Journal of Economic Literature, 43:4 (2005), pp. 959-88.

Rothschild, Michael, and Joseph Stiglitz. “Equilibrium in Competitive Insurance Markets: An Essay on the Economics of Imperfect Information,” Quarterly Journal of Economics, 90:4 (1976), pp. 629-49.


Rehypothecation*

by Cyril Monnet

How would you feel if even though you were
making regular monthly payments, your
mortgage bank sold your house? This may
seem like an odd question, but this type
of situation happens every day in financial markets in
a practice known as rehypothecation. Although such
practices may be hard for nontraders to understand,
rehypothecation is widespread in financial markets.
Following the crisis of 2007-2009, the Dodd-Frank Act
put restrictions on rehypothecation for derivatives. To
understand the scope of these restrictions, we need to
understand the role of rehypothecation in financial trades.
In this article, Cyril Monnet discusses questions such as:
Which party to a financial trade does rehypothecation
benefit? Are there limits to its advantages? And how
should it be regulated? There are no hard and fast answers
to the last question, but the author notes that we can
make a more informed decision about the pros and
cons of various forms of regulation if we understand the
underlying economics.

Cyril Monnet
is a professor of
economics at
the University of
Bern and director
of the doctoral
program at the
Study Center
Gerzensee,
Switzerland.
When he wrote this article, he was a senior
economic advisor and economist in the
Philadelphia Fed’s Research Department.
This article is available free of charge at
www.philadelphiafed.org/research-and-data/
publications/.

How would you feel if even though
you were making regular monthly
payments, your mortgage bank sold
your house? This may seem like an
odd question, but this type of situation
happens every day in financial
markets: A borrower pledges a security

*The views expressed here are those of the
author and do not necessarily represent
the views of the Federal Reserve Bank of
Philadelphia or the Federal Reserve System.

as collateral to a lender, and the lender
sells the security to a third party, a
practice known as rehypothecation.
Although such practices may be hard for nontraders to understand, rehypothecation is widespread in financial markets.
It is easy to understand why a
secured lender — a lender whose
loans have been collateralized with
a security — would want to put the
security (that is, the collateral) to a
profitable use. After all, if the borrower
repays his loan, the lender could always
use the proceeds to re-purchase the
security and transfer it back to the
borrower. And if the borrower defaults,
the lender simply keeps the security. It
is more difficult to see why a borrower
would consent to this practice: The
borrower must take into account the
risk that the lender will not return
his collateral when the borrower
repays his loan. This risk is amplified
when the borrower has consented to
rehypothecation.
Following the crisis of 2007-2009, the Dodd-Frank Act, which
was passed by Congress in July 2010,
put restrictions on rehypothecation
for derivatives. To understand the
scope of these restrictions, we need to
understand the role of rehypothecation
in financial trades. Which party to
a financial trade does it benefit? Are
there limits to the advantages of
rehypothecation? And, in the end,
how should it be regulated? There
are no hard and fast answers to the
last question, but we can make a more
informed decision about the pros and
cons of various forms of regulations
if we understand the underlying
economics.

COUNTERPARTY RISK
AND COLLATERAL
To understand the use of
rehypothecation in financial markets
and its consequences, it is first
important to understand why and how
trades are collateralized.
Traders demand collateral to
insure against counterparty risk — the
risk that the party they are trading
with (their counterparty) defaults.
Counterparty risk is more acute for
long-term contractual obligations
such as commodity futures or forward
contracts — obligations to deliver a
given quantity of a commodity (pork bellies, soybeans, oil, etc.) at a fixed
price, on a given date in the future.1 In
this article I will focus on commodity
contracts just for concreteness, but the
arguments also apply more generally.
Broadly, default comes in two
types. First, traders may not fulfill
their promises if it is not in their best
interest to do so. This type of default
is called strategic default. Second, the
creditworthiness of each party to the
trade can deteriorate over time, the
result of poor market conditions or
bad investments. If a trader defaults
because it is insolvent, we say that this
is a nonstrategic default. To illustrate,
suppose that an onion farmer who
wants to insure against the fluctuation
of onion prices signs a forward contract
with a merchant promising to deliver
100 onions at $1 each on May 1, 2011.
If the crops are bad, the farmer may be
unable to deliver 100 onions. There is
not much traders can do to limit this
default event because it is nonstrategic.
Alternatively, price movements can
trigger a strategic default: If the price
of onions on May 1 is $2, the farmer
has a strong incentive to renege on
1 A forward contract differs from a futures contract in that it is traded over-the-counter, i.e., traders negotiate the terms of the contract between themselves, while a futures contract is traded on a centralized exchange.


his promise and sell his 100 onions
elsewhere for $2 each. More generally,
if the price goes down, the buyer has
a strong incentive to renege on its
promises to pay the (higher) contract
price, while if the price goes up, the
seller has a strong incentive to renege
on its promise to deliver the good at
the (lower) contract price.
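This two-sided incentive is easy to see with a little bookkeeping. The sketch below uses the 100-onion, $1 forward contract from the example; the function is my own illustration, not a model from the article.

# Illustrative bookkeeping for the onion forward example: 100 onions promised
# at a contract price of $1 each.
QUANTITY = 100
CONTRACT_PRICE = 1.00

def gains_from_reneging(spot_price):
    # The seller (farmer) gains by reneging and selling at the spot price when
    # it is above the contract price; the buyer (merchant) gains by reneging
    # and buying at the spot price when it is below the contract price.
    seller_gain = max(0.0, spot_price - CONTRACT_PRICE) * QUANTITY
    buyer_gain = max(0.0, CONTRACT_PRICE - spot_price) * QUANTITY
    return seller_gain, buyer_gain

for spot in (0.50, 1.00, 2.00):
    farmer, merchant = gains_from_reneging(spot)
    print(f"spot ${spot:.2f}: farmer's gain from default ${farmer:.0f}, "
          f"merchant's gain from default ${merchant:.0f}")
# At $2 the farmer has a $100 incentive to renege; at $0.50 the merchant has a
# $50 incentive; at the contract price neither side gains from defaulting.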
As a general rule, price fluctuations are very likely over time and creditworthiness is more likely to deteriorate over a longer time horizon. So contracts with a long maturity date, that is, contracts with settlement dates far in the future, are more prone to default by one of the traders, be it strategic or nonstrategic.
Requiring collateral is a nearly universal contractual way to address these risks of default. When traders carry out their business on an organized exchange, such as the Chicago Mercantile Exchange (CME), the exchange’s clearing agent handles collateral requirements (CME Clearing, in the case of the CME), and there is little traders can do to modify these requirements. However, many other contracts, such as forward contracts, are traded over-the-counter and not on an organized exchange. In over-the-counter markets, traders directly negotiate bilateral contracts, including collateral requirements. The amount of required collateral typically depends on the observable creditworthiness of the counterparty (for example, their credit rating), as well as overall market conditions, to control for strategic default. For example, if the price of onions falls between the day the contract is signed and the delivery date, the merchant may have to pledge more collateral; if the price increases, the farmer may have to pledge more collateral. Notice that the requirement to pledge collateral may switch from one party to the next, depending on how the price of onions moves. As a consequence, it is hard to predict who will need to pledge collateral at the time traders agree to a trade. To avoid confusion,

I will refer to the trader who receives
the collateral as the receiver and the
one who offers the collateral as the
pledgor. In our example, the pledgor
will be the merchant if the price of
onions goes down or the farmer if
the price of onions goes up. Notice
also that collateral requirements
serve two distinct functions. First,
collateral limits the receiver’s losses in
the event of default, whether strategic
or nonstrategic. Second, collateral
actually reduces strategic default by
raising the pledgor’s costs of defaulting.
The failure to pledge the required
collateral generally triggers a default
event that can terminate the trade.2
However, posting collateral is costly,
since traders have to keep assets,
including cash, in reserve, for the sole
purpose of securing their positions if
need be, and they have to forgo the
potential benefits of investing the

2 When a trade is terminated, the obligations are cancelled and the collateral is returned to its owner.


assets somewhere else. Thus, traders
have strong incentives to develop ways
to conserve collateral. This is where
rehypothecation plays a role.
REHYPOTHECATION, OR HOW
TO SAVE ON COLLATERAL
Before explaining how
rehypothecation works, let me define
what it is precisely. There are two
notions of rehypothecation. The first
(narrow) notion of rehypothecation
relates to how broker-dealers3 (and
no other market participants)
should handle the securities of
their customers: If they can use
their customers’ securities as they
see fit, we say that broker-dealers
enjoy a rehypothecation right. The
second notion, as proposed by the
International Swaps and Derivatives
Association (ISDA), applies to any
secured lender, not only to broker-dealers: The right of rehypothecation
refers to the right of a secured party
to sell, pledge, rehypothecate (in its
narrow definition above), assign,
invest, use, commingle, or otherwise
dispose of posted collateral. In
what follows, I will use the broader
definition of rehypothecation, which,
simply put, says that a lender with
collateral can use it as if it was his own
asset.
Now, picture yourself as a trader
on an over-the-counter market. If
business is good, you will be involved
in many repeated interactions with
traders at other firms. You will have

3 Under the Securities Exchange Act of 1934,
a “broker” is defined as “any person engaged in
the business of effecting transactions in securities for the account of others.” A “dealer” is
defined as “any person engaged in the business
of buying and selling securities for [his] own
account, through a broker or otherwise.” If the
person performs these functions on a private
basis and not as a business, he is considered
a trader. Depending on the securities traded,
a significant proportion of trades can be conducted by broker-dealers.


to take thousands of positions during
a typical day. So you can see that
negotiating every aspect of each
contract will be costly and very
inefficient, since it would slow down
your trading activity and others’. So,
in order to speed things up, market
participants typically transact under
standardized contractual terms known
as a Master Agreement.

Three Types of Master Agreements. A Master Agreement is a standardized form that specifies not only the terms of a trade, such as the price and the assets to be delivered, but also what constitutes events of default and termination events. These Master Agreements reduce legal uncertainty about how disputes will be resolved. The precise terms have evolved over time through the resolution of past disputes. Now, when two traders choose a Master Agreement, there is a body of case law that tells the contracting parties what the terms actually mean, how judges will interpret them, and so forth. In particular, a Master Agreement will specify the rights of the parties to a trade regarding the use of collateral in protecting their exposures. The most common Master Agreement is the ISDA Master Agreement.
To complement its Master Agreement, the ISDA provides three standard templates for handling collateral, known as the ISDA Credit Support Annexes. There are three types of Credit Support Annexes, and legally, they treat the handling of collateral very differently. Under the English Credit Support Deed (CSD), the pledgor remains the owner of the asset, and the receiver must open a segregated account in which the collateral cannot be combined with his own property. So the English CSD simply prohibits the reuse of collateral.
This is not the case under the New York Credit Support Annex (CSA).

Although the pledgor remains the
owner of the asset, the receiver gains
broad rights to use the collateral.
In particular, the receiver can
rehypothecate any posted collateral
it holds. By using the New York CSA
and agreeing to rehypothecation,
the pledgor gives up his right of
redemption, that is, the pledgor loses
his right to reclaim his collateral in
case the receiver’s exposure to the
pledgor declines. Giving the pledgor
an open-ended right to redeem
collateral whenever the receiver’s
exposure changes would make it nearly
impossible for the receiver to use the
collateral in another transaction; after
all, prices are constantly changing.
Traders can choose to amend the New
York CSA to disengage the provisions
that make rehypothecation possible.
However, we will see that this does not
seem to happen in practice.
Finally, under the English CSA,
the pledgor loses ownership over
the pledged asset, and instead, the
receiver gains full legal ownership of
the collateral. However, and contrary
to the New York CSA, the receiver has
the obligation to return “equivalent”


property when the pledgor’s exposure
is reduced. To provide additional
flexibility, traders can define the
meaning of “equivalent” in the English
CSA.
Why Choose One Type Over
Another? There are reasons traders
might prefer the New York CSA over
the English CSA or vice versa. It is
clear that the receiver enjoys more
flexibility under the English CSA,
since the receiver can return any type
of collateral as long as it is judged
equivalent. However, this flexibility
imposes legal risk on the pledgor, who
may not agree with either the receiver
or a court that the collateral provided
is truly equivalent. Then, why would
the pledgor accept the English CSA?
When negotiating the terms of trades,
the pledgor may still accept this type
of agreement if he gets a better price
in exchange for the additional risk.
Unfortunately, there are no data on
the relative use of English versus New
York CSAs, so it is difficult to check
whether the price terms actually reflect
this flexibility-risk tradeoff.
However, actual contracting practices strongly suggest that rehypothecation is useful. Traders could choose
to prohibit rehypothecation, either by
using an English CSD or by amending
a New York CSA. But, interestingly, a
high proportion of large traders choose
to allow rehypothecation. According to the 2010 ISDA margin survey, 44 percent
of all respondents to the survey and 93
percent of large dealers report rehypothecating collateral. To put these
numbers in some perspective, the
survey was conducted after one of the
most serious disturbances to financial
markets in decades. As I will discuss
later, the risk that a pledgor would be
unable to recover his collateral became
very real during the financial disturbances of 2008. Nonetheless, just
over a year later, significant fractions
of traders were willing to bear these


risks again. Given that traders have a
choice, rehypothecation appears to be
useful. But how?
Rehypothecation Increases
Market Liquidity When Collateral
Is Scarce. Rehypothecation lowers
traders’ funding liquidity needs, the
ease with which a trader can obtain
funding. This is quite intuitive. When
traders use rehypothecation, the
receiver can again pledge collateral to
borrow cash. Thus, the same collateral
can be used to support more than one
transaction, making it (more) liquid.
So rehypothecation allows the receiver
to fund his activity easily, rather than
having to scramble for cash or to
mobilize other assets on his balance
sheet. For example, suppose that in
addition to the onion futures, our
merchant also bought apple futures
for $2 and received $1 of collateral
for them. Now suppose onion prices
fall to 50 cents but there is no change
in apple prices. It is then very likely
that the onion farmer will demand
more collateral, and in this case, our
merchant could use the $1 pledged by
the apple farmer to satisfy this added
collateral requirement rather than use
his own reserves.
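The funding-liquidity saving in this example is simple arithmetic. The lines below are a minimal sketch using the dollar figures from the text; the names are mine and nothing here comes from a formal model in the article.

# A $1 margin call from the onion farmer can be met either with the merchant's
# own cash or, if rehypothecation is allowed, with the $1 of collateral
# received from the apple farmer.
margin_call_from_onion_farmer = 1.00
collateral_received_from_apple_farmer = 1.00

def own_cash_needed(rehypothecation_allowed):
    reusable = collateral_received_from_apple_farmer if rehypothecation_allowed else 0.0
    return max(0.0, margin_call_from_onion_farmer - reusable)

print("own cash needed without rehypothecation:", own_cash_needed(False))  # 1.0
print("own cash needed with rehypothecation:", own_cash_needed(True))      # 0.0
# Reuse lets the same $1 of collateral stand behind two positions, which is the
# funding-liquidity saving described in the text.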
Lowering traders’ funding liquidity
needs is important because it has
market-wide effects. Funding liquidity
affects market liquidity, the ease
with which a trader finds a suitable
counterparty. When it becomes easier
to secure funding, traders are willing
to take on some positions that would
otherwise require too much capital.
This improves market liquidity by
increasing the number of traders
willing to take positions (see the
article by Markus Brunnermeier and
Lasse Pedersen and the one by Ronel
Elul). And a higher degree of market
liquidity is usually associated with a
higher level of social welfare.
Clearly, the receiver benefits from
rehypothecation. But why should

the pledgor agree to rehypothecation
if the receiver is the real beneficiary
while the pledgor bears more risk?
While a more liquid market benefits
everyone, individual traders capture
only a small share of the total benefits
that all traders receive from enhanced
liquidity. However, the receiver’s
flexibility to reuse collateral could and
should be reflected in more favorable
terms of trade, at least in a competitive
market. For example, if the pledgor
uses cash collateral, the receiver could
agree to pay a higher interest rate
on this cash. Or perhaps the pledgor
might be required to post less collateral
if the receiver can reuse it.
That said, the amount of
compensation traders must receive
for allowing their counterparties to
repledge their collateral will depend
on various factors. One of these is
market structure. Large dealers may
be able to exploit their position in
order to extract more profit from their
customers. This is consistent with
the evidence that large dealers use
collateral rehypothecation relatively
more than others. Also, according to
Christian Johnson’s article, traders
(including dealers) may refuse to trade
if they cannot rehypothecate the
collateral. His account is consistent
with a market in which large dealers
simply make a take-it-or-leave-it offer
to all other traders. The two-sided
nature of the default risk is another
factor. Recall that traders can end up
as pledgor or receiver, depending on
market conditions. In this case, both
traders have an incentive to accept
rehypothecation, since it lowers their
funding costs if they turn out to be the
receiver. As of yet, there is no formal
empirical evidence on the relationship
between rehypothecation and other
contractual terms, and so it is difficult
to evaluate the relative importance of
these factors.


REHYPOTHECATION
AMPLIFIES MARKET STRAINS
When market conditions
deteriorate, rehypothecation can
amplify market strains. Simply
put, rehypothecation re-introduces
counterparty risk in case a trader
fails. This makes traders wary
about agreeing to rehypothecation
when conditions deteriorate. As a
consequence, funding liquidity needs
can increase, thus amplifying market
strains. In this section, I describe each
step in detail.
Rehypothecation Introduces
Counterparty Risk. First, consider
what happens if a trader fails. For
example, suppose our merchant
goes bust having rehypothecated
the farmer’s collateral. Legitimately,
the farmer will want to recover his
collateral. But since the merchant used
it to secure another of his transactions,
the farmer will not find it easy to get
his collateral back.
Legally, several scenarios are
possible. If the merchant has pledged
the collateral to a third party, this
third party has the right to seize the
collateral to cover the merchant’s
obligations. In this case, the farmer
loses his collateral. A second possible
scenario is when the farmer owes a
debt to the merchant; for example,
the merchant has made an early
partial payment to the onion farmer
on the total due. In this case, the
value of the farmer’s collateral can
be deducted from his debt. However,
the law would treat the farmer as an
unsecured creditor if the value of the
collateral exceeds the value of his debt.
As an unsecured creditor, the farmer
will typically receive only a piece of
the value of the collateral. In both
scenarios, the farmer who pledged
collateral ends up losing when the
merchant fails.
So rehypothecation lowers the
trader’s coverage against counterparty


risk. And in an interlinked market
with rehypothecation, the actual
amount of collateral in the market
can be much lower than the
amount of collateral that has been
contractually committed. Think of a
number of dealers linked in a chain
of trades. In an extreme case, each
dealer in the chain may find that
he isn’t collateralized at all, even if
contracts fully collateralize traders’
exposure! For example, suppose that
the apple producer is $100 in debt
to the merchant, who contracted a
debt of $100 with the onion farmer,
who himself owes a debt of $100
to the apple producer. If they all
rehypothecate the collateral, then the
trades do not look collateralized at
all. If the onion farmer defaults, no
collateral can really be seized, and it
is as if no collateral had been pledged.
Although this is an extreme example,
it illustrates how rehypothecation can
undo the beneficial effects of collateral.
More realistically, rehypothecation
can lead to chains of traders who
are much less protected than they
thought they were. The bottom line
is that rehypothecation increases
the same counterparty risk that the
collateral requirement was supposed
to tame. Note that if rehypothecation
was prohibited or not used, the total
available collateral would always
equal the collateral that has been
contractually committed, and each
trader would recover his collateral in
the event of default.
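The arithmetic behind this extreme case can be made explicit. The sketch below is only an accounting illustration of the three-trader chain described above; the data structure and totals are mine, not the author's.

# Each debtor owes $100 and must pledge $100 of collateral to his creditor.
debts = {  # debtor -> (creditor, amount owed = collateral contractually pledged)
    "apple producer": ("merchant", 100),
    "merchant": ("onion farmer", 100),
    "onion farmer": ("apple producer", 100),
}

committed = sum(amount for _, amount in debts.values())

# Without reuse, every trader funds his own pledge; with full reuse, each
# simply passes along the $100 he received and sets aside nothing of his own.
own_collateral_without_reuse = {debtor: amount for debtor, (_, amount) in debts.items()}
own_collateral_with_reuse = {debtor: 0 for debtor in debts}

print("collateral contractually committed:", committed)                                  # 300
print("collateral set aside without reuse:", sum(own_collateral_without_reuse.values())) # 300
print("collateral set aside with full reuse:", sum(own_collateral_with_reuse.values()))  # 0
# With full reuse, a default anywhere in the chain leaves nothing to seize,
# even though every contract looks fully collateralized on paper.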
Thinking about chains of traders
also helps to see another effect of
rehypothecation: Rehypothecation
increases the linkages between traders.
In our example, the onion farmer
and the third party who received
collateral from the merchant had no
formal contractual agreement at all.
If you asked the onion farmer, he
would say he had an agreement only
with the merchant. Nonetheless,

the merchant’s ability to pledge the
collateral means that the onion
farmer and the third party are also
interlinked. In this type of market,
individual traders are potentially
exposed to large numbers of
participants with whom they have no
formal agreement. Note, this effect
is in addition to the liquidity effects I
have already discussed.
Rehypothecation Amplifies
Market Strains When Traders
Become Nervous. When traders
grow anxious about the possibility
of a counterparty’s default, they will
tend to deny rehypothecation rights.
In a time of crisis, the financial health
of market participants can change
by the hour. As dealers grow unsure
of the quality of their counterparty,
they prefer to take precautionary
measures regarding their collateral.
So it is natural that in a time of crisis,
dealers become reluctant to agree to
rehypothecation, to ensure that they
know where their collateral is.
Unfortunately, dealers do not
take into account the effects of their
behavior on other traders, and this
reversal in collateral policy makes
funding pressures more severe. Other
dealers might then scramble for
collateral to secure the loans necessary
for their business. If collateral becomes
so scarce that dealers are unable to
place orders to buy securities, the
market can freeze.4 Note that although
every individual trader may be making
the best possible decision for himself
or herself, traders might act quite
differently if they could all make a
collective decision to continue to
accept rehypothecation agreements.
The freeze can be inefficient if traders
are financially sound but lack the
necessary liquid assets. In our simple

4 See Yaron Leitner’s Business Review article on market freezes.


example, while everyone would be
better off if the (financially sound)
merchant actually buys a forward
contract from the onion farmer,
the merchant’s inability to pledge
collateral means that he will have to
buy onions on the spot market at a
higher price5 and will have to charge
his clients more. This is inefficient,
since the farmer, the merchant, and
the merchant’s customers would have
preferred that a forward contract be
written before buying and selling
on the onion market revealed the
actual spot price. So a sudden change
in a trader’s willingness to accept
rehypothecation amplifies market
strains and makes (inefficient) market
freezes more likely.
Unfortunately, a sudden reduction
in the practice of rehypothecation is
not just a theoretical possibility, since
it happened during the financial crisis
of 2008-2009. In their 2010 article,
Manmohan Singh and James Aitken
show that rehypothecation declined
rapidly after Lehman Brothers failed
on September 15, 2008. The total
collateral pledged that could be reused
declined from $4.5 trillion at the end
of 2007 to $2.1 trillion at the end of
2009. In their 2009 article, Singh and
Aitken show that the total amount of
assets available as collateral decreased
by up to $5 trillion as a result of
reduced rehypothecation and collateral
hoarding. At the same time, credit
markets seized up.
During the height of the crisis,
dealers found it difficult to conduct
their business, since they could

5 A spot market is a market in which goods or securities are traded for cash, and each transaction is settled immediately.
6 A haircut is a percentage that is subtracted from the value of the collateral. Hence, only collateral worth at least $100 will be accepted to secure a $90 loan with a 10 percent haircut.


not find proper counterparties that
would lend to them without stringent
contractual guarantees. For example,
counterparties would accept only
Treasury securities as collateral, and
they would apply large collateral
haircuts.6 The Federal Reserve System
(and other government agencies)
viewed this market freeze as inefficient
and felt that intervention was justified
to “bolster market liquidity and
promote orderly market functioning.
Liquid, well-functioning markets
are essential for the promotion of
economic growth.”7 To ease large
dealers’ funding needs, the Federal
Reserve put in place a back-stop
facility for dealers, the Primary Dealer
Credit Facility (PDCF). Under this
program, large dealers could borrow
from the Federal Reserve’s discount
window using as collateral a broad
set of securities (with appropriate
haircuts), not only Treasury securities.
As described in the article by Tobias

7 From the March 16, 2008 press release from
the Federal Reserve Board announcing the
creation of the Primary Dealer Credit Facility
(PDCF).

Adrian, Christopher Burke, and James
McAndrews, PDCF usage immediately
spiked to $40 billion before receding
progressively, as conditions in the
financing markets improved and the
pricing of the PDCF became less
attractive. As tensions from the Bear
Stearns bailout abated, use of the
PDCF stopped altogether in mid-July
2008. But then came the failure of
Lehman Brothers on September 15.
Perceiving that Lehman Brothers’
difficulties could contaminate other
dealers, lenders imposed higher
haircuts and accepted only high-quality securities as collateral. As a result, dealers struggled to obtain funding. As a preventive policy, the Fed expanded the types of PDCF-eligible collateral on September 14. As
a result, PDCF usage exploded to $59.7
billion on Wednesday, September 17,
from no activity during the previous
week. Eventually, PDCF borrowing
reached more than $140 billion in
October 2008. Adrian, Burke, and
McAndrews conclude that in this
instance, the PDCF fulfilled one of the
purposes for which it was intended:
to be available in the event that a
failure of a primary dealer led to severe
funding disruptions for the surviving
dealers.
SHOULD REHYPOTHECATION
BE PROHIBITED?
The possibility that (the lack
of) rehypothecation can amplify

8 The act stipulates that (A) “a futures commission merchant shall treat and deal with all money, securities, and property of any swaps customer received to margin, guarantee or secure a swap cleared by or through a derivatives clearing organization as belonging to the swaps customer,” and (B) “Money, securities, and property of a swaps customer described in (A) shall be separately accounted for and shall not be commingled with the funds of the futures commission merchant or be used to margin, secure or guarantee any trades or contracts of any swaps customer or person other than the person for whom the same are held.”


market strains and lead to inefficient
market freezes provides a partial
rationale for the Dodd-Frank Act’s
prohibition against rehypothecation
for many derivative transactions.
More precisely, the Dodd-Frank Act limits
rehypothecation by requiring that
most swap contracts be cleared by
a derivatives clearing organization,
such as a central counterparty, and
that the collateral pledged be held
in a segregated account with no
possibility of rehypothecation.8 These
provisions of the Dodd-Frank Act
will limit rehypothecation because a
central counterparty imposes collateral
requirements to clear trades and holds
the collateral on behalf of the traders.9
Therefore, the central counterparty is
the sole receiver of the collateral, and
it will not be rehypothecated. Other
contracts that are not considered
swap contracts under the act are not
(yet) subject to these requirements
(for example, commodity futures or
some security futures). While a limit
to rehypothecation will make trading
safer for those market participants who
need to pledge collateral, there may be
significant costs to limiting this market
practice for most derivatives contracts:
The cost of pledging collateral may
increase, funding liquidity needs may
become more severe, and overall
market liquidity may deteriorate.
During the financial crisis,
in spite of increased counterparty
risk, derivatives traders still agreed
to rehypothecation (although at a
lower level than before the crisis)
and continued to do so after the
crisis receded, as shown by Singh and
Aitken in their 2010 article. This use
of rehypothecation even under adverse
conditions might suggest that traders

9 See my earlier Business Review article or my
working paper with Thorsten Koeppl for more
details on central counterparty clearing.


view rehypothecation as valuable
in itself. If traders did not find the
benefits of rehypothecation greater
than the costs, they did have means
for preventing its practice. Traders
could prohibit rehypothecation by,
for instance, amending the New York
CSA.10 A second option is to use an
English CSD. This option is rather
inexpensive and guarantees that the
pledgor will get his collateral back.
The fact that some traders did not rely
on either option suggests that they
may have seen value in the practice,
and that limiting rehypothecation via
regulation may impose costs.

Alternatively, we can’t rule out the possibility that the practice occurred because some participants were able to exploit their market power to impose rehypothecation on other traders. If the receiver has a monopoly over the provision of some securities, he can cut out any trader who refuses the rehypothecation of his collateral. In this case, we would also observe that market participants use rehypothecation during moments of stress, not because they want to but because they have to. If so, limiting rehypothecation is an indirect way of addressing abusive positions in financial markets.
In light of the evidence of the use of rehypothecation, both theories are plausible, although they have very different implications for regulators. Unfortunately, without more micro-level data on the use of rehypothecation, it is difficult to know which of the two theories is correct.

10 It is true that this option is costly, since traders who want to amend a CSA would need to agree on the content of the amendment. Because negotiation takes time, adding an amendment in itself might defeat the whole purpose of using a Master Agreement, and, in fact, it seems that the credit annexes are rarely amended.

CONCLUSION
Before the enactment of the Dodd-Frank bill, rehypothecation was widely used by market participants. In this article, I have tried to explain why this is so while also highlighting some of the drawbacks to individual

traders and to the market as a whole.
In a nutshell, rehypothecation reduces
the cost of pledging collateral, it
reduces funding liquidity needs, and
it improves market liquidity. However,
rehypothecation carries problems of
its own, since it seemingly has the
potential to introduce market-wide
counterparty risks that are difficult
for a single trader to control and can
amplify market strains.
While, at this stage, it is not
clear if rehypothecation should be
encouraged or limited, the Dodd-Frank Act took the stance that the
uncertainties in cases of default
were too strong to leave current
rehypothecation and clearing
practices in place. Although central
counterparty clearing is desirable
for standardized contracts, it
remains to be seen how prohibiting
rehypothecation will affect the
derivatives markets. BR


REFERENCES

Adrian, Tobias, Christopher Burke, and James McAndrews. “The Federal Reserve’s Primary Dealer Credit Facility,” Federal Reserve Bank of New York, Current Issues in Economics and Finance, 15:4 (August 2009).

Brunnermeier, Markus, and Lasse Pedersen. “Market Liquidity and Funding Liquidity,” Review of Financial Studies, 22:6 (2008), pp. 2201-38.

Elul, Ronel. “Liquidity Crises,” Federal Reserve Bank of Philadelphia Business Review (Second Quarter 2008).

International Swaps and Derivatives Association. “Market Review of OTC Derivative Bilateral Collateralization Practices,” ISDA (2010).

International Swaps and Derivatives Association. “ISDA Margin Survey 2010, Preliminary Results,” ISDA (April 2010).

Johnson, Christian. “Derivatives and Rehypothecation Failure: It’s 3:00 pm. Do You Know Where Your Collateral Is?” Arizona Law Review, 30 (1997).

Koeppl, Thorsten, and Cyril Monnet. “The Emergence and Future of Central Counterparties,” Federal Reserve Bank of Philadelphia Working Paper 10-20 (September 2010).

Leitner, Yaron. “Why Do Markets Freeze?” Federal Reserve Bank of Philadelphia Business Review (Second Quarter 2011).

Monnet, Cyril. “Let’s Make It Clear: How Central Counterparties Save(d) the Day,” Federal Reserve Bank of Philadelphia Business Review (First Quarter 2010).

Singh, Manmohan, and James Aitken. “Deleveraging After Lehman: Some Evidence from Rehypothecation,” IMF Working Paper 09/42 (2009).

Singh, Manmohan, and James Aitken. “The (Sizable) Role of Rehypothecation in the Shadow Banking System,” IMF Working Paper 10/172 (2010).

Research Rap

Abstracts of research papers produced by the economists at the Philadelphia Fed

You can find more Research Rap abstracts on our website at: www.philadelphiafed.org/research-and-data/
publications/research-rap/. Or view our working papers at: www.philadelphiafed.org/research-and-data/
publications/.

A SURVEY OF EMPIRICAL
RESEARCH ON FISCAL POLICY
ANALYSIS BASED ON REAL-TIME
DATA
This paper surveys the empirical
research on fiscal policy analysis based
on real-time data. This literature can be
broadly divided into three groups that
focus on: (1) the statistical properties of
revisions in fiscal data; (2) the political and
institutional determinants of fiscal data
revisions and of one-year-ahead projection
errors by governments, and (3) the reaction
of fiscal policies to the business cycle. It
emerges that, first, fiscal data revisions are
large and initial releases are biased estimates
of final values. Second, the presence of
strong fiscal rules and institutions leads to
relatively more accurate releases of fiscal
data and small deviations of fiscal outcomes
from government plans. Third, the cyclical
stance of fiscal policies is estimated to be
more “counter-cyclical” when real-time data
are used instead of ex-post data. Finally,
more work is needed for the development
of real-time data sets for fiscal policy
analysis. In particular, a comprehensive
real-time data set, including fiscal variables
for industrialized (and possibly developing)
countries, published and maintained by
central banks or other institutions, is still
missing.
Working Paper 11-25, “Real-Time Data
and Fiscal Policy Analysis: A Survey of the
Literature,” Jacopo Cimadomo, European
Central Bank


A QUANTITATIVE EQUILIBRIUM
MODEL OF THE HOUSING SECTOR
The authors construct a quantitative
equilibrium model of the housing sector
that accounts for the homeownership
rate, the average foreclosure rate, and the
distribution of home-equity ratios across
homeowners prior to the recent boom and
bust in the housing market. They analyze
the key mechanisms that account for
these facts, including the preferential tax
treatment of housing and inflation. The
authors then use the model to gain a deeper
understanding of the recent housing and
mortgage crisis by studying the consequence
of an unanticipated increase in the supply
of housing (overbuilding shock). They
show that the model can account for the
observed decline in house prices and much
of the increase in the foreclosure rate if two
additional forces are taken into account:
(i) the lengthening of the time to complete
a foreclosure (during which a defaulter
can stay rent-free in his house) and (ii) the
tightening of credit constraints in the market
for new mortgages.
Working Paper 11-26, “A Quantitative
Analysis of the U.S. Housing and Mortgage
Markets and the Foreclosure Crisis,”
Satyajit Chatterjee, Federal Reserve Bank of
Philadelphia, and Burcu Eyigungor, Federal
Reserve Bank of Philadelphia
ESTIMATING SCALE ECONOMIES
AT LARGE BANKS
Earlier studies found little evidence of

scale economies at large banks; later studies using data
from the 1990s uncovered such evidence, providing a
rationale for very large banks seen worldwide. Using
more recent data, the authors estimate scale economies
using two production models. The standard risk-neutral model finds little evidence of scale economies.
The model using more general risk preferences and
endogenous risk-taking finds large scale economies.
The authors show that these economies are not driven
by too-big-to-fail considerations. They evaluate the cost
implications of breaking up the largest banks into banks
of smaller size.
Working Paper 11-27, “Who Said Large Banks Don’t
Experience Scale Economies? Evidence from a Risk-Return-Driven Cost Function,” Joseph P. Hughes, Rutgers
University, and Loretta J. Mester, Federal Reserve Bank of
Philadelphia
CAN MONETARY POLICY ENHANCE THE
FUNCTIONING OF THE PRIVATE CREDIT
SYSTEM?
The authors investigate the extent to which
monetary policy can enhance the functioning of the
private credit system. Specifically, they characterize
the optimal return on money in the presence of credit
arrangements. There is a dual role for credit: It allows
buyers to trade without fiat money and also permits
them to borrow against future income. However, not
all traders have access to credit. As a result, there is
a social role for fiat money because it allows agents
to self-insure against the risk of not being able to use
credit in some transactions. The authors consider a
(nonlinear) monetary mechanism that is designed to
enhance the credit system. An active monetary policy
is sufficient for relaxing credit constraints. Finally, they
characterize the optimal monetary policy and show that
it necessarily entails a positive inflation rate, which is
required to induce cooperation in the credit system.
Working Paper 11-28, “Optimal Monetary Policy in
a Model of Money and Credit,” Pedro Gomis-Porqueras,
Monash University, and Daniel R. Sanches, Federal
Reserve Bank of Philadelphia
HOW STRATEGIC COMPLEMENTARITIES
INTERACT WITH MARKOV-PERFECT
POLICIES
The literature on optimal monetary policy in
New Keynesian models under both commitment and
discretion usually solves for the optimal allocations
that are consistent with a rational expectations
market equilibrium, but it does not study whether the
policy can be implemented given the available policy
instruments. Recently, King and Wolman (2004)
have provided an example for which a time-consistent
policy cannot be implemented through the control
of nominal money balances. In particular, they find
that equilibria are not unique under a money stock
regime and they attribute the nonuniqueness to
strategic complementarities in the price-setting process.
The authors clarify how the choice of monetary
policy instrument contributes to the emergence of
strategic complementarities in the King and Wolman
(2004) example. In particular, they show that for an
alternative monetary policy instrument, namely, the
nominal interest rate, there exists a unique Markov-perfect equilibrium. The authors also discuss how a
time-consistent planner can implement the optimal
allocation by simply announcing his policy rule in a
decentralized setting.
Working Paper 11-29, “On the Implementation of
Markov-Perfect Monetary Policy,” Michael Dotsey, Federal
Reserve Bank of Philadelphia, and Andreas Hornstein,
Federal Reserve Bank of Richmond
ANALYZING THE STRUCTURED FINANCE
ASSET-BACKED SECURITIES CDO MARKET
This paper conducts an in-depth analysis of
structured finance asset-backed securities collateralized
debt obligations (SF ABS CDOs), the subset of CDOs
that traded on the ABS CDO desks at the major
investment banks and were a major contributor to
the global financial panic of August 2007. Despite
their importance, we have yet to determine the exact
size and composition of the SF ABS CDO market or
get a good sense of the write-downs these CDOs will
generate. In this paper the authors identify these SF
ABS CDOs with data from Intex©, the source data
and valuation software for the universe of publicly
traded ABS/MBS securities and SF ABS CDOs. They
estimate that 727 publicly traded SF ABS CDOs were
issued between 1999 and 2007, totaling $641 billion.
Once identified, they describe how and why multisector
structured finance CDOs became subprime CDOs,
and show why they were so susceptible to catastrophic
losses. The authors then track the flows of subprime
bonds into CDOs to document the enormous cross-referencing of subprime securities into CDOs. They
calculate that $201 billion of the underlying collateral
of these CDOs was referenced by synthetic credit
default swaps (CDSs) and show how some 5,500 BBB-rated subprime bonds were placed or referenced into
these CDOs some 37,000 times, transforming $64
billion of BBB subprime bonds into $140 billion of
CDO assets. For the valuation exercise, the authors
estimate that total write-downs on SF ABS CDOs will
be $420 billion, 65 percent of original issuance balance,
with over 70 percent of these losses having already
been incurred. They then extend the work of Barnett-Hart (2009) to analyze the determinants of expected
losses on the deals and AAA bonds and examine
the performance of the dealers, collateral managers,
and rating agencies. Finally, the authors discuss the
implications of their findings for the “subprime CDO
crisis” and discuss the many areas for future work.
Working Paper 11-30, “Collateral Damage: Sizing
and Assessing the Subprime CDO Crisis,” Larry Cordell,
Federal Reserve Bank of Philadelphia; Yilin Huang, Federal
Reserve Bank of Philadelphia; and Meredith Williams,
Federal Reserve Bank of Philadelphia
NEW METHODOLOGIES FOR EVALUATING
OUT-OF-SAMPLE FORECASTING
PERFORMANCE
This paper proposes new methodologies for
evaluating out-of-sample forecasting performance that
are robust to the choice of the estimation window size.
The methodologies involve evaluating the predictive
ability of forecasting models over a wide range of
window sizes. The authors show that the tests proposed
in the literature may lack the power to detect predictive
ability and might be subject to data snooping across
different window sizes if used repeatedly. An empirical
application shows the usefulness of the methodologies
for evaluating exchange rate models’ forecasting ability.
Working Paper 11-31, “Out-of-Sample Forecast Tests
Robust to the Choice of Window Size,” Barbara Rossi,
Duke University, and Visiting Scholar, Federal Reserve
Bank of Philadelphia, and Atsushi Inoue, North Carolina
State University
EFFECTS OF FISCAL POLICY UNCERTAINTY
ON AGGREGATE ECONOMIC ACTIVITY
The authors study the effects of changes in
uncertainty about future fiscal policy on aggregate
economic activity. Fiscal deficits and public debt
have risen sharply in the wake of the financial crisis.
While these developments make fiscal consolidation
inevitable, there is considerable uncertainty about the
policy mix and timing of such budgetary adjustment.
To evaluate the consequences of this increased
uncertainty, the authors first estimate tax and spending
processes for the U.S. that allow for time-varying
volatility. They then feed these processes into an
otherwise standard New Keynesian business cycle
model calibrated to the U.S. economy. The authors find
that fiscal volatility shocks have an adverse effect on
economic activity that is comparable to the effects of a
25-basis-point innovation in the federal funds rate.
Working Paper 11-32, “Fiscal Volatility Shocks and
Economic Activity,” Jesus Fernandez-Villaverde, University
of Pennsylvania; Pablo Guerron-Quintana, Federal
Reserve Bank of Philadelphia; Keith Kuester, Federal
Reserve Bank of Philadelphia; and Juan Rubio-Ramirez,
Duke University
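The kind of fiscal process with time-varying volatility that the authors estimate can be illustrated with an AR(1) whose innovation variance itself follows a persistent process. In the Python sketch below, all parameter values are arbitrary placeholders rather than the paper's estimates, and a "fiscal volatility shock" is an innovation to the log-volatility equation.

import numpy as np

rng = np.random.default_rng(1)
T = 200
rho, rho_sigma = 0.9, 0.9            # persistence of the level and of log-volatility (illustrative)
sigma_bar, eta = np.log(0.01), 0.3   # mean log-volatility and size of volatility shocks (illustrative)

tau = np.zeros(T)                    # deviation of the tax rate from its mean
log_sigma = np.full(T, sigma_bar)    # time-varying log standard deviation

for t in range(1, T):
    # Volatility shock: an innovation that raises uncertainty about future policy.
    log_sigma[t] = ((1 - rho_sigma) * sigma_bar + rho_sigma * log_sigma[t - 1]
                    + eta * rng.standard_normal())
    # Level shock, scaled by the current (stochastic) volatility.
    tau[t] = rho * tau[t - 1] + np.exp(log_sigma[t]) * rng.standard_normal()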
INCORPORATING LONG-TERM DEBT INTO
MODELS OF SOVEREIGN DEBT
In this paper, the authors advance the theory
and computation of Eaton-Gersovitz style models of
sovereign debt by incorporating long-term debt, by proving the existence of an equilibrium price function with the property that the interest rate on debt is increasing in the amount borrowed, and by implementing a novel method of computing the equilibrium accurately.
Using Argentina as a test case, they show that
incorporating long-term debt allows the model to match
the average external debt-to-output ratio, the average spread on external debt, and the standard deviation of spreads, while simultaneously improving the model’s ability to account for Argentina’s other cyclical facts.
Working Paper 11-33, “Maturity, Indebtedness, and
Default Risk,” Satyajit Chatterjee, Federal Reserve Bank of
Philadelphia, and Burcu Eyigungor, Federal Reserve Bank
of Philadelphia
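A stripped-down illustration of why long-term debt prices fall, and implied interest rates rise, with the amount borrowed: take a bond that matures with probability lambda and pays coupon kappa, assume a default probability that increases in the debt level, and, purely for simplicity, assume debt is rolled over at a constant level. The Python sketch below is not the authors' equilibrium computation (the paper determines default risk and prices jointly); it only shows the pricing-schedule logic.

# Minimal sketch with illustrative parameters and an exogenous default risk function.
r, lam, kappa = 0.04, 0.2, 0.03      # risk-free rate, maturity rate, coupon

def default_prob(b):
    # Hypothetical default probability, increasing in the debt level b.
    return min(0.9, 0.02 * b)

def bond_price(b):
    # With debt held constant, q solves q = (1-d)*(lam + (1-lam)*(kappa + q))/(1+r).
    d = default_prob(b)
    return (1 - d) * (lam + (1 - lam) * kappa) / (1 + r - (1 - d) * (1 - lam))

for b in (1, 5, 10, 20):
    # The price falls (and the implied yield rises) as borrowing increases.
    print(f"debt={b:>2}  price={bond_price(b):.3f}")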
DO OIL PRICES HAVE A STABLE OUT-OF-SAMPLE RELATIONSHIP WITH THE
CANADIAN/U.S. DOLLAR EXCHANGE RATE?
This paper investigates whether oil prices have a
reliable and stable out-of-sample relationship with the
Canadian/U.S. dollar nominal exchange rate. Despite
state-of-the-art methodologies, the authors find little
systematic relation between oil prices and the exchange
rate at the monthly and quarterly frequencies. In
contrast, the main contribution is to show the existence
of a very short-term relationship at the daily frequency,
which is rather robust and holds regardless of whether the
authors use contemporaneous (realized) or lagged oil
prices in their regression. However, in the latter case
the predictive ability is ephemeral, mostly appearing
after instabilities have been appropriately taken into
account.
Working Paper 11-34, “Can Oil Prices Forecast
Exchange Rates?,” Domenico Ferraro, Duke University;
Ken Rogoff, Harvard University; and Barbara Rossi, Duke
University, and Visiting Scholar, Federal Reserve Bank of
Philadelphia
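The daily-frequency exercise amounts to rolling one-step regressions of exchange rate changes on oil price changes, run either with the contemporaneous ("realized") oil price or with its lag. The Python sketch below uses simulated placeholder series rather than the Canadian dollar and oil price data, and it omits the instability-robust tests the authors rely on.

import numpy as np

def oos_rmse(ds, doil, lag, window=250):
    # Rolling one-step regression of exchange rate changes on oil price changes,
    # using either the contemporaneous ("realized", lag=0) or lagged (lag=1) regressor.
    errs = []
    for t in range(window + lag, len(ds)):
        s = np.arange(t - window, t)
        X = np.column_stack([np.ones(window), doil[s - lag]])
        b = np.linalg.lstsq(X, ds[s], rcond=None)[0]
        errs.append(ds[t] - (b[0] + b[1] * doil[t - lag]))
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(2)
doil = rng.standard_normal(1000)                  # placeholder daily oil price changes
ds = 0.1 * doil + rng.standard_normal(1000)       # placeholder daily exchange rate changes
print("realized oil prices:", round(oos_rmse(ds, doil, lag=0), 3))
print("lagged oil prices:  ", round(oos_rmse(ds, doil, lag=1), 3))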

IMPLICATIONS OF ELIMINATING
BANKRUPTCY PROTECTION FOR
INDEBTED INDIVIDUALS
What are the positive and normative implications
of eliminating bankruptcy protection for indebted
individuals? Without bankruptcy protection, creditors
can collect on defaulted debt to the extent permitted
by wage garnishment laws. The elimination lowers
the default premium on unsecured debt and permits
low-net-worth individuals suffering bad earnings shocks
to smooth consumption by borrowing. There is a large
increase in consumer debt financed essentially by
super-wealthy individuals, a modest drop in capital per
worker, and a higher frequency of consumer default.
Average welfare rises by 1 percent of consumption
in perpetuity, with about 90 percent of households
favoring the change.
Working Paper 11-35, “Dealing with Consumer
Default: Bankruptcy vs. Garnishment,” Satyajit Chatterjee,
Federal Reserve Bank of Philadelphia, and Grey Gordon,
University of Pennsylvania

ESTIMATING THE VALUE OF THE
TOO-BIG-TO-FAIL SUBSIDY
This paper estimates the value of the too-big-to-fail
(TBTF) subsidy. Using data from the merger boom of
1991-2004, the authors find that banking organizations
were willing to pay an added premium for mergers that
would put them over the asset sizes that are commonly
viewed as the thresholds for being TBTF. They estimate
at least $15 billion in added premiums for the eight
merger deals that brought the organizations to over
$100 billion in assets. In addition, the authors find that
both the stock and bond markets reacted positively
to these TBTF merger deals. Their estimated TBTF
subsidy is large enough to create serious concern,
particularly since the recently assisted mergers have
effectively allowed TBTF banking organizations to become even bigger and nonbanks to become part of TBTF banking organizations, thus extending the
TBTF subsidy beyond banking.
Working Paper 11-37, “How Much Did Banks Pay
to Become Too-Big-to-Fail and to Become Systemically
Important?,” Elijah Brewer III, DePaul University, and
Julapa Jagtiani, Federal Reserve Bank of Philadelphia

STUDYING THE RELATIONSHIP BETWEEN
THE SEVERITY OF THE LEMONS PROBLEM
AND MARKET LIQUIDITY
The authors study a dynamic, decentralized
lemons market with one-time entry and characterize
its set of nonstationary equilibria. This framework
offers a theory of how a market suffering from adverse
selection recovers over time endogenously; given an
initial fraction of lemons, the model provides sharp
predictions about how prices and the composition of assets evolve over time. Comparing economies in which the initial fraction of lemons varies, the authors study the relationship between the severity of the lemons problem and market liquidity. They use this framework to understand how asymmetric information contributed to the breakdown in trade of asset-backed securities during the recent financial crisis and to evaluate the efficacy of one policy that was implemented in an attempt to restore liquidity.
Working Paper 11-36, “Trading Dynamics in Decentralized Markets with Adverse Selection,” Braz Camargo, São Paulo School of Economics—FGV, and Benjamin Lester, Federal Reserve Bank of Philadelphia
THE CONTINUING IMPORTANCE
OF PORTAGE SITES
The authors examine portage sites in the U.S.
South, Mid-Atlantic, and Midwest, including those
on the fall line, a geomorphological feature in the
southeastern U.S. marking the final rapids on rivers
before the ocean. Historically, waterborne transport of
goods required portage around the falls at these points,
while some falls provided water power during early
industrialization. These factors attracted commerce and
manufacturing. Although these original advantages
have long since been made obsolete, the authors
document the continuing importance of these portage
sites over time. They interpret these results as path
dependence and contrast explanations based on sunk
costs interacting with decreasing versus increasing
returns to scale.
Working Paper 11-38, “Portage and Path Dependence,”
Hoyt Bleakley, University of Chicago, and Jeffrey Lin,
Federal Reserve Bank of Philadelphia
MACROECONOMIC AND WELFARE
IMPLICATIONS OF RELAXING BORROWING
CONSTRAINTS
Is the observed large increase in consumer
indebtedness since 1970 beneficial for U.S.
consumers? This paper quantitatively investigates the
macroeconomic and welfare implications of relaxing
borrowing constraints using a model with preferences
featuring temptation and self-control. The model can
capture two contrasting views: the positive view, which
links increased indebtedness to financial innovation
and thus better consumption smoothing, and the
negative view, which is associated with consumers’
over-borrowing. The author finds that the latter is
sizable: The calibrated model implies a social welfare
loss equivalent to a 0.4 percent decrease in per-period
consumption from relaxing the borrowing constraint to a degree consistent with the observed increase in indebtedness.
The welfare implication is strikingly different from the
standard model without temptation, which implies a
welfare gain of 0.7 percent, even though the two models
are observationally similar. Naturally, the optimal level
of the borrowing limit is significantly tighter according
to the temptation model, as a tighter borrowing limit
helps consumers by preventing over-borrowing.
Working Paper 11-39, “Rising Indebtedness and
Temptation: A Welfare Analysis,” Makoto Nakajima,
Federal Reserve Bank of Philadelphia
EXAMINING THE FORECASTING ABILITY OF
PHILLIPS CURVE MODELS
The Phillips curve has long been used as a
foundation for forecasting inflation. Yet numerous
studies indicate that over the past 20 years or so,
inflation forecasts based on the Phillips curve
generally do not predict inflation any better than
a univariate forecasting model. In this paper, the
authors take a deeper look at the forecasting ability
of Phillips curves from both an unconditional and
a conditional view. Namely, they use the test results
developed by Giacomini and White (2006) to examine
the forecasting ability of Phillips curve models. The
authors’ main results indicate that forecasts from their
Phillips curve models are unconditionally inferior
to those of their univariate forecasting models and
sometimes the difference is statistically significant.
However, the authors do find that conditioning on
various measures of the state of the economy does at
times improve the performance of the Phillips curve
model in a statistically significant way. Interestingly, the improvement is more likely to occur at longer
forecasting horizons and over the sample period
1984Q1–2010Q3. Strikingly, the improvement is
asymmetric — Phillips curve forecasts tend to be more
accurate when the economy is weak and less accurate
when the economy is strong. It, therefore, appears
that forecasters should not fully discount the inflation
forecasts of Phillips curve-based models when the
economy is weak.
Working Paper 11-40, “Do Phillips Curves
Conditionally Help to Forecast Inflation?,” Michael Dotsey,
Federal Reserve Bank of Philadelphia; Shigeru Fujita,
Federal Reserve Bank of Philadelphia; and Tom Stark,
Federal Reserve Bank of Philadelphia
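The distinction between unconditional and conditional predictive ability can be sketched as follows: compare average squared-error loss differentials between the two forecasts, and then ask whether those differentials are predictable from the lagged state of the economy, in the spirit of Giacomini and White (2006). The Python example below uses placeholder forecast errors and a placeholder weak-economy indicator; it is a simplification, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(3)
T = 150
e_pc = rng.standard_normal(T) * 1.1     # placeholder Phillips curve forecast errors
e_uv = rng.standard_normal(T)           # placeholder univariate forecast errors
weak = (rng.standard_normal(T) < 0).astype(float)   # placeholder weak-economy dummy

d = e_pc**2 - e_uv**2                   # squared-error loss differential

# Unconditional comparison: is the mean loss differential zero (Diebold-Mariano style)?
t_uncond = d.mean() / (d.std(ddof=1) / np.sqrt(T))

# Conditional comparison (Giacomini-White flavor, homoskedastic for simplicity):
# regress the differential on a constant and the lagged state, test joint significance.
X = np.column_stack([np.ones(T - 1), weak[:-1]])
b = np.linalg.lstsq(X, d[1:], rcond=None)[0]
resid = d[1:] - X @ b
cov = np.linalg.inv(X.T @ X) * resid.var(ddof=2)
wald = float(b @ np.linalg.inv(cov) @ b)   # compare with a chi-squared(2) critical value
print(round(t_uncond, 2), round(wald, 2))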
POOLING INFORMATION IN
ESTIMATES OF GDP TO CONSTRUCT
A COMBINED ESTIMATE
Two often-divergent U.S. GDP estimates are
available: a widely used expenditure-side version GDPE,
and a much less widely used income-side version
GDPI. The authors propose and explore a “forecast
combination” approach to combining them. They
then put the theory to work, producing a superior
combined estimate of GDP growth for the U.S., GDPC.
The authors compare GDPC to GDPE and GDPI, with
particular attention to behavior over the business cycle.
They discuss several variations and extensions.
Working Paper 11-41, “Improving GDP Measurement:
A Forecast Combination Perspective,” S. Boragan Aruoba,
University of Maryland, and Visiting Scholar, Federal
Reserve Bank of Philadelphia; Francis X. Diebold,
University of Pennsylvania, and Visiting Scholar, Federal
Reserve Bank of Philadelphia; Jeremy Nalewaik, Federal
Reserve Board of Governors; Frank Schorfheide, University
of Pennsylvania, and Visiting Scholar, Federal Reserve
Bank of Philadelphia; and Dongho Song, University of
Pennsylvania
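The combination idea can be illustrated with a simple convex combination of the two measures, with more weight on the series thought to be less noisy. The weighting rule and numbers in the Python sketch below are illustrative only; the paper's framework, which treats both series as noisy measurements of latent true GDP, is considerably richer.

import numpy as np

def combine(gdp_e, gdp_i, noise_var_e, noise_var_i):
    # Convex combination GDPC = w*GDPE + (1-w)*GDPI, with more weight on the
    # series believed to be measured with less noise (illustrative rule only).
    w = noise_var_i / (noise_var_e + noise_var_i)
    return w * gdp_e + (1 - w) * gdp_i

gdp_e = np.array([2.1, 1.4, 3.0])    # placeholder GDPE growth rates
gdp_i = np.array([1.7, 1.9, 2.6])    # placeholder GDPI growth rates
print(combine(gdp_e, gdp_i, noise_var_e=1.0, noise_var_i=1.5))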
STUDYING THE SPATIAL CONCENTRATION
OF R&D LABS
The authors study the location and productivity of
more than 1,000 research and development (R&D) labs
located in the Northeast corridor of the U.S. Using a
variety of spatial econometric techniques, they find that
these labs are substantially more concentrated in space
than the underlying distribution of manufacturing
activity. Ripley’s K-function tests over a variety of
spatial scales reveal that the strongest evidence of
concentration occurs at two discrete distances: one at
about one-quarter of a mile and another at about 40
miles. These findings are consistent with empirical
research that suggests that some spillovers depreciate
very rapidly with distance, while others operate at the
spatial scale of labor markets. The authors also find that
R&D labs in some industries (e.g., chemicals, including
drugs) are substantially more spatially concentrated
than are R&D labs as a whole.
Tests using local K-functions reveal several
concentrations of R&D labs (Boston, New York-Northern New Jersey, Philadelphia-Wilmington, and
Washington, DC) that appear to represent research
clusters. The authors verify this conjecture using
significance-maximizing techniques (e.g., SaTScan)
that also address econometric issues related to “multiple
testing” and spatial autocorrelation.
The authors develop a new procedure for
identifying clusters — the multiscale core-cluster
approach — to identify labs that appear to be clustered
at a variety of spatial scales. They document that while
locations in these clusters are often related to basic
infrastructure, such as access to major roads, there is
significant variation in the composition of labs across
these clusters. Finally, the authors show that R&D labs
located in clusters defined by this approach are, all else
equal, substantially more productive in terms of the
patents or citation-weighted patents they receive.
Working Paper 11-42, “The Agglomeration of
R&D Labs,” Gerald A. Carlino, Federal Reserve Bank
of Philadelphia; Jake K. Carr, Federal Reserve Bank of
Philadelphia; Robert M. Hunt, Federal Reserve Bank
of Philadelphia; and Tony E. Smith, University of
Pennsylvania
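Ripley's K-function, the workhorse statistic mentioned above, measures the average number of other points found within distance d of a typical point, scaled by the overall point density; under complete spatial randomness it is approximately pi times d squared, so larger values signal clustering. The naive Python sketch below ignores edge corrections and the comparison with the underlying distribution of manufacturing activity, both of which matter in the paper.

import numpy as np

def ripley_k(points, d, area):
    # Naive Ripley's K at distance d: count of other points within d of each point,
    # averaged and divided by the overall intensity (points per unit area).
    n = len(points)
    dist = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    within = (dist <= d).sum() - n          # exclude each point's zero distance to itself
    intensity = n / area
    return within / (n * intensity)

rng = np.random.default_rng(4)
labs = rng.uniform(0, 100, size=(500, 2))   # placeholder lab coordinates (miles)
for d in (0.25, 5, 40):
    print(d, round(ripley_k(labs, d, area=100 * 100), 2), round(np.pi * d * d, 2))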


EFFECTS OF GOVERNMENT SPENDING
CUTS ON ECONOMIC ACTIVITY IN AN
ENVIRONMENT OF SEVERE FISCAL STRAIN
The authors analyze the effects of government
spending cuts on economic activity in an environment
of severe fiscal strain, as reflected by a sizeable risk
premium on government debt. Specifically, they
consider a “sovereign risk channel,” through which
sovereign default risk spills over to the rest of the
economy, raising funding costs in the private sector.
The authors’ analysis is based on a variant of the model
suggested by Cúrdia and Woodford (2009). It allows
for costly financial intermediation and inter-household
borrowing and lending in equilibrium but maintains
the tractability of the baseline New Keynesian model.
They show that if monetary policy is constrained in
offsetting the effect of higher sovereign risk on private-sector borrowing conditions, the sovereign risk channel exacerbates indeterminacy problems: private-sector beliefs of a weakening economy can become self-fulfilling. Under these conditions, fiscal retrenchment
can limit the risk of macroeconomic instability. In
addition, if fiscal strain is very severe and monetary
policy is constrained for an extended period, fiscal
retrenchment may actually stimulate economic activity.
Working Paper 11-43, “Sovereign Risk and the Effects
of Fiscal Retrenchment in Deep Recessions,” Giancarlo
Corsetti, Cambridge University; Keith Kuester, Federal
Reserve Bank of Philadelphia; André Meier, International
Monetary Fund; and Gernot J. Müller, University of Bonn
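The feedback loop at the heart of the sovereign risk channel can be caricatured in a few lines of Python: a weaker expected economy raises the sovereign risk premium, part of which passes through to private funding costs and, when monetary policy cannot offset it, depresses output further. All functional forms and numbers below are invented for illustration and are not the model in the paper; the point is only that a strong enough feedback lets initial pessimism feed on itself, while a weaker premium response (as after fiscal retrenchment) lets it die out.

def next_gap(gap, premium_sensitivity, passthrough=0.5, demand_slope=1.5):
    # One pass of the loop: a weaker expected economy raises the sovereign risk
    # premium, part of which raises private funding costs and, with monetary
    # policy assumed constrained, depresses output further.
    sovereign_premium = premium_sensitivity * max(0.0, -gap)
    return -demand_slope * passthrough * sovereign_premium

for label, sensitivity in (("severe fiscal strain", 1.6), ("after retrenchment", 0.6)):
    gap = -0.1                               # an initial pessimistic belief about output
    for _ in range(10):
        gap = next_gap(gap, sensitivity)
    # Strong feedback amplifies the initial pessimism; weak feedback lets it fade.
    print(label, round(gap, 3))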
IDENTIFYING SOURCES OF THE DECLINE IN
THE AGGREGATE JOB SEPARATION RATE
The purpose of this paper is to identify possible
sources of the secular decline in the aggregate job
separation rate over the last three decades. The author
first shows that aging of the labor force alone cannot
account for the entire decline. To explore other sources,
he uses a simple labor matching model with two types
of workers, experienced and inexperienced, where the
former type faces a risk of skill obsolescence during
unemployment. When the skill depreciation occurs, the
worker is required to restart his career and thus suffers
a drop in earnings. The author shows that a higher skill
depreciation risk results in a lower aggregate separation
rate and a smaller earnings loss. The key mechanisms
are that the experienced workers accept lower wages in
exchange for keeping the job and that the reluctance to
separate from the job produces a larger mass of low-quality matches. He also presents empirical evidence
consistent with these predictions.
Working Paper 11-44, “Declining Labor Turnover
and Turbulence,” Shigeru Fujita, Federal Reserve Bank of
Philadelphia
DEVELOPING A UNIFIED FRAMEWORK FOR
MEASURING CONNECTEDNESS AT VARIOUS
LEVELS
The authors propose several connectedness
measures built from pieces of variance decompositions
and argue that they provide natural and insightful
measures of connectedness among financial asset
returns and volatilities. They also show that variance
decompositions define weighted, directed networks,
so that their connectedness measures are intimately
related to key measures of connectedness used in the
network literature. Building on these insights, the
authors track both average and daily time-varying
connectedness of major U.S. financial institutions’
stock return volatilities in recent years, including during
the financial crisis of 2007-2008.
Working Paper 11-45, “Measuring the Connectedness
of Financial Firms,” Francis X. Diebold, University of
Pennsylvania, and Visiting Scholar, Federal Reserve Bank
of Philadelphia, and Kamil Yilmaz, Koç University
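Given a forecast error variance decomposition matrix, in which row i reports the shares of firm i's forecast error variance attributable to shocks to each firm, the connectedness measures are simple sums of off-diagonal entries. The matrix in the Python sketch below is a made-up placeholder rather than one estimated from return or volatility data.

import numpy as np

# Placeholder variance decomposition: entry (i, j) is the share of firm i's
# forecast error variance due to shocks to firm j; rows sum to one.
D = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

n = D.shape[0]
off_diag = D - np.diag(np.diag(D))
from_others = off_diag.sum(axis=1)    # directional connectedness "from others"
to_others = off_diag.sum(axis=0)      # directional connectedness "to others"
total = off_diag.sum() / n            # total (system-wide) connectedness
print(from_others, to_others, round(float(total), 3))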
EXAMINING INVESTORS’ REACTIONS TO
SEASONED EQUITY OFFERINGS
The authors examine investors’ reactions to
announcements of large seasoned equity offerings
(SEOs) by U.S. financial institutions (FIs) from 2000
to 2009. These offerings include market infusions as
well as injections of government capital under the
Troubled Asset Relief Program (TARP). The sample
period covers both business cycle expansions and
contractions and the recent financial crisis. The
authors present evidence on the factors affecting FI
decisions to issue capital, the determinants of investor
reactions, and post-SEO performance of issuers as well
as a sample of matching FIs. They find that investors
reacted negatively to the news of private market SEOs
by FIs, both in the immediate term (e.g., the two
days surrounding the announcement) and over the
subsequent year, but positively to TARP injections.
Reactions differed depending on the characteristics of
the FIs, the stage of the business cycle, and financial crisis conditions. Larger institutions were less likely to
have raised capital through market offerings during
the period prior to TARP, and firms receiving a TARP
injection tended to be larger than other issuers. The
authors find that while TARP may have allowed FIs to
increase their lending (as a share of assets) in the year
after the issuance, they took on more credit risk to do
so. They find no evidence that banks’ capital adequacy
increased after the capital injections.
Working Paper 11-46, “Large Capital Infusions,
Investor Reactions, and the Return and Risk Performance
of Financial Institutions over the Business Cycle and
Recent Financial Crisis,” Elyas Elyasiani, Fox School of
Business, Temple University; Loretta J. Mester, Federal
Reserve Bank of Philadelphia and The Wharton School,
University of Pennsylvania; and Michael S. Pagano, Villanova
School of Business, Villanova University
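Investor reactions of the kind summarized above are conventionally measured as cumulative abnormal returns over a short window around the announcement. The Python sketch below fits a market-model benchmark on pre-event data and sums the residual returns over the event window; the data and window choices are placeholders, not the authors' specification.

import numpy as np

def car(stock_ret, market_ret, event_idx, est_len=100, window=(-1, 1)):
    # Cumulative abnormal return around an announcement: estimate a market model
    # on pre-event data, then sum residual returns over the event window.
    est = slice(event_idx - est_len + window[0], event_idx + window[0])
    X = np.column_stack([np.ones(est_len), market_ret[est]])
    alpha, beta = np.linalg.lstsq(X, stock_ret[est], rcond=None)[0]
    days = range(event_idx + window[0], event_idx + window[1] + 1)
    return sum(stock_ret[t] - (alpha + beta * market_ret[t]) for t in days)

rng = np.random.default_rng(5)
market = rng.normal(0, 0.01, 300)          # placeholder daily market returns
stock = 0.9 * market + rng.normal(0, 0.02, 300)
stock[200] -= 0.03                         # a hypothetical negative announcement-day reaction
print(round(car(stock, market, event_idx=200), 4))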
