
Economic Quarterly—Volume 98, Number 2—Second Quarter 2012—Pages 77–110

Does Monetarism Retain Relevance?
Robert L. Hetzel

The quantity theory and its monetarist variant attribute significant recessions to monetary shocks. The literature in this tradition documents
the association of monetary and real disorder.1 By associating the occurrence of monetary disorder with central bank behavior that undercuts the
working of the price system, quantity theorists argue for a direction of causation running from monetary disorder to real disorder.2 These correlations are
robust in that they hold under a variety of different monetary arrangements
and historical circumstances.
Nevertheless, correlations, no matter how robust, do not substitute for a
model. As Lucas (2001) said, “Economic theory is mathematics. Everything
else is just pictures and talk.” While quantity theorists have emphasized the
importance of testable implications, they have yet to place their arguments
within the standard workhorse framework of macroeconomics—the dynamic,
stochastic, general equilibrium model. This article asks whether the quantity
theory tradition, which is long on empirical observation but short on deep
theoretical foundations, retains relevance for current debates.
Another problem for the quantity theory tradition is the implicit rejection
by central banks of its principles. Quantity theorists argue that the central
bank is responsible for the control of inflation. It is true that at its January
2012 meeting, the Federal Open Market Committee (FOMC) adopted an inflation target. However, the FOMC did not accompany its announcement with
The author is a senior economist and research adviser at the Federal Reserve Bank of Richmond. Without implicating him in any way, the author is especially indebted to Andreas
Hornstein for his comments. The views in this article are the author’s, not the Federal Reserve Bank of Richmond’s or the Federal Reserve System’s. E-mail: robert.hetzel@rich.frb.org.
1 Two examples of discussion of monetarist ideas are Laidler (1981) and Mayer (1999).
2 For example, Milton and Rose Friedman (1980) wrote: “In one respect the [Federal Reserve] System has remained completely consistent throughout. It blames all problems on external
influences beyond its control and takes credit for any and all favorable circumstances. It thereby
continues to promote the myth that the private economy is unstable, while its behavior continues
to document the reality that government is today the major source of instability.”

quantity-theoretic language. Quantity theorists argue that the reason central
banks are responsible for inflation is their power over money creation, not any
influence over conditions in financial markets (intermediation between savers
and investors). Power over money creation comes from the Fed’s monopoly
over creation of the monetary base (reserves of commercial banks held as deposits with the Fed and currency held by the nonbank public), which serves
as the medium for exercising finality of settlement in payments.
This article summarizes the quantity theory tradition without attempting
to exposit a quantity-theoretic model. Section 1 sharpens the issues at stake by
briefly summarizing some “red flags” for monetarists concerning the behavior
of the monetary aggregates over the past few years. The remaining sections
provide an overview of the monetarist tradition, which derives from the longer-run quantity theory tradition.

1. MONETARIST RED FLAGS

In Europe, the behavior of the monetary aggregates engenders monetarist
criticisms. In the United Kingdom, the growth rate of money (broad money
or M4) started declining in late 2007 from a level of around 13 percent and
became negative in 2011. In the Eurozone, the growth rate of money (broad
money or M3) started declining in late 2007 from a level of around 12 percent,
ceased growing in late 2009 and early 2010, and then steadied at around 3
percent in 2011.3 Because monetary velocity (the ratio of nominal gross
domestic product [GDP] to money) exhibits a downward trend in both the
United Kingdom and the Eurozone, the increased money demand reinforces
the monetary contraction.
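As a reminder of the bookkeeping behind this observation (the standard velocity identity, not a result specific to this article), define velocity as nominal GDP divided by money:

$$M_t V_t \equiv P_t Y_t \qquad\Longrightarrow\qquad \Delta \ln (P_t Y_t) = \Delta \ln M_t + \Delta \ln V_t .$$

When velocity is trending down ($\Delta \ln V_t < 0$), a given rate of money growth translates into still slower nominal GDP growth, which is the sense in which rising money demand reinforces the monetary contraction.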
Does this pattern of “high” followed by “low” money growth constitute evidence of go-stop monetary policy? Despite low rates of interest, do the recent
low rates of money growth indicate contractionary monetary policy? Does the
sustained decline in nominal GDP growth provide evidence of contractionary
monetary policy? As elucidated in Section 4, the issue is stark. One possibility is that in the monetarist tradition the decline in money growth, nominal
GDP growth, and real GDP growth reflects a negative monetary shock and
causation going from money to output. Alternatively, the precipitating shock
could have been real with causation going from real output to money.
3 Figures from Federal Reserve Bank of St. Louis, International Economic Trends.


2. THE SPIRIT OF THE QUANTITY THEORY TRADITION
David Hume ([1752] 1955) expressed the kind of empirical correlations used
by quantity theorists to support the hypothesis of the short-run nonneutrality
of money and longer-run neutrality.
Lowness of interest is generally ascribed to plenty of money. But. . .
augmentation [in the quantity of money] has no other effect than to
heighten the price of labour and commodities. . . . In the progress toward
these changes, the augmentation may have some influence, by exciting
industry, but after the prices are settled. . . it has no manner of influence.
[T]hough the high price of commodities be a necessary consequence
of the increase of the gold and silver, yet it follows not immediately
upon that increase; but some time is required before the money circulates
through the whole state. . . . In my opinion, it is only in this interval
of intermediate situation, between the acquisition of money and rise of
prices, that the increasing quantity of gold and silver is favourable to
industry. . . . [W]e may conclude that it is of no manner of consequence,
with regard to the domestic happiness of a state, whether money be in
greater or less quantity. The good policy of the magistrate consists only
in keeping it, if possible, still increasing. . .

Knut Wicksell ([1935] 1978, 6) referred to episodes of economic disruption in a paper money standard:
By means of money (for example by State paper money) it is
possible—and indeed this has frequently happened—to destroy large
amounts of real capital and to bring the whole economic life of society
into hopeless confusion.

Hume was generalizing about the expansionary impact of gold inflows
from the New World.4 Wicksell referred to the inflationary issuance of paper
money to finance government deficits. An example often cited in the 19th
century was the assignat experience in revolutionary France before Napoleon
restored the gold standard (White [1876] 1933). The Hume and Wicksell references make evident the exogenous origin of money creation. The Bullionist
(quantity theorists)/Antibullionist (real bills) debate following the depreciation
of the pound when Britain abandoned the gold standard during the Napoleonic
Wars originated the quantity-theoretic criterion for money creation as an independent force (shock) in the more typical case of a central bank employing an
interest rate target. The quantity theory imputes causality to monetary disturbances based on central bank behavior that flouts the need to provide a nominal
anchor and to allow the price system to work. The Bullionists argued that as
4 For references to episodes of deflation, see Humphrey (2004).


a consequence of setting its bank rate below the “natural” rate of interest, the
Bank of England created money, which forced an increase in prices.5
Wicksell ([1898] 1962, 120, 148, and 189) repeated the Bullionist criticism
that inflation (deflation) results if the central bank sets a bank rate that ignores
the determination of the real rate of interest by market forces:6
[T]here is a certain level of the average rate of interest which is such
that the general level of prices has no tendency to move either upwards
or downwards. . . . Its magnitude is determined by the current level of the
natural capital rate and rises and falls with it. If. . . the average rate of
interest is set and maintained below this normal level. . . prices will rise
and go on rising.
[O]nce the entrepreneurs begin to rely upon this process continuing—
as soon, that is to say, as they start reckoning on a future rise in prices—the
actual rise will become more and more rapid. In the extreme case in
which the expected rise in prices is each time fully discounted, the annual
rise in prices will be indefinitely great. [Italics in original.]
If prices rise, the rate of interest is to be raised; and if prices fall,
the rate of interest is to be lowered.
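Wicksell's closing prescription amounts to a feedback rule for the bank rate keyed to the price level. A minimal schematic statement (an illustrative formalization, not Wicksell's own notation) is

$$i_t = r^{n}_t + \phi\left(\ln P_t - \ln P^{*}\right), \qquad \phi > 0,$$

where $i_t$ is the bank rate, $r^{n}_t$ the natural rate of interest, and $P^{*}$ the price level the central bank seeks to maintain: the bank rate is raised when prices have risen above $P^{*}$ and lowered when they have fallen below it.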

As evident from the above quotations, quantity theorists contend that the
uniqueness of the central bank derives from its control over money creation.
That contention contrasts with the view of a central bank as a financial intermediary that exercises its control through influence over conditions in credit
markets. In an exchange with Senator Prescott Bush (R-CT), Milton Friedman
[U.S. Congress 1959, 623–4] expressed the quantity theory view:
5 Thornton ([1802] 1939, 255–6) wrote: “[C]apital. . . cannot be suddenly and materially encreased by any emission of paper. That the rate of mercantile profits depends on the quantity of
this bona fide capital and not on the amount of the nominal value which an encreased emission
of paper may give to it, is a circumstance which it will now be easy to point out. . . . It seems
clear that when the augmented quantity of paper. . . shall have produced its full effect in raising
the price of goods, the temptation to borrow at five percent. will be exactly the same as before;
for the existing paper will then bear only the same proportion to the existing quantity of goods,
when sold at the existing prices, which the former paper bore to the former quantity of goods,
when sold at the former prices; the power of purchasing will, therefore, be the same; the terms of
lending and borrowing must be presumed to be the same; the amount of circulating medium alone
will have altered, and it will have simply caused the same goods to pass for a larger quantity of
paper. . . . [T]here can be no reason to believe that even the most liberal extension of bank loans
will have the smallest tendency to produce a permanent diminution of the applications to the Bank
for discount.”
Thomas Joplin ([1823] 1970, 258–9) employed the terminology of the “natural” rate of interest. When the loan rate diverges from the natural rate, the money supply changes to the extent
that this divergence produces a difference in the saving and investment planned by the public.
For a discussion of the history of the distinction between real and nominal interest rates, see
Humphrey (1983). For a discussion of the Bullionist-Antibullionist debate, see Hetzel (1987).
6 Wicksell’s analysis did not incorporate the distinction between the nominal and real interest
rate developed by Fisher (1896). Friedman ([1968] 1969) first combined this distinction with the
Wicksell analysis.


Senator Bush: What should the Federal Reserve Board do with
demands for credit increasing? Prior to the most recent recession, we
had tremendous increases in the use of installment credit. In fact, there are
some pretty reliable opinions that it was overuse of credit by consumers,
particularly installment credit that brought about this recession in business
because it stimulated the purchase of goods beyond the year in which
they should be buying them. . . .
Mr. Friedman: Congress and its agencies have a definite responsibility
about money. So far as credit is concerned, free enterprise is just as
good for credit as it is for shoes, hats, and anything else. The objective
of our policy ought to be to allow credit to adjust itself in a free market,
provided we maintain a stable monetary background.

3. QUANTITY THEORY HYPOTHESES

The quantity theory starts from two premises. The first premise is that the
central bank is the institution that controls money creation. It does so through
its control over its liabilities—the monetary base. Because individual welfare
depends only on real variables (physical quantities and relative prices), the
central bank must endow money, a nominal (dollar) variable, with a well-defined (determinate) value. Phrased alternatively, the intrinsic worthlessness
of money requires the central bank to choose a nominal anchor that determines
the money price of goods (the price level).
The second premise is that changes in the price level play a role in the
working of the price system in a way that depends on how the central bank
chooses the nominal anchor. The three basic choices that exist for the central
bank define alternative monetary regimes. First, with a gold (commodity)
standard, the central bank sets the parity price of gold (the paper dollar price
of gold). The price level then adjusts to give the paper dollar the same real
purchasing power as a gold dollar. Second, with a fixed exchange rate and
for a small open economy, the central bank sets the foreign exchange value
of the currency. The price level then adjusts to provide the real terms of trade
that equilibrates the balance of payments. With each regime, an explicit rule
underpins the belief that the central bank will maintain the nominal anchor
(the dollar peg to gold or the foreign exchange value of the currency) in the
future.
With the third choice of monetary regime, the concern of the central bank
is for stability of the domestic price level. This regime necessitates a floating
exchange rate (Keynes [1923] 1972, ch. 4). The price level adjusts to endow
the nominal quantity of money with the purchasing power desired by the
public. A central bank desirous of achieving price stability must close down
this adjustment by making nominal money grow in line with the public’s
demand for real money. How the central bank does so depends on a choice


of one of two possible nominal anchors determined by a choice of one of two
possible instruments.
With a reserve aggregate as the instrument, the central bank follows a
“Pigovian” rule in which a reserves-money multiplier relationship controls
money creation (Pigou 1917). With an interest rate as the instrument, the
central bank follows a “Wicksellian” rule in which maintenance of equality between the “bank rate” and the “natural rate” controls money creation
(Wicksell [1898] 1962).7 With either instrument, the central bank must follow a rule that disciplines the way in which the public forms its expectation of
the future price level. The reason is that money possesses value in exchange
today only because of the expectation that it will possess value in exchange
tomorrow, and the rule conditions that expectation.8
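To make the "Pigovian" reserves-money multiplier relationship concrete, the sketch below works through a standard textbook version of the multiplier arithmetic (not necessarily Pigou's own formulation); the currency ratio, reserve ratio, and base figure are hypothetical assumptions, not values from the article.

```python
# Textbook money-multiplier arithmetic for a reserve-aggregate instrument.
# All numbers are hypothetical; the point is the mechanical link from the
# monetary base (central bank liabilities) to the broader money stock.

def money_stock(base, currency_ratio, reserve_ratio):
    """Broad money implied by the base via the multiplier m = (1 + c) / (c + r),
    where c = currency/deposits and r = reserves/deposits."""
    multiplier = (1 + currency_ratio) / (currency_ratio + reserve_ratio)
    return multiplier * base

base = 100.0   # monetary base (currency + bank reserves), hypothetical units
c = 0.20       # public's desired currency/deposit ratio (assumed)
r = 0.10       # banks' desired reserve/deposit ratio (assumed)

print(f"multiplier = {(1 + c) / (c + r):.2f}, money stock = {money_stock(base, c, r):.1f}")

# A 5 percent increase in the base raises money in the same proportion,
# holding c and r fixed -- the sense in which a reserve aggregate "controls"
# money creation under this kind of rule.
print(f"money after 5% base growth = {money_stock(base * 1.05, c, r):.1f}")
```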
With a reserve-aggregate targeting regime, the central bank controls the
nominal quantity of money through its control over a reserve aggregate. Given
a well-defined demand for real money (the purchasing power of money), sustained changes in the nominal quantity of money that do not correspond to
prior changes in the real demand for money work through a real balance effect
to change growth in the nominal expenditure of the public relative to trend
growth in real output.9 Trend inflation emerges as the difference. Inflation
maintains equality between the real purchasing power desired by money holders and the real purchasing power of the nominal quantity of money (Pigou
1917; Keynes [1923] 1972).
In a reserve-aggregate targeting regime, a real balance effect provides the
nominal anchor by giving the price level a well-defined value. As explained
by Patinkin (1965), arbitrary changes in the price level produce changes in real
money balances (outside money) and consequent changes in the expenditure
of the public that counteract the price level changes. Woodford generalizes
7 For a review of the quantity theory literature, see Humphrey (1974, 1990).
8 Woodford (2005) states the general argument for a rule based on the idea that individuals make efficient use of information (take account of the forecastable behavior of central banks)
in forecasting the future: “Because the key decision-makers in an economy are forward-looking,
central banks affect the economy as much through their influence on expectations as through any
direct, mechanical effects of central bank trading in the market for overnight cash. As a consequence, there is good reason for a central bank to commit itself to a systematic approach to
policy that not only provides an explicit framework for decision-making within the bank, but that
is also used to explain the bank’s decisions to the public.” (Italics in original.)
9 Friedman ([1961] 1969, 255) wrote of the real balance effect consequent upon an open-market purchase by the central bank: “[T]he new balance sheet is out of equilibrium, with cash
being temporarily high relative to other assets. Holders of cash will seek to purchase assets to
achieve a desired structure. . . . [T]his process. . . tends to raise the prices of sources of both producer
and consumer services relative to the prices of the services themselves; for example, to raise the
price of houses relative to the rents of dwelling units, or the cost of purchasing a car relative to
the cost of renting one. It therefore encourages the production of such sources. . . and, at the same
time, the direct acquisition of services rather than of the source. . . .”


Patinkin’s analysis by adding to contemporaneous money the public’s expectation of future money.10
Since the Treasury-Fed Accord of 1951 and before December 2008, the
Fed has possessed an evolving reaction function broadly characterized as
“lean-against-the-wind” (LAW).11 The instrument has been a short-term interest rate (the funds rate since 1970). In order to provide for nominal and
real stability, the central bank must implement LAW in a way that allows the
price system to work and that conditions the public’s expectation of the future
value of money (McCallum 1986; Goodfriend 1987; Hetzel 1995).12 With
an interest rate instrument, a real balance effect does not provide a nominal
anchor. The nominal anchor comes from credibility for a rule with which
the central bank will initiate a contractionary monetary policy action if the
public’s expectation for inflation exceeds the bank’s target (Goodfriend 1993;
Hetzel 2008a, ch. 21), and conversely for a shortfall of expected inflation
from the target.13
10 As formulated by Woodford (2003, 108), equation (1) expresses the price level (P) given the central bank’s target for money (M^s):

$$\log P_t = \sum_{j=0}^{\infty} \varphi_j E_t\left[\log M^s_{t+j} - \eta_i \log\left(1 + i^m_{t+j}\right) - u_{t+j}\right] - \log \bar{m}. \qquad (1)$$

In (1), ϕ_j depends on the interest elasticity of money demand, η_i; i^m is the interest paid on money; u captures exogenous changes in real output, the natural rate of interest, money demand, and the interest paid on money; m̄ is the steady-state demand for real money.
11 LAW marked the departure from the real bills pre-World War II focus on financial market
instability construed as speculative behavior in asset markets (macroprudential regulation, in today’s terminology; on real bills, see Humphrey [1982] and Hetzel [1985]). With LAW, the FOMC
focused directly on the economy as opposed to asset prices. Hetzel (2008a, 2008b) contrasts the
two broad variants of LAW. The first variant emerged gradually with FOMC chairman William
McChesney Martin (until derailed by the populist policies of Lyndon Johnson) and reemerged after the Volcker disinflation. It focused on moving short-term interest rates in a way that countered
sustained changes in the rate of resource utilization in the economy (changes in the output gap)
and on maintaining low, stable inflation premia in long-term government bond yields. The second characterized the “fine tuning” period from the mid-1960s through the end of the 1970s. It
focused on moving short-term interest rates in response to the level of the output gap and on
responding directly to actual inflation. Hetzel (2012) argues that this latter variant reappeared in
2008 through the practice of responding directly to actual inflation. LAW procedures provide a
necessary condition for allowing market forces to determine the real interest rate. The fine-tuning
variant under which the FOMC periodically attempts to increase the magnitude of a negative output
gap to lower inflation contravenes this latter principle.
12 If the central bank possesses a credible rule that stabilizes the expectation of the future
price level, it need only respond to the real behavior of the economy. The LAW procedures with which
the Fed moves the funds rate away from its prevailing value in response to sustained changes in
the economy’s rate of resource utilization cause the real funds rate to track the natural rate of
interest (Hetzel 2008b). In effect, the central bank delegates to the price system determination
of the real interest rate and, by extension, other real variables. In principle, realized inflation
can offer information on the real economy and a central bank reaction function could include as
arguments both real output and inflation, but that fact in no way implies central bank manipulation
of a Phillips curve relationship between inflation and output.
13 With an interest rate instrument, money demand controls money creation. The central bank
then limits money creation indirectly through its control of the public’s expectation of the future
price level. That expectation disciplines nominal money demand. The discipline comes from the


With an interest-rate instrument and a LAW reaction function, growth
in nominal expenditure emerges as the sum of two components: growth in
real expenditure and in inflation. Because of the assumption that the central
bank cannot exercise systematic control over real variables, to avoid becoming a source of instability, the central bank needs to implement LAW in a
way that allows the price system to determine the first component—real expenditure (output). The rule determines the long-run behavior of the second
component—trend inflation—through the way in which it conditions the inflationary expectations of firms that set prices for multiple periods.14 Trend
nominal expenditure then arises from the sum of the two components: potential output growth and trend inflation. Because of the central bank’s interest
rate peg, the nominal money stock follows the public’s demand for nominal
money. However, the rule constrains that demand in a way consistent with the
inflation target.
To reiterate, the central bank is unique because of its monopoly over the
creation of the monetary base and, as a consequence, over broader money
creation. With a floating exchange rate, the price level adjusts to endow the
nominal quantity of money with the purchasing power desired by the public.
This monetary character of the price level endows the central bank with control
over inflation through its control over trend growth in nominal expenditure.
Central to the way in which quantity theorists endow this framework with empirical content is the assumption that the price system works well in the absence
of monetary shocks that cause the price level to evolve in an unpredictable
way (Humphrey 2004). Violation of the discipline placed on central banks
by a rule that allows the price system to determine real variables produces
monetary emissions (absorptions) that force changes in nominal expenditure
(output) and the associated booms and recessions.
The assumption that markets work well in the absence of monetary disorder subsumes more fundamental assumptions about markets. Competitive
belief by the public that the central bank will vary its interest rate target if, in the future, the price level deviates from target. As formulated by Woodford (2003, 83), equation (2) expresses the contemporaneous price level (P) given the central bank’s target for the price level (P*):

$$\log P_t = \sum_{j=0}^{\infty} \varphi_j E_t\left[\log P^{*}_{t+j} + \phi_p^{-1}\left(\hat{r}_{t+j} - v_{t+j}\right)\right]. \qquad (2)$$

In (2), φ_p measures how the central bank changes its interest rate instrument in response to deviations of the price level from target and ϕ_j is a function of φ_p; v_t captures exogenous changes to the interest rate rule; r̂ is the natural rate of interest.
14 In the base case of price stability maintained by a credible rule, firms setting prices for
multiple periods only change their dollar prices in order to change the relative price of their product. For a general discussion, see Wolman (2001, 30–1) and Goodfriend (2004b, 28). The central
bank moves its interest rate instrument in a way that tracks the natural interest rate. Allowing the
price system to work causes firms to maintain the optimal markup of product price over marginal
cost. The environment of nominal expectational stability conditions the price-setting behavior of
firms and maintains price stability apart from random, transitory changes in prices.


markets determine market-clearing prices and those prices aggregate information from dispersed markets efficiently. As a result, the central bank can avoid
major recessions by following a rule that allows market forces to determine real
variables (the real rate of interest, real output, and employment) and relative
prices. Moreover, the efficient use of information by market participants implies that the central bank cannot systematically control real variables (exploit
the inflation/unemployment correlations of empirical Phillips curves).15
Monetary nonneutrality arises from behavior by the central bank that
causes the price level to evolve in an unpredictable way.16 In the absence of a
widely understood, credible rule underpinning an inflation target, changes in
the price level have to occur in a way that is uncoordinated by a common set of
expectations among price setters. That unpredictability presents price-setting
firms with a coordination problem that they cannot solve. To counter monetary instability, collectively, firms would have to move dollar prices together
to search for the price level that endows nominal money with the real purchasing power desired by money holders while also maintaining dollar prices
individually to achieve the relative prices that clear markets. The price system
fails to provide the requisite coordination.

4. THE KEYNESIAN-MONETARIST DEBATE
No central bank characterizes the role it plays in the economy as emanating from its control over money creation. Instead, central banks characterize their influence over prices and the economy in terms of how they
affect conditions in financial markets and the resulting impact on financial
intermediation. Moreover, the use of the language of discretion when combined with the legislative injunction to maintain “maximum employment”
implies ongoing discretionary intervention into the working of the price system rather than implementation of a rule that delegates the determination of
15 The best known statement of the hypothesis that the central bank cannot control real variables in a predictable fashion is in Friedman ([1968] 1969). In response to an attempt by the
central bank to control real variables in a systematic fashion, expectations adjust in a way that
causes prices to change to eliminate the ability of the central bank to manipulate the real quantity
of money: The long-run neutrality of money telescopes into the short run. In an attempt to systematize this hypothesis, Lucas ([1972] 1981) provided the first systematic exposition of quantity
theory ideas. See also Humphrey (1999).
Friedman ([1958] 1969, 182–3) wrote: “[O]nce it becomes widely recognized that prices are
rising, the advantages. . . [adduced to support the view that ‘slowly rising prices stimulate economic
output’] will disappear. . . . If the advantages are to be obtained, the rate of price rise will have
to be accelerated and there is no stopping place short of runaway inflation. From this point of
view, there may clearly be a major difference between the effects of a superficially similar price
rise, according as it is an undesigned and largely unforeseen effect of such impersonal events as
the discovery of gold, or a designed result of deliberative policy action by a public body.”
16 This hypothesis is in the spirit of the model in Lucas ([1972] 1981) in which only unpredictable policy actions have real effects. In New Keynesian sticky price models, the central bank
can exert a predictable control over real variables.


employment to market forces. Implicitly, the message is that the central bank
counters economic instability that arises in the private economy. Although
not articulated as such, it follows from such an “activist” policy of intervening
to influence employment that the control of inflation entails trading off between inflation and unemployment based on a Phillips curve relating the two
variables.
As a way of assessing the tacit rejection of quantity theory ideas by central
banks, this section reviews the Keynesian-monetarist debate. As in the real
bills tradition, Keynesians often assume that recessions follow as the consequence of prior unsustainable speculative increases in asset prices and credit-driven overconsumption. Herd behavior among investors reflects “animal
spirits.” Both traditions reject the relevance of money as a factor determining either prices or cyclical fluctuations. With the central bank as a financial
intermediary, the liabilities of the central bank (the monetary base and, by
extension, the money stock) are determined by market (real) forces. In the
real bills tradition, purposeful monetary expansion by the central bank leads
to asset bubbles. In the Keynesian tradition, purposeful monetary expansion
by the central bank leads to offsetting changes in monetary velocity that render monetary policy inefficacious. Both traditions attribute nominal and real
instability to real shocks.
Figures 1 through 7 organize the discussion. Figures 1 and 2 show annual
rates of consumer price index (CPI) inflation, respectively, for the interval from the end of the Civil War to World War II and for the interval from World War II to the present. For the period from World War I to World War II, Figures 3–5 present graphs of growth rates of nominal and real output (gross national product [GNP]), M1 velocity and the interest rate, and growth rates of M1 and nominal GNP. For the post-Korean War period until the
start of the Volcker disinflation, Figures 6 and 7, respectively, present graphs
of growth rates of nominal and real output (GDP) and growth rates of M1
and nominal output (GDP).17 In Figures 3 and 6, which display the rate of
growth of nominal and real output for the years 1919–1940 and 1953–1981,
inflation (deflation) measured by the implicit output deflator appears as the
rate of growth of nominal output (dashed line) minus the rate of growth of real
output (solid line). Inflation appears as the cross-hatched lines sloping upward
17 The second set of graphs excludes the graph of the interest rate and M1 velocity because

of the small interest sensitivity in the latter period of real M1 demand (the inverse of velocity).
The graphs end in the early 1980s when the deregulation of interest rates made real M1 demand
sensitive to interest rates. As a result, the visual relation between M1 and nominal GDP disappears.
In particular, when the economy weakens and the interest rate falls, funds flow out of the money
market into NOW accounts (interest-bearing checkable deposits included in M1). Heightened M1
growth then corresponds to weakness in nominal output growth. Even with a stable M1 demand
function, the relationship between growth rates of money and nominal output is obscured by a
decline in velocity (Hetzel and Mehra 1989). The pre-1981 period is an extraordinary laboratory
for testing quantity theory ideas because of the usefulness of M1 growth as a measure of the
impact of monetary policy on nominal expenditure and nominal output.


(dashed line above the solid line), while deflation appears as the cross-hatched
lines sloping downward (solid line above the dashed line).
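The arithmetic behind reading the figures this way is just the decomposition of nominal output growth into real growth and implicit-deflator inflation. A minimal sketch with made-up numbers (none of these figures are taken from the charts):

```python
# Implicit-deflator inflation as the gap between nominal and real output growth.
# Growth rates are hypothetical; with log (continuously compounded) growth rates
# the decomposition is exact, and for small rates it is a close approximation.

nominal_growth = 0.045   # assumed four-quarter growth of nominal output
real_growth = 0.025      # assumed four-quarter growth of real output

inflation = nominal_growth - real_growth   # implicit output deflator inflation
print(f"implied deflator inflation: {inflation:.1%}")   # 2.0%

# When the dashed (nominal) line lies above the solid (real) line the gap is
# positive (inflation); when it lies below, the gap is negative (deflation).
deflation_example = -0.08 - (-0.05)   # nominal falling 8%, real falling 5%
print(f"implied deflator inflation: {deflation_example:.1%}")   # -3.0%
```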
Keynesian economists have pointed to real shocks to explain the behavior
of inflation shown in Figures 1 and 2. At the time of the Samuelson-Solow
([1960] 1966) formulation of the Phillips curve relating inflation to the unemployment rate, Keynesian economists divided inflation into three major
categories: demand-pull, cost-push, and wage-price spiral.18 By assumption,
the real interest rate is ineffectual in keeping real output close to potential
output. Persistent positive output gaps created by positive real shocks such
as increased defense expenditures or an investment boom fueled by excessive optimism create demand-pull inflation. The exercise of market power
by large corporations and unions creates cost-push inflation. Inflationary expectations, which are by assumption undisciplined by the systematic behavior
of the central bank, can create a self-perpetuating spiral of wage and price
increases. Because Keynesians believe that real phenomena like government
deficit spending and the monopoly power of unions and corporations cause
inflation, they argue that the control of inflation requires manipulation of a
countervailing real force—the output gap. Specifically, to counter inflationary forces, the central bank must increase the amount of idle resources in the
economy (unemployed workers).19
Figures 3–5 and 6–7 are useful in discussing the opposite assumptions
made about causality by Keynesians and quantity theorists. Heuristically, in
discussing causality, these two schools place the graphs in a different order.
Keynesians place the graph showing real output first and money last while
quantity theorists reverse the order. That is, Keynesians and quantity theorists
are divided over whether the shocks that drive the fluctuations in the real output
series are real or nominal and over the causes of the common movements of real
and nominal variables (whether Phillips curve correlations are structural).20
Keynesians believe that real shocks drive the fluctuations in real output.
Fluctuations in nominal output, monetary velocity, and money are derivative to the fluctuations in real output. Such real shocks typically appear as
18 See, for example, Ackley (1961, ch. 16).
19 In the 1970s, the United States and other industrial countries used incomes policies and

actual wage and price controls to control perceived cost-push inflation (Hetzel 2008a) and, it was
assumed, to lessen the need for excess unemployment to control inflation. As a result of the
failure of aggregate-demand policy to control unemployment combined with intervention by government into private price setting to control inflation, governments turned the control of inflation
over to central banks. However, that assignment of responsibility left unaddressed the Keynesian
presumption that the control of inflation requires manipulation of an output gap subject to Phillips
curve constraints.
20 This discussion omits the real business cycle (RBC) viewpoint. Early Keynesianism (see
Samuelson 1967) and the RBC view share a common assumption about the irrelevance of monetary
shocks for the business cycle. Quantity theory arguments for the primacy of monetary shocks
as precipitating serious recessions are antithetical to both the Keynesian and RBC views, which
maintain the irrelevance of monetary phenomena for the behavior of real phenomena.


irrational swings in investor sentiment between excessive optimism and excessive pessimism (animal spirits). Sticky prices transmit the shock to nominal
output. Pessimism about the future causes monetary velocity (the demand for
money) to decline as households hoard money. However, the decline in output
produces an even larger decline in the demand for money and the central bank
accommodates that decline by contracting the money stock. If the central bank
were to increase the money stock, a pessimistic public would simply hoard
the additional money (a liquidity trap).
In recession, the central bank can lower the real interest rate by lowering
its interest rate target. The real interest rate is the price of current resources in
terms of future resources foregone. A “low” real interest rate should transfer
demand for consumption and investment from the future to the present and
thereby mitigate negative shocks to real aggregate demand. However, Keynesians believe that the real interest rate in particular and the price system
in general are inefficacious. The price system fails to serve its role as an
equilibrating mechanism. Pessimism about the future overwhelms the self-equilibrating properties of the price system.
The Keynesian policy prescription for recession is deficit spending by
the government. Ex ante, given an increase in pessimism about the future,
private saving exceeds investment. With reductions in the real interest rate
ineffective in redistributing aggregate demand from the future to the present,
only a decline in output reduces saving to restore ex post equality between
saving and a lower level of investment. (The Keynesian multiplier derives
from the fact that saving declines only as a fraction of the decline in output.)
The counterpart to irrational pessimism on the part of households is a short
time horizon that does not account for the recovery of economic activity in the
future. In contrast, government can take a longer-run perspective. By running
a deficit, it can dissave sufficiently to offset the excessive saving of the public.
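The parenthetical remark about the multiplier can be written out explicitly. In the simplest textbook version (a standard illustration, not a formulation specific to this article), with a marginal propensity to consume $c$ out of income, saving absorbs only the fraction $1-c$ of any change in output, so

$$\Delta Y = \Delta G + c\,\Delta Y \quad\Longrightarrow\quad \Delta Y = \frac{1}{1-c}\,\Delta G ,$$

and a dollar of deficit spending raises output by $1/(1-c)$ dollars.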
Real shocks interact with a poorly working price system characterized
by sticky nominal prices and by relative prices that fail to clear markets.
Keynesians want central banks to target a real variable—the output gap—and
to determine the behavior of inflation as an optimal tradeoff between the output
gap and (changes in) inflation based on a presumed hard-wired real-nominal
(unemployment-inflation) relationship captured by Phillips curve correlations.
Price stickiness constitutes a friction that causes real shocks to impact real
output and employment. At the same time, it is the lever by which a central
bank can exercise control over real variables through its control over nominal
variables (the nominal interest rate and nominal expenditure).
In contrast to Keynesian assumptions, quantity theorists attribute sustained
changes in prices (inflation and deflation) to behavior by the central bank
that produces sustained departures of money growth from the growth in real
money demand consistent with the growth in potential output. Intuitively,
as illustrated in Figures 3 and 6, inflation makes the real purchasing power


of the money growth underlying nominal GDP growth (dashed line) consistent with the real purchasing power demanded as a result of growth in
real GDP (solid line). In attributing causation to the correlations among the
series displayed in Figures 3–5 and 6–7, quantity theorists assume an initial
monetary shock manifested in the fluctuations in money. Given an assumption
of stability in the functional form for monetary velocity, they consider the
fluctuations in nominal and real output as derivative.
Because money is endogenously determined when the central bank employs an interest rate peg, fluctuations in money need not reflect monetary
shocks. The endogeneity of money implies that neither sustained high (low)
money growth nor sharp fluctuations in money growth necessarily produce
inflation (deflation) or cyclical fluctuations in economic activity. The relevant
criterion for money to become a source of nominal and real instability is behavior by the central bank that flouts the discipline imposed by the requirements
of creating a stable nominal anchor and of allowing the price system to work.
Flouting that discipline creates monetary shocks through forcing changes in
money that require an unpredictable evolution of the price level.

5. MONETARIST METHODOLOGY FOR TESTING “MONEY MATTERS”

Much of the monetarist literature concentrates on event studies designed to
distinguish between real and monetary causes of inflation and of the business
cycle. Friedman and Schwartz (1963) are synonymous with this methodology. As examples, Friedman and Schwartz ([1963] 1969, 216–7) attributed
the deflation that began after 1873 to “political pressure for resumption [establishment of gold convertibility of the paper greenbacks issued in the Civil War
that] led to a decline in high-powered money. . . .” In arguing for monetary
shocks as the cause of recessions, that is, for monetary contraction arising
from events unrelated to the determination of nominal income, they argued:
[C]hanges in the stock of money can generally be attributed to specific
historical circumstances that are not in turn attributable to contemporary
changes in money income and prices. . . . [In 1892–94] agitation for [monetizing] silver and destabilizing movements in Treasury cash produced fears
of imminent abandonment of the gold standard by the United States and
thereby an outflow of capital which trenched on gold stocks. Those effects
were intensified by the banking panic of 1893, which produced a sharp
decline, first in the deposit-currency ratio and then in the deposit-reserve
ratio.

With the establishment of a central bank (the Fed), this strategy for identification of monetary shocks becomes harder. The desired information, namely,
the economy’s response to the Fed’s behavior, is confounded in macroeconomic correlations with the Fed’s response to the economy’s behavior. As


a result, quantity theorists rely on an identification strategy based on the assumption that nominal and real stability require consistent implementation of
a rule that provides a stable nominal anchor and that allows the price system
to determine real variables.
An implication of the assumption that the price system works well to
maintain economic stability unless disrupted by monetary disturbances is that
monetary policy procedures that provide for economic stability require continual adjustment of the central bank’s interest rate target in response to the
ongoing fluctuations in strength and weakness in economic activity. It becomes natural to look for isolated episodes in which the Fed has pursued
some objective unrelated to smoothing the fluctuations in the growth rate of
real economic activity that produce corresponding changes in the economy’s
rate of resource utilization. That is, the intent is to isolate departures from
moving the real interest rate implicit in the interest rate target in a way that redistributes aggregate demand over time to counter unsustainable strength and
weakness in the economy. With such departures, the Fed moves short-term
interest rates up or down in a sustained way and then either imparts significant inertia or holds fixed its interest rate target despite increasing weakness
or strength in the economy. One then looks for monetary deceleration or
acceleration. The quantity theory hypothesis is that this criterion provides
a necessary and sufficient condition for booms and recessions (Hetzel 2012,
chs. 6 and 7).
Obvious examples are the interest rate pegs of World War I and World War
II. The example highlighted by Friedman and Schwartz (1963) and Meltzer
(2003) was the intermittent real bills focus of policy prior to World War II.
With real bills, the Fed concentrated on preventing speculative bubbles in
asset prices rather than on allowing the real interest rate to vary continually
to stabilize real economic activity. Another example, highlighted by the same
authors, was the decade-and-a-half effort to manage aggregate demand in a
way intended to stabilize the unemployment rate at its full-employment level
that started after the Kennedy tax cut in 1964. In conjunction with pursuit of the
objective of full employment, policymakers attempted to maintain a moderate
level of demand-pull inflation while using incomes policies to mitigate cost-push inflation (Hetzel 2008a; forthcoming). Hetzel (2012) argues that the
employment by central banks since 2008 of reaction functions that entail
a direct response of the interest rate setting to realized inflation constitutes
another example. As argued by Friedman (1960), such a rule imparts inertia
to reductions in short-term interest rates in the face of persistent declines in
economic activity (Hetzel 2012).


Figure 1 Inflation: 1869–1949
[Figure: annual CPI inflation, in percent, 1869–1949.]

Notes: Annual percentage change in the CPI. Data from Officer and Williamson (2012).
Shaded areas represent National Bureau of Economic Research (NBER) recessions.

Disentangling Causation: Money and Prices
The following sketches briefly the kind of historical narrative quantity theorists have used to disentangle causation between money and prices. Figure 1
shows annual inflation rates from 1869–1949. Quantity theorists argue that
the monetary arrangements of the United States explain the broad patterns
shown in the graph.21
From 1869 through 1897, deflation predominated. After the Civil War,
the United States stopped issuing Greenbacks while the economy grew. The
21 Friedman (1966, 17) stated the quantity theory position phrased in terms of the events
he used to disentangle causation from correlation. That is, he argued that historical experience
demonstrated that intervention by the government into the price setting in private markets was
inevitably futile as a way of controlling inflation. Only moderation in money growth was effective.
“Since the time of Diocletian,. . . the sovereign has repeatedly responded to generally rising
prices in precisely the same way: by berating the ‘profiteers,’ calling on private persons to show
social responsibility by holding down the prices at which they sell their products or their services,
and trying, through legal prohibitions or other devices, to prevent individual prices from rising. The
result of such measures has always been the same: complete failure. Inflation has been stopped
when the quantity of money has been kept from rising too fast, and that cure has been effective
whether or not the other measures were taken.”


resulting deflation allowed a return in 1879 to the gold standard at the pre-war
parity. The deflation also reflected increases in the real price of gold due to
limited worldwide supplies of gold combined with increased demand as the
world economy grew and the demand for monetary gold stocks increased as
countries joined the international gold standard as part of the Latin Monetary
Union. Starting in the mid-1890s, the world stock of gold began to grow
because of gold discoveries in Alaska and South Africa and because invention
of the cyanide process rendered the extraction of gold more efficient.22
The monetization of government debt in World War I created a large
spike in inflation. When released from the task of financing the war effort, in
1920 and 1921, the Fed initiated a contractionary monetary policy with sharp
increases in the discount rate to end inflation and to arrest gold outflows. The
severe deflation associated with the Great Depression, which began in August
1929, was derived from the Fed’s desire to maintain a high cost to banks of
obtaining funds first to stop and then to prevent reemergence of a presumed
speculative bubble in the price of equities and real estate.23 The inflation after
1934 occurred because of the monetization of the gold inflows accompanying
the increase in the dollar price of gold and political instability in Europe.
The Fed’s immobilization of bank reserves in 1936 and 1937 through phased
increases in required reserve ratios temporarily replaced monetary expansion
and inflation with monetary contraction and deflation. World War II again
created inflation through a rate peg that forced the Fed to monetize government
deficits.
Figure 2 shows annual inflation rates from 1949–2011. The surge in
inflation in late 1950 and early 1951 was an inflation shock. It arose during the Korean War when the crossing of the Yalu River by the Chinese in November 1950 created
the expectation of World War III with the return of price controls and inflation
(Hetzel and Leach 2001a). However, contrary to the Keynesian presumption
of hard-wired (intrinsic) inflation persistence, the shock did not propagate. In
1957, inflation increased to 3 percent. Arthur Burns, who was chairman of the
Council of Economic Advisors from 1953 to 1956, and William McChesney
Martin attributed the increase to the slowness of policy to tighten after the
1954 trough in the business cycle (Hetzel 2008a, 52).
The most striking part of Figure 2 is the irregular increase in inflation from
1 percent in 1964 to 13 percent in 1981 followed by disinflation and quiescent
22 Various monetary histories exist for the United States (Friedman and Schwartz 1963;
Friedman 1992; Timberlake 1993; Meltzer 2003, 2009; and Hetzel 2008a, 2012).
23 Like Friedman and Schwartz (1963), Hetzel (2012, ch. 4) attributes the Depression to
contractionary monetary policy. Friedman and Schwartz place primary emphasis on bank runs. In
contrast to Friedman and Schwartz, Hetzel emphasizes the robustness of the banking system. He
argues that, given unit banking, the decline in the money stock required by contractionary monetary
policy took place in part through closing the weaker banks by bank runs. The bank runs were a
byproduct, not a cause, of contractionary monetary policy.


Figure 2 Inflation: 1949–2011
[Figure: annual CPI inflation, in percent, 1949–2011.]

Notes: Annual percentage change in the CPI. Data from Officer and Williamson (2012).
Shaded areas represent NBER recessions.

inflation until the drop in 2009. Hetzel (2008a, chs. 6–12 and 22–25; 2012,
ch. 8; forthcoming) attributed the increase in inflation to a monetary policy
oriented toward achievement of full employment, almost universally identified with a 4 percent unemployment rate, combined with the
widespread understanding of inflation as a cost-push phenomenon. Given the
presumed high social costs of an unemployment rate in excess of 4 percent and
the belief in the nonmonetary character of inflation, the working assumption
of monetary policy was that “incomes policies,” represented in the extreme
case by wage and price controls, were the desirable method of restraining
inflation. The prevailing assumption was that using restrictive monetary policy (low rates of money growth) to deal with an inflation caused by cost-push
pressures and by inflation shocks would create “high” interest rates that would
hurt housing disproportionately and would create a socially intolerable level
of unemployment. With a few exceptions, FOMC members attributed high rates of growth of money to the need to
accommodate cost-push inflation in order to avoid high unemployment.


Disentangling Causation: Money and Output in the Depression
The following provides a flavor of the kind of monetary narrative that quantity theorists provide to disentangle causation from the correlations shown in
Figures 3–7. For quantity theorists, the iconic example of Fed interference
with the price system is its high interest rate policy (started in 1928) of countering the presumed speculative excess in financial markets associated with
high price/earnings ratios for stocks on the New York Stock Exchange. In his
testimony at the Strong hearings [U.S. Congress 1927, 381], Cassel provided
an early statement of this criticism:
Cassel: [Increases in Federal reserve bank rates to limit speculation]
may have an effect on the general level of prices that will result in
a depression in production in the country, followed by a decrease in
employment, all only for the purpose of combating some speculators in
New York. I think that is absurd. . . . [T]he Federal reserve system has
no other function than to give the country a stable money. The business
of checking stock-exchange speculation is disturbing this function. . . .
Mr. Wingo: I say that monetary causes are not the only causes that
affect the general price level. There are other things besides monetary
causes.
Cassel: No; the general level of prices is exclusively a monetary
question.

Cassel (1930) provided a more complete account of how the Fed’s
focus on preventing asset bubbles required interference with the working of the
price system. That interference created monetary contraction and deflation.24
This limitation [of money supplies]. . . has of late been far too strict.
The reason is the attempt to regulate the bank rate in such a way that it
would have a supreme influence on the Stock Exchange, limiting the speculative inflation of share prices. . . . The Federal Reserve system. . . since
last summer has adhered to rates which were far too high, with the result
of a collapse in prices which seriously endangered the whole political
economy. . . . The collapse in prices is bound to drag with it the whole
rest of the world. . . . The whole matter is a blatant example of what
happens if we yield to the modern tendency of permitting Government
to meddle unnecessarily with economics. The Government assumes a
task which is not in its province; in consequence of this it is driven to
mismanage one of its most pertinent tasks, i.e., the supervision of money
resources. This causes a depression, which the same government seeks
to remedy by measures which are again outside the sphere of its true
activity and which can only make the whole position worse.

24 Lars Christensen, The Market Monetarist, June 9, 2012, reproduces the quotation.


Figure 3 Real and Nominal GNP Growth Rates: 1919–1939
[Figure: four-quarter growth rates of nominal GNP and real GNP, in percent, 1919–1939.]

Notes: Quarterly observations of four-quarter percentage changes in real and nominal
GNP growth. Rising cross-hatching indicates inflation and falling cross-hatching indicates deflation. Data from Balke and Gordon (1986, Appendix B). Shaded areas represent
NBER recessions. Heavy tick marks indicate fourth quarter.

In congressional testimony in April 1932, Gov. Harrison explained why
the Fed was unwilling to pursue an expansionary monetary policy. The House
Committee on Banking and Currency held these hearings to promote a bill to
require the Fed to restore the price level to its pre-deflation value. Repeatedly,
Harrison challenged that goal on the grounds that it would require the Fed to
increase bank reserves while the price level was falling even if it believed that
banks would use the additional funds for speculative purposes. Harrison (U.S.
Congress 1932, 485) said:
[S]uppose. . . the price level is going down, and the Federal reserve
system begins to buy government securities, hoping to check the decline,
and that inspires a measure of confidence, and a speculation is revived
in securities, which may in turn consume so much credit as to require
our sales of Governments. There was that difficulty in 1928 and 1929.

Figure 4 M1 Velocity and Commercial Paper Rate
[Figure: commercial paper rate (left axis, percent) and M1 velocity (right axis), quarterly, 1919–1939.]
Notes: Quarterly observations of M1 velocity: GNP divided by M1. Data for GNP are from Balke and Gordon (1986, Appendix B); M1 is from Friedman and Schwartz (1970); commercial paper rate is from Board of Governors of the Federal Reserve System (1943). Shaded areas represent NBER recessions. Heavy tick marks indicate fourth quarter.

Hetzel (2008a, 2012) argues that the Fed fell into a deflation trap. The high nominal interest rates presumed necessary to restrain speculation required monetary contraction. Monetary contraction created deflation, which engendered expected deflation. Expected deflation raised real interest rates. Higher real interest rates exacerbated monetary contraction, and so on. Starting in March 1933, the monetary standard changed (Hetzel 2012). The new
Roosevelt administration undertook to end the Depression. Based on the
widespread public association of economic decline with deflation, the administration undertook measures to raise “prices.” However, consonant with the
common understanding at the time, it thought in terms of raising relative prices.
The desire to raise the prices of agricultural products entailed manipulating
the dollar price of gold.
In March 1933, Roosevelt embargoed gold exports and floated the dollar. For the remainder of 1933, the government pursued what amounted to a
commodity stabilization scheme to raise the dollar price of gold. In January
1934, the United States raised the dollar price of gold from $20.67 per ounce
to $35.00 per ounce. At the same time, the Fed removed itself from the active
conduct of monetary policy in favor of the Treasury by freezing the size of
the holdings of Treasury securities in its portfolio and by keeping the discount
rate at a level that eliminated most borrowing by banks from the discount


window. Along with political instability in Europe, the dollar depreciation in
1934 from $20.67 an ounce to $35 an ounce produced gold inflows, which the
Fed monetized.
Prior to March 1933, the Fed’s instrument was the marginal cost of funds
to banks determined by the sum of the discount rate and the nonpecuniary
(“administrative guidance”) surcharge imposed on banks’ use of the discount
window (Hetzel 2008a, ch. 3; Hetzel 2012, ch. 4). These procedures made
the monetary base endogenous. After March 1933, the monetary base became
exogenous. Despite the exogenous increases in money produced by gold
inflows, M1 velocity remained a stable function of interest rates (Figure 4).
That fact contradicts the Keynesian liquidity trap assumption that purposeful
money creation would simply be neutered by an offsetting change in velocity.
Friedman and Schwartz (1982, 626) generalized:
A stable demand function for real money balances means that an
autonomous change in either nominal money or nominal income will
have to be accompanied by a corresponding change in the other variable,
or in variables entering into the demand function for money, in order to
equate the desired quantity of money balances with the quantity available
to be held. . . . Given stability of money demand, variability in conditions
of money supply, and similar parallelism for the period as a whole, it
is appropriate to regard the observed fluctuations in the two nominal
magnitudes as reflecting primarily an influence running from money to
income. (Italics supplied.)
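As a minimal sketch of the accounting behind this argument (standard quantity-theory notation, not an equation reproduced from the article), write the equation of exchange with velocity expressed as a function of the short-term interest rate:

    M × V(i) = P × y,   so that   V(i) = (P × y)/M = nominal GNP/M1.

A stable, interest-elastic V(i) is what Figure 4 plots. When exogenous gold inflows raised M after 1933, nominal income had to adjust, or interest rates had to move along the same V(i) schedule, rather than velocity simply collapsing to absorb the new money; that is the sense in which the data contradict the liquidity trap assumption.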

Disentangling Causation: Money and Output in the Stop-Go Period
After the Treasury-Fed Accord of 1951, in an evolutionary process, FOMC
chairman William McChesney Martin and his adviser Winfield B. Riefler developed procedures termed “lean-against-the-wind” (LAW) by Martin
(Hetzel and Leach 2001a, 2001b; Hetzel 2008a, ch. 5). In the changed
intellectual environment of the time in which government accepted a role
in economic stabilization, LAW involved moving short-term interest rates in
a way that counteracted above-trend and below-trend growth in real output.
Under Martin, concern for increases in long-term government bond yields
furnishing evidence of increases in expected inflation replaced the real bills
concern with speculative increases in asset prices (Hetzel 2008a, ch. 5).
The extent of the discipline placed on LAW derives from the importance
the FOMC assigns to price stability or stabilization of inflation at a low level.
However, different chairmen have imposed such discipline in two very different ways. They have imposed it either by behaving in a way that stabilized
expected inflation or by responding to the actual emergence of inflation. Hetzel (2008a) terms the former variant “lean-against-the-wind with credibility.”

Figure 5 M1 and Nominal GNP Growth: 1919–1939
[Figure: four-quarter growth of nominal GNP and M1, in percent, 1919–1939.]
Notes: Quarterly observations of four-quarter percentage changes in nominal GNP and M1 growth. Data for GNP are from Balke and Gordon (1986, Appendix B). M1 is from Friedman and Schwartz (1970). Shaded areas represent NBER recessions. Heavy tick marks indicate fourth quarter.

Martin departed from LAW with credibility after 1964 in an ultimately futile
attempt to avoid a politically divisive increase in interest rates with his own
FOMC house divided. He attempted to eliminate the need for an increase in
interest rates through a tax hike that would eliminate the deficit. The effort
failed (Bremner 2004; Hetzel 2008a, ch. 7). Despite the passage of an income
tax surcharge in June 1968, which transformed the deficit into a surplus, high
money growth trumped restrictive fiscal policy, and the economy expanded
while inflation rose.
Arthur Burns, Martin’s successor, desired to control inflation and inflationary expectations but through the use of incomes policies to control the
wage setting of corporations and unions with presumed market power. In
this way, Burns viewed monetary policy through the lens of the businessman
(Hetzel 1998). Burns's successor, G. William Miller, buttressed by a Keynesian
Board of Governors, followed a similar strategy.
Under Burns and Miller, monetary policy earned the appellation of stop-go or, more aptly, go-stop. Given the political and policymaking consensus
holding 4 percent as a desirable target for the unemployment rate, the FOMC

operated with consensus about the magnitude of the output gap. The output gap was the difference between actual output and output consistent with a 4 percent unemployment rate. In go phases, the FOMC pursued an expansionary monetary policy by limiting increases in the funds rate even after the emergence of economic recovery. In doing so, it intended to engineer a high enough rate of growth in aggregate output in order to lower the magnitude of the assumed negative output gap.

Figure 6 Real and Nominal GDP Growth Rates: 1952–1981
[Figure: four-quarter growth of nominal and real GDP, in percent, 1952–1981.]
Notes: Quarterly observations of four-quarter percentage changes in real and nominal GDP. Shaded areas represent NBER recessions. Data from Haver. Heavy tick marks indicate fourth quarter.
In response to stimulative monetary policy, with a lag of almost two years,
the inflation rate rose (Hetzel 2008a, Figure 23.3).25 The FOMC responded
directly to the increase in realized inflation by raising the funds rate and then
maintaining that rate while a negative output gap developed (see discussion
explaining Figures 8.1–8.5, Hetzel 2012). The resulting cyclical inertia in interest rates created procyclical money growth. In the stop phases, the FOMC never intended to engineer recession. The intent of the FOMC was always to maintain a negative output gap of moderate magnitude to lower inflation in a controlled way—the so-called easy landing.

25 Friedman (1989, 31) wrote: “[A] change in the rate of monetary growth produces a change in the rate of growth of nominal income about six to nine months later. . . . The changed rate of growth of nominal income typically shows up first in output and hardly at all in prices. . . . The effect on prices. . . comes some 12 to 18 months later, so that the total delay between a change in monetary growth and a change in the rate of inflation averages something like two years.”

Figure 7 M1 and Nominal GDP Growth: 1952–1981
[Figure: four-quarter nominal GDP growth (left axis) and M1 growth (right axis), in percent, 1952–1981.]
Notes: Quarterly observations of four-quarter nominal GDP and M1 growth. Data for GDP are from Haver. M1 is from Friedman and Schwartz (1970) and from the Board of Governors via Haver. M1 in 1981 is “shift-adjusted” (Bennett 1982). Shaded areas represent NBER recessions. Heavy tick marks indicate fourth quarter.
The stop-go period is the closest one comes in historical experience to
the policy guideline represented by conventional Taylor rules. That is, the
FOMC acted on the basis of an assumed knowledge of the output gap and
responded directly to realized inflation. The FOMC also acted with a sense of
the normal or benchmark interest rate such that a “high” interest rate indicated
contractionary monetary policy and a “low” interest rate indicated expansionary monetary policy. This sort of policy rule turned out to be destabilizing as
predicted by Friedman (1960).
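For reference, the conventional Taylor rule the text has in mind takes the textbook form (a standard illustration, not a rule estimated in this article):

    i_t = r* + π_t + 0.5(π_t − π*) + 0.5·gap_t,

where i_t is the funds rate, r* an assumed benchmark real rate, π_t realized inflation, π* the inflation target, and gap_t the assumed output gap in percent. Each right-hand term presupposes exactly the knowledge described above: a known benchmark rate, a known output gap, and a direct response to realized inflation.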
Under chairmen Volcker and Greenspan, the FOMC returned to the procedures that had evolved in the pre-1965 era. The FOMC followed a LAW
procedure but with a rule designed to stabilize expected inflation. The discipline imposed by the desire to return to low, stable inflationary expectations
removed much of the cyclical inertia in funds rate movements. Specifically,


the FOMC moved the funds rate in a sustained, persistent fashion in response
to changes in the rate of resource utilization in the economy.
In doing so, the FOMC moved the funds rate in response to sustained
changes in the output gap, but without any presumption about the magnitude of
the gap. Moreover, it abandoned any assumption of knowledge of a normal or
benchmark real interest rate and allowed changes in the funds rate to cumulate
without fear of overly high or low interest rates. The discipline on changes in
the funds rate made in response to sustained changes in the economy’s rate of
resource utilization came from a superimposed reaction to sharp increases in
bond rates interpreted as increases in expected inflation. That is, the FOMC
followed its LAW procedures subject to the constraint that financial markets
believed that funds rate changes would cumulate to whatever degree necessary
to prevent deviations of trend inflation from a low, stable value. The rule
stabilized the expectation of inflation and thus conditioned the price-setting
behavior of firms setting prices for multiple periods. Phrased alternatively,
the Fed’s reaction function abandoned the direct response to realized inflation
that had characterized the earlier stop-go period (Hetzel 2008a).
Several authors have characterized the monetary policy that followed the
Volcker disinflation (Goodfriend 1993, 2004b; Mehra 2001; Goodfriend and
King 2005; Hetzel 2008a, chs. 13–15; Hetzel 2012, ch. 8; Hendrickson
2012). The common strand in these accounts is the importance that FOMC
chairmen Volcker and Greenspan assigned to stability in inflationary expectations measured by moderate long-term bond rates and by the absence of
discrete jumps in bond rates. Stability of expected inflation meant not only a
low inflation premium in bond rates but also the decoupling of increases in the
inflation premium from the above-trend growth in output that had developed
in the stop-go era. The focus on expected inflation moved the FOMC away
from the direct response to inflation that had characterized the stop phases of
the preceding stop-go monetary policy.
The considerable stability in growth of potential output in the 1980s that
persisted through most of the 1990s meant that to achieve low, stable inflation
the FOMC had to engineer low, stable growth in nominal expenditure (GDP).
However, the FOMC lacked a nominal GDP target.26 Given the FOMC’s concern for inflationary expectations, the sensitivity of “bond-market vigilantes”
to a reemergence of the inflation that followed above-trend growth in the prior
stop-go era meant that the FOMC had to raise the funds rate promptly in
response to strong real growth. That behavior largely removed the cyclical
inertia in interest rates that had characterized the stop-go era.
26 The procedures are described in Section 3 in the paragraph that begins “With an interest
rate instrument and a LAW reaction function. . . .” The objective was stable trend inflation; however,
the intermediate target was stability in expected trend inflation. Only with stable growth in potential
output due to steady growth in productivity and labor are these LAW-with-credibility procedures
equivalent to nominal GDP targeting.

Figure 8 Growth Rates of Nominal and Real GDP
[Figure: four-quarter growth of nominal and real GDP, in percent, 1961–2011, with fitted trend lines.]
Notes: Quarterly observations of four-quarter percentage changes of real and nominal GDP. Trend lines fit to observations from 1960:Q1–1979:Q4 and 1985:Q1–2007:Q4. Shaded areas represent NBER recessions. Data from Haver. Heavy tick marks indicate fourth quarters.

Figure 8 shows the upward trend in nominal GDP growth that preceded the
Volcker disinflation and the moderate downward trend after the Volcker disinflation. After this disinflation and prior to 2008, the main cyclical fluctuations
in nominal GDP growth occurred in the last part of the 1980s and in the last
part of the 1990s. Each episode arose as an echo of the prior go-stop monetary
policy with the go phases initiated by FOMC concern for unwanted strength in
the foreign exchange value of the dollar and an associated reluctance to raise
the funds rate despite unsustainable growth rates in the real economy (Hetzel
2008a, chs. 14 and 16).

6. CONCLUDING COMMENT

This article has summarized quantity theory views and has provided a sampling
of the sort of historical narrative its proponents have used to buttress their
position that inflation is a monetary phenomenon and that cyclical fluctuations
derive from monetary shocks.

APPENDIX: POST-2008 QUANTITATIVE PROCEDURES

Since December 2008, when the FOMC lowered the funds rate basically to
zero, the relevant monetary regime has been reserve-aggregate targeting (quantitative operating procedures). The determining factor is that the FOMC’s
reaction function has set the size of its asset portfolio and, as a consequence,
the size of the monetary base. Given the public’s demand for currency, bank
reserves are exogenously given to the banking system. Since spring 2009,
through purchases known in the market as quantitative easing but within the
Fed as large scale asset purchases (LSAP), the FOMC has twice increased
the size of its asset portfolio.27 (In late 2011, reserves also increased when
foreign central banks drew on the Fed’s swap lines.)
For the given level of bank reserves, the banking system’s desire to decrease (increase) excess reserves determines the aggregate acquisition (sale)
of its assets and, as a result, the expansion (contraction) of bank liabilities.
Growth in bank deposits and in money follows. Given a well-defined demand
for real money, growth in money determines growth in nominal expenditure. Given the high level of demand by banks for excess reserves that arose
in response to the uncertainty created subsequent to the failure of Lehman
Brothers in September 2008 and the near-zero funds rate, since January 2009,
the monetary aggregate M2 (adjusted for flight-to-safety inflows) has grown
on average at a 4 percent annual rate. That rate of money growth has been
consistent roughly with 4 percent growth in nominal GDP. (For details, see
Hetzel [2012, postscript].)28
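The arithmetic linking these growth rates is the equation of exchange in growth-rate form (a sketch in standard notation; treating velocity as roughly constant over this period is an assumption of the illustration, not a result reported here):

    ΔM/M + ΔV/V ≈ ΔP/P + Δy/y,   so that   4% + 0% ≈ 4% nominal GDP growth.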
The following analysis assumes that the shock that created the 2008–2009
recession was monetary, not real (see Hetzel 2012, ch. 12). It follows that
the productive capacity of the economy did not contract and that the 8 percent
unemployment rate that existed in 2012 revealed a negative output gap. At
the same time, the Fed’s credibility for its inflation target of 2 percent has
set the expectational environment in which firms set dollar prices for multiple
periods. As a result, core inflation has been steadied at 2 percent.29 With
27 Although the LSAP purchases occurred in response to an unemployment rate in excess of
8 percent and core personal consumption expenditures (PCE) inflation of less than 2 percent, it is
unclear what the policy rule is.
28 In the period since fall 2008, to determine the resulting growth rate for nominal expenditure (output or GDP), one must remove the inflow of funds from the money market into the
too-big-to-fail banks precipitated by stress in financial markets. Such deposits are unrelated to
the transactions demand for money and nominal expenditure. Those inflows occurred discretely in
September 2008, in June and July 2011, and to a lesser extent at year-end 2011.
29 Inflation shocks due chiefly to increases in energy prices boosted inflation, especially starting in late 2010. The resulting transitory increase in inflation temporarily depressed output.


baseline inflation of 2 percent, nominal GDP growth of 4 percent allows for 2
percent growth in real GDP.
Assuming that the growth rate of potential output is 2 percent, real GDP
growth of 2 percent during the later stage of the economic recovery leaves the
negative output gap intact. The uncertainty created by a weak labor market
makes the public pessimistic about the future. That pessimism has engendered
low long-term real rates of interest.30 Moreover, it has made the natural rate
of interest (the short-term real interest rate consistent with full employment)
negative. A funds rate near zero combined with expected inflation of 2 percent
creates a negative real interest rate of about 2 percent. The natural real interest
rate must lie somewhat below this value in order to maintain a rate of real
GDP growth insufficient to eliminate the negative output gap.
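The real-rate arithmetic in this paragraph is the Fisher relation (a worked restatement of the numbers in the text, not additional data): the ex ante short-term real rate is the nominal funds rate less expected inflation,

    r = i − π^e ≈ 0% − 2% = −2%,

so the statement that the natural rate lies somewhat below the actual real rate amounts to the claim that the full-employment real rate is below roughly minus 2 percent.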
If the natural rate of interest lay significantly below the actual short-term
real interest rate, monetary contraction would ensue. The reason is that individual banks would sell assets in an attempt to place the reserves they gained
in the higher-yielding deposits offered by the Fed at an interest rate of .25
percent. Monetary contraction would depress nominal output growth and, with inflation of 2 percent, real growth would decline further below normal
for an economic recovery. With expected inflation remaining at 2 percent and
actual inflation steadied around 2 percent as a result, higher nominal GDP
growth would produce higher real GDP growth through a real balance effect
that stimulates nominal expenditure. Higher real growth would ultimately
raise the natural interest rate.
Since December 2008, the Fed has paid banks interest on reserves (IOR)
at 25 basis points. That innovation renders more complicated the classification
of the Fed’s operating procedures as reserve-aggregate targeting or interest
rate targeting. Whether allowing banks to lend to the central bank (IOR)
is consistent with reserve-aggregate targeting or with interest rate targeting
depends on the FOMC’s reaction function. Prior to December 2008, the
FOMC implemented an interest rate targeting regime (Hetzel 2012, ch. 14).
In a regime of interest rate targeting, the FOMC possesses a reaction
function that uses the interest rate as the policy instrument. The FOMC could
then use the level of IOR as the mechanism for setting the desired interest rate
target. In this case, given the interest rate target set equal to the value of the
IOR, the FOMC could expand the size of its asset portfolio without depressing
short-term interest rates below its rate target (Goodfriend 2000). For example,
the FOMC might want to purchase Treasury securities in order to expand the
size of its asset portfolio and, as a byproduct, increase bank excess reserves
as a way of providing banks a cushion against short-term funding problems.
Such an initiative would be consistent with limiting the extent of the financial
30 The assumption that the origin of this pessimism lies in a negative monetary shock differentiates this view from an animal spirit view.


safety net in which banks experiencing a run have unlimited access to the
discount window. Alternatively, if the FOMC wanted to use credit allocation
as an instrument, it could purchase mortgage-backed securities to lower the
yield difference between mortgages and Treasury securities. (That initiative
would not be a free lunch in that it would require a somewhat higher target for
the interest rate to maintain inflation at target.)

REFERENCES
Ackley, Gardner. 1961. Macroeconomic Theory. New York: The Macmillan
Company.
Balke, Nathan S., and Robert J. Gordon. 1986. “Appendix B: Historical
Data.” In The American Business Cycle: Continuity and Change, edited
by Robert J. Gordon. Chicago: The University of Chicago Press,
781–810.
Bennett, Barbara A. 1982. “‘Shift Adjustments’ to the Monetary
Aggregates.” Federal Reserve Bank of San Francisco Economic Review
(Spring): 6–18.
Board of Governors of the Federal Reserve System. 1943. Banking and
Monetary Statistics: 1914–1941. Washington, D.C.: Board of Governors
of the Federal Reserve System.
Bremner, Robert P. 2004. Chairman of the Fed: William McChesney Martin,
Jr. and the Creation of the American Financial System. New Haven,
Conn.: Yale University Press.
Cassel, Gustav. 1930. “President Hoover’s Mistake.” The West Australian, 17
February, 16. Available at: http://trove.nla.gov.au/ndp/del/article/
32393949?searchterm=gustav%20cassel&searchlimits=.
Fisher, Irving. 1896. Appreciation and Interest. New York: The Macmillan
Company.
Friedman, Milton. 1960. A Program for Monetary Stability. New York:
Fordham University Press.
Friedman, Milton. 1966. “What Price Guideposts?” In Guidelines, Informal
Controls, and the Market Place, edited by George P. Shultz and Robert
Z. Aliber. Chicago: The University of Chicago Press, 17–39.
Friedman, Milton. [1968] 1969. “The Role of Monetary Policy.” In The
Optimum Quantity of Money and Other Essays, edited by Milton
Friedman. Chicago: Aldine Publishing Company.


Friedman, Milton. 1969. “The Optimum Quantity of Money” (1969); “The
Lag in Effect of Monetary Policy” (1961); “The Supply of Money and
Changes in Prices and Output,” reprinted from The Relationship of
Prices to Economic Stability and Growth, 85th Congress, 2nd sess., Joint
Economic Committee Print (1958). In The Optimum Quantity of Money
and Other Essays, edited by Milton Friedman. Chicago: Aldine
Publishing Company.
Friedman, Milton. 1989. “The Quantity Theory of Money.” In The New
Palgrave Money, edited by John Eatwell, Murray Milgate, and Peter
Newman. New York: W. W. Norton, 1–40.
Friedman, Milton. 1992. Money Mischief: Episodes in Monetary History.
Orlando: Harcourt Brace & Company.
Friedman, Milton, and Anna J. Schwartz. 1963. A Monetary History of the
United States, 1867–1960. Princeton: Princeton University Press.
Friedman, Milton, and Anna J. Schwartz. [1963] 1969. “Money and
Business Cycles.” Review of Economics and Statistics 45 (February):
32–64; in The Optimum Quantity of Money and Other Essays, edited by
Milton Friedman. Chicago: Aldine Publishing Company.
Friedman, Milton, and Anna J. Schwartz. 1970. Monetary Statistics of the
United States. New York: National Bureau of Economic Research.
Friedman, Milton, and Anna J. Schwartz. 1982. Monetary Trends in the
United States and the United Kingdom. Chicago: University of Chicago
Press.
Friedman, Milton, and Rose Friedman. 1980. Free to Choose: A Personal
Statement. Orlando: Harcourt Inc.
Goodfriend, Marvin. 1987. “Interest Rate Smoothing and Price Level
Trend-Stationarity.” Journal of Monetary Economics 19 (May): 335–48.
Goodfriend, Marvin. 1993. “Interest Rate Policy and the Inflation Scare
Problem.” Federal Reserve Bank of Richmond Economic Quarterly 79
(Winter): 1–24.
Goodfriend, Marvin. 2000. “Overcoming the Zero Bound on Interest Rate
Policy.” Journal of Money, Credit and Banking 32 (November):
1,007–35.
Goodfriend, Marvin. 2004a. “The Monetary Policy Debate since October
1979: Lessons for Theory and Practice.” Paper for “Reflections on
Monetary Policy: 25 Years after October 1979,” Conference at the
Federal Reserve Bank of St. Louis, October 7–8.


Goodfriend, Marvin. 2004b. “Monetary Policy in the New Neoclassical
Synthesis: A Primer.” Federal Reserve Bank of Richmond Economic
Quarterly 90 (Summer): 3–20.
Goodfriend, Marvin, and Robert G. King. 2005. “The Incredible Volcker
Disinflation.” Journal of Monetary Economics 52 (July): 981–1,015.
Hendrickson, Joshua R. 2012. “An Overhaul of Federal Reserve Doctrine:
Nominal Income and the Great Moderation.” Journal of
Macroeconomics 34 (2): 304–17.
Hetzel, Robert L. 1985. “The Rules versus Discretion Debate over Monetary
Policy in the 1920s.” Federal Reserve Bank of Richmond Economic
Review 71 (November/December): 3–14.
Hetzel, Robert L. 1987. “Henry Thornton: Seminal Monetary Theorist and
Father of the Modern Central Bank.” Federal Reserve Bank of
Richmond Economic Review 73 (July/August): 3–16.
Hetzel, Robert L. 1995. “Why the Price Level Wanders Aimlessly.” Journal
of Economics and Business 47 (May): 151–63.
Hetzel, Robert L. 1998. “Arthur Burns and Inflation.” Federal Reserve Bank
of Richmond Economic Quarterly 84 (Winter): 21–44.
Hetzel, Robert L. 2008a. The Monetary Policy of the Federal Reserve: A
History. Cambridge: Cambridge University Press.
Hetzel, Robert L. 2008b. “What Is the Monetary Standard, Or, How Did the Volcker-Greenspan FOMCs Tame Inflation?” Federal Reserve Bank of Richmond Economic Quarterly 94 (Spring): 147–71.
Hetzel, Robert L. 2012. The Great Recession: Market Failure or Policy
Failure? Cambridge: Cambridge University Press.
Hetzel, Robert L. Forthcoming. “The Great Inflation.” In The Handbook of
Major Events in Economic History, edited by Randall Parker and Robert
Whaples. New York and Oxford: Routledge Publishing.
Hetzel, Robert L., and Ralph F. Leach. 2001a. “The Treasury-Fed Accord: A
New Narrative Account.” Federal Reserve Bank of Richmond Economic
Quarterly 87 (Winter): 33–55.
Hetzel, Robert L., and Ralph F. Leach. 2001b. “After the Accord:
Reminiscences on the Birth of the Modern Fed.” Federal Reserve Bank
of Richmond Economic Quarterly 87 (Winter): 57–64.
Hetzel, Robert L., and Yash Mehra. 1989. “The Behavior of Money Demand
in the 1980s.” Journal of Money, Credit, and Banking 21 (November):
455–63.


Hume, David. 1955. “Of Money.” In David Hume Writings on Economics,
edited by Eugene Rotwein. Madison, Wisc.: University of Wisconsin
Press, 33–46.
Humphrey, Thomas M. 1974. “The Quantity Theory of Money: Its Historical
Evolution and Role in Policy Debates.” Federal Reserve Bank of
Richmond Economic Review 60 (May/June): 2–19.
Humphrey, Thomas M. 1982. “The Real Bills Doctrine.” Federal Reserve
Bank of Richmond Economic Review 68 (September/October): 3–13.
Humphrey, Thomas M. 1983. “Can the Central Bank Peg Real Interest Rates?
A Survey of Classical and Neoclassical Opinion.” Federal Reserve Bank
of Richmond Economic Review 69 (September/October): 12–21.
Humphrey, Thomas M. 1990. “Fisherian and Wicksellian Price-Stabilization
Models in the History of Monetary Thought.” Federal Reserve Bank of
Richmond Economic Review 76 (May/June): 3–19.
Humphrey, Thomas M. 1999. “Mercantilists and Classicals: Insights from
Doctrinal History.” Federal Reserve Bank of Richmond Economic
Quarterly 85 (Spring): 55–82.
Humphrey, Thomas M. 2004. “Classical Deflation Theory.” Federal Reserve
Bank of Richmond Economic Quarterly 90 (Winter): 11–32.
Joplin, Thomas. [1823] 1970. Outlines of a System of Political Economy.
New York: Augustus M. Kelley.
Keynes, John Maynard. [1923] 1972. “A Tract on Monetary Reform.” In The
Collected Writings of John Maynard Keynes, vol. IX “Essays in
Persuasion.” London: The Macmillan Press.
Laidler, David. 1981. “Monetarism: An Interpretation and an Assessment.”
The Economic Journal 91 (March): 1–28.
Lucas, Robert E., Jr. [1972] 1981. “Expectations and the Neutrality of
Money.” In Studies in Business-Cycle Theory, edited by Robert E.
Lucas, Jr. Cambridge, Mass.: The MIT Press.
Lucas, Robert E., Jr. 2001. Lecture at Trinity University, April.
Mayer, Thomas. 1999. Monetary Policy and the Great Inflation in the United
States: The Federal Reserve and the Failure of Macroeconomic Policy,
1965–79. Northampton, Mass.: Edward Elgar.
McCallum, Bennett T. 1986. “Some Issues Concerning Interest Rate
Pegging, Price Level Determinacy, and the Real Bills Doctrine.” Journal
of Monetary Economics 17 (January): 135–60.
Mehra, Yash P. 2001. “The Bond Rate and Estimated Monetary Policy
Rules.” Journal of Economics and Business 53: 345–58.


Meltzer, Allan H. 2003. A History of the Federal Reserve, vol. 1, 1913–1951.
Chicago: University of Chicago Press.
Meltzer, Allan H. 2009. A History of the Federal Reserve, vol. 2, 1951–1969
and vol. 3, 1970–1986. Chicago: University of Chicago Press.
Officer, Lawrence H., and Samuel H. Williamson. 2012. “The Annual
Consumer Price Index for the United States, 1774–2011.” In Measuring
Worth. Available at www.measuringworth.com/growth/.
Patinkin, Don. 1965. Money, Interest, and Prices. New York: Harper & Row.
Pigou, Arthur C. 1917. “The Value of Money.” Quarterly Journal of
Economics 32 (November): 38–65.
Samuelson, Paul A. 1967. Economics: An Introductory Analysis. New York:
McGraw-Hill Book Company.
Samuelson, Paul, and Robert Solow. [1960] 1966. “Analytical Aspects of
Anti-Inflation Policy.” In The Collected Scientific Papers of Paul A.
Samuelson, vol. 2, no. 102, edited by Joseph Stiglitz. 1,336–53.
Thornton, Henry. 1939. An Enquiry into the Nature and Effects of the Paper
Credit of Great Britain (1802) and Two Speeches (1811), edited with an
Introduction by F. A. v. Hayek. New York: Rinehart and Co.
Timberlake, Richard H. 1993. Monetary Policy in the United States: An
Intellectual and Institutional History. Chicago: The University of
Chicago Press.
U.S. Congress. 1926–1927. “Stabilization.” Hearings before the House
Committee on Banking and Currency, pt. 2, 69th Cong., 1st sess., April,
May, and June 1926 and February 1927.
U.S. Congress. 1932. “Stabilization of Commodity Prices.” Hearings before
the Subcommittee of the House Committee on Banking and Currency
(Goldsborough Committee) on H.R. 10517. 72nd Cong., 1st sess., Parts
1 and 2, March 16–18, 21–22, 28–29, April 13–14.
U.S. Congress. 1959. “Employment, Growth, and Price Levels.” Hearings
before the Joint Economic Committee. 86th Cong., 1st sess., May 25–28.
White, Andrew Dickson. [1876] 1933. Fiat Money Inflation in France: How
It Came, What It Brought, and How It Ended. New York: D.
Appleton-Century Company.
Wicksell, Knut. [1898] 1962. Interest and Prices. New York: Augustus M.
Kelley.
Wicksell, Knut. [1935] 1978. Lectures on Political Economy. Fairfield, N.J.:
Augustus M. Kelley.


Wolman, Alexander L. 2001. “A Primer on Optimal Monetary Policy with
Staggered Price-Setting.” Federal Reserve Bank of Richmond Economic
Quarterly 87 (Fall): 27–52.
Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of
Monetary Policy. Princeton, N.J.: Princeton University Press.
Woodford, Michael. 2005. “Central-Bank Communication and Policy
Effectiveness.” Prepared for the Federal Reserve Bank of Kansas City
Conference “The Greenspan Era: Lessons for the Future,” Jackson Hole,
Wyo., August 25–27.

Economic Quarterly—Volume 98, Number 2—Second Quarter 2012—Pages 111–138

The Performance of Non-Owner-Occupied Mortgages During the Housing Crisis
Breck L. Robinson

The past decade has seen a dramatic rise and fall in home values within
the United States, causing policymakers to contemplate their cause
and solution. Much of the attention and blame for the rise in mortgage
delinquencies and foreclosures has been attributed to mortgages that were
originated to subprime homeowners. These homeowners have high loan-to-value ratios, high debt-to-income ratios, low credit scores, and little or no
documentation of income.1 In addition to rising risk, other factors contributed
to the housing crisis. For example, changes in economic conditions like higher
unemployment rates led to a lower capacity for homeowners to meet their
mortgage obligations.2
In an attempt to limit the impact of the housing crisis and to help stimulate a housing recovery, policymakers have proposed a number of foreclosure
mitigation programs to help homeowners that are owner occupants. Unfortunately, none of the programs initiated to help slow down the housing crisis
Robinson is an associate professor in the School of Public Policy and Administration at the
University of Delaware and a visiting scholar in Banking Supervision and Research at the
Federal Reserve Bank of Richmond. For their advice and assistance, the author thanks Larry
Cordell, Anne Davlin, Ron Feldman, Fred Furlong, Michael Grover, Juan Carlos Hatchondo,
Marianna Kudlyak, Crystal Myslajek, Jordan Nott, Eli Popuch, Ned Prescott, Dan Rozycki,
Michael Schramm, Laura Smith, Richard Todd, and Neil Willardson. The views expressed in
this article are those of the author and do not necessarily reflect those of the Federal Reserve
Bank of Richmond or the Federal Reserve System. The data reported are from staff calculations based on data provided by LPS Applied Analytics. E-mail: breck.robinson@rich.frb.org.
1 Doms and Krainer (2007) find an increase in riskiness in the pool of potential homeowners
around the expansion of the subprime market. Bhardwaj and Sengupta (2008) find that underwriting
standards did not universally decline prior to the housing crisis, especially in the subprime market.
2 Campbell and Dietrich (1983) and Deng, Quigley, and van Order (2000) find a positive
relationship between unemployment rate and mortgage default.


were able to achieve the goals stated by policymakers.3 One possible reason
why these programs have not been successful is that there are multiple causes
of the housing crisis. The lack of success of these programs suggests that a
public policy approach to combat mortgage delinquencies and foreclosures
needs to be flexible and multidimensional in order to be effective.
One segment of the housing market that has received little attention from
policymakers and the press is the plight of homeowners who do not reside in
their home. This group of homeowners is typically identified as non-owner
occupants, which includes investors in residential properties and owners of
vacation homes. The 2003 American Housing Survey produced by the Harvard Center for Housing Studies found that 35 percent of American renters lived in single-unit housing and another 21 percent lived in two- to four-unit structures.
In other words, non-owner occupants provided about half of the single-family
housing for renters in the United States in 2003. However, it is not known
how mortgages to non-owner occupants performed during the housing crisis.
In this article, I investigate the size and importance of the non-owner
occupant housing market prior to the housing crisis. Using two nationally
representative data sets for mortgage originations, I show that prior to the
housing crisis, the size of the mortgage market for non-owner occupants grew
at a faster pace when compared to owner occupants.
In order to explore the impact of the housing crisis on non-owner occupants, I use two measures of mortgage performance. The first is foreclosure
rates. Using aggregate data on owner and non-owner occupant mortgages,
the results show that foreclosure rates for the two groups are similar. One
possible explanation for this result is that non-owner occupants are held to a
higher underwriting standard, which may help mitigate perceived differences
in default.
The second measure of mortgage performance is a prevalence and performance index. Using state-level data, a number of states received high impact
measures during the housing crisis, but the source of their impact varies by
region of the country. For example, the driving factor for a high impact ratio
for states in the Midwest is poor mortgage performance. This result is very
different for states in the South and West where high impact ratios are driven
mostly by a high concentration of non-owner occupant mortgages. In other
words, the impact of the housing crisis on non-owner occupied mortgages is
widespread, but the source of their impact varies by region. Not surprisingly,
the states identified in this study are the same states that are identified in the
press as being the hardest hit during the housing crisis.
3 Programs that have been initiated during the housing crisis include the Federal Deposit
Insurance Corporation’s “Mod in a Box,” Hope for Homeowners, and the Home Affordable
Mortgage Program, among others.


The remainder of this article is organized as follows: Section 1 presents
a discussion on the motivation behind foreclosure, and Section 2 provides a
discussion of the data. Section 3 provides evidence of the impact of foreclosures in the housing market from non-owner occupants, and the results are
summarized in Section 4.

1. MOTIVATION BEHIND FORECLOSURE: THEORY

Policymakers have wondered if mortgage holders who use their homes as investment properties played a role in contributing to the current housing crisis.
“Investors” are defined as those individuals who do not use their home as their primary residence but rather use it to generate revenue. Answering this question is difficult—the main hurdle facing researchers is that data limitations make it hard to measure home purchase activity for this particular group of homeowners. However, data are available if I broaden my focus to include both
investors and owners of second homes. I define a second homeowner as an
individual who purchases a non-primary residence for recreational use as an
occasional or seasonal residence. This broader group of investors and second
homeowners is defined as non-owner occupants.4
The post-origination performance of mortgages has been the subject of
academic research, but almost all of it, both empirical and theoretical, has
focused on the performance of homeowners in their primary residence.5 A
few studies of owner occupants indirectly provide insight into the behavior of
non-owner occupants. For example, studies like Cowan and Cowan (2004)
and Immergluck and Smith (2004) find that foreclosure rates are higher among
non-owner occupants even after controlling for credit scores and other risk
factors.
They contend that the decision of the mortgage holder to become delinquent or enter foreclosure depends on the homeowner’s ability and willingness to
repay his/her mortgage. Other studies, including Bajari, Chu, and Park (2008)
and Haughwout, Peach, and Tracy (2008), find that non-owner occupants are
more likely to exercise their option to default even after controlling for other
factors. In addition, Gerardi, Shapiro, and Willen (2008) find a similar result
when using condominiums and multifamily dwellings as proxies for loans to
non-owner occupants.
There are two theories that have been proposed to help explain why
homeowners enter foreclosure: “trigger-event” theory and “options” theory.6
Trigger-event theory states that an individual may experience a life-changing
4 A single-family home is defined as a detached structure with one to four residents, a townhouse, or a condominium contained in a larger building but available for sale separately.
5 Quercia and Stegman (1992) and Vandell (1995) provide a nice review of the literature.
6 See Avery, Brevoort, and Canner (2008).


event that negatively impacts the homeowner’s ability to meet his/her financial obligations. Typically, trigger events lead to disruptions in income or an
expansion in expenses due to a loss of employment, change in marital status,
or a health-related event.7 In other words, when a financial hardship occurs,
the homeowner is less likely to remain current on his/her mortgage and other
financial obligations.
While trigger-event theory helps explain mortgage foreclosures that are
driven by factors outside the homeowner’s direct control, it does not explain
those situations where the homeowner is making a conscious decision to allow
foreclosure to occur, even when the homeowner’s current financial situation
may not have changed. Under option theory, the homeowner has an incentive
to walk away from his/her home when the value of the home is less than
the amount owed on the mortgage.8 Option theory does not suggest that
homeowners will always walk away from their home or will do so immediately
after finding themselves in a negative equity position. Homeowners may
delay exercising their option to pay their mortgage if they believe that housing
values are not likely to increase in the near future.9 In addition, homeowners
may not exercise their foreclosure option even if they believe housing values
may not appreciate to the full cost of the home given that foreclosure is not
without costs. Owner occupants face sizeable transaction costs associated
with foreclosure that directly impact the homeowners’ credit reports and will
reduce their access to credit and increase borrowing costs in the future.
Gerardi, Shapiro, and Willen (2008) use a theoretical model for foreclosure where homeowners enjoy a stream of monetary and non-monetary
benefits from homeownership. I contend that ownership status is a factor in
determining how quickly a homeowner will initiate foreclosure under option
theory. The reason for a difference in the value of the foreclosure option associated with ownership status is that different owners place different values
on the financial and non-financial benefits associated with homeownership.
For example, given that owner occupants reside in the home full time where
they are likely to develop an emotional attachment to their home, they are
likely to place a higher value on the non-monetary benefits associated with
homeownership. As a result, owner occupants are less likely to exercise their
foreclosure option even when there is a decline in the potential monetary
benefits of homeownership.
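One compact way to express this reasoning (an illustrative condition, not the model used in the studies cited) is that a borrower exercises the foreclosure option only when negative equity exceeds the combined costs of default and the capitalized non-monetary benefits of keeping the home:

    default if   B − H > c + b,

where B is the unpaid mortgage balance, H the current value of the home, c the transaction and credit-record costs of foreclosure, and b the value the owner places on the non-monetary benefits of occupancy. On this reading, owner occupants have a large b and so tolerate deeper negative equity, investors have b near zero and so default sooner, and second homeowners fall in between.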
At the other extreme, non-owner occupants are more likely to initiate
foreclosure, for they are more likely to place a higher value on the potential
7 See Vandell (1995); Elmer and Seelig (1999); and Deng, Quigley, and Van Order (2000).
8 Bajari, Chu, and Park (2008); Foote, Gerardi, and Willen (2008); Haughwout, Peach, and

Tracy (2008); and Bhutta, Dokko, and Shan (2010) find that negative equity is highly correlated
with higher default rates.
9 See Hendershott and Van Order (1987); Kau, Keenan, and Kim (1994); and Kau and Keenan
(1995).


monetary benefits of homeownership. Since non-owner occupants do not
reside in the home full time, they consume less of the non-monetary benefits
of homeownership.10 As a result, non-owner occupants will place a higher
value on rental income and capital gains generated from homeownership,
net of holding costs.11 Among the groups of non-owner occupants, second
homeowners represent a hybrid group, for they are likely to fall in between
investors and owner occupants regarding their willingness to exercise their
foreclosure option. They place a higher value on their foreclosure option when
compared to owner occupants, but a lower value relative to investors. One
reason why second homeowners are different is that they occupy their second
homes more frequently than investors, causing them to place a higher value on
the non-financial benefits of homeownership. However, second homeowners
are more likely to place a higher value on the monetary benefits of their second
home when compared to the primary residence inhabited by owner occupants.
Specifically, second homeowners are more likely to use their second home to
generate rental income and are more likely to dispose of their second home if
a financial opportunity arises.

2.

DATA

The data used in this article come from LPS Applied Analytics, Inc. I collect
loan origination data for the time period 2004–2007. Individual loans originated for my sample are followed starting at their origination date and ending
when the loan is foreclosed or refinanced. If the loan remains active for the
whole time period, then the last observation date of record is June 2011. LPS
provides loan-level data compiled from the largest loan servicers and covers
around 67 percent of the U.S. mortgage market for the period analyzed in this
study.12 One of the benefits of the LPS data is that loan-level information is
available at the time of origination, including the risk characteristics of the
borrower. Specific variables such as loan amount, appraisal value, and borrower income can be found in the data. In addition, LPS provides information
that can be used as a proxy for the riskiness of the borrower. Such information includes the borrower’s credit rating (FICO score), loan-to-value, and
debt-to-income ratios at the time of origination.
10 “Ruthlessness” is an extreme variant of “option” theory. In this case, the borrower is
assumed to have no emotional attachment to his/her home, creating an incentive for the borrower
to view the homeownership decision strictly as a financial transaction.
11 Gerardi, Shapiro, and Willen (2008) note that non-owner occupants face a number of disadvantages; for example, public assistance to delinquent borrowers usually targets owner occupants. In
addition, since non-owner occupants must either forgo rental income or seek tenants, the search for
tenants involves administrative costs and risks, like property damage from renting or lost income
when tenants fail to remain current on their rent.
12 Cordell, Watson, and Thomson (2008) provide additional detail and insight into the LPS
data.


In addition to applicant information that is reported at the time the loan
is originated, LPS provides loan-level performance data for each month the
loan remains active with the reporting loan servicer. For example, the borrower’s payment status is provided (payment, prepayment, or default), including whether the borrower’s loan is current or 30-, 60-, or 90-days delinquent.
For the purposes of this study, I will focus my attention on loan defaults that
are caused by foreclosure.
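The calculation implied by this setup can be sketched as follows (hypothetical file and column names; the LPS extract itself is proprietary, so this illustrates the bookkeeping rather than reproducing the article's code):

    import pandas as pd

    # Hypothetical inputs: one row per loan at origination, and one row per
    # loan-month of servicing history, as described in the text.
    loans = pd.read_csv("lps_originations_2004_2007.csv")
    perf = pd.read_csv("lps_monthly_performance.csv")

    # Flag loans that ever enter foreclosure between origination and the last
    # observation (foreclosure, refinance, or June 2011, whichever comes first).
    ever_fc = (perf.assign(fc=perf["status"].eq("foreclosure"))
                   .groupby("loan_id")["fc"].any()
                   .rename("ever_foreclosed")
                   .reset_index())

    loans = loans.merge(ever_fc, on="loan_id", how="left")
    loans["ever_foreclosed"] = loans["ever_foreclosed"].fillna(False)

    # Foreclosure rate by occupancy type (owner occupant, second home, investor)
    # and year of origination, mirroring the aggregate comparison in Section 3.
    rates = loans.groupby(["occupancy_type", "origination_year"])["ever_foreclosed"].mean()
    print(rates)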
Additional data used in this study are provided by the Home Mortgage
Disclosure Act (HMDA) for the time period 1996–2006. The HMDA data
provides loan-level information for borrowers at the time of origination. One
of the primary differences between the LPS and HMDA data sets is that HMDA
provides loan-level information by lender and includes information on the race
and sex of the borrower.
As a supplement to the HMDA data, data obtained from the National
Association of Realtors (NAR) are used to acquire home sales data for the
time period 2003–2007. One of the benefits of using the NAR data is that
home sales to non-owner occupants can be disaggregated into either second
homeowners or investors. The HMDA data distinguishes between owner and
non-owner occupants at the time of origination, but within the non-owner
occupant group, HMDA does not provide the same level of detail.

3. EVIDENCE OF SIGNIFICANCE AND IMPACT ON FORECLOSURES TO NON-OWNER OCCUPANTS

Growth in Mortgage Market
Before I start to explore the role that non-owner occupants played in the
housing crisis by analyzing measures like foreclosure rates, I need to identify
the size of the housing market controlled by non-owner occupants.13 Using
data from HMDA, Figure 1 provides a breakdown of the number of mortgages
originated by ownership type. In 2000, the role non-owner occupants played
in the home purchase market was fairly small, constituting about 8 percent of
the first lien mortgages originated. However, the share of first lien mortgages
originated to non-owner occupants increased over time, reaching a peak of
almost 16 percent in 2005.
Data on home sales is provided by NAR in Figure 2. In 2003, home sales
for primary residents were slightly more than 4.5 million units, while home
sales to non-owner occupants were almost 2.25 million units. Similar to the
pattern observed in the HMDA data, home sales to non-owner occupants were
largest in 2005, where they reached almost 3.5 million units.
13 Avery, Brevoort, and Canner (2008) find a positive relationship between the share of mortgages originated to non-owner occupants and delinquency rates.


Figure 1 Mortgage Originations for Home Purchase by Occupancy Type
[Figure: annual counts of owner-occupied and non-owner-occupied home-purchase mortgage originations, in millions, 1996–2006.]

When using growth rates as a measure of the significance of the housing
market controlled by non-owner occupants, the data presented in Figure 2 show
a similar story. Data from the NAR for the time period prior to the housing
crisis show that home buying activity by non-owner occupants grew faster
than activity by owner occupants. For example, during the time period 2003–
2005, home purchases by investors increased almost 50 percent, while home
purchases to owner occupants rose just 6.4 percent. A similar result exists
when using HMDA data. For example, home purchase mortgage originations
to non-owner occupants rose 84 percent between 2003 and 2005. Over the
same time period, originations to owner occupants rose by 36 percent.
The data from both HMDA and NAR show that non-owner occupants
played a sizeable and increasing role in the housing market prior to the housing
crisis.14 However, while the size of the market for non-owner occupants
provides added justification for studying them separately as an ownership
14 The results presented using HMDA and NAR data show that non-owner occupants played
an increasing role in the housing market prior to the housing crisis. However, the relative size
of mortgages originated to non-owner occupants is quite different between the two data sources.
There are several explanations why HMDA and NAR provide different numbers. For example,
the data provided in NAR come from a survey, whereas HMDA captures actual home purchases
using mortgages. However, HMDA data may not provide a complete picture of originations in the
mortgage market. For example, HMDA only provides data on homes that are purchased using a
mortgage (no cash purchases). In addition, reporting requirements cause HMDA to underrepresent
mortgages originated by small lenders and lenders in rural markets. Also, for those homeowners
who use both a first and second mortgage to purchase a home, both mortgages show up in the data
as if two separate homes were being purchased. Avery, Brevoort, and Canner (2008) show that first and second mortgages for single home purchases increased substantially prior to the housing crisis (2004–2006).


Figure 2 Home Sales for Owner Occupants, Second Homeowners, and Investors
[Figure: annual home sales of primary residences, vacation homes, and investment properties, in million units, 2003–2007.]

group, their size and growth leading up to the housing crisis does not mean
that they played a significant role.15

Housing Prices
I argue above that non-owner occupants are more likely to view homeownership as a profit opportunity. This suggests that non-owner occupants may
be attracted to areas of the country where housing values are rising more
rapidly. It is also possible that this causation will flow in the opposite direction. For example, non-owner occupants may expect housing values to increase in the future, causing them to buy housing in anticipation of future price
appreciation.
While the direction of the relationship between home prices and non-owner occupant buying activity will not be determined in this study, I can
identify the strength of this relationship prior to the housing crisis. Figure
3 uses state data to compute cross-sectional correlations between the lagged
15 HMDA and LPS data may underreport the number of mortgages held by non-owner occupants. Both HMDA and LPS depend on homeowners to self-report the occupancy status of the home, and LPS also relies on self-reporting to distinguish second homes from investment properties. Differences in underwriting standards based on the occupancy and usage of the home create an incentive to misreport homeowners' true intentions.


Figure 3 Cross-State Correlations Between Non-Occupants' Share of Home-Purchase Mortgage Originations and Annual Percentage Changes in Housing Prices, 1997–2006
[Figure: annual bars for three correlations (no lags; mortgage share lagged one year; price change lagged one year), ranging from about −0.40 to 0.60, 1997–2006.]

one-year percentage change in home prices and the share of mortgage originations to non-owner occupants (red bar). Because the direction of the causation is unknown, a correlation is also calculated between the lagged one-year
share of mortgages originated to non-owner occupants and the percent change
in home values (yellow bar). Lastly, the green bar shows the correlation between both variables using no lags. Coming into the 21st century, the two
variables in Figure 3 show a strong negative correlation. In other words,
the relationship between mortgage origination activity by non-owner occupants and home prices is negative. However, contrary to the earlier results,
all three correlation measures experience a strong positive relationship prior
to the housing crisis. These results show that the share of originations to
non-occupant owners either mirrored appreciation in home prices or were a
contributing factor. In other words, the results indicate that, just prior to the


housing crisis, non-owner occupants played a positive role in contributing to
a run-up in housing prices.16,17
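The correlations reported in Figure 3 can be sketched in the same spirit (a hypothetical state-year panel with assumed column names standing in for the HMDA origination shares and house-price changes actually used):

    import pandas as pd

    # Hypothetical state-year panel with columns: state, year,
    # nonowner_share (share of originations to non-owner occupants),
    # and hpi_change (annual percentage change in housing prices).
    panel = pd.read_csv("state_year_panel.csv")
    panel = panel.sort_values(["state", "year"])

    # One-year lags within each state.
    panel["share_lag1"] = panel.groupby("state")["nonowner_share"].shift(1)
    panel["price_lag1"] = panel.groupby("state")["hpi_change"].shift(1)

    # For each year, correlate across the 50 states and D.C.: contemporaneous,
    # price change lagged one year, and mortgage share lagged one year.
    for year, g in panel.groupby("year"):
        print(year,
              round(g["nonowner_share"].corr(g["hpi_change"]), 2),   # no lags
              round(g["nonowner_share"].corr(g["price_lag1"]), 2),   # price change lagged
              round(g["share_lag1"].corr(g["hpi_change"]), 2))       # mortgage share lagged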

Risk Characteristics
As stated previously, theory suggests that non-owner occupants are expected
to have a higher mortgage foreclosure rate. If theory parallels reality, I would
expect higher underwriting standards for non-owner occupants in order to help
mitigate the higher perceived risk associated with their foreclosure option.18
Higher underwriting standards for non-owner occupants would take the form
of higher income and FICO scores, together with lower loan-to-income and debt-to-income ratios and smaller mortgage amounts.
Figures 4, 5, 6, and 7 use data from HMDA and LPS at the time of
origination to compare borrower risk profiles for owner occupants and non-owner occupants. Using HMDA data, Figure 4 reports borrower income by
state for first lien originations disaggregated by occupancy type. The data
show that owner occupants have a lower median income when compared to
non-owner occupants. This result is even more striking given that only in the
case of owner occupants that purchased homes in California does the average
income meet or exceed the median income level for non-owner occupants. In
a typical state, the median income for owner occupants ranges from $60,000
in 2004 to $65,000 in 2007. For the same time period, the median income
for non-owner occupants is considerably higher, ranging from $100,000 to
$125,000 for the same period.
To observe differences in credit quality beyond income by ownership type, it is necessary to use LPS data. As mentioned earlier, the LPS data make it possible to identify homeowners who classify themselves as either investors or second homeowners. Based on the discussion above, non-owner occupants are more likely to exercise their foreclosure option than owner occupants. Within the non-owner occupant group, second homeowners are expected to be less likely to exercise their foreclosure option than investors because second homeowners place more value on the non-monetary benefits associated with homeownership.
16 For the years just prior to the housing crisis, the highest correlations in Figure 3 are for
the contemporaneous results and the lowest correlations are when the housing price appreciation
variable is lagged. It is also important to note that there is a slight asymmetry between the two
correlations using lagged variables, but this result is similar to Wheaton and Lee’s (2008) findings
on lead-lag relationships between sales and prices for total home purchases.
17 Wheaton and Nechayev (2008) find that the share of non-owner occupant mortgage activity
is positively correlated with errors in forecasting housing prices.
18 See Vandell (1995) for a survey of the empirical literature on mortgage default and a
discussion of individual variables in the default decision.


Figure 4 Distributions Across States for the Median Income of Owner Occupant and Non-Owner Occupant Mortgage Borrowers, by Loan Purpose and Year of Origination (HMDA)
Panel A: Median Borrower Income by Occupancy (Home Purchase). Panel B: Median Borrower Income by Occupancy (Refinance).
Notes: The data are median applicant income for each state and the District of Columbia, segregated by occupancy. The line in each box represents the median incomes across the 50 states and the District of Columbia. Each box covers the interquartile range for income (25th percentile and 75th percentile) of the distribution. The "whiskers" extend beyond the box either to the end of the distribution or to a length of 1.5 times the interquartile range, whichever comes first. The dots beyond the whiskers are classified as extreme outliers and are identified by their state code.

Figure 5 presents FICO scores for occupant owners, second homeowners, and investors. The data in Figure 5 show that, at the time of origination, occupant owners consistently have a lower median FICO score than non-owner occupants. Between the two groups of non-owner occupants, second homeowners consistently have the higher median FICO score.


Figure 5 Distribution Across States for the Mean FICO Score of Owner Occupant and Non-Owner Occupant Mortgage Borrowers, by Year of Origination (LPS)
Notes: The data represent the mean FICO score for each state and the District of Columbia, segregated by occupancy. Loans that have a loan-to-value ratio that exceeds 400 are dropped from the sample.

This result seems surprising given the earlier hypothesis that second homeowners would be less likely than investors to exercise their default option. One possible explanation is that investors may be less likely to initiate default when a "trigger event" occurs, such as a loss of employment, because investors can use the investment property as a source of income to help cover expenses. A similar trigger event may lead to foreclosure for second homeowners, given that they already have a primary residence and the non-monetary benefits from owning the second home may become less important under financial hardship. Another possible explanation for the higher FICO scores of second homeowners is that purchasing an additional home primarily for enjoyment requires more financial resources; a stronger financial position would translate into a higher FICO score.

With respect to the loan-to-value and debt-to-income ratios, homeowners, regardless of ownership status, will place a higher value on the foreclosure option if they hold less equity in their homes or hold more overall debt.19,20


Figure 6 Distribution Across States for Loan-to-Value Ratios for Owner Occupant and Non-Owner Occupant Mortgage Borrowers, by Year of Origination (LPS)
Notes: Loan-to-value ratios here are computed as the original principal of the mortgage divided by the appraised value of the property. Loans that have a loan-to-value ratio that exceeds 400 are dropped from the sample.

Figures 6 and 7 show that occupant owners on average have higher loan-to-value and debt-to-income ratios than non-owner occupants. However, based on the medians, the loan-to-value ratios do not indicate that homeowners in any group were reaching in order to purchase a home.21
19 Data for loan-to-value are derived at the time the loan is originated, but this value may
have changed if the servicer has performed a post-origination appraisal on the property. As a result,
an unknown percentage of the mortgages in the sample may have post-origination loan-to-value
numbers.
20 Von Furstenberg (1969) and Campbell and Dietrich (1983) find evidence that initial loan-to-value ratios alone are significant predictors of default.
21 It is important to note that it is not possible to make a definitive statement about the
financial capacity of borrowers without knowing additional financial information. For example,
a homeowner could have a mortgage that includes a relatively high interest rate, yet still have
a relatively low debt-to-income or loan-to-value ratio. In other words, the interest expense on
outstanding debts could be an important factor in assessing financial capacity, but this information
is not part of the calculation used to determine these numbers.


Figure 7 Distribution Across States for Debt-to-Income Ratios of Owner Occupant and Non-Owner Occupant Mortgage Borrowers, by Year of Origination (LPS)
Notes: Loans that have a loan-to-value ratio that exceeds 400 are dropped from the sample.

Within the non-owner occupant group, investors consistently have a higher risk profile than second homeowners. This result is surprising given that investors also have lower FICO scores than second homeowners. As stated earlier, it is possible that underwriters view the potential income from an investment property as a mitigating factor in the underwriting decision.
In summary, the data in Figures 4–7 show that non-owner occupants have higher median incomes, higher FICO scores, and lower debt-to-income and loan-to-value ratios than owner occupants. These measures are all consistent with non-owner occupants being held to a higher underwriting standard. Given their lower risk profile, non-owner occupants might be expected to have access to larger amounts of mortgage credit. However, lenders may restrict the size of loans that non-owner occupants can receive in an attempt to reduce their exposure to default. In Figure 8, I use HMDA data to compare the mortgage amount at the time of origination for owner and non-owner occupants. The data show that the median mortgage amount for non-owner occupants is smaller. This result is consistent with the earlier findings that lenders use higher underwriting standards for non-owner occupants to help mitigate the increased probability of default associated with the higher value of their default option.


Figure 8 Distribution Across States for the Median Loan Amount for Owner Occupants and Non-Owner Occupants, by Loan Purpose and Year of Origination (HMDA)
Panel A: Median Mortgage Amount by Occupancy (Home Purchase). Panel B: Median Mortgage Amount by Occupancy (Refinance).
Notes: The data are median mortgage amounts for each state and the District of Columbia, segregated by occupancy. Each box covers the 25th to 75th percentile; the line in the box is the median.

It is interesting to note that, just prior to the housing crisis, a number of the risk measures mentioned above were holding steady or declining.

Foreclosures
As discussed above, I believe that non-owner occupants have a stronger financial incentive to exercise their foreclosure option than owner occupants. However, it is difficult to predict whether the foreclosure rate for non-owner occupants will be higher, given that this group has stronger financial characteristics. It is possible to observe differences in foreclosure patterns


between owner and non-owner occupants using LPS data. In this study, a mortgage is considered to be in foreclosure if it enters the foreclosure process at any point following origination. For example, mortgages originated in 2004 to non-owner occupants
experience a foreclosure rate of 6.9 percent, which is slightly lower than the
7.5 percent foreclosure rate for owner occupants for the same origination
year. As discussed earlier in the article, it would be expected that foreclosure
rates for non-owner occupants would rise faster than for owner occupants as
economic conditions deteriorate, causing the potential financial benefit from
homeownership to decline. As expected, as the housing crisis started to unfold, foreclosure rates for both groups started to rise, but foreclosure rates
grew faster for non-owner occupants. For example, in 2005, foreclosure rates
for non-owner occupants exceeded those of owner occupants and remained
higher throughout the sample period. For the years 2005–2007, foreclosure
rates for non-owner occupants were 12.8 percent, 20.0 percent, and 17.4 percent, respectively. During the same years, the foreclosure rates for owner
occupants were 12.3 percent, 18.7 percent, and 15.1 percent, respectively.
While the data show that foreclosure rates for non-owner occupants grew faster during the housing crisis, there has been little discussion of how housing market performance differs by ownership status at the state level. To explore this relationship, Figure 9 plots the foreclosure rates on non-owner-occupied mortgages against those on owner-occupied mortgages, by state and year of origination. The 45-degree line represents equality, where foreclosure rates for owner occupants and non-owner occupants are the same. Consistent with the results presented above for the United States as a
whole, most states lie near the 45-degree line, but there is a movement above
the 45-degree line over time. In other words, for loans originated in 2004,
more states experience higher foreclosure rates among mortgages originated
to owner occupants. However, this relationship starts to change in 2005 as
more states experience higher foreclosure rates among non-owner occupants.
A few states that have not received much attention in the press experience
relatively high foreclosure rates for non-owner occupants. Specifically, the
symbols for Michigan, Indiana, and Ohio are well above the 45-degree line,
which means that in these states non-owner occupants are distinctly more
likely to be in foreclosure. The symbols for some of the Sunbelt states like
California, Florida, and Nevada, which have been identified in the press for
having high foreclosure rates, are shown to be near or below the 45-degree
line in most years. In other words, foreclosure rates among owner and non-owner occupants are roughly the same in these states. Given that Florida
and Nevada are destination states for vacation homeowners, it was somewhat
surprising that the relative foreclosure rates for non-owner occupants were not
higher.


Figure 9 A Cross-State Comparison of Foreclosure Rates on Non-Owner-Occupied and Owner-Occupied Mortgages, by Year of Origination
Panel A: 2004 Foreclosure Rates by Occupancy. Panel B: 2005 Foreclosure Rates by Occupancy. Panel C: 2006 Foreclosure Rates by Occupancy. Panel D: 2007 Foreclosure Rates by Occupancy. Each panel plots the non-owner-occupied foreclosure rate (percent) against the owner-occupied foreclosure rate (percent).
Notes: Home purchase and refinance originations from LPS.

The Spatial Pattern of Foreclosures
As noted earlier, foreclosure rates were trending higher for non-owner occupants nationally and at the state level prior to the housing crisis. However, the role non-owner occupants played in the housing crisis cannot be observed simply by studying foreclosure rates, because doing so would implicitly assume that the shares of loans originated to owner occupants and to non-owner occupants were the same. It is therefore necessary to recognize that the share of mortgages originated to non-owner occupants varies across states and that this variation may play a role in determining the impact non-owner occupants had on the housing crisis.22
22 Doms, Furlong, and Krainer (2007) find a strong relationship between the share of mortgages originated to investors and delinquency rates among subprime borrowers.


Figure 10 Share of Foreclosures that Involve Non-Owner-Occupied Properties (LPS Data for Mortgages Originated in 2006)
Map legend: less than 8 percent; 8–10 percent; 10–12 percent; 12–14 percent; more than 14 percent.
Cartographer: Michael Grover and Eli Popuch, February 2010.
Source: Breck Robinson, FRB Richmond, ESRI.

Figure 10 uses LPS data for mortgages originated in 2006 to show that the share of foreclosures attributed to non-owner occupants varies significantly across states. For example, that share is 7 percent in California but 19 percent in Florida.
While the share of foreclosures among non-owner occupants is greater
than the national average for a number of states, it would be inaccurate to
characterize the role non-owner occupants played in the housing crisis in
these states as problematic. For example, in states with relatively few foreclosures overall, the incidence of foreclosures on non-owner-occupied properties
could be low in an absolute sense and yet account for a high share of the
state’s few foreclosures. Conversely, in states with many foreclosures, nonowner-occupied properties could account for a relatively low share of overall
foreclosures and yet be much more common when compared to a state that has
few foreclosures. This is not just a hypothetical issue. For example, in Figure


10 the share of foreclosures among non-owner occupants was roughly equal in
Alabama and Arizona at 12 percent. However, foreclosure rates on mortgages
originated in 2006 to owner occupants were about two-and-a-half times higher
in Arizona than in Alabama. As a result, using the share of foreclosures among
non-owner occupants to measure impact will understate the role non-owner
occupants played in the housing crisis in Arizona and overstate their role in
Alabama.
Given these shortcomings of using the share of foreclosures to gauge impact, a more comprehensive measure is needed that incorporates both the prevalence and the performance of non-owner occupant mortgages. To observe the impact of foreclosures by non-owner occupants, impact is broken down into two components: prevalence and performance. I define the prevalence measure as the number of non-owner occupant mortgages divided by the total number of housing units, by year of origination.23 I could have used an alternative measure of prevalence, in which the denominator is the total number of first-lien home purchase plus refinanced mortgages in the same calendar year. Results for this "per mortgage" measure of prevalence for non-owner occupant mortgages are qualitatively similar to the "per housing unit" results. In the analysis, I use the "per housing unit" measure because it is not sensitive to year-to-year fluctuations in mortgage lending activity, as noted by Mayer and Pence (2008). The other component of the impact measure is performance. I define performance as the number of foreclosures on non-owner occupant mortgages divided by the total number of non-owner occupant mortgages, by year of origination. The product of these two measures is the number of foreclosures on non-owner occupant mortgages divided by the total number of housing units:

Impact = Prevalence × Performance.
Using LPS data for home purchases and refinances, Table 1 provides information on prevalence and performance for mortgages originated in each year from 2004 through 2007. In 2004, for example, there are 472 mortgages
originated to non-owner occupants for every 100,000 housing units in the
United States. Of the mortgages originated to non-owner occupants in 2004,
6.9 percent were foreclosed or in foreclosure by July 2011. Taken together,
the impact from non-owner occupant mortgages originated in 2004 implies
that there were about 33 foreclosures on mortgages originated to non-owner
occupants for every 100,000 housing units.
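Because impact is simply the product of prevalence and performance, the 2004 figure can be reproduced with one line of arithmetic. The following minimal sketch uses the Table 1 values and is intended only to make the decomposition concrete.

# Impact = Prevalence x Performance, using the 2004 values from Table 1.
prevalence_2004 = 472      # non-owner occupant mortgages per 100,000 housing units
performance_2004 = 0.069   # share of those mortgages foreclosed or in foreclosure

impact_2004 = prevalence_2004 * performance_2004
print(round(impact_2004, 1))   # about 32.6 foreclosures per 100,000 housing units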
23 Total housing units is defined as first liens on home purchase loans, including refinancings on single-family homes, excluding home improvement loans. The data on state housing units come from the American Community Surveys for the time period 2004–2007.


Table 1 Prevalence, Performance, and Foreclosure Impact of Non-Owner Occupant Mortgages in our LPS Data, 2004–2007

Year of Mortgage Origination    Performance    Prevalence    Impact
2004                            6.9            472           32.6
2005                            12.8           616           78.8
2006                            20.0           558           94.2
2007                            17.4           437           76.0
2004–2007                       14.3           521           74.5

Notes: "Performance" refers to the percent of non-owner occupant mortgages foreclosed; "prevalence" refers to the number of non-owner occupant mortgages per 100,000 housing units; "impact" refers to the number of non-owner occupant mortgage foreclosures per 100,000 housing units.

Compared with 2004, the impact of foreclosures on non-owner occupants increased for loans originated in 2005 and 2006. This is partly due to
the increased prevalence of non-owner-occupied mortgages (616 and 558 per
100,000 housing units, respectively) during this time period. In addition, the
performance of mortgages to non-owner occupants declined sharply from 2004
to 2005 and 2005 to 2006. For example, the foreclosure rate increased roughly
by 14 percentage points between 2004 and 2007. As a result, our overall measure of impact (non-owner occupant foreclosures per 100,000 housing units)
rose to 78.8 in 2005 and then to 94.2 in 2006. The average number of foreclosures for the time period 2004–2007 is 74.5 foreclosures per 100,000 housing
units.24 The impact measure for non-owner occupant mortgages originated
in 2007 is 76.0 per 100,000 housing units, which is slightly above the 2004–
2007 national average. The lower number in 2007 is partly due to the start
of the housing crisis, which reduced the prevalence of non-owner-occupied
mortgages from 558 to 437 per 100,000 housing units.25 Table 1 provides an
overview of the impact of non-owner occupant foreclosures in 2004–2007.
Figure 11 shows how the prevalence, performance, and impact of non-occupant foreclosures varied across the United States for the years 2004–2007. In this figure, prevalence, performance, and impact are measured relative to national norms: the point 1.0 on the horizontal axis corresponds to the 2004–2007 U.S. average prevalence, and the point 1.0 on the vertical axis corresponds to the 2004–2007 U.S. average performance.
24 The full impact of non-owner-occupied mortgages may be higher than the numbers reported in this study, because the LPS data do not cover the entire mortgage market and may underestimate the share of mortgages originated to non-owner occupants.
25 Another factor is that loans originated in 2007 had less time to enter foreclosure when compared to loans originated in earlier years.


Figure 11 The Relative Impact (per Housing Unit) of Non-Owner Occupant Foreclosures for Mortgages Originated, by State
Panel A: 2004; Panel B: 2005; Panel C: 2006; Panel D: 2007. Each panel plots the relative non-occupant foreclosure rate (US=1) against the relative non-occupant mortgage prevalence (US=1).
Notes: The middle line represents an impact factor that is equal to the LPS data 2004–2007 national average of 37.6 non-owner occupant foreclosures per 100,000 housing units. The lower and upper lines represent impact factors of half and three times the national average.

The highest point on the vertical axis in Figure 11, Panel A is labeled IN for Indiana, and it has an x-axis value of about 0.64. This means that Indiana's prevalence measure in 2004 is about 0.64 times the corresponding 2004–2007 prevalence measure for non-owner occupant mortgages for the United States. On the y-axis, the value for Indiana is about 1.4, meaning that Indiana's foreclosure rate for non-owner-occupied mortgages is almost 1.4 times the corresponding 2004–2007 U.S. average. The product of these two factors for Indiana is about 0.90. In other words, the relative impact of foreclosures on non-occupants in Indiana was about 10 percent below the corresponding national average.
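The relative measures plotted in Figure 11 are ratios to the 2004–2007 national averages, so the Indiana example can be verified directly. The values below are the approximate figures quoted in the text, used here only for illustration.

# Relative impact = relative prevalence x relative performance (Indiana, 2004 panel).
relative_prevalence_in = 0.64    # Indiana prevalence / 2004-2007 U.S. average
relative_performance_in = 1.4    # Indiana foreclosure rate / 2004-2007 U.S. average

relative_impact_in = relative_prevalence_in * relative_performance_in
print(round(relative_impact_in, 2))   # about 0.90, i.e., roughly 10 percent below average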
Figure 11, Panel A also tells us something about why foreclosures on non-occupant mortgages originated in 2004 are relatively important in each state. Indiana's impact was driven by performance: its foreclosure rate on non-owner occupant mortgages was above average even though such mortgages were not especially prevalent there. At the other extreme, Nevada experienced a relatively low foreclosure rate on mortgages to non-owner occupants but had an above-average impact because mortgages to non-owner occupants were prevalent.


Figure 12 Non-Owner-Occupied Mortgage Prevalence per Housing Unit, Relative to 2004–2007 U.S. National Average (LPS Data for Mortgages Originated in 2006)
Map legend: 0 to 75; 76 to 100; 101 to 125; more than 125.
Cartographer: Michael Grover and Eli Popuch, February 2010.
Source: Breck Robinson, FRB Richmond, ESRI.

Figure 11, Panels B, C, and D present the same analysis for mortgages originated to non-owner occupants in 2005, 2006, and 2007, respectively. From 2004 to 2005, the distribution of impact measures shifts toward the northeast of the figure as the prevalence of non-owner occupant mortgages increases. At the same time, the average performance of mortgages in the sample started to deteriorate. In three states—Nevada, Florida, and Arizona—the impact of foreclosures on non-owner occupants reached or exceeded three times the 2004–2007 national average in 2005. This was driven by high prevalence in Arizona and by a combination of high prevalence and poor performance in Nevada and Florida. For a cluster of Midwestern states (Indiana, Michigan, and Ohio), the foreclosure impact for 2005 reached or exceeded the 2004–2007 national average even though mortgages to non-owner occupants were not especially prevalent.


Figure 13 Non-Owner-Occupied Mortgage Foreclosure Rate Relative to the 2004–2007 U.S. National Average (LPS Data for Mortgages in 2006)
Map legend: 0 to 75; 76 to 100; 101 to 125; more than 125.
Cartographer: Michael Grover and Eli Popuch, February 2010.
Source: Breck Robinson, FRB Richmond, ESRI.

These states' above-average impact was instead driven by the below-average performance of mortgages to non-owner occupants.
In 2006, both the performance and the prevalence of mortgages to non-owner occupants declined. Visually, this is observable as a shift toward the northwest in the distribution of impact measures. Several Midwestern states continued to experience very poor performance combined with relatively low prevalence. Deteriorating performance combined with high prevalence kept the impact numbers very high in Nevada, Florida, and Arizona. In the case of Hawaii, Idaho, Delaware, and Utah, high-impact outcomes were driven by high prevalence, while poor performance was the main problem in states like Indiana, Michigan, and Ohio.


Figure 14 Non-Owner-Occupied Mortgage Foreclosure Impact per Housing Unit, Relative to 2004–2007 U.S. National Average (LPS Data for Mortgages Originated in 2006)
Map legend: 0 to 75; 76 to 100; 101 to 125; 125 to 300; more than 300.
Cartographer: Michael Grover and Eli Popuch, February 2010.
Source: Breck Robinson, FRB Richmond, ESRI.

In most states, a combination of lower prevalence and better performance reduced the impact measures for mortgages originated in 2007. The reduction in impact measures is observable as a shift toward the southwest in the distribution of impact ratios in Figure 11, Panel D. However, the impact measures for Arizona, Florida, and Nevada remain quite high relative to the national average for the period 2004–2007.
To provide a clearer view of the geographic patterns in foreclosures to non-owner occupants and their underlying factors, Figures 12, 13, and 14 map the prevalence, performance, and impact for mortgages originated in 2006 for all 50 states and the District of Columbia. Figure 12 shows that in


2006, non-owner occupant mortgages were relatively prevalent in the West (including Hawaii) and along the mid- to lower-East Coast states from Florida to New Jersey. Among the states identified as having a high prevalence of non-owner-occupied mortgages, only Arizona, California, Florida, Georgia, Maryland, Nevada, and New Jersey also experienced poor performance. In addition to these states, a number of states in the Midwest and parts of the Northeast experienced high foreclosure rates. In Figure 14, the impact ratios for Arizona, Florida, and Nevada are 400 percent, 585 percent, and 748 percent of the national average, respectively. A number of states have been highlighted in the press for having high foreclosure rates in general, such as California, Georgia, Maryland, and parts of the Midwest; it is interesting to note that these same states were experiencing high impact ratios for non-owner occupant mortgages in 2006. Our map also shows that Idaho, South Carolina, and Utah were experiencing an above-average impact from foreclosures on mortgages to non-owner occupants. This result is surprising given that the press has not labeled any of these states as foreclosure hotspots.

4. CONCLUSION

During the housing crisis, it was unclear whether mortgages to non-owner occupants helped exacerbate the downturn. As discussed above, non-owner occupants are sensitive to changes in home prices because they are more likely to view homeownership as a financial asset, causing them to increase their demand for housing in areas where housing prices have increased or are expected to increase. As a result, non-owner occupants are more likely to exercise their option to default than owner occupants. It would therefore be expected that lenders hold non-owner occupants to a higher underwriting standard in order to reduce their probability of default. The results show that non-owner occupants have higher incomes, higher credit scores, smaller loans, and generally a lower overall risk profile. If markets are operating correctly, higher underwriting standards for non-owner occupants should result in foreclosure rates similar to those of owner occupants, but the difference in foreclosure rates should widen during an economic downturn, when the financial benefits from homeownership decline. I observe this pattern in foreclosure rates when using national data.
In an attempt to observe the impact of foreclosures at the state level, an impact measure is decomposed into the prevalence and the performance of non-occupant mortgages. The states that experienced the highest impact from foreclosures on properties owned by non-owner occupants (Arizona, Florida, and Nevada) exhibit both relatively poor performance and relatively high prevalence. Other states experienced an impact ratio that exceeded the national average mainly because of poor performance (Indiana, Michigan, and Ohio, along with some other Midwestern and Northeastern states). By contrast, Idaho and some other Western states had a high prevalence of mortgages originated to non-owner occupants, leading to a relatively high impact measure.
The housing crisis and the subsequent hardships faced by homeowners have been well chronicled in the press. All across the United States, homeowners have experienced declining home prices and high rates of foreclosure. This has led policymakers to initiate programs to stabilize home values by reducing foreclosures. However, policymakers have given little attention to the plight of non-owner occupants, even though the prevalence and performance of mortgages originated to this group helped exacerbate high foreclosure rates in many states. The inability of previous programs to address the needs of all homeowners may have contributed to both the depth and the length of the housing crisis.

REFERENCES
Avery, Robert B., Kenneth P. Brevoort, and Glenn B. Canner. 2008.
“Changes in Mortgage Performance Across Geographies.” Board of
Governors of the Federal Reserve System Working Paper.
Bajari, Patrick, Chenghuan Chu, and Minjung Park. 2008. “An Empirical
Model of Subprime Mortgage Default from 2000 to 2007.” Working
Paper 14625. Cambridge, Mass.: National Bureau of Economic
Research (December).
Bhardwaj, Geetesh, and Rajdeep Sengupta. 2008. “Subprime Mortgage
Design." Federal Reserve Bank of St. Louis Working Paper 2008-039E.
Bhutta, Neil, Jane Dokko, and Hui Shan. 2010. “The Depth of Negative
Equity and Mortgage Default Decisions.” Board of Governors of the
Federal Reserve System Finance and Economics Discussion Series
2010-35.
Campbell, Tim S., and J. Kimball Dietrich. 1983. “The Determinants of
Default on Insured Conventional Residential Mortgage Loans.” Journal
of Finance 38 (December): 1,569–81.
Cordell, L., M. Watson, and J. Thomson. 2008. McDash Data Warehouse
Seminar.
Cowan, Adrian M., and Charles D. Cowan. 2004. “Default Correlation: An
Empirical Investigation of a Subprime Lender.” Journal of Banking &
Finance 28 (April): 753–71.


Deng, Yongheng, John M. Quigley, and Robert Van Order. 2000. “Mortgage
Terminations, Heterogeneity and the Exercise of Mortgage Options.”
Econometrica 68 (March): 275–307.
Doms, Mark, Fred Furlong, and John Krainer. 2007. “Subprime Mortgage
Delinquency Rates.” Federal Reserve Bank of San Francisco Working
Paper 2007-33 (November).
Doms, Mark, and John Krainer. 2007. “Mortgage Market Innovations and
Increases in Household Spending.” Federal Reserve Bank of San
Francisco Working Paper.
Elmer, Peter J., and Steven A. Seelig. 1999. “Insolvency, Trigger Events, and
Consumer Risk Posture in the Theory of Single-Family Mortgage
Default.” Journal of Housing Research 10 (1): 1–25.
Foote, Christopher L., Kristopher Gerardi, and Paul S. Willen. 2008.
“Negative Equity and Foreclosure: Theory and Evidence.” Journal of
Urban Economics 64 (September): 234–45.
Gerardi, Kristopher, Adam Hale Shapiro, and Paul S. Willen. 2008.
“Subprime Outcomes: Risky Mortgages, Homeownership Experiences,
and Foreclosures.” Federal Reserve Bank of Boston Working Paper
07-15 (May).
Haughwout, Andrew, Richard Peach, and Joseph Tracy. 2008. “Juvenile
Delinquent Mortgages: Bad Credit or Bad Economy?” Federal Reserve
Bank of New York Staff Report 341 (August).
Hendershott, Patric H., and Robert Van Order. 1987. “Pricing Mortgages: An
Interpretation of the Models and Results.” Journal of Financial Services
Research 1 (1): 19–55.
Immergluck, Dan, and Geoff Smith. 2004. Risky Business—An Econometric
Analysis of the Relationship Between Subprime Lending and
Neighborhood Foreclosures. Chicago: Woodstock Institute.
Kau, James B., and Donald C. Keenan. 1995. “An Overview of
Option-Theoretic Pricing of Mortgages.” Journal of Housing Research 6
(2): 217–44.
Kau, James B., Donald C. Keenan, and Taewon Kim. 1994. “Default
Probabilities for Mortgages.” Journal of Urban Economics 35 (May):
278–96.
Mayer, Chris, and Karen Pence. 2008. “Subprime Mortgages: What, Where,
and to Whom?” Board of Governors of the Federal Reserve System
Finance and Economics Discussion Series Paper 2008-29.


Quercia, Roberto G., and Michael A. Stegman. 1992. “Residential Mortgage
Default: A Review of the Literature.” Journal of Housing Research 3
(2): 341–79.
Vandell, Kerry D. 1995. "How Ruthless Is Mortgage Default? A Review and Synthesis of the Evidence." Journal of Housing Research 6 (2): 245–64.
Von Furstenberg, George M. 1969. “Default Risk on FHA-Insured Home
Mortgages as a Function of the Terms of Financing: A Quantitative
Analysis.” Journal of Finance 24 (June): 459–77.
Wheaton, William C., and Nai Jia Lee. 2008. “Do Housing Sales Drive
Housing Prices or the Converse?” MIT Department of Economics
Working Paper 08-01 (January).
Wheaton, William C., and Gleb Nechayev. 2008. “The 1998–2005 Housing
‘Bubble’ and the Current ‘Correction’: What’s Different this Time?”
Journal of Real Estate Research 30: 1–26.

Economic Quarterly—Volume 98, Number 2—Second Quarter 2012—Pages 139–157

On the Benefits of
GDP-Indexed Government
Debt: Lessons from a Model
of Sovereign Defaults
Juan Carlos Hatchondo and Leonardo Martinez

Whether governments should issue GDP-indexed sovereign debt—debt that promises payments that are a function of the gross domestic product (GDP)—continues to be the subject of policy debates. On
the one hand, several studies highlight possible benefits from tying sovereign
debt obligations to domestic GDP.1 One benefit from GDP-indexation is that
issuing debt that promises lower payments when GDP takes low values may
facilitate the financing of automatic stabilizers (such as an increase in unemployment benefits during economic downturns) and countercyclical fiscal
policy. Another benefit is that GDP indexation could diminish the likelihood
of fiscal crises for governments that face a countercyclical borrowing cost (in
part because of a countercyclical default risk). Kamstra and Shiller (2010) argue that GDP indexation would help investors who want exposure to income
growth (for instance, to protect relative standards of living in retirement) and
protection against inflation.
On the other hand, there are several difficulties in the implementation of
the basic idea described in the previous paragraph. First, GDP-indexed bonds
may introduce moral hazard problems by weakening the government’s incentives to implement growth-promoting policies (see, for instance, Krugman
[1988]). Second, GDP may not be easily verifiable. This is in part because the
For helpful comments, we thank Kartik Athreya, Huberto Ennis, Andreas Hornstein, and Tim
Hursey. The views expressed herein are those of the authors and should not be attributed to
the IMF, its Executive Board, or its management; the Federal Reserve Bank of Richmond; or
the Federal Reserve System. E-mail: juanc.hatchondo@gmail.com.
1 See, for instance, Shiller (1993), Borensztein and Mauro (2004), Borensztein et al. (2004),
Griffith-Jones and Sharma (2005), and the references therein.


government could manipulate the GDP calculation (however, reporting lower
GDP figures may imply a political cost). Moreover, even without manipulation, final GDP data are available with a significant lag.2 This could force a
government to make a high payment during a low GDP period because the
previous year's GDP was high (problems created by lags in GDP statistics could
be mitigated by provisions on the government’s accounts; see United Nations
[2006]).3 Third, gains from indexing sovereign debt to GDP may be limited
because domestic GDP is not the only determinant of default risk and the government’s borrowing cost (think, for instance, about contagion, shocks to the
investors’ risk aversion, political shocks, etc.; see Tomz and Wright [2007]).
Perhaps because of the implementation difficulties described above, the majority of sovereign debt is not GDP indexed. However, past experiences show
that issuing GDP-indexed debt is feasible. For instance, Argentina issued GDP
warrants in 2005, during a period of renewed interest in these contracts (see
United Nations [2006]). The 2012 debt restructuring in Greece also included
the issuance of bonds carrying detachable GDP warrants.4
This article contributes to the debate on GDP-indexed sovereign debt by
discussing the effects of using this debt contract. We study a model in which
the government faces a countercyclical borrowing cost because of a countercyclical default risk. We use this model to discuss the effects of introducing
GDP-indexed bonds.
We introduce income-indexed sovereign bonds into the equilibrium default model studied by Aguiar and Gopinath (2006) and Arellano (2008), who
extend the framework proposed by Eaton and Gersovitz (1981) to analyze its
quantitative performance. We study a small open economy that receives a
stochastic endowment stream of a single tradable good. The government’s
objective is to maximize the expected utility of a representative private agent.
Each period, the government makes two decisions. First, it decides whether
to default on previously issued debt. Second, it decides how much to borrow
or save. The cost of defaulting is given by an endowment loss and temporary exclusion from capital markets. We study two versions of this model.
First, we assume that the government issues one-period bonds that promise a
non-contingent payment. Second, we assume the government can issue a one-period income-indexed bond that promises a payment that is a function of next-period
2 For instance, payments for the GDP warrants issued by Argentina during its 2005 debt restructuring are made effective with a one-year lag.
3 These problems could be addressed by indexing debt contracts to variables that are correlated
to GDP and that the government cannot control (such as commodity prices or trading partners’
growth rates; see Caballero [2002]).
4 Other experiences with GDP indexation include various “Value Recovery Rights” indexed
to GDP issued by Bosnia and Herzegovina, Bulgaria, Costa Rica, Nigeria, and Venezuela in the
early 1990s as part of the Brady bonds restructuring (Sandleris, Sapriza, and Taddei 2011). For
instance, Bulgaria issued, in 1994, bonds with a potential premium if Bulgaria’s GDP exceeded
125 percent of its 1993 level.


income. In both cases, bonds are priced in a competitive market inhabited by
risk-neutral investors.
We solve the model using the calibration in Arellano (2008), which is
based on an economy facing significant default risk: Argentina before its 2001
default. The ex-ante welfare gain from the introduction of income-indexed
bonds when there is no initial debt is equivalent to an increase of 0.5 percent
of consumption. Introducing income-indexed bonds results in welfare gains
because it allows the government to:
1. Eliminate defaults. In the model, debt and income are the only determinants of default. With income-indexed bonds, the government makes a
different payment promise for each level of next-period income, which
means that there is no uncertainty about whether a government promise
will be paid. Then, lenders would never pay for a payment promise
on which they know the government would default and a bond making
such a promise is not traded. In contrast, with non-contingent bonds,
when the government borrows it promises the same payment for all
next-period income levels. The government defaults in the next period
at income levels that are sufficiently low.
2. Increase its indebtedness from 4 percent to 18 percent of mean income.
The government is assumed to be eager to borrow (it discounts future
consumption at a rate higher than the risk-free interest rate). With
indexed bonds, the government can bring forward resources from future high-income states without increasing the default probability in
low-income states (the cost of defaulting is assumed to be lower in
low-income states). In contrast, with non-contingent bonds, the future resources the government can bring forward are limited by default
risk. If the government issued a non-contingent bond equivalent to 18
percent of mean income, for most current income levels the revenue
it would collect from that debt issuance would be even smaller than
the revenue it would collect from issuing debt equivalent to 4 percent
of mean income. The reason is that lenders would internalize that, at
a debt of 18 percent of mean income, there is a significant mass of
income realization states at which the government would default, and
lenders would thus offer to buy those bonds at a significant discount.
3. Reduce the ratio of standard deviations of consumption relative to income from 1.07 to 0.79. With income-indexed bonds, the government
chooses to smooth consumption by buying claims that pay in states
with lower income and borrowing against states with higher income.
Furthermore, the borrowing cost is constant because the government
does not pay a default premium. Thus, the government chooses to
borrow more when income is lower. In contrast, with non-contingent
bonds, the borrowing cost is countercyclical. In bad times, the cost

142

Federal Reserve Bank of Richmond Economic Quarterly
of defaulting is assumed to be lower and, therefore, the probability of
default and the cost of borrowing are higher. Consequently, optimal
borrowing becomes procyclical: In bad times, since the cost of borrowing is higher, the government chooses to finance more of its debt
service obligations by lowering consumption instead of borrowing.5

It should be noted that our analysis does not consider the implementation difficulties of GDP-indexed bonds that we mentioned above: We assume
that the government cannot affect GDP growth, that bond payments can be
determined using current income, and that income is the only determinant
of sovereign defaults. Thus, the gains from introducing GDP-indexed bonds
measured in this article should be seen as an upper bound. Relaxing the simplifying assumptions that limit our analysis increases the dimensionality of
the model’s state space and thus augments the computation time required to
solve the model. Relaxing these simplifying assumptions is the subject of our
ongoing research but is beyond the scope of this article.
In spite of the interest in GDP-indexed bonds among policymakers, there
are few formal studies of the effects of introducing these bonds. Athanasoulis
and Shiller (2001) and Durdu (2009) also study the effects of GDP-indexed
debt but in frameworks without endogenous borrowing constraints determined
by default risk.
Chamon and Mauro (2006) study the effects of introducing GDP-indexed
bonds using a debt sustainability framework, commonly used in policy institutions. Because of the low computation cost of solving this framework,
Chamon and Mauro (2006) can study a set of debt instruments richer than
the one we study in this article. However, a disadvantage of the sustainability
framework is that the government’s borrowing (the primary balance) is estimated using past data and is not the result of an optimization problem. Thus,
the analysis assumes that the government’s borrowing does not change when
indexed bonds are introduced (in contrast with our findings). Furthermore,
their debt sustainability framework does not allow default risk to affect the
borrowing cost. The framework is also not suitable for the derivation of the
optimal indexation. As we do, Chamon and Mauro (2006) find that indexation
could reduce default risk.
Faria (2011); Sandleris, Sapriza, and Taddei (2011); and Hatchondo,
Martinez, and Sosa Padilla (2012) study the effects of introducing GDP-indexed sovereign debt in an environment with equilibrium default risk.
Comparing quantitative predictions of these studies is difficult because of
differences in the parameterizations and the reported statistics. Faria (2011)
and Sandleris, Sapriza, and Taddei (2011) present the effects of introducing
5 This is consistent with evidence of procyclical fiscal policy in emerging economies (that pay
a high and volatile interest rate), as documented by Gavin and Perotti (1997); Kaminsky, Reinhart,
and Vegh (2004); Talvi and Vegh (2005); Ilzetzki and Vegh (2008); and Vegh and Vuletin (2011).


an income-indexation that is not chosen by the government and is constant
over time. As in this article, Hatchondo, Martinez, and Sosa Padilla (2012)
allow the government to choose how to index its debt to future income in each
period. Hatchondo, Martinez, and Sosa Padilla (2012) compare the effects
of introducing income indexation with the ones of introducing interest-rate
indexation. The latter form of indexation is the main focus of that article.
The rest of the article proceeds as follows. Section 1 introduces the model.
Section 2 discusses the parameterization. Section 3 presents the results. Section 4 concludes.

1. THE MODEL
There is a single tradable good. The economy receives a stochastic endowment
stream of this good yt, with

log(yt) = log(A) + ρ log(yt−1) + εt,

where |ρ| < 1 and εt ∼ N(0, σ²).
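As a concrete illustration, the endowment process can be simulated in a few lines. The parameter values below are arbitrary placeholders, not the calibration discussed in Section 2.

import numpy as np

# Simulate log(yt) = log(A) + rho*log(y_{t-1}) + eps_t with placeholder parameters.
rho, A, sigma, T = 0.9, 1.0, 0.03, 200
rng = np.random.default_rng(0)

log_y = np.empty(T)
log_y[0] = np.log(A) / (1.0 - rho)          # start at the unconditional mean
for t in range(1, T):
    log_y[t] = np.log(A) + rho * log_y[t - 1] + rng.normal(0.0, sigma)

y = np.exp(log_y)                           # endowment in levels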
The government’s objective is to maximize the present expected discounted value of future utility flows of the representative agent in the economy,
namely
⎤
⎡
∞



Et ⎣
β j −t u cj ⎦ ,

(1)

j =t

where E denotes the expectation operator, β denotes the subjective discount factor, and the utility function is assumed to display a constant coefficient of relative risk aversion denoted by γ. That is,

u(c) = (c^(1−γ) − 1)/(1 − γ)   if γ ≠ 1,
u(c) = log(c)                  if γ = 1.                           (2)

Each period, the government makes two decisions. First, it decides
whether to default. Second, it chooses the number of bonds that it purchases
or issues in the current period.6
There are two costs of defaulting (Hatchondo, Martinez, and Sapriza
[2007a] discuss the costs of sovereign defaults). First, a defaulting sovereign
is excluded from capital markets. In each period after the default period, the
country regains access to capital markets with probability ψ ∈ [0, 1].7 Second, if a country has defaulted on its debt, it faces an income loss of φ (y)
6 Bianchi, Hatchondo, and Martinez (2012) study a sovereign default framework where the
government can issue debt and accumulate assets simultaneously.
7 Hatchondo, Martinez, and Sapriza (2007b) solve a baseline model of sovereign default with and without the exclusion cost and show that eliminating this cost affects significantly only the debt level generated by the model. Hatchondo, Martinez, and Sapriza (2009) argue that lower borrowing levels after a default could be explained by political turnover that triggered a default (see, also, Hatchondo and Martinez [2010] for a discussion of the interaction between political factors and default decisions).


units in every period in which it is excluded from capital markets. Following
Arellano (2008), we assume that

φ(y) = y − λ   if y > λ,
φ(y) = 0       if y ≤ λ.                                           (3)
With this income loss function, the default cost rises more than proportionately
with income. This property of the income loss triggered by defaults helps the
equilibrium default model to match the high sovereign spreads—defined as
the difference between the sovereign bond yield and a risk-free interest rate—
observed in the data (see, for instance, the discussion of the effects of the
income loss function in Chatterjee and Eyigungor [forthcoming]). This is
also a property of the income loss triggered by default in Mendoza and Yue
(2012).8
We focus on Markov perfect equilibrium. That is, we assume that in each
period, the government’s equilibrium default and borrowing strategies depend
only on payoff-relevant state variables. As discussed by Krusell and Smith
(2003), there may be multiple Markov perfect equilibria in infinite-horizon
economies. In order to avoid this problem, we solve for the equilibrium of the
finite-horizon version of our economy, and we increase the number of periods
of the finite-horizon economy until value functions for the first and second
periods of this economy are sufficiently close. We then use the first-period
equilibrium functions as the infinite-horizon-economy equilibrium functions.
Government bonds are priced in a competitive market. Lenders can borrow or lend at the risk-free rate r, are risk neutral, and have perfect information
regarding the economy’s income.
We study two versions of this model. First, we assume the government
can issue non-contingent bonds. Each bond is a promise to deliver one unit
of the good in the next period. Second, we assume the government can issue
an indexed bond that promises a next-period payment that is a function of
next-period income.

Recursive Formulation with Non-Contingent Bonds
Let b denote the government’s current bond position, and b denote its bond
position at the beginning of the next period. A negative value of b implies that
the government was a net issuer of bonds in the previous period. Let d denote
8 Mendoza and Yue (2012) introduce an endogenous channel through which defaults decrease
output in the defaulting economy: They assume that when the government defaults, local firms
lose access to foreign credit, which is necessary to finance the purchases of foreign inputs.


Let d denote the current-period default decision. We assume that d = 1 if the government
defaulted in the current period and d = 0 if it did not. Let V denote the
government’s value function at the beginning of a period, that is, before the
default decision is made. Let V0 denote the value function of a sovereign not
in default. Let V1 denote the value function of a sovereign in default. Let
F denote the conditional cumulative distribution function of the next-period
endowment y′. Let h and g denote the optimal default and borrowing rules
followed by the government. The default rule h takes one of two values: 0 if
the rule prescribes to pay back, and 1 if the rule prescribes to default.
The price of a bond equals the payment a lender expects to receive, discounted at the risk-free rate. The bond price is given by the following functional equation:

q(b′, y) = 1/(1 + r) ∫ [1 − h(b′, y′)] F(dy′ | y).                 (4)

This bond price satisfies a lender's expected zero-profit condition and is equal to the payment probability discounted at the risk-free interest rate. Recall that a bond promises to pay one unit of the consumption good next period. Thus, the payment the holder of a bond will receive next period in the state (b′, y′) is given by 1 − h(b′, y′).
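On a discretized income grid, the integral in equation (4) becomes a probability-weighted sum. The sketch below assumes a Markov transition matrix P over income states and a default rule h as hypothetical inputs; it only illustrates how the default rule maps into bond prices.

import numpy as np

def bond_price(default_rule, P, r):
    # q[i_b, i_y]: price of promise b'[i_b] when current income is y[i_y].
    # default_rule[i_b, i_y_next] is 1 if the government defaults on that
    # promise when next-period income is y[i_y_next]; P[i_y, i_y_next] is the
    # income transition probability.
    repay_prob = (1.0 - default_rule) @ P.T
    return repay_prob / (1.0 + r)

# Tiny example: two debt choices, two income states.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
h = np.array([[0, 0],     # small debt: never default
              [1, 0]])    # large debt: default if next-period income is low
q = bond_price(h, P, r=0.01)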
For a given price function q, the government’s value function V satisfies
the following functional equation:
V(b, y) = max_{d ∈ {0,1}} {d V1(y) + (1 − d) V0(b, y)},            (5)

where

V1(y) = u(y − φ(y)) + β ∫ [ψ V(0, y′) + (1 − ψ) V1(y′)] F(dy′ | y),   (6)

V0(b, y) = max_{b′} {u(y + b − q(b′, y) b′) + β ∫ V(b′, y′) F(dy′ | y)}.   (7)
Definition 1 A Markov perfect equilibrium is characterized by
1. a set of value functions V , V1 , and V0 ,
2. a default rule h and a borrowing rule g,
3. a bond price function q,
such that:
(a) given h and g, V , V1 , and V0 satisfy functional equations (5), (6), and
(7), when the government can trade bonds at the bond price function q;

(b) the bond price function q is given by equation (4); and

(c) the default rule h and borrowing rule g solve the dynamic programming problem defined by equations (5) and (7) when the price at which the
government can trade bonds is given by the bond price function q.
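To make the recursion concrete, the sketch below iterates on equations (4)-(7) using small, made-up grids. The parameter values, grids, and income process are placeholders chosen for readability; they are not the calibration used in the article, and the sketch only illustrates the structure of the computation.

import numpy as np

beta, gamma, r, psi, lam = 0.95, 2.0, 0.01, 0.25, 0.95

y_grid = np.array([0.90, 1.00, 1.10])              # income states (placeholder)
P = np.array([[0.6, 0.3, 0.1],                     # income transition probabilities
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
b_grid = np.linspace(-0.20, 0.0, 21)               # bond positions (negative = debt)
n_b, n_y = len(b_grid), len(y_grid)
i_b0 = np.argmax(b_grid == 0.0)                    # index of zero bond holdings

def u(c):
    c = np.maximum(c, 1e-10)                       # crude penalty for infeasible plans
    return (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

phi = np.maximum(y_grid - lam, 0.0)                # income loss in default, equation (3)

V0 = np.zeros((n_b, n_y))                          # value of repaying
V1 = np.zeros(n_y)                                 # value of being in default

for _ in range(500):
    V = np.maximum(V0, V1[None, :])                # equation (5): default when V1 > V0
    EV, EV1 = V @ P.T, V1 @ P.T                    # conditional expectations given y
    # Equation (6): value while excluded from capital markets.
    V1_new = u(y_grid - phi) + beta * (psi * EV[i_b0, :] + (1.0 - psi) * EV1)
    # Equation (4): price of a promise b' given the implied default rule.
    default_next = V1[None, :] > V0                # h(b', y') as a boolean array
    q = ((1.0 - default_next) @ P.T) / (1.0 + r)
    # Equation (7): choose b' to maximize utility plus the continuation value.
    c = (y_grid[None, None, :] + b_grid[:, None, None]
         - q[None, :, :] * b_grid[None, :, None])  # dimensions: (b, b', y)
    V0_new = np.max(u(c) + beta * EV[None, :, :], axis=1)
    V0, V1 = V0_new, V1_new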

Recursive Formulation with the Indexed Bond
With the income-indexed bond, the government can choose what to promise to pay next period for each realization of next-period income y′ (payments can be negative). Let b̂′ denote the payment function promised by the government. Let ĝ and ĥ denote the government's borrowing and default rules, respectively.
As in the previous subsection, a bond price is equal to the expected payment a lender will receive, discounted at the risk-free rate. For the indexed bond, this price is given by

q̂(b̂′, y) = 1/(1 + r) ∫ b̂′(y′) [1 − ĥ(b̂′(y′), y′)] F(dy′ | y).     (8)

Note that, with N possible income levels {y1, y2, ..., yN}, we could think of the government as choosing a portfolio of N defaultable Arrow-Debreu securities instead of the payments of an income-indexed bond. For all i ∈ {1, 2, ..., N}, security i promises to deliver one unit of the good in the next period if and only if y′ = yi. The price of each of these securities is equal to the expected payment the lender will receive. Let bi denote the number of securities issued by the government promising to pay if and only if y′ = yi. Let Pi(y) denote the probability of y′ = yi given current income y. The price of a security promising to pay if and only if y′ = yi is equal to the likelihood of y′ = yi multiplied by the payment the lender would receive, 1 − ĥ(bi, yi), discounted at the risk-free rate:

q̃(bi, y) = 1/(1 + r) [1 − ĥ(bi, yi)] Pi(y).                        (9)
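With a finite income grid, the indexed-bond price in equation (8), or equivalently the portfolio of defaultable Arrow-Debreu securities priced in equation (9), is a probability-weighted sum of the promised payments that would actually be honored. A minimal sketch with hypothetical numbers:

import numpy as np

def indexed_bond_price(payments, default_next, prob_next, r):
    # Equation (8) on a grid: payments[i] is the promise if next-period income
    # is y_i, default_next[i] flags a default on that promise, and prob_next[i]
    # is the probability of y_i given current income.
    honored = payments * (1.0 - default_next)
    return honored @ prob_next / (1.0 + r)

# Hypothetical example with three next-period income states.
prob_next = np.array([0.2, 0.5, 0.3])       # Pi(y)
payments = np.array([0.00, 0.05, 0.10])     # promise more in better states
defaults = np.array([0.0, 0.0, 0.0])        # promises chosen so no default occurs
price = indexed_bond_price(payments, defaults, prob_next, r=0.01)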

Without loss of generality, we assume that the government only promises payments b̂′(y′) for which it would not choose to default. Since the government makes a different promise for each level of next-period income, and debt and income are the only determinants of default, there is no uncertainty about whether a government promise will be paid. Note that, for any payment b̂′(y′) on which the government would choose to default (ĥ(b̂′(y′), y′) = 1), the contribution of b̂′(y′) to the bond price in equation (8) is equal to zero. The government therefore cannot gain from promising a payment b̂′(y′) on which it would choose to default.9 In contrast, without income indexation, the government may issue a bond promising a payment on which it will default next period in some states (y′) but not in others. Since the government may pay next period, lenders are willing to pay for such a defaultable bond.
Let W1 denote the value function of a government in default. Since a
defaulting government does not pay its debt, W1 is not a function of the debt
level.
Let W0 denote the value function of a government not in default. When the government pays its debt, its expected utility is a decreasing function of its debt level.10
Since W0 is decreasing with respect to the government’s debt level and W1
is not a function of the government debt level, for any income level y, there
exists a debt level B(y) such that the government defaults if and only if its
debt level is higher than −B(y). This debt threshold satisfies W0 (B(y), y) =
W1 (y), where

  
 

W_0(b, y) = \max_{\hat{b}'} \left\{ u(c) + \beta \int W_0(\hat{b}'(y'), y') F(dy' \mid y) \right\}    (10)

s.t.

c = y + b - \frac{1}{1+r} \int \hat{b}'(y') F(dy' \mid y),    (11)

\hat{b}'(y') \geq B(y') \text{ for all } y',    (12)

and

W_1(y) = u\left(y - \phi(y)\right) + \beta \int \left[ \psi W_0(0, y') + (1 - \psi) W_1(y') \right] F(dy' \mid y).    (13)
One way of thinking about the government’s lack of commitment to its
future default decisions is to suppose that, each period, decisions are made
by a different government, and that the current government has no control
over future governments’ decisions. For instance, the borrowing constraint in equation (12) is exogenous to the current government because B(y') is determined by the next-period government’s default decision and the current government cannot control that decision.
The borrowing constraint in equation (12) is the only difference between
the economy with indexed bonds and an Arrow-Debreu economy. A binding
borrowing constraint would be the source of inefficiency in the indexed-debt
economy.
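To illustrate how the problem in equations (10)–(12) could be solved on a discrete income grid, the sketch below chooses the vector of promised payments subject to the state-by-state borrowing limits in (12) with a standard bounded optimizer. It is a schematic illustration, not the authors' algorithm; `u`, `W0_interp`, and the grids are placeholder names, and `scipy.optimize.minimize` is used only as one convenient way to impose the constraint.

```python
import numpy as np
from scipy.optimize import minimize

def solve_indexed_borrowing(b, iy, y_grid, P, B, W0_interp, u, beta, r):
    """One step of problem (10)-(12): choose promised payments b_hat[j] >= B[j]
    for each next-period income state j, given current assets b and income state iy.

    W0_interp(assets, j) is a placeholder for an interpolated continuation value
    W0 over asset holdings in income state j."""
    P_y = P[iy]                                              # conditional probabilities P_j(y)

    def objective(b_hat):
        c = y_grid[iy] + b - np.dot(b_hat, P_y) / (1.0 + r)  # budget constraint (11)
        if c <= 0:
            return 1e10                                      # crude penalty for infeasible consumption
        continuation = np.dot([W0_interp(b_hat[j], j) for j in range(len(P_y))], P_y)
        return -(u(c) + beta * continuation)                 # minimize the negative of (10)

    bounds = [(B[j], None) for j in range(len(P_y))]         # borrowing constraint (12)
    res = minimize(objective, x0=np.maximum(B, 0.0), bounds=bounds, method="L-BFGS-B")
    return res.x, -res.fun                                   # promised payments and attained value
```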
9 Equivalently, with Arrow-Debreu securities, if the government chooses a b_i' for which it would choose to default next period (ĥ(b_i', y_i) = 1), lenders would not pay for b_i' (q̃(b_i', y) = 0).
10 This is also a property of V_0. Chatterjee et al. (2007) provide a formal characterization of equilibrium functions in a default model.

Definition 2 A Markov perfect equilibrium is characterized by
1. a set of value functions W0 and W1,
2. a borrowing rule ĝ,
3. debt thresholds B(y'),
such that:

(a) given the borrowing rule ĝ and debt thresholds B(y'), W0 and W1 satisfy functional equations (10) and (13);
(b) given debt thresholds B(y'), the borrowing rule ĝ solves the dynamic programming problem defined by equation (10); and
(c) B(y) satisfies W0(B(y), y) = W1(y).

2. PARAMETERIZATION

We solve the model for the parameterization presented by Arellano (2008). This parameterization was chosen to mimic some moments of the Argentinean economy: properties of the GDP time series and the standard deviation of the trade balance from 1993–2001, an average debt service-to-GDP ratio of 5.53 percent between 1980 and 2001, and a default frequency of 3 defaults per 100 years, which matches the 3 defaults Argentina experienced in the last 100 years. Each period corresponds to a quarter. Table 1 presents the parameter values.
The parameterization studied by Arellano (2008) is a common reference for quantitative studies of sovereign defaults. However, some important limitations of this parameterization have been documented in the literature. A model with one-period bonds targeting the average debt service-to-GDP ratio results in debt levels that are too low compared to the data (Hatchondo and Martinez [2009], Arellano and Ramanarayanan [2012], Hatchondo, Martinez, and Roch [2012], and Chatterjee and Eyigungor [forthcoming] study frameworks with long-term debt). Targeting a default frequency of 3 defaults per 100 years implies that the model generates sovereign spreads that are lower than the ones observed in Argentina before its 2001 default. This occurs in part because the model assumes risk-neutral lenders (Lizarazo [2006], Arellano [2008], and Borri and Verdelhan [2009] present models with risk-averse lenders).
Table 1 Parameter Values

Sovereign’s Risk Aversion                γ      2
Interest Rate                            r      0.017
Income Autocorrelation Coefficient       ρ      0.945
Standard Deviation of Innovations        σ      0.025
Income Scale                             A      10
Exclusion Length                         ψ      0.282
Discount Factor                          β      0.953
Default Cost                             λ      0.969 E(y)

We solve the models numerically using value function iteration. We find two value functions: one for a government not in default, and one for a government in default (i.e., V0 and V1, or W0 and W1). We discretize endowment levels and we use spline interpolation for asset positions. The stochastic process for the endowment is discretized using Tauchen (1986) on a uniformly distributed grid of endowment realizations. We center points around the mean and we use a width of three standard deviations. We use 200 endowment grid points.11
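As one concrete illustration of the discretization step described above, the following sketch applies Tauchen's (1986) method to an AR(1) process on a uniformly spaced grid centered at the mean with a width of three standard deviations, using the autocorrelation and innovation parameters of Table 1. It is a generic implementation of that method, not the authors' code, and the final line mapping the discretized process into income levels through the scale parameter A is an assumption made only for illustration.

```python
import numpy as np
from scipy.stats import norm

def tauchen(n, rho, sigma, width=3.0):
    """Tauchen (1986) discretization of z' = rho * z + eps, eps ~ N(0, sigma^2).
    Returns a uniformly spaced grid centered at zero and the transition matrix."""
    std_z = sigma / np.sqrt(1.0 - rho ** 2)          # unconditional standard deviation
    grid = np.linspace(-width * std_z, width * std_z, n)
    step = grid[1] - grid[0]
    P = np.empty((n, n))
    for i in range(n):
        mean = rho * grid[i]
        cdf_upper = norm.cdf((grid - mean + step / 2) / sigma)
        cdf_lower = norm.cdf((grid - mean - step / 2) / sigma)
        P[i] = cdf_upper - cdf_lower                 # interior intervals
        P[i, 0] = cdf_upper[0]                       # open interval on the left
        P[i, -1] = 1.0 - cdf_lower[-1]               # open interval on the right
    return grid, P

z_grid, P = tauchen(n=200, rho=0.945, sigma=0.025)   # 200 grid points, Table 1 parameters
y_grid = 10.0 * np.exp(z_grid)                        # illustrative mapping using the scale A = 10
```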

3. RESULTS

Table 2 reports moments in the simulations of the models with non-contingent and indexed bonds. Statistics correspond to the mean of the value of each moment in 500 simulation samples. Each sample consists of 32 periods before a default episode. The simulations in the economy with state-contingent claims are computed using the same 500 samples of 32 periods that were used to compute the simulations in the benchmark economy. The interest rate spread (r_s) is expressed in annual terms. The trade balance (income minus consumption) is expressed as a fraction of income, tb = (y − c)/y. The logarithms of income and consumption are denoted by ỹ and c̃, respectively. The standard deviation of x is denoted by σ(x) and is reported in percentage terms. The coefficient of correlation between x and z is denoted by ρ(x, z). Moments are computed using detrended series. Trends are computed using the Hodrick-Prescott filter with a smoothing parameter of 1,600.
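For readers who want to reproduce statistics of this kind from simulated series, the sketch below computes a subset of the reported moments: HP-detrended standard deviations and correlations with a smoothing parameter of 1,600. It is a generic implementation of the stated transformations (the HP filter is written out directly with dense linear algebra), not the authors' code.

```python
import numpy as np

def hp_cycle(x, lam=1600.0):
    """Cyclical component of the Hodrick-Prescott filter: solve (I + lam*D'D) trend = x,
    where D is the second-difference operator, and return x - trend."""
    T = len(x)
    D = np.zeros((T - 2, T))
    for t in range(T - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * (D.T @ D), x)
    return x - trend

def simulation_moments(y, c, tb, lam=1600.0):
    """Standard deviations (in percent) and correlations of HP-detrended series."""
    y_c  = hp_cycle(np.log(y), lam)     # detrended log income
    c_c  = hp_cycle(np.log(c), lam)     # detrended log consumption
    tb_c = hp_cycle(tb, lam)            # detrended trade balance / income
    return {
        "sigma_y": 100 * np.std(y_c),
        "sigma_c_over_sigma_y": np.std(c_c) / np.std(y_c),
        "sigma_tb": 100 * np.std(tb_c),
        "rho_tb_y": np.corrcoef(tb_c, y_c)[0, 1],
        "rho_c_y": np.corrcoef(c_c, y_c)[0, 1],
    }
```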
Table 2 shows that the income-indexed bond allows the government to
avoid defaults. With non-contingent bonds, the government, when it borrows,
promises payments for which it would choose to default if next-period income
is low. In contrast, with income-indexed bonds the government cannot gain
from promising a payment for which it would choose to default.
11 We do not find significant differences in the welfare gains from introducing indexed debt when we use 100 grid points instead (Hatchondo, Martinez, and Sapriza [2010] discuss the sensitivity of a default model's predictions to changes in the grid specification).

Table 2 Simulation Statistics

                                     Non-Contingent Bonds    Indexed Bonds
σ(ỹ)                                 5.58                    5.58
Defaults per 100 Years               2.82                    0.00
E(r_s)                               3.24                    0.00
σ(r_s)                               2.92                    0.00
Mean Debt (% of Mean Income)         3.94                    17.89
σ(c̃)/σ(ỹ)                            1.07                    0.79
σ(tb)                                1.13                    1.81
ρ(tb, ỹ)                             −0.24                   0.69
ρ(r_s, ỹ)                            −0.36                   0.00
ρ(c̃, ỹ)                              0.98                    0.96

Table 2 also shows that income-indexed bonds allow the government to increase its mean level of indebtedness from 4 percent to 18 percent of mean income. With non-contingent bonds, if the government were to promise to pay 18 percent of mean income, the probability of default would be very high and the government would have to pay a very high interest rate to compensate lenders for default risk. That interest rate would be high enough to deter the government from choosing such high debt levels. In contrast, with indexed bonds, the government can promise to pay more when next-period income is higher, which implies a higher cost of defaulting (see equation (3)). That is, with indexed bonds, the government can bring to the present resources from future high-income states without increasing the probability of default in low-income states. Figure 1 illustrates that this is in fact what the government chooses to do.12 Recall that in the model the government is eager to borrow because it discounts future consumption at a rate higher than the risk-free interest rate.
In addition, Table 2 shows that income-indexed bonds allow the government to reduce the ratio of standard deviations of consumption relative to
income from 1.07 to 0.79. A mirror result is that the trade balance is procyclical with income-indexed bonds and countercyclical with non-contingent
bonds. To account for this result, note first that income-indexed bonds allow
the government to smooth consumption by buying claims that pay in states
with lower next-period income and borrowing against states with higher nextperiod income (see Figure 1).
Furthermore, as shown in Table 2, the spread is countercyclical in the
economy with non-contingent bonds. In bad times, the cost of defaulting is
lower (see equation (3)) and, therefore, the probability of default and the cost of
borrowing are higher. Consequently, optimal borrowing becomes procyclical:
In bad times, since the cost of borrowing is higher, the government chooses to
finance more of its debt service obligations by lowering consumption instead of
borrowing. In contrast, with indexed bonds, the cost of borrowing is constant
and thus the government chooses to borrow more when income is lower.
12 The figure also shows that the indexed-debt borrowing limit binds for sufficiently high next-period income. Furthermore, the figure shows that with non-contingent debt, the government only issues debt with a face value of 1.2 percent of current income.


Figure 1 Borrowing Decisions in a State with Zero Debt and Income
Equal to its Unconditional Mean

[Figure 1 plots saving as a fraction of current output (vertical axis) against next-period output (horizontal axis), showing three series: saving with indexed debt, the borrowing limits, and saving with non-indexed debt.]

Notes: The dashed line represents the demand for claims contingent on next-period income chosen by the government. The solid line represents the thresholds at which the
government will be indifferent between defaulting and not defaulting in the next period.
The dotted line represents the saving decision in the economy with non-indexed debt.

Figure 2 presents the distribution of welfare gains from implementing
indexed bonds. We compute this distribution using all combinations of income
and debt levels in the simulations with non-contingent bonds, for periods with
access to capital markets. For each combination of debt and income, we
measure welfare gains as the constant proportional change in consumption
that would leave a consumer indifferent between living in the economy with
non-contingent debt and in the economy with income-indexed bonds. This
consumption change is given by


\left( \frac{W_0(b, y)}{V_0(b, y)} \right)^{\frac{1}{1-\gamma}} - 1,

and can be easily derived from equations (1) and (2). A positive value means
that agents prefer the economy with income-indexed bonds. For instance, the
figure shows that for 50 percent of the combinations of income and debt levels
we consider, welfare gains are higher than 0.45 percent.
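For completeness, the derivation runs as follows, under the assumption (standard in this literature, and presumably embodied in equations (1) and (2), which are not reproduced here) that the period utility function is CRRA, u(c) = c^{1-\gamma}/(1-\gamma). Scaling consumption in every period and state by a factor 1 + \delta scales lifetime utility by (1+\delta)^{1-\gamma}, so the compensating change \delta solves

(1 + \delta)^{1-\gamma} V_0(b, y) = W_0(b, y),

which gives \delta = \left( W_0(b, y) / V_0(b, y) \right)^{1/(1-\gamma)} - 1.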


Figure 2 Distribution of Consumption Compensation that Makes
Domestic Agents Indifferent between Living in the Economy
with Non-Contingent Debt and the Economy with
Income-Indexed Bonds
[Figure 2 plots the welfare gain in percent (vertical axis) against the percentile of the distribution of income and debt levels (horizontal axis, 0 to 100).]

Notes: The distribution (in percentage terms) is computed using the distribution of income and debt levels observed in the economy with non-contingent bonds in periods with
market access. For instance, the graph shows that, for half of the combinations of income and debt levels observed in the simulations, the welfare gain is no larger than 0.45
percent.

Figure 2 shows that, for all combinations of income and debt levels we consider, the welfare gain from introducing indexed bonds is positive. On average, this gain is equivalent to an increase of 0.46 percent of consumption.
Figure 3 depicts the distribution of welfare gains computed comparing the economy with indexed debt with a hypothetical economy in which there are no income losses triggered by defaults but in which the government follows the saving and default rules of the benchmark economy with non-contingent debt. The figure indicates that income losses triggered by defaults play a relatively small role in accounting for the welfare gains from introducing indexed debt. Most of the welfare gains from introducing indexed debt come from the relaxation of the government's borrowing constraint: Indexed debt allows the government to borrow more and smooth consumption. The small role of income losses triggered by defaults is not surprising since defaults are infrequent and occur in periods where income losses are small (see equation (3)).


Figure 3 Distribution of Consumption Compensation that Makes
Domestic Agents Indifferent between Living in Each of Two
Economies with Non-Contingent Debt and the Economy with
Income-Indexed Bonds
[Figure 3 plots the welfare gain in percent (vertical axis) against the percentile of the distribution (horizontal axis, 0 to 100), showing two series: the welfare gain vs. the benchmark and the welfare gain vs. the benchmark with no output cost.]

Notes: The first economy with non-contingent debt is our benchmark economy (welfare
gains are represented with a dark line). The second economy with non-contingent debt
is a hypothetical economy in which there are no income losses triggered by defaults,
but the government follows the saving and default rules of the benchmark economy with
non-contingent debt (welfare gains are represented with a gray line).

4. CONCLUSIONS

We introduced income-indexed bonds into a standard sovereign default model
and illustrated how a government may benefit from using these bonds instead
of non-contingent bonds. Income-indexed bonds allow the government to
avoid costly default episodes, increase its level of indebtedness, and improve
consumption smoothing.
There are difficulties associated with issuing income-indexed bonds that are not present in our setup. First, we do not consider difficulties that may arise in verifying the state on which the debt contracts are written. Second, there may be other shocks that could affect the willingness to repay. Third, we abstract from moral hazard problems that could be created by the introduction of GDP-indexed bonds. Expanding our analysis would enhance the understanding of the effects of introducing indexed sovereign bonds and is the subject of our ongoing research.
subject of our ongoing research.

REFERENCES
Aguiar, Mark, and Gita Gopinath. 2006. “Defaultable Debt, Interest Rates
and the Current Account.” Journal of International Economics 69
(June): 64–83.
Arellano, Cristina. 2008. “Default Risk and Income Fluctuations in
Emerging Economies.” American Economic Review 98 (June): 690–712.
Arellano, Cristina, and Ananth Ramanarayanan. 2012. “Default and the
Maturity Structure in Sovereign Bonds.” Journal of Political Economy
120: 187–232.
Athanasoulis, Stefano G., and Robert J. Shiller. 2001. “World Income
Components: Measuring and Exploiting International Risk Sharing
Opportunities.” American Economic Review 91 (September): 1,031–54.
Bianchi, Javier, Juan Carlos Hatchondo, and Leonardo Martinez. 2012.
“International Reserves and Rollover Risk.” Mimeo.
Borensztein, Eduardo, Marcos Chamon, Olivier Jeanne, Paolo Mauro, and Jeromin Zettelmeyer. 2004. “Sovereign Debt Structure for Crisis Prevention.” IMF Occasional Paper 237.
Borensztein, Eduardo, and Paolo Mauro. 2004. “The Case for GDP-Indexed
Bonds.” Economic Policy 19: 165–216.
Borensztein, Eduardo, and Ugo Panizza. 2008. “The Costs of Sovereign
Default.” IMF Working Paper 08/238 (October).
Borri, Nicola, and Adrien Verdelhan. 2009. “Sovereign Risk Premia.”
Manuscript, MIT.
Caballero, Ricardo. 2002. “Coping with Chile’s External Vulnerability: A
Financial Problem.” Central Bank of Chile Working Paper 154.
Chamon, Marcos, and Paolo Mauro. 2006. “Pricing Growth-Indexed
Bonds.” Journal of Banking & Finance 30 (December): 3,349–66.
Chatterjee, Satyajit, and Burcu Eyigungor. Forthcoming. “Maturity,
Indebtedness and Default Risk.” American Economic Review.
Chatterjee, Satyajit, Dean Corbae, Makoto Nakajima, and Jose-Victor
Rios-Rull. 2007. “A Quantitative Theory of Unsecured Consumer Credit
with Risk of Default.” Econometrica 75 (November): 1,525–89.


Durdu, Ceyhun Bora. 2009. “Quantitative Implications of Indexed Bonds in
Small Open Economies.” Journal of Economic Dynamics and Control
33 (April): 883–902.
Eaton, Jonathan, and Mark Gersovitz. 1981. “Debt with Potential
Repudiation: Theoretical and Empirical Analysis.” Review of Economic
Studies 48 (April): 289–309.
Faria, André L. 2007. “Growth-Indexed Bonds in Emerging Markets: A
Quantitative Approach.” Mimeo.
Gavin, Michael, and Roberto Perotti. 1997. “Fiscal Policy in Latin
America.” In NBER Macroeconomics Annual 1997 Volume 12, edited by
Ben S. Bernanke and Julio J. Rotemberg. Cambridge, Mass.: National
Bureau of Economic Research, 11–71.
Griffith-Jones, Stephany, and Krishnan Sharma. 2005. “GDP Indexed Bonds:
Making it Happen.” United Nations Department of Economic and Social
Affairs Working Paper 21 (April).
Hatchondo, Juan Carlos, and Leonardo Martinez. 2009. “Long-Duration
Bonds and Sovereign Defaults.” Journal of International Economics 79
(September): 117–25.
Hatchondo, Juan Carlos, and Leonardo Martinez. 2010. “The Politics of
Sovereign Defaults.” Federal Reserve Bank of Richmond Economic
Quarterly 96 (3): 291–317.
Hatchondo, Juan Carlos, Leonardo Martinez, and César Sosa Padilla. 2010.
“Debt Dilution and Sovereign Default Risk.” Federal Reserve Bank of
Richmond Working Paper 10-08R.
Hatchondo, Juan Carlos, Leonardo Martinez, and Francisco Roch. 2012.
“Fiscal Rules and the Sovereign Default Premium.” Federal Reserve
Bank of Richmond Working Paper 12-01 (March).
Hatchondo, Juan Carlos, Leonardo Martinez, and Horacio Sapriza. 2007a.
“The Economics of Sovereign Default.” Federal Reserve Bank of
Richmond Economic Quarterly 93 (Spring): 163–97.
Hatchondo, Juan Carlos, Leonardo Martinez, and Horacio Sapriza. 2007b.
“Quantitative Models of Sovereign Default and the Threat of Financial
Exclusion.” Federal Reserve Bank of Richmond Economic Quarterly 93
(Summer): 251–86.
Hatchondo, Juan Carlos, Leonardo Martinez, and Horacio Sapriza. 2009.
“Heterogeneous Borrowers in Quantitative Models of Sovereign
Default.” International Economic Review 50 (November): 1,129–51.


Hatchondo, Juan Carlos, Leonardo Martinez, and Horacio Sapriza. 2010.
“Quantitative Properties of Sovereign Default Models: Solution Methods
Matter.” Review of Economic Dynamics 13 (4): 919–33.
Ilzetzki, Ethan, and Carlos A. Vegh. 2008. “Procyclical Fiscal Policy in
Developing Countries: Truth or Fiction?” Working Paper 14191.
Cambridge, Mass.: National Bureau of Economic Research (July).
Kaminsky, Graciela L., Carmen M. Reinhart, and Carlos A. Vegh. 2004. “When It Rains, It Pours: Procyclical Capital Flows and Macroeconomic Policies.” In NBER Macroeconomics Annual 2004, Volume 19, edited by Mark Gertler and Kenneth Rogoff. Cambridge, Mass.: National Bureau of Economic Research, 11–53.
Kamstra, Mark J., and Robert J. Shiller. 2010. “Trills Instead of T-Bills: It’s
Time to Replace Part of Government Debt with Shares in GDP.” The
Economists’ Voice Volume 7, Issue 3, Article 5 (September).
Krugman, Paul R. 1988. “Financing vs. Forgiving a Debt Overhang.”
Journal of Development Economics 29 (November): 253–68.
Krusell, Per, and Anthony A. Smith, Jr. 2003. “Consumption-Savings
Decisions with Quasi-Geometric Discounting.” Econometrica 71
(January): 365–75.
Lizarazo, Sandra. 2006. “Contagion of Financial Crises in Sovereign Debt
Markets.” ITAM Working Paper.
Mendoza, Enrique G., and Vivian Z. Yue. 2012. “A General Equilibrium Model of Sovereign Default and Business Cycles.” Quarterly Journal of Economics 127 (2): 889–946.
Sandleris, Guido, Horacio Sapriza, and Filippo Taddei. 2011. “Indexed
Sovereign Debt: An Applied Framework.” Collegio Carlo Alberto
Working Paper 104 (November).
Shiller, Robert. 1993. Macro Markets: Creating Institutions for Managing
Society’s Largest Economic Risks. New York, N.Y.: Oxford University
Press.
Talvi, Ernesto, and Carlos A. Vegh. 2005. “Tax Base Variability and
Procyclical Fiscal Policy in Developing Countries.” Journal of
Development Economics 78 (October): 156–90.
Tauchen, George. 1986. “Finite State Markov-Chain Approximations to
Univariate and Vector Autoregressions.” Economics Letters 20 (2):
177–81.
Tomz, Michael, and Mark L. J. Wright. 2007. “Do Countries Default in ‘Bad Times’?” Journal of the European Economic Association 5: 352–60.


United Nations. 2006. “GDP-Indexed Bonds: An Idea Whose Time has
Come.” Seminar Report held at the International Monetary Fund,
Washington, D.C., April 21.
Vegh, Carlos A., and Guillermo Vuletin. 2011. “How is Tax Policy Conducted
Over the Business Cycle?” University of Maryland Working Paper.