
First Quarter 2014
Federal Reserve Bank of Philadelphia

Volume 97, Issue 1


Ten Independence Mall
Philadelphia, Pennsylvania 19106-1574


Gettysburg National Historical Park, PA

Does the U.S. Trade More Widely Than It Appears?
Location Dynamics: A Key Consideration for Urban Policy
Past and current Business Review articles can be downloaded for
free from our website. There you will also find data and other
information on the regional and national economy, consumer
finance issues, resources for teachers, information on our
community development initiatives, the latest research publications
from the Federal Reserve Bank of Philadelphia, and more.

Brewing Bubbles: How Mortgage Practices Intensify Housing Booms
New Perspectives on Consumer Behavior in Credit and Payments Markets
Research Rap

Photo by Lisa DeCusati

www.philadelphiafed.org

INSIDE
ISSN: 0007-7011

First Quarter 2014

The Business Review is published four
times a year by the Research Department of
the Federal Reserve Bank of Philadelphia.
The views expressed by the authors are not
necessarily those of the Federal Reserve.
We welcome your comments at PHIL.BRComments@phil.frb.org.

Does the U.S. Trade More Widely Than It Appears?

For a free subscription, go to www.philadelphiafed.org/research-and-data/publications. Archived articles may be downloaded at www.philadelphiafed.org/research-and-data/publications/business-review. To request permission to reprint articles in whole or in part, click on Permission to Reprint at www.philadelphiafed.org/research-and-data/publications. Articles may be photocopied without permission. Microform copies may be purchased from ProQuest Information and Learning, 300 N. Zeeb Road, Ann Arbor, MI 48106.
The Federal Reserve Bank of Philadelphia
is one of 12 regional Reserve Banks that,
together with the U.S. Federal Reserve
Board of Governors, set and implement
U.S. monetary policy. The Philadelphia
Fed oversees and provides services to
banks and the holding companies of
banks and savings and loans in the Third
District, comprising eastern Pennsylvania,
southern New Jersey, and Delaware. The
Philadelphia Fed also promotes economic
development, fair access to credit, and
financial literacy in the Third District.
Charles I. Plosser
President and Chief Executive Officer
Loretta J. Mester
Executive Vice President and
Director of Research



Given the importance of international trade for economic growth, why
in any given year do few U.S. firms export their wares, and why are most
U.S. goods not traded with most countries? Roc Armenter presents some
intriguing evidence suggesting the U.S. does export most of its products to
most countries, just not very often.

Location Dynamics: A Key Consideration for Urban Policy


What determines where businesses and households locate? Location
decisions can affect the economic health of cities and metropolitan areas.
But as Jeffrey Brinkman explains, how firms, residents, and workers go
about choosing where to locate can involve complex interactions with
sometimes unpredictable consequences.

Brewing Bubbles: How Mortgage Practices Intensify
Housing Booms


Even before the Great Recession, housing market bubbles have been
associated with severe financial crises around the world. Why do these
booms and busts occur? Leonard Nakamura explains that part of the
answer may lie with how mortgage lending practices appear to respond to
rising and falling house prices in somewhat unexpected ways.

New Perspectives on Consumer Behavior in Credit
and Payments Markets


Mitchell Berlin summarizes new research on household finance
presented at a joint conference sponsored by the Federal Reserve Bank of
Philadelphia’s Research Department and Payment Cards Center.

Research Rap

Abstracts of the latest working papers produced by the Research Department of the Federal Reserve Bank of Philadelphia.

Colleen Gallagher
Research Publications Manager

Dianne Hallowell
Art Director and Manager

Now Available:
The Federal Reserve Historical Inventory
and History Web Gateway
The Federal Reserve’s inventory of historical collections offers students, researchers, and others online access to thousands of documents and artifacts related to the Fed’s 100-year history from sources across the Federal Reserve System, universities, and private collections. To view the inventory, go to http://www.federalreserve.gov/newsevents/press/other/other20120530a1.pdf. More content and features will be added over time. Do you know of materials that should be included? Information may be submitted at http://www.federalreserve.gov/apps/contactus/feedback.aspx.
Also available is the new Federal Reserve History Web Gateway.
Designed to encourage deeper reflection on the Fed’s role in the
nation’s economy, the gateway presents biographies, timelines,
photographs, and more that tell the story of the Fed’s purpose,
key economic events affecting U.S. history, and the people who
shaped the Fed. Go to http://www.federalreservehistory.org/.
On December 23, 1913, President Woodrow Wilson signed the
Federal Reserve Act, establishing the Federal Reserve System
as the U.S. central bank. Its mission is to conduct the nation’s
monetary policy; supervise and regulate banks; maintain the
stability of the financial system; and provide financial services to
depository institutions, the U.S. government, and foreign official
institutions. The Federal Reserve Board opened for business on
August 10, 1914, and on November 16, 1914, the 12 regional
Reserve Banks were open for business.
Congress designed the Fed with a decentralized structure. The
Federal Reserve Bank of Philadelphia — serving eastern Pennsylvania, southern New Jersey, and Delaware — is one of 12
regional Reserve Banks that, together with the seven-member
Board of Governors in Washington, D.C., make up the Federal
Reserve System. The Board, appointed by the President of the
United States and confirmed by the Senate, represents the public sector, while the Reserve Banks and the local citizens on their
boards of directors represent the private sector.
The Research Department of the Philadelphia Fed supports the
Fed’s mission through its research; surveys of firms and forecasters; reports on banking, markets, and the regional and U.S.
economies; and publications such as the Business Review.

Does the U.S. Trade More Widely Than It Appears?
By Roc Armenter

Trade matters. International commerce accounts for almost
one-fifth of the U.S. economy’s gross output. And by
finding foreign markets for their goods, U.S. manufacturers
provide jobs at home — even while competition from
cheaper foreign goods may dampen domestic employment.
Indeed, it is not a stretch to say that economics as a separate discipline
was born from the observations of David Ricardo and Adam Smith on
trade. But trade matters beyond its impact on national income. It affects
domestic workers and firms that face foreign competition, and as a
result, it is a recurrent topic of public discussion.


We often hear stories about some
developing country offering a product
at half the price of a made-in-America
equivalent and sending a domestic
industry into disarray and its workers
into unemployment. Or politicians debate the fairness and impact of China’s
trade policy on the U.S. economy. Indeed, China is the perfect example of
a country “making the leap” through
trade, catching up with the latest technology and being able to compete in
global markets. And going further back
in time, but much closer in space, the
cotton trade was instrumental in the
development of the U.S. economy in
the 19th century.
Given trade’s importance, it is perhaps surprising to learn that most of
the products manufactured in the U.S.
are actually not traded with the vast
majority of countries over the course of
a year.1 For example, the U.S. exports
several thousand distinct products to

Canada, spanning most of the nearly
9,000 product classifications provided
by the U.S. Commerce Department.
Yet, the U.S. sells just a few hundred
to many other countries. Why would
the U.S. sell a product in Germany and
not in, say, Poland? Another interesting observation is that few U.S. firms
actually engage in exporting. In 2005,
less than a fifth of all U.S. manufacturing firms had any foreign sales. Given
that the vast majority of manufactured
goods can be traded at a relatively low
transportation cost, why are so many
U.S. firms failing to compete abroad?
Are there insurmountable barriers to
1
My discussion will focus on trade in manufactured goods. Of course, services are also traded
internationally in the form of travel, royalties,
license fees, and so forth. Although the U.S.
actually exports more services than it imports,
and thus enjoys a small surplus in this category,
services remain a relatively small component
of total trade compared with goods and raw
materials.

Roc Armenter is a vice president and economist at the Federal
Reserve Bank of Philadelphia. The views expressed in this article
are not necessarily those of the Federal Reserve. This article and
other Philadelphia Fed reports and research are available at www.philadelphiafed.org/research-and-data/publications.


trade, perhaps some of them man-made? Or is the U.S. manufacturing
sector much less competitive abroad
than we thought? In other words, what
is behind these “missing” trade flows?
Economists would like to understand
the underlying barriers to trade to be
able to answer all these questions.
Several researchers have made
substantial progress by documenting
strong links between trade and both
market size and firm size. First, the
U.S. is more likely to trade with larger,
closer countries. Second, it tends to
sell to these countries products that
represent a larger share of its exports.
Third, firms that export are also larger,
in terms of both revenue and employment, and they appear to be more productive and capable of manufacturing
a wide array of products.
These links between trade and
size have led economists to posit
theories of economies of scale in trade.
Economists say that a production technology of a good exhibits economies of
scale when the average production cost
decreases as total production increases.
The basic tenet in firm-level trade
models is that firms must incur a large
initial cost to begin selling their goods
in a foreign market. For example,
they may need to set up a distribution
network or modify the product to meet
the destination country’s standards.
But as the exporting firm sells more of
the product to the importing country,
these costs are offset by more sales revenue. Therefore, the bigger the firm,
the bigger the production run, and the
lower the cost of exporting per individual good sold. Economies of scale
theories can explain why small firms,
small countries, and low-demand products may not trade.
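The definition can be illustrated with a toy average-cost calculation (the numbers are hypothetical, not from the article): spreading a fixed cost over a larger production run pushes the average cost per unit down.

```python
# Economies of scale in miniature: with a fixed cost F and a constant
# per-unit cost c, average cost (F + c*q) / q declines as quantity q grows,
# because the fixed cost is spread over more units.
def average_cost(q, fixed=10.0, unit=0.9):
    """Average cost per unit at quantity q (hypothetical parameters)."""
    return (fixed + unit * q) / q

print(average_cost(10))    # 1.9 per unit
print(average_cost(1000))  # about 0.91 per unit
```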

However, as we will see, economies of scale are not adequate to explain certain key aspects of actual international trade flows. For example, it
is often the case that a product will be
exported to one destination one year
and not the next, and then shipped
there again the year after that. It is also
telling that many actual trade flows are
very small in quantity or value, which
casts some doubt on whether trade
barriers are in fact all that formidable.
To help explain these observations,
I will instead advance the possibility
that the U.S. does export most of its
products to most countries — just not
very often. It turns out that for many
possible trade flows, we should not
expect to see trade every year, but perhaps only once every few years. It thus
becomes difficult to assert whether a
missing trade flow in any one year is
indeed a relevant observation. The
distinction between missing and infrequent trade is important because the
latter implies that the impediments to
trade may be substantially smaller than
previously thought.
A RICH, QUIRKY TROVE OF DATA
The U.S. collects and makes
available detailed data for both imports and exports through the Census
Bureau. At the monthly frequency,
trade data provide information about
each shipment, specifying its total dollar value, the country of origin (for imports) or destination (for exports), and
detailed information about the product
shipped. This trove has its origins in
tariff and duty collection, which, luckily for trade economists, requires detailed data, as the rates typically vary
with the type of product and country
of origin or destination.
Currently, each product is classified according to the Harmonized
System (HS) of unique 10-digit codes.
The first two digits indicate the broadest category, known as a chapter (for
example, cereals, pharmaceutical
products, or beverages); the next two
digits provide a more detailed description and so on. For example, a beverage is first classified as water, juice,
soda, beer, wine, and so on. Then if
the beverage is, say, wine, it is further
classified as fermented from grapes or
another fruit, as sparkling or not, and
finally as red or white.2 These codes
are valuable to trade economists, who
often use the 10-digit description to
indicate a distinct product. However,
we do need to recognize that the classification system was not designed with
academic research in mind. Sometimes
even a 10-digit classification is covering up a substantial amount of heterogeneity. Take code HS6110110020,
which covers the fairly broad category
of women’s wool sweaters. Meanwhile,
other codes introduce quite irrelevant
distinctions such as the size of the
container. Sometimes products receive
very close classifications because they
share some physical or production attributes, yet we would never think of
having one instead of the other. For
example, vinegar is classified with
wine as a beverage!
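The nesting can be made concrete by slicing an example code into its levels. A minimal sketch (the helper function is hypothetical; the 2-, 4-, and 6-digit cut points follow the scheme described above, with the finer U.S.-specific digits beyond six):

```python
# Slice a 10-digit HS code into its nested levels, per the scheme in the
# text: the first two digits give the chapter, each further pair of digits
# adds detail, and the first six digits are common across countries.
def hs_levels(code):
    """Return the 2-, 4-, 6-, and 10-digit prefixes of an HS code."""
    assert len(code) == 10 and code.isdigit()
    return {"chapter": code[:2], "heading": code[:4],
            "subheading": code[:6], "full": code}

# The women's wool sweaters code mentioned in the text:
print(hs_levels("6110110020"))
```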
MISSING TRADE FLOWS
The data show that in any given
year the U.S. trades a surprisingly narrow range of products with a limited
set of destinations — trade being more
common with large, nearby countries.
To determine to what extent U.S. firms
are absent from foreign markets, let us
first construct a measure of all possible
trade flows. To keep the discussion
concise, we focus on U.S. exports in
2002.3 Take all the products the U.S.
sold somewhere and all the countries

where the U.S. sold something in 2002.
Combine both to construct all possible
product-country pairs; that is, vinegar
to Germany is one pair, vinegar to
Guatemala another one; women’s wool
sweaters to Guatemala is yet another.
Which fraction of these possible
trade flows did we actually observe in
2002? The surprising answer is very
few — less than one-fifth of them!
There are about 9,000 active product
classifications. Looking at countries,
we find that Canada received more
than 8,000 different products from the
U.S., but half of the countries received
fewer than 700 products, and one-quarter of the countries received no
more than 150 products.4
Looking at products, we find that
half of the products were sold to only
35 or fewer countries, and a quarter
of them reached 15 countries at most.
Since there are questions about the HS
classification being the right definition
of a product, it is worth asking what
happens if we use a broader classification. Table 1 reports the share of
missing trade flows among all possible
product-country pairs for different
classification levels, from 10 digits (the
most detailed description) to two digits
(the broadest definition). The majority
of possible trade flows remain unobserved even when product definitions
are lumped together at the four-digit
level, encompassing more than 1,000
distinct categories. Even if we distinguish only among broad chapters
— there are only about 100 of them
— more than one-third of all possible
trade flows are missing. Similar results
are obtained for imports.

2 The HS system is maintained by the World Customs Organization, with the first six-digit classification being common across countries. More detailed descriptions are often associated with tariff legislation. A complete guide to the HS system can be found at http://www.usitc.gov/tata/hts/bychapter/index.htm.

3 This is unfortunately the latest data available at the firm level.

4 Shipments valued at less than $2,000 do not need to be reported, so it is possible that the fraction of actual trade flows is larger. Available estimates of low-value shipments suggest that the difference in the total fraction is unlikely to be great.


TABLE 1
Missing Product-Country Trade Flows for Different Classifications

Classification level    Number of traded products    Missing trade flows
10 digit                8,877                        82%
6 digit                 5,182                        79%
4 digit                 1,244                        66%
2 digit                 97                           36%

Sources: Census Bureau and author’s calculations.

Regarding which products are sold
where, there is a clear pattern based on
market size. For each destination country, it is possible to construct a measure
of its market size, starting with the
country’s gross domestic product and
adjusting it by the country’s distance to
the U.S. and by other variables known
to increase trade costs. The resulting formula — known as the “gravity
equation” in trade for its similarity to
physics: closer and larger objects (or
countries) exert a greater pull on (or
trade more with) others — is excellent
at predicting bilateral trade volumes.
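In its simplest textbook form, the gravity equation predicts trade rising with economic size and falling with distance. A stylized sketch (the functional form and unit elasticities here are illustrative placeholders, not estimates from the article):

```python
# Stylized gravity equation: predicted bilateral trade is proportional to
# the product of the two economies' sizes divided by the distance between
# them. Unit elasticities are assumed purely for illustration.
def gravity_trade(gdp_a, gdp_b, distance, constant=1.0):
    """Predicted trade volume between two economies (illustrative form)."""
    return constant * gdp_a * gdp_b / distance

# A nearby market "pulls" more trade than an identical faraway one:
print(gravity_trade(100.0, 10.0, distance=1.0))  # 1000.0
print(gravity_trade(100.0, 10.0, distance=5.0))  # 200.0
```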
The data show clearly that the
U.S. sells more products to and buys
more products from larger, closer
countries. Most possible trade flows
with Canada and Mexico do indeed
occur. Similarly, the U.S. engages in
much trade with Germany and Japan,
which are farther away but represent
economic heavyweights.5 Figure 1 plots
each destination country’s market size
against the number of products the
U.S. sells there. Because the differences
in market sizes across countries are
very large, we need to use a log scale
for the axes.6 Market size is captured as

the country’s market share in total U.S.
exports. The number of U.S. products
sold clearly increases, becoming quite
tight as market size increases. Note

of a third one, it will appear to be halfway between the two in a log scale but would instead
show up much closer to the smaller country in
a linear scale.

that the number of exported products
increases rapidly at first but then slows
down for destinations with very large
market sizes. In these countries, most
of the products are traded. Recall that
by virtue of the classification system,
no more than about 9,000 products can
be sold to a given country.
Of course, different products
also have different market sizes. It is
perhaps not surprising to learn that automobiles make up a larger fraction of
U.S. trade than turnips do. There are
several techniques to identify variation in product-market size. A simple
approximation is to use aggregate trade
shares across products or, for example,
the trade shares for Canadian exports.
Using either measure, the data are
clear: The U.S. is more likely to export
products with large markets to more
countries. Figure 2 brings this point
home. It is a scatter plot as in Figure 1,

FIGURE 1
More Products Exported to Countries
with Larger Markets

5
Japan’s GDP is about triple Canada’s, and
Germany’s is about two times bigger.

A log scale measures relative rather than
absolute differences. For example, if a country is
twice as big as another country but half the size
6

www.philadelphiafed.org

Sources: Census Bureau and author’s calculations.

Business Review Q1 2014 3

on log axes. Now we plot the market
size of the product against the number
of countries to which the product is
sold. Again, the relationship increases,
though the trend is noisier than it is
for countries.
FIRMS AND EXPORTS
Countries do not decide what to
trade; firms and consumers do. So let
us look at firms.7 Only 18 percent of
U.S. manufacturing firms sold goods
abroad in 2002, and the ones that did
were consistently larger: Their total
foreign and domestic sales were four to

five times larger on average than those of firms that did not export. There are also systematic differences regarding employment, wages, and measures of firm performance such as labor productivity. Exporting firms employ more workers, pay higher wages, and have higher average output per worker-hour than nonexporting firms. In contrast, the differences across sectors were small. Less than 40 percent of the firms had foreign sales in the sectors for computers and electronic products and electrical equipment, appliances and components — the quintessential modern traded goods. The share of firms that exported was much lower in other sectors — as low as 5 percent in printing, publishing, and similar products, and 7 percent for furniture and fixtures.

7 Unfortunately, firm-level data are proprietary, but we can look at the big picture by combining the work of several economists as well as Commerce Department trade data from 2002. For a complete overview of exporters, see Alessandria and Choi (2010). A classic article in the literature is Bernard, Jensen, Redding, and Schott (2007). The facts that follow pertaining to firms and foreign sales are based on their analysis.
So perhaps we are zeroing in on
the reason the U.S. trades so few products to so few countries, yet where it

does trade it does so in large quantities: Most U.S. firms are either unable or unwilling to sell any amount abroad, but those that do are very large and competitive.

FIGURE 2
Products with Large Markets Exported to More Countries
Sources: Census Bureau and author’s calculations.
WHY IS THERE NOT MORE TRADE?
One possibility accounting for
missing trade flows is that the U.S.
is specializing in some products due
to a comparative advantage, perhaps
because of different factor endowments such as access to raw materials
or a skilled workforce. This hypothesis
runs afoul of the data: Most trade is
intraindustry. For example, the U.S.
sells cars to Germany, but Germany
also sells cars to the U.S. Thus, neither
can be said to specialize in cars. The
relationship with size, especially at the
firm level, is also puzzling. For the comparative advantage theory to hold, the
source of the advantage would need to
be systematically related to market size.
Trade economists instead currently favor a theory based on economies
of scale in trade. The basic idea is that
a firm faces a fixed cost, independent
of actual sales, when accessing a foreign market. Unless the net revenues
can cover the fixed expense, the firm
would not sell in that particular market. Clearly, net revenues are tied to
market size; thus, economies of scale
can explain the relationship between
missing trade and market size and why
some trade flows go missing.8 Most
of these models trace their lineage to
Melitz (2003).
Economies of scale can also explain why some firms export and some
don’t. More productive firms are able
to sell more and thus are more likely to
be willing to incur the fixed cost. They
will employ more workers and venture

into additional product lines. The same can be said about firms capable of producing better-quality or high-margin goods. See The Relationship Between Size and Exporting for a useful example.

8 Baldwin and Harrigan (2011) document how several models with economies of scale perform against the data, focusing on the facts reported in the previous section.
Economies of scale in trade have
some important implications in the
event of a reduction in trade costs or
tariffs. In particular, they predict that
trade leads to an improvement in productivity industrywide, a very appealing prospect to trade economists, who
have long suspected that liberalizing
trade boosts efficiency. The mechanism is quite simple. As discussed
before, more productive firms are more
likely to be exporters. Now, a reduction
in trade costs has two immediate effects. First, it increases the revenues of
exporters as the cost of shipping their
products to foreign markets decreases.

Second, it reduces the revenues of
domestic firms that do not export as
they face increased competition from
foreign firms that do export.
In short, exporters expand, while
nonexporters contract. Employment
then shifts from the latter to the former. Since exporters are more productive, the average productivity of the
industry and the economy increases.
Although the increase in overall productivity represents a long-run gain
for the economy, short-run costs may
be significant. Smaller, less productive firms that sell only domestically
may be driven out of business, leaving
their workers unemployed, at least for
a time. If these firms are concentrated
geographically or economically, the
reallocation of resources and workers to
the more productive, exporting firms
may be slow.

The Relationship Between Size and Exporting

Say U.S. firms must incur a cost of $10 to gain access to a foreign market. Trinkets & U is a successful firm known for its
uniquely useful trinkets. For each dollar’s worth of trinkets
sold, the firm makes a profit of 10 cents. Canada, a large country accessible by road and rail, is an attractive market. The
firm knows it would sell $200 worth of trinkets, making $20 in
profits. It will thus recoup the $10 cost of exporting, and it gladly incurs it.
Now consider Andorra, a small, landlocked country across the Atlantic
Ocean. The U.S. firm expects to sell no more than $40 worth of trinkets there,
which adds up to a paltry profit of $4 — not enough to cover the expense of
$10 needed to access the Andorran market.
Returning to the U.S., we meet Gadgets Inc., a failing firm that produces
quite useless gadgets. As a result, Gadgets will sell only $120 worth of goods
in Canada. To top it off, an inefficient production process shaves most of the
profit down to only 5 cents per dollar. As a result, Gadgets Inc. does not sell in
Canada, since it would net only $6 in profit, not enough to cover the fixed
cost of $10.
Note that if Gadgets Inc. had managed to sell as much as Trinkets
& U, even while making only 5 cents per dollar, it would have chosen to export to Canada. Similarly, if it had sold only $120 but had a margin of 10 cents
per dollar, it would have gone ahead and exported. The larger picture should
be clear: Firms with low productivity and/or small margins are less likely to be
exporters. These firms are also likely to be smaller, selling less and employing
fewer workers.
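The sidebar’s arithmetic can be condensed into a one-line decision rule. A sketch (a hypothetical formalization of the fixed-cost logic, using the sidebar’s numbers):

```python
# Fixed-cost export decision from the sidebar: a firm enters a foreign
# market only if expected sales times its profit margin covers the fixed
# entry cost ($10 in the example).
def exports_to_market(expected_sales, margin, fixed_cost=10.0):
    """True if variable profit covers the fixed cost of market entry."""
    return expected_sales * margin >= fixed_cost

print(exports_to_market(200, 0.10))  # True: Trinkets & U in Canada ($20 profit)
print(exports_to_market(40, 0.10))   # False: Trinkets & U in Andorra ($4)
print(exports_to_market(120, 0.05))  # False: Gadgets Inc. in Canada ($6)
```

Either a bigger production run or a fatter margin can tip the decision, which is why size and productivity line up with exporting.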


IS TRADE BROADER THAN IT SEEMS?
Economies of scale theories
perform reasonably well in explaining why trade is more likely to involve
large firms, high-demand products, and
large, close destination countries. But
as we will see, these theories run afoul
of the data in some key respects. First,
we see lots of small — actually, tiny
— trade flows, adding up to no more
than a couple of shipments in a given
year. Barriers to trade thus cannot be particularly large; otherwise these firms would be losing money. Second, a lot
of products and destinations appear
and disappear year to year in the data,
only to reappear years later, which we
would not expect if economies of scale
were the whole story. This infrequency
seems to suggest that trade barriers are
not only small but change often. These
observations lead us to explore an alternative hypothesis: The U.S. does trade
most products with most countries, just
not very frequently. That is, a missing
trade flow does not indicate that the
U.S. never sells a particular product
to a particular country; it just has not
done so in the year being examined.
What is so special, after all, about the
time it takes the Earth to go around
the sun? It may well be that no trade
shipment enters the U.S. in the time
it takes to read this article. We will be
overreacting a lot if we conclude that
we have stopped trading completely!
The distinction between infrequent and nonexistent trade flows is
very important, for the latter are the
backbone of the trade theories based
on economies of scale. Infrequent sales
cannot possibly bring home much net
income. Their existence is thus compatible only with a very low fixed cost
of accessing the foreign market. In other words, the barriers to trade, through
the lens of a model with economies
of scale that emphasizes fixed costs,
would have to actually be small if there
is infrequent trade.

Now, it is clearly untrue that all
trade flows are infrequent; we could
end up with no trade at all! In Armenter and Koren (forthcoming), we
show how to develop a simple statistical model that uses the data on aggregate country and product trade flows
to compute a probability that a shipment belongs to a particular product-country pair. The number of shipments
exactly reflects the data, but each of
them is randomly assigned to a product
and a country category, akin to balls
falling into bins at random. As simple
as it sounds, the model is capable of
predicting missing trade flows (that
is, empty bins) and the size of the observed trade flows (how many balls do
we expect to find in a nonempty bin?).
A trade flow’s relationship with the
size of a firm or market is given by the
probability that a trade flow in each
category will occur, or, if you will, the
size of the bin. For instance, Canada
and autos have large bins and thus are
very likely to catch many balls. Turnips
and Andorra have very small bins, and
thus it is very likely that they end up
catching no balls at all. The framework
does not elaborate on why Canada has
more total trade than Andorra or autos sell better abroad than turnips; we
just take these as given or approximate
them through a gravity equation (for
countries) and some model of product-specific trade costs (for products).
An example may be useful at this
point. Assume that Canada’s market
size is 100 times larger than Andorra’s.
For the sake of simplicity, these two
countries are the only trade partners
the U.S. has, and we do not distinguish
among products. The Canadian bin is
100 times larger than the Andorran
bin. Total trade is 10 shipments per
year. The difference in bin sizes implies
there is a 99 percent chance a shipment goes to Canada, and only a 1 percent chance it goes to Andorra. The
probability that more shipments end up going to Andorra in any year is virtually negligible: Canada is expected to receive 9.9 shipments, while Andorra only a tenth of a shipment.
But shipments (or balls, for that matter) do not split! What does it mean for Andorra to be expected to have a tenth of a shipment? It simply says that a shipment to Andorra is expected to be observed about once every 10 years. In other words, the probability that we observe any shipment to Andorra in a given year is only 10 percent.
Given data limitations, it is not straightforward to sort out if a missing trade flow is actually nonexistent or just infrequent. There are, though, some observations that differ between the two hypotheses. If the number of shipments per product-country pair is zero, we cannot say much; that trade flow may be infrequent and we were just unlucky, or it may never happen. Now, things are different if the trade flow is observed. The infrequent trade hypothesis predicts we should see a very small number of shipments, possibly a single one. For, if a shipment is a rare event, two shipments are twice as rare! In contrast, the economies of scale hypothesis suggests that we should see a substantial number of shipments — enough for the firm to cover its fixed costs of accessing the market.
The data are clear on this aspect. Table 2 breaks down all the product-country pairs with positive trade for U.S. exports in 2005 according to the number of shipments that year. Among all pairs shipped that year, the most common number of shipments was one. The second most common was two, and so on. Indeed, the number of shipments per trade flow conforms very well with what are called count data. This is usually associated with rare or infrequent events.9

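The "once every 10 years" reading, and the steep drop-off across shipment counts shown in Table 2, both follow if shipments arrive according to a Poisson process, the standard model for count data (an assumption of this sketch, not spelled out in the article):

```python
from math import exp, factorial

lam = 0.1  # expected shipments to Andorra per year, from the example

def poisson_pmf(k, lam):
    """Probability of observing exactly k events when lam are expected."""
    return lam**k * exp(-lam) / factorial(k)

# Probability of observing at least one shipment in a given year
p_any = 1 - poisson_pmf(0, lam)
print(round(p_any, 3))  # about 0.095, i.e., roughly the 10 percent in the text

# The count distribution falls off steeply: one shipment is the most
# likely positive count, two the second most likely, and so on
for k in range(1, 4):
    print(k, round(poisson_pmf(k, lam), 5))
```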
What happens when we look beyond a single year's data? After all, if a trade flow is not observed in one year, why not look at two-year or five-year intervals? Indeed, as more years are combined, the number of missing trade flows decreases, albeit slowly. Table 3 shows the share of missing product-country trade flows when several years are combined.10 However, it would be quite unfair to dismiss the models of economies of scale at this point; it could well be that, over the time examined, trade barriers decreased, thus explaining the increasing number of observed trade flows.

9 A classic example was the tally of deaths by horse kicks in the Prussian army, collected by Ladislaus von Bortkiewicz at the end of the 19th century. See Quine and Seneta (1987) for a discussion of the famous data and the associated law of small numbers.

TABLE 2
Shipments Across Traded Pairs

Number of shipments    Share of traded pairs
1                      28.7%
2                      12.8%
3                       7.8%
4                       5.4%
5                       4.1%
6-9                     9.9%
10 and above           31.4%

Sources: Census Bureau and author's calculations.
Another interesting observation surfaces when we extend our view beyond one year. When we consider two consecutive years, we can look for product-country pairs that appear anew in the second year as well as pairs that were dropped — that is, pairs that were observed in the first year but not the second. The data also speak loudly here: There was a lot of churning. That is, a lot of new trade flows cropped up, and a lot were dropped. Table 4 reports the new product-country pairs traded from year to year, as well as the pairs that stopped being traded, as a rate over traded pairs in the previous year. The second column repeats the calculation by weighting the pairs by their trade value. Every year, close to one-quarter of the product-country pairs observed had not been traded the year before. And more than 20 percent of them were not traded the next year! In net terms, the total count of product-country pairs grew just over 2 percent, a full order of magnitude less than the gross changes.

Now, this churning is a challenge to models with economies of scale but is to be expected in a model of infrequent trade. To be consistent with economies of scale models, the churning would imply a lot of year-to-year variation in trade barriers, but this seems unlikely. In the infrequent trade hypothesis, though, churning comes naturally. For example, all trade flows that are expected to be observed once every two years are bound to create churning.
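As an illustrative simulation (my own sketch, not the author's calculation), treat each product-country pair as traded independently each year with probability one-half, the "once every two years" case, and count entries and exits:

```python
import random

# Churn from purely random, infrequent trade: each pair trades in a
# given year with probability 1/2, independently across years.
random.seed(0)
n_pairs = 100_000
p = 0.5

year1 = [random.random() < p for _ in range(n_pairs)]
year2 = [random.random() < p for _ in range(n_pairs)]

traded_y1 = sum(year1)
new = sum(1 for a, b in zip(year1, year2) if not a and b)      # entry
dropped = sum(1 for a, b in zip(year1, year2) if a and not b)  # exit

# Entry and exit rates relative to pairs traded in year 1, as in
# Table 4: both near 50 percent, yet the net change is near zero.
print(new / traded_y1, dropped / traded_y1)
```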
Viewing “missing” trade flows as
simply infrequent suggests we should
not be looking at frictions or costs at
the firm or product level. That is, the
question should not be why firms do
not trade or products are not traded
with certain countries: We should instead ask why there are not more shipments. There are certainly some fixed
costs per shipment — for example,
whether a truck is full or half empty,
a firm needs to pay the full wages of
the driver. These fixed costs cannot
be too large, since more than half of
the shipments are valued at $15,000
or less. And then some goods such as
planes and satellites are so large that
they are necessarily a single shipment.
These goods tend to be durable, and
we should not expect countries to purchase them frequently. For example,
Andorra may buy a U.S. plane and not
buy another until it is time to replace it
several years later.

10
Data are an average of the annual changes
from 1990 to 2001 for U.S. imports. Import data
over that period are somewhat more consistent
regarding product classifications.

CONCLUSION
Trade is now a pervasive fixture of
the modern world. Yet economists are still working to explain why there is not even more trade and, in particular, why so
few products are shipped to and from
most countries in a given year. Models
with economies of scale are the leading
theory of missing trade flows because of
their ability to explain trade’s relationship with market size and the characteristics of firms that export. There are
questions, though, whether the data
on actual trade flows support some
unique implications of these models.
Of course, it takes a model to beat a
model, and until recently there had
been no viable alternative to theories
featuring economies of scale. Recent
work suggests that many missing trade
flows are perhaps simply low-probability
but not zero-probability events. BR

TABLE 3
Missing Pairs over Multiple Years

Number of years    Share of missing trade flows
1                  92.0%
2                  90.3%
3                  89.1%
5                  86.6%

Sources: Census Bureau and author's calculations.

TABLE 4
Entry and Exit, by Count and Value

                       By count    By value
Newly traded pairs     24.6%       1.1%
Disappearing pairs     22.4%       0.8%
Net difference          2.2%       0.3%

Sources: Census Bureau and author's calculations.

How Much Larger Should We Expect Exporters to Be?

Can the idea of infrequent trade also explain why some firms export and some do not? The answer is no.
It takes only one product sold to one foreign market for a firm to qualify as an exporter. Thus, to assert
It takes only one product sold to one foreign market for a firm to qualify as an exporter. Thus, to assert
that nonexporting firms are just infrequent exporters, we would need to say that trade, as a whole, is
infrequent, which it is not. In Armenter and Koren (forthcoming), we show that the balls-and-bins
model predicts that about three-quarters of the firms should be exporting — completely at odds with
the data. Indeed, the model also gets wrong the relationship with size. Even though the model predicts
close to four times more exporters than in the data, exporters are predicted to be even larger than the data show.
In Armenter and Koren (2010), we show that models with economies of scale also overpredict the size of exporters
— by a lot. The reason is simple. It may appear that a four- or five-fold difference in size is large. But in the context of
the distribution of firm size, it turns out to be very small. If larger firms are more likely to be exporters, they should be
concentrated at the top of the firm-size distribution. Since about one-fifth of the firms export, the average firm in the
top fifth of the firm size distribution should be a good approximation for exporters. Yet, the average top-quintile firm
is more than 100 times larger than the average firm in the bottom four quintiles! That is, the theory overstates the size
advantage of exporters by a factor of 25.
This suggests that productivity is not the only determinant of whether a firm exports. As a matter of fact, it is very
likely not even the main determinant. Other determinants include the firm’s location within the U.S., its ethnic or
family links to the destination country, and the industry the firm belongs to.
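The factor of 25 quoted in the box is simple arithmetic on the rounded magnitudes above; a minimal check of that back-of-the-envelope step:

```python
# Back-of-the-envelope from the box. If exporters were simply the top
# fifth of the firm-size distribution, theory would have them average
# roughly 100 times the size of other firms; the observed exporter
# size premium is only about a factor of 4.
theory_premium = 100     # top-quintile average vs. bottom four quintiles
observed_premium = 4     # four- to five-fold difference cited in the box
print(theory_premium / observed_premium)  # 25.0
```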

REFERENCES
Alessandria, George, and Horag Choi. “Do Sunk Costs of Exporting Matter for Net Export Dynamics?” Quarterly Journal of Economics (February 2007), pp. 289-336.

Alessandria, George, and Horag Choi. “Understanding Exports from the Plant Up,” Federal Reserve Bank of Philadelphia Business Review (Fourth Quarter 2010).

Armenter, Roc, and Miklós Koren. “A Balls-and-Bins Model of Trade,” American Economic Review (forthcoming).

Armenter, Roc, and Miklós Koren. “Economies of Scale and the Size of Exporters,” Federal Reserve Bank of Philadelphia Working Paper 09-15 (2009).

Baldwin, Richard, and James Harrigan. “Zeros, Quality, and Space: Trade Theory and Trade Evidence,” American Economic Journal: Microeconomics, 3 (May 2011), pp. 60-88.

Bernard, A.B., J.B. Jensen, S.J. Redding, and P.K. Schott. “Firms in International Trade,” Journal of Economic Perspectives, 21:3 (2007), pp. 105-130.

Melitz, M.J. “The Impact of Trade on Intra-Industry Reallocations and Aggregate Industry Productivity,” Econometrica, 71:6 (2003), pp. 1,695–1,725.

Quine, M.P., and E. Seneta. “Bortkiewicz’s Data and the Law of Small Numbers,” International Statistical Review, 55:2 (1987), pp. 173-181.


Location Dynamics:
A Key Consideration for Urban Policy
By Jeffrey Brinkman

The policies that cities adopt regarding such things as taxes,
transportation infrastructure investment, zoning, schools,
and police have important and often unpredictable effects
on where businesses locate and individuals decide to live
and work. In turn, these location decisions have real
consequences for cities’ general welfare and economic health. So to fully
understand the long-term effects of their policies, cities must consider
the complex ways by which firms, residents, and workers go about
choosing where to locate.

Take, for example, London’s decision in 2003 to implement a new plan
to reduce congestion in the center of
the city. At the time, development was
booming and traffic congestion was
becoming increasingly troublesome.
Rather than try to increase capacity
through construction of more highways and other automobile infrastructure, London introduced a congestion
pricing policy. The idea of congestion
pricing is simple and has wide support
from economists and policy analysts.
Because congestion has many negative
social effects, including slower travel
times, increased carbon emissions, and
reduced local air quality, a tax on congestion can have net positive effects for
society by encouraging people to travel
by other modes or at different times.
London initially levied a charge of £5 on any car travelling into central London, with the price increasing to £8 in 2005 and eventually to £10 (about $15), where it stands now. There is some evidence that the policy has worked: it initially reduced traffic volume by 27 percent and increased vehicle speeds by 17 percent inside central London.1 While London was one of the first major cities to implement congestion pricing, the idea has caught on with other European cities that are looking to the policy as both a source of revenue and a solution to ever-increasing traffic congestion.
However, the efficiency of congestion pricing is partially based on the assumption that the locations of people and businesses in a metropolitan area are fixed, an assumption that may be valid in the short term. In the long run, when faced with a new toll, people might not switch to transit. They might just choose to work or shop somewhere else, which could have negative economic consequences for the city.2 This is one example of why location decisions are important in understanding the effects of urban policies.

1 Jonathan Leape provides some analysis of the effects of congestion charges in London.

Jeffrey Brinkman is an economist at the Federal Reserve Bank of Philadelphia. The views expressed in this article are not necessarily those of the Federal Reserve. This article and other Philadelphia Fed reports and research are available at www.philadelphiafed.org/research-and-data/publications.
THE COMPLEX INTERACTIONS
OF LOCATION DECISIONS
The average person is familiar
with the process of deciding where to
live within a metro area. The decision, while sometimes difficult, seems
fairly straightforward. People think
about how much it costs to live in
various neighborhoods and municipalities, how far they are from work and
family, various amenities such as low
crime rates and good schools, as well as
access to services such as shopping or
entertainment. They look at the city
and its environs, weigh their options,
and make a decision.
In an analogous way, businesses
make decisions about where to locate
in metro areas. They think about the
cost and production advantages of
various locations, certainly considering
the cost of land and facilities, as well as
access to customers or employees.

2
In my working paper (2013), I present evidence
suggesting that although congestion pricing does reduce traffic, the net effect on the
economy is slightly negative. This outcome
occurs because over the long run, congestion
pricing reduces the concentration of businesses,
which lowers productivity by reducing knowledge spillovers. Also see Gerald Carlino’s 2001
Business Review article.


These decision methods make
perfect sense from an individual point
of view. One individual’s or firm’s
decision is unlikely to affect the overall
characteristics of a large urban area.
However, when we consider all of these
people and businesses making decisions simultaneously or over the course
of time, things get more complicated.
For example, if the quality of a school
depends on the educational level of the
parents in the district or tax revenue
drawn from the income of residents,
then a question arises as to how high-quality school districts are formed in
the first place and if they will continue
at the same level of quality.
Another complication arises with
the fact that business and residential
decisions are connected. When a
business moves, how does this affect
where its customers or employees
live? Conversely, when customers or
potential employees move, how does
this affect business location decisions?
This simultaneous decision process
complicates our understanding of the
geographic distribution of population
and employment in cities.
Finally, individual decisions may
directly affect others in the form of an
externality. In other words, one individual’s or firm’s actions may impose a
cost on others or may deliver a benefit.
The urban congestion described above
creates a negative externality, since
individual commuting decisions can
cause congestion and slow everyone
else down. Conversely, an example of
a positive externality in urban areas
is agglomeration. This is the idea
that employment density has positive
benefits for production, in that a firm’s
decision to locate near other firms
leads to positive spillover effects. Externalities like these are of particular
interest to economists and policymakers because they suggest that direct
policy intervention has the potential to
unambiguously improve efficiency in
the economy.

HOW RESIDENTS SORT
THEMSELVES INTO
NEIGHBORHOODS
One important aspect of location
decisions within cities revolves around
how people self-sort into different local
jurisdictions in a metro area for various
reasons.3 This choice can be based on
the innate characteristics of the various locations. For example, wealthy
individuals will probably pay the most
to live next to a beach. Or it might be
the case that the characteristics of a
neighborhood depend on the demographic composition of individuals living in that neighborhood. For example,
the quality of the schools may depend
on the education level of the parents
living in that school district.
An early treatment of the role
of sorting in cities was presented by
Charles Tiebout in 1956. The key thrust
of his paper is that, all else being equal,
people will gravitate toward communities that provide the public services they
desire. This is a powerful idea because
it suggests that the existence of multiple
local jurisdictions can possibly improve
overall welfare by matching people with
desired public amenities, not unlike the
mechanism that drives the market for
private goods.4
More recently, Dennis Epple,
Thomas Romer, and Holger Sieg,
among others, have more rigorously
investigated the implications of sorting
in cities and have also developed methods to test this implication empirically
using observed sorting within cities.
By considering that people have both
different preferences for public services
and different incomes, and recognizing
that these two characteristics might
be correlated, they are able to explain
relative income and public service

3
For the purposes of this discussion, we are
assuming that people are free to choose where
to live. Of course, historically and currently,
globally as well as in the United States, this
right has often been denied.

provision across jurisdictions. They
also show that people are sophisticated in their decision-making, such as their voting behavior, in the sense that residents recognize the effects of public service provision on their location choices.
BUSINESS LOCATION
DECISIONS INVOLVE
TRADEOFFS
Firms also make location decisions
within cities. Ignoring for a moment
the location of residents, who act as
both customers and employees and thus
are important in firms’ decisions, firms
still face tradeoffs in their location
choices in urban areas. Firms must
weigh the production advantages of a
location versus the costs of a location,
in particular, the land prices or rents.
The production advantages of a
given location can be separated into
two distinct types. The first type
is the natural or innate production
advantages of a location. This could
include, for example, proximity to
natural resources, desirable climate,
or natural transportation hubs. The
second type of production advantage
arises from the concentration of firms
and production. In its most general
form, this is the idea that a firm’s efficiency improves when it locates in
close proximity to other firms. These
are referred to as agglomeration economies or agglomeration externalities.
4 It should be noted that providing all public services at a local level is not efficient. For example, when there are spillover effects, as is the case with public parks or law enforcement, where neighboring jurisdictions get benefits from the provision of services, or if there are returns to scale, as in transportation networks or public utilities, that require large fixed investment and network connectivity, the efficiency of fragmented jurisdictions comes into question. In other words, when public goods have certain characteristics, it is often more efficient for service provision or funding to happen at the regional, state, or national level.

There is strong evidence that productivity rises in areas where employment is concentrated. The source of
agglomeration economies has several
explanations, including sharing of
labor markets or inputs, or knowledge
spillovers across firms resulting in improved technology.
Research on the source of agglomeration economies has been reviewed
in previous Business Review articles
by Jeffrey Lin and by Gerald Carlino.
There is strong evidence of production
advantages in dense areas in the form
of high rents, wages, or more direct
measures of productivity. However,
one aspect of the research that both
Carlino and Lin emphasize is the difficulty in identifying the different sources of production advantages. Lin suggests that an important consideration
is the relative importance of natural
advantage versus agglomeration effects,
and he discusses methods for identifying these separately. Carlino makes
the point that if people have different
skills or educational levels, and these
skills are correlated with location
choice, then measured productivity in
cities may be partially due to the sorting of high-productivity workers into
cities, thus overstating the importance
of agglomeration externalities.
Much of the research on firm
location has focused on firm location
decisions and agglomeration economies across metropolitan areas or on
a citywide scale. However, there is
strong evidence that the concentration of firms is important even at a
neighborhood or district scale within
urban areas, given that dense business
districts are a prevalent feature of the
urban landscape. Mohammad Arzaghi and Vernon Henderson, when
looking at the advertising industry in
New York, found that the production
advantages of proximity to other firms
declined rapidly across space even on
a city-block scale. In their study, Stuart Rosenthal and William Strange
also present evidence that the advantages of agglomeration externalities decline significantly over a few miles.
In a joint paper, Daniele Coen-Pirani, Holger Sieg, and I study the
dynamics of firm location in urban
areas. By looking at location choices
— including entry, exit, and relocation decisions of firms — in dense
business districts versus sparse suburban locations, we are able to consider
sorting effects simultaneously with the
agglomeration productivity advantages. Using data from Pittsburgh, we
find that more productive firms do, in
fact, sort into dense business districts.
However, they do so to take advantage of agglomeration economies, which our
estimates, based on select service industries, suggest can boost productivity
by as much as 8 percent, implying that
both sorting and productivity effects
are important in urban areas. This
productivity increase may seem large,
but when one considers the high rents
and wages that businesses pay in some
neighborhoods relative to others, it is
not surprising.
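A stylized numerical example may help fix ideas. Only the 8 percent boost comes from the article; the output and cost figures below are invented for illustration:

```python
# Stylized location choice. Only the 8 percent agglomeration boost is
# from the article (Brinkman, Coen-Pirani, and Sieg); the output and
# cost figures are hypothetical.
base_output = 100.0      # annual output at a suburban site (hypothetical)
boost = 0.08             # productivity gain from locating downtown
suburb_costs = 60.0      # rent plus wages at the suburban site (hypothetical)
downtown_costs = 66.0    # rent plus wages downtown, 10 percent higher (hypothetical)

profit_suburb = base_output - suburb_costs                    # 40.0
profit_downtown = base_output * (1 + boost) - downtown_costs  # about 42.0

# The firm sorts into the dense business district exactly when the
# boost outweighs the extra rent and wages it must pay there.
print(profit_downtown > profit_suburb)
```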
Table 1 shows some of the characteristics of firms in dense business
districts versus more sparse locations
in U.S. cities. Many of these business

TABLE 1
Establishment Characteristics Inside and Outside Dense Business Districts

                    Total         Total         Average        Average
                    Employment    Employment    Establishment  Establishment
                    Outside       Inside        Employment     Employment
                    Business      Business      Outside        Inside
Metro Area          Districts     Districts     Business       Business
                                                Districts      Districts

Atlanta             1,115,398       229,002     15.79          29.25
Boston              1,728,075       531,349     15.66          39.01
Chicago             3,070,387       528,529     15.86          24.47
Columbus              705,534        63,278     18.69          23.73
Hartford              499,718        18,783     17.26          26.95
Houston             1,720,625       286,574     16.38          28.47
Jacksonville          491,959        24,315     15.24          25.38
Los Angeles         4,257,269       974,693     15.02          19.39
Philadelphia        1,921,626       196,428     15.91          27.66
Phoenix             1,551,921        64,793     18.31          27.78
Pittsburgh            822,013       157,009     14.58          40.04
Salt Lake City        440,239        53,086     15.22          21.08
San Antonio           655,740        26,572     17.21          20.49
Seattle             1,260,335       179,230     14.55          20.33
St. Louis           1,253,959        84,034     16.38          42.57
Washington, D.C.    1,930,848       303,770     15.42          21.68

Note: Business districts are defined as Zip codes with more than 10,000 workers per square mile.
Sources: Data are drawn from the 2008 Zip code business patterns data. This table is taken from Brinkman, Coen-Pirani, and Sieg.


districts are the familiar downtown central business districts, although larger cities can have multiple dense business districts. For example, Los Angeles has 30 Zip codes spread throughout the metro area that meet the criteria of a dense business district. The evidence shows that establishments are larger in dense business districts.5 A familiar example might be banks, where larger banks are usually headquartered in downtowns of major metro areas, while smaller regional banks are often located in suburbs or smaller cities. Table 2 shows a more detailed comparison of establishments in the central business district of Pittsburgh versus the rest of Allegheny County for service industries.6 These data provide more insight into the production advantages of dense business districts as well as the sorting of firms. Establishments are not only larger in the central business district, but they are also older and have higher sales per employee. This evidence is robust across most industries.

5 Establishments are single physical business locations, as opposed to firms, which may be composed of multiple establishments.
INTERPLAY OF DECISIONS
Further complicating the spatial distribution of firms and workers in cities is the fact that their decisions are mutually dependent. Firms must consider the location of customers as well as the location choices of employees. Likewise, workers want to be located close to their place of employment as well as to services. This makes the task of fully characterizing location in cities quite complicated.

6 Service industries here are defined by North American Industry Classification System (NAICS) codes 51-62, which correspond to fairly high-skill services such as finance, management, education, and health care. We focus on these industries because they are the most concentrated industries in dense business districts in cities. In addition, the relative importance of these industries has increased significantly over the past several decades.

First, let's consider the problem facing firms when residents act as customers, as in the retail sector. In this setup, we will think about cities' role in consumption. This problem was introduced in 1929 by Harold Hotelling, who proposed a theory on the location of firms with a fixed, uniform distribution of customers along a line. The basic idea is that multiple firms would strategically decide where to locate to capture the largest share of the market.7 The original framework proved to be neither rich enough nor rigorous enough to capture the real

TABLE 2
Pittsburgh Service Establishments: Central Business District vs. Rest of Allegheny County

            Inside Central Business District            Outside Central Business District
            Age of    Number of  Facility    Annual     Age of    Number of  Facility    Annual
            Firms     Employees  Size        Revenue/   Firms     Employees  Size        Revenue/
Percentile  (years)              (sq. feet)  Employee   (years)              (sq. feet)  Employee

10th          2          2        1,432      $47,481      2          1        1,565      $40,000
25th          5          2        1,873       60,000      4          2        2,119       50,000
50th         13          3        2,499       70,000     10          2        2,474       64,000
75th         26          9        4,200       95,000     21          4        3,471       84,000
90th         42         28        8,470      140,000     34         11        5,276      116,077
95th         57         51       14,625      265,337     44         23        8,228      164,550
99th        108        288       53,563      890,257     76         99       22,841      495,803

Note: Business districts are defined as Zip codes with more than 10,000 employees per square mile.
Sources: Data come from the 2008 Dun and Bradstreet's Million Dollar Database and include only service industries (NAICS 51-62). This table is based on calculations by Brinkman, Coen-Pirani, and Sieg.


economy, but it paved the way for
future work. For example, Timothy
Bresnahan and Peter Reiss show the
important tradeoff between customer
access and competition in firm location decisions. This study looked
across different cities, but the insight
provided applies to location decisions
within urban areas.
The retail location decision is
further complicated by the fact that
customers are free to move within cities as well. The models above assume
that customer location is fixed, but in
the long run, customers will move in
order to be located close to retail or
other services. Edward Glaeser, Jed
Kolko, and Albert Saiz suggest that
people are locating in cities increasingly for the culture, arts, retail,
entertainment, and other amenities
that cities provide. There might also
be positive feedback in the sense that
crowds of people attract more people,
suggesting that there may be consumption externalities analogous to
the production agglomeration externalities discussed above.
Another complication arises from
the employer-employee relationship
and its effect on firm and worker decisions. Here we are mostly concerned
with cities’ role in production and the
costs associated with commuting to
work. Early work by Edwin Mills and
others analyzed where workers would
live if all jobs were located at the
center of a city. Later on, Masahisa Fujita and Hideaki Ogawa in 1982 and Robert Lucas and Esteban Rossi-Hansberg in 2002 developed models that freed firms and workers to locate anywhere within the city. These papers also consider the effect of agglomeration economies due to the density of firms. In that sense, these papers looked at the simultaneous location decisions of firms and workers in urban areas.

7 Hotelling's model has mostly been applied as a metaphor for product differentiation, but in its literal sense, it is a useful framework in urban economics. The similarities are apparent given that location is a form of product differentiation and therefore leads to market power.
To understand how these simultaneous location decisions are made, it is important to consider all of the tradeoffs faced by both firms and workers in an urban economy. Firms must consider the tradeoff between the productivity of a location and the costs of being in that location, including rents and wages. For their part, workers are concerned about the tradeoff between commuting times and costs on the one hand and the price of housing on the other. In the presence of agglomeration economies, firms prefer to concentrate in dense areas, given that proximity to other firms increases productivity. However, this concentration leads to increased congestion, suggesting that workers would require higher wages to travel into these areas to offset their commuting costs. For the urban economy as a whole, the important consideration is whether the increased production is worth the extra costs of congestion.

Ultimately, the final form of a metropolitan area, in terms of the spatial distribution of jobs and workers, will depend on the relative strength of agglomeration economies versus the cost of commuting into congested areas. Additionally, the relative value of land for production versus consumption is a vital determinant of city structure. In a current working paper, I look at the data from several cities to check the predictions of the theory described above and estimate the key determinants of city structure. Some of the important characteristics of location in cities are contained in Figure 1, which shows densities,

land prices, land use, and commuting
times for the area around the central
business district of Columbus, OH.
The features illustrated here are more
or less common around business
districts in cities and reflect the tension and tradeoffs that determine the
structure of an entire city.
Indeed, as would be expected,
employment density and residential
density both decline as one moves
away from business districts, although employment remains much
more concentrated than residential
population. This prevalence of dense
business districts suggests that the
strength of the agglomeration effects
outweighs the cost of commuting and
congestion. Otherwise, we would
expect to see much more equally
distributed employment across space.
In addition, land prices decline away
from dense business districts, while
commercial use gives way to more
residential use farther away from the
business district. Finally, commuting
times increase for residents away from
business districts, consistent with the
tradeoff faced by workers.

FIGURE 1
Tradeoffs in Location Decisions: Columbus, OH

CONCLUSION
Understanding all of these interactions is important in the implementation of public policy in cities. Let's
now return to the policy of congestion pricing implemented in London
that was discussed earlier. At one
level, congestion pricing seems to be a
win-win proposition for policymakers.
Consider that congestion is a negative externality, in the sense that one
person’s commuting decision places a
cost on others. Then the idea behind
congestion pricing is that by taxing
congestion, people will make better
commuting decisions, and this will improve efficiency. Given that congestion also has environmental consequences, and the fact that this policy is
a potential source of revenue, it seems
like a no-brainer.
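The logic of taxing the congestion externality can be made concrete with a textbook Pigouvian-toll calculation. The numbers here are my own illustrations, not from the article; travel time is assumed to rise linearly with traffic:

```python
# Textbook Pigouvian congestion toll with illustrative numbers.
a, b = 20.0, 0.001        # minutes: free-flow travel time, extra delay per car
value_of_time = 0.25      # dollars per minute of travel (hypothetical)
benefit = 12.0            # dollars each driver gains from making the trip

# Unpriced equilibrium: drivers keep entering until the private time
# cost exhausts the benefit:  value_of_time * (a + b*N) = benefit
n_eq = (benefit / value_of_time - a) / b

# Social optimum: the marginal social cost adds the delay each driver
# imposes on everyone else:  value_of_time * (a + 2*b*N) = benefit
n_opt = (benefit / value_of_time - a) / (2 * b)

# A toll equal to the external cost at the optimum decentralizes it
toll = value_of_time * b * n_opt
print(n_eq, n_opt, toll)
```

With these numbers, pricing the externality cuts traffic in half, which is exactly why the long-run relocation of firms and workers matters for the policy's overall effect.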
However, once we consider business location decisions, the efficacy of
this policy comes into question. The
policy, by design, will make it more
costly for people to travel into dense
business districts, and workers will
therefore require higher wages to do so.
Paying these higher wages might not
be worth it for businesses, and therefore, some businesses will leave the
business district, reducing employment
density. Given the strong evidence
for agglomeration economies, or some
proximity-related economies of scale,
there will be some loss in production.
Understood in this way, the efficiency
of congestion pricing becomes ambiguous. This suggests that a better policy
may be to reduce the costs associated
with congestion rather than charge fees
to discourage commuting into dense
areas. BR
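The ambiguity in this conclusion can be illustrated with a toy calculation. The functional forms and parameters below are my own, not the article's model: agglomeration output rises more than proportionally with the number of workers in the district, while congestion costs rise even faster, so welfare peaks at an interior employment level.

```python
def output(n):
    return 50 * n ** 1.2  # agglomeration: more-than-proportional returns to density

def congestion(n):
    return 0.1 * n ** 2   # each commuter slows all the others

def welfare(n):
    return output(n) - congestion(n)

# Find the welfare-maximizing number of commuters by grid search.
best_n = max(range(1, 5000), key=welfare)

# A toll that pushes employment below best_n lowers congestion but destroys
# more agglomeration output than it saves; leaving employment above best_n
# does the reverse. Hence the net effect of congestion pricing is ambiguous.
for n in (best_n // 2, best_n, 2 * best_n):
    print(n, round(welfare(n)))
```

Under these made-up parameters, welfare is lower both when employment is pushed well below the optimum and when congestion is left unchecked, which is the sense in which the policy can cut either way.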

Figure 1 source: U.S. Census Bureau; Franklin County, OH, Auditor's Office. Data are for 2000.


www.philadelphiafed.org

REFERENCES

Arzaghi, Mohammad, and J. Vernon Henderson. "Networking Off Madison Avenue," Review of Economic Studies, 75:4 (2008), pp. 1011-1038.

Bresnahan, Timothy F., and Peter C. Reiss. "Entry and Competition in Concentrated Markets," Journal of Political Economy (1991), pp. 977-1009.

Brinkman, Jeffrey. "Congestion, Agglomeration, and the Structure of Cities," Federal Reserve Bank of Philadelphia Working Paper 13-25 (2013).

Brinkman, Jeffrey, Daniele Coen-Pirani, and Holger Sieg. "Estimating a Dynamic Equilibrium Model of Firm Location Choices in an Urban Economy," Federal Reserve Bank of Philadelphia Working Paper 12-26 (2012).

Carlino, Gerald. "Knowledge Spillovers: Cities' Role in the New Economy," Federal Reserve Bank of Philadelphia Business Review (Fourth Quarter 2001).

Carlino, Gerald. "Three Keys to the City: Resources, Agglomeration Economies, and Sorting," Federal Reserve Bank of Philadelphia Business Review (Third Quarter 2011).

Epple, Dennis, Thomas Romer, and Holger Sieg. "Interjurisdictional Sorting and Majority Rule: An Empirical Analysis," Econometrica, 69:6 (2001), pp. 1437-1465.

Fujita, Masahisa, and Hideaki Ogawa. "Multiple Equilibria and Structural Transition of Non-Monocentric Urban Configurations," Regional Science and Urban Economics, 12:2 (1982), pp. 161-196.

Glaeser, Edward L., Jed Kolko, and Albert Saiz. "Consumer City," Journal of Economic Geography, 1 (2001), pp. 27-50.

Hotelling, Harold. "Stability in Competition," Economic Journal, 39:153 (1929), pp. 41-57.

Leape, Jonathan. "The London Congestion Charge," Journal of Economic Perspectives, 20:4 (2006), pp. 157-176.

Lin, Jeffrey. "Urban Productivity from Job Search and Matching," Federal Reserve Bank of Philadelphia Business Review (First Quarter 2011).

Lin, Jeffrey. "Geography, History, Economies of Density, and the Location of Cities," Federal Reserve Bank of Philadelphia Business Review (Third Quarter 2012).

Lucas, Robert E., and Esteban Rossi-Hansberg. "On the Internal Structure of Cities," Econometrica, 70:4 (2002), pp. 1445-1476.

Mills, E.S. "An Aggregative Model of Resource Allocation in a Metropolitan Area," American Economic Review (1967), pp. 197-210.

Rosenthal, Stuart S., and William C. Strange. "Geography, Industrial Organization, and Agglomeration," Review of Economics and Statistics, 85:2 (2003), pp. 377-393.

Tiebout, Charles M. "A Pure Theory of Local Expenditures," Journal of Political Economy (1956), pp. 416-424.

Brewing Bubbles:
How Mortgage Practices Intensify Housing Booms
By Leonard Nakamura

The infamous housing bubble was composed of two parts:
an unprecedented, decade-long surge in U.S. home prices
that began in the mid-1990s, followed by an equally
unprecedented fall in prices from 2007 to 2011. The
bubble was a major factor in the financial crisis associated
with the Great Recession. Similar housing booms and busts in the
past have repeatedly led to severe financial crises in many parts of
the world. Why these booms occur is not yet fully understood, but we
have recently made some progress in our understanding. In particular,
it appears that changes in mortgage lending practices can contribute
to the strength of booms once they get started.

A feedback loop can occur when
strong demand for homes creates rising
home prices and those rising prices
increase demand, rather than reducing
it as we would normally expect higher
prices to do. This paradox occurs
because home price inflation tends
to make it easier for more people of
varying means to get mortgages, which
by boosting demand in turn further
increases home prices. The reverse
also holds true — falling home prices
generally make mortgages harder to
obtain, further decreasing demand and
worsening the downturn. These phenomena are called procyclical because
they tend to intensify both the booms
and the busts.
Studying these phenomena — and
seeing whether we can moderate them
— may help us learn how to promote

not only housing market stability but
also general financial stability. While
these procyclical movements are the
normal workings of free financial markets, they may need to be constrained
if we are to limit these cycles in the
future.
Leonard Nakamura is a vice president and economist at the Federal Reserve Bank of Philadelphia. The views expressed in this article are not necessarily those of the Federal Reserve. This article and other Philadelphia Fed reports and research are available at www.philadelphiafed.org/research-and-data/publications.

ROLE OF PRICE EXPECTATIONS
Asset price movements are generally hard to predict, meaning that one year's price movements usually don't tell us anything useful about what will happen the next year. But home prices are an exception. If home prices go up more than normal this year, they are likely to do the same the following year.1 Suppose we ask the question: In any given quarter, how much have real stock prices gone up in the past year?2 In Figure 1, we can see the four-quarter changes in stock prices quarter by quarter as reflected in the Standard & Poor's 500 stock index and in home prices measured by the Federal Housing Finance Agency house price index, both deflated by the personal consumption expenditure deflator. For example, we can see that stock prices rose 14 percent from the end of the second quarter of 2012 to the end of the second quarter of 2013. It is obvious that U.S. home price changes are much smoother than movements in U.S. stock prices, which are quite volatile. We also see that home price movements tend to be persistently positive for a few years, while the same is rarely true for stock price movements.
We can formalize this observation by asking what is the correlation
between one year’s real home price
movement and the next year’s. Over
the past 30 years, if we take the rate
of four-quarter change in the real U.S.
home price for each quarter, we find
that the following year’s real home
price percent change has a correlation
of 69 percent. That is, a higher than
average home price growth rate this
year means that it is likely that next
year there will also be a high growth
rate.3 The same holds true, although
with a somewhat lower correlation

1 For a discussion of why asset price movements are generally hard to predict, see Burton Malkiel's 2007 book. For a prescient discussion of bubbles, see Robert Shiller's 2005 book.

2 By real stock prices, we mean prices adjusted for inflation, that is, adjusted for changes in what the stock values can purchase. Throughout this article we will use the U.S. personal consumption expenditure deflator to adjust for inflation.


FIGURE 1
U.S. House Prices Less Volatile Than Stock Prices

Sources: Standard & Poor’s, Federal Housing Finance Agency.
Note: Quarterly data reflect inflation-adjusted four-quarter change.

of 51 percent, at the state level. By
contrast, the correlation for real stock
prices from one year to the next is
close to zero: –4 percent. Why home
prices display this correlation is an important open research question.
This greater predictability of home
price movements, as we shall see, tends
to feed on itself because of its connection to mortgages. We shall see that
a number of practices associated with
mortgage lending are procyclical —

3 For this calculation, we use data from the first quarter of 1980 to the second quarter of 2013. We take the growth rate of annual real house prices quarter by quarter and correlate it with the annual rate of real house prices four quarters later.
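The persistence calculation described in footnote 3 can be sketched with simulated data. The two series below are synthetic stand-ins of my own construction, not the FHFA or S&P data: "home prices" are given persistent (AR(1)) quarterly growth, while "stock prices" get independent quarterly returns.

```python
import random

random.seed(0)

def four_quarter_growth(prices):
    """Four-quarter (year-over-year) growth rate, computed quarter by quarter."""
    return [prices[t] / prices[t - 4] - 1.0 for t in range(4, len(prices))]

def correlation(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def lagged_correlation(growth, lag=4):
    """Correlation of each quarter's growth rate with the rate four quarters later."""
    return correlation(growth[:-lag], growth[lag:])

n = 140  # 35 years of quarterly observations

# Persistent "home price" series: quarterly growth follows an AR(1) process,
# so an above-normal year tends to be followed by another above-normal year.
g, home = 0.0, [100.0]
for _ in range(n):
    g = 0.9 * g + random.gauss(0, 0.01)
    home.append(home[-1] * (1.005 + g))

# "Stock price" series with independent quarterly returns.
stock = [100.0]
for _ in range(n):
    stock.append(stock[-1] * (1 + random.gauss(0.01, 0.05)))

print(lagged_correlation(four_quarter_growth(home)))   # strongly positive
print(lagged_correlation(four_quarter_growth(stock)))  # near zero
```

The persistent series produces a strongly positive year-ahead correlation, like the 69 percent found for U.S. home prices, while the independent-returns series produces a correlation near zero, like stocks.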


that is, they tend to reinforce housing
booms and worsen the busts that follow.
Rising prices should discourage purchases because the purchase
becomes more expensive. But paradoxically, rising home prices can also
partially facilitate increased demand
due to these procyclical aspects.
PROCYCLICALITY:
MAKING BOOMS BIGGER
To illustrate this pattern, suppose
home prices go up one year. Housing
then becomes less affordable. That
should dampen demand. But as we
have seen, if home prices went up this
year, they are likely to go up again next
year. Potential homebuyers therefore
may buy this year, fearing that prices

will be even higher next year. Put
another way, the homebuyer hopes to
gain from the expected post-purchase
rise in price. That boost to demand in
turn fuels a demand for credit — the
homebuyer needs a mortgage, particularly if the home price is already high.4
But the fact that house prices are
likely to go up next year also makes
the mortgage lender more willing to
supply credit. Even when the homebuyer will need to stretch to make
the mortgage payments, the mortgage
lender may be less concerned about
the ability of the homebuyer to keep
making the payments, because the
collateral for the loan — the home
itself — is likely to become more valuable and thus help prevent the lender
from taking a loss. In a rising market, a homeowner unable to make the
payments can sell the house and clear
more than enough money to pay off
the mortgage.
Thus, lending standards may
become weaker, and those weaker
standards may increase the number of
potential homebuyers. Rising house
prices thus help create even more
demand, which may increase the tendency of the housing market to create
bubbles. We will discuss below some
recent empirical work that suggests
that this force was at work in the years
leading up to the bust, but first we will
turn to another factor in mortgage
making that can have a procyclical
impact: The rapid pace of transactions
during booms leads to more accurate
and reliable home appraisals.
RISING PRICES, FAVORABLE
APPRAISALS
Whenever money is loaned on
collateral, the lender has two potential
sources from which to obtain a return
on the loan: repayment with inter-

4 See Ronel Elul's 2006 Business Review article.

est from the borrower or the proceeds
from the sale of the collateral if the
borrower defaults. For a collateralized
loan, a key question is what the collateral will sell for. If the collateral is
gold jewelry at a pawn shop, the gold
content of the jewelry and the current
market price of gold might determine
the value of the collateral. For an auto
loan, the sale price or the book value
of a used car might determine the
value of the collateral. While for some
goods the resale value may be relatively
transparent, determining the value of
a particular home typically requires
some due diligence. A home’s location
and individual characteristics such as
its size and condition are key determinants of its value, and thus recent sale
prices for comparable homes nearby
are important evidence in determining
the resale value of a particular home.
For a home, the value of the collateral
is thus determined by the sale price of
the home and by sales of similar homes
nearby. In the U.S., mortgage lenders
use home appraisals to determine the
value of the collateral.
A home appraisal is an estimate,
made by a home appraiser, of the resale
value of a home for which a potential
homebuyer is seeking a mortgage. The
appraiser bases the estimate partly on
the prices of similar homes that have
been recently sold in the same area.
Using recent nearby sales helps protect
the lender against lending too much
to a homebuyer who has overpaid for a
home, which could make it harder for
the lender to recoup its loss in case of
default.
When many nearby homes are
being sold, as occurs during a housing
boom, then the appraisal will be more
accurate and the resale value more
certain, as it is easier for the appraiser
to find recently sold homes that closely
resemble the home being appraised.
This abundance of recent sales will
make it less likely that the appraisal
will be enough out of line with the

contract price to scuttle the deal, and more likely that the mortgage lender will approve the loan. On the other hand, when the boom comes to an end, typically the number of sales slows down. As demand slips, the dearth of buyers may force sellers to lower their prices. But if cutting the price would mean losing money on their homes, would-be sellers may instead pull their homes off the market or decide not to list them. When sales become less frequent, appraisals become less accurate, making it more likely that the mortgage lender will deny the loan. And if the sale falls through, then the next attempted sale in the neighborhood becomes more difficult, as it will have been even longer since a nearby house has been sold. Thus, appraisal accuracy facilitates a boom but worsens a slowdown.5

CHANGING CREDIT STANDARDS
The typical mortgage loan was, for many years, the prime loan, with a fixed rate of interest and a fixed monthly payment. The borrower typically was an owner-occupier who made a 20 percent down payment and had an excellent credit score, demonstrating a history of paying debts on time. The basic requirements of this prime mortgage loan were established during the Great Depression as part of the New Deal to restore access to credit to homebuyers and to ensure that mortgages were highly likely to be repaid. Indeed, delinquencies and foreclosures on such mortgages were rare. For example, according to the Mortgage Bankers Association survey, during the housing boom years of 1998 to 2006, prime fixed-rate mortgages had a severe distress rate — defined as the share of mortgages that are more than 90 days delinquent or in foreclosure — of 0.6 percent, versus 3.7 percent during the bust years of 2008 to 2012.

As we now know only too well, mortgage standards became far more relaxed during the housing boom from 1995 to 2006. Subprime mortgage loans were made to borrowers who lacked strong credit scores, fueling sales in less well-off communities, and alternative mortgage products were offered to better-off borrowers who were stretching to buy into the more expensive communities. And the result was that far too many mortgages have been foreclosed on over the past few years.

5 See my 1993 article with William Lang for the underlying economic theory and the 2007 article by McKinley Blackburn and Todd Vermilyea for empirical evidence. Home appraisals are discussed more fully in my Business Review article from 2010.
But why did lenders make these
riskier loans? In large part, of course,
it was because they could charge these
borrowers higher rates of interest and
so could make more money on them.
One narrative has it that mortgage
lenders didn’t care about credit quality
because they were able to securitize
mortgages — bundling loans of varying
credit quality into single securities and
selling them to unwitting investors.6
Another narrative is that securitization allowed institutions to earn money
off instruments whose risk of default

6
See, for example, the 2009 article by Benjamin
Keys and others.


was deemed likely only in rare circumstances. That is, the mortgage-backed
securities would do badly only if the
entire U.S. mortgage market failed,
and otherwise would earn high profits.7
But these securitization narratives fail
to explain why banks took large losses
on their own mortgage portfolios, and
indeed, why bank mortgage portfolios
swelled so much during the run-up. In
1995, U.S. banks and thrifts held $755
billion in residential real estate loans
(in 2009 dollars), and in 2006 they
held $1.72 trillion worth, or more than
double in real terms. And between
2007 and 2012, some $181 billion of
those loans were charged off, a loss rate
of roughly 10 percent.
Rather, banks and thrifts held
these mortgages because they saw that
during this period losses were low because house prices were continuing to
rise. That is, these types of loans had
been profitable in the recent past, and
lenders thought these loans were likely
to continue to be profitable as long as
prices kept rising, or at least didn’t fall.
Creditworthiness. Our central
thesis is that favorable home price
expectations, generated by previous
increases in home prices, may cause
lenders to be less cautious about the
creditworthiness of borrowers. The
reason is simple: As the collateral
becomes stronger, reliance on the
borrower may weaken. And as lending standards weaken, the number
of potential homeowners will likely
expand as those whose credit standing had previously been too weak
to qualify them for loans enter the
market. As the number of potential
owners increases, house prices may be
bid even higher, extending the price
boom and fulfilling the expectation of
rising prices.
We expect then to see a correlation between rising house prices and falling credit standards. But which is the cause and which is the effect? Are rising house prices mainly feeding the drop in lending standards? Or is the drop in lending standards feeding the rise in home prices?

In my 2012 article with Jan Brueckner and Paul Calem, we argue that the rise in house prices has an important causal role in this process. The way we identify the direction of causation is to use the prior year's home inflation rate as a proxy for current expectations for the next year's home inflation rate. Specifically, our regressions look at whether the rate of home inflation four quarters ago predicted a decline in credit scores in the current period.8 Because the inflation occurred four quarters ago, it seems unlikely to have been caused by the current decline in credit standards.

We examine quarterly house price inflation data at the state level from 2001 to 2008. We then take state-by-state credit scores for those people who obtained new mortgages, dividing these new mortgagers into first-time homebuyers, repeat homebuyers (those who had a previous mortgage), investors (those with more than two mortgages), and refinancers. We measure credit scores using data from the Federal Reserve Bank of New York/Equifax Consumer Credit Panel database, and we use the state mean, the 25th percentile, and the 10th percentile of credit scores so that we examine a broad profile of credit scores.9 It is useful to examine the weaker credit scores as well as the mean to see whether the minimum credit score necessary for a mortgage is falling as well as the average credit score. For all four groups, and for all three measures, we find that past home inflation rates led to reductions in credit scores. Thus, the pool of borrowers whom lenders considered eligible for loans was widening in response to rising home prices.10

Alternative mortgage products. As people pay higher and higher prices for homes, some borrowers may find it difficult to make their mortgage payments out of their current income. One way to make a house more affordable is to switch to an alternative "back-loaded" mortgage that has lower payments in the early years of the loan and higher payments later. U.S. borrowers with this type of mortgage pay only the interest due for the first five or 10 years, compressing the timeframe for paying off the principal and making later payments higher.

7 See the 2009 article by Joshua Coval and others.

8 Credit scores are numerical measures of creditworthiness based on the borrower's history of timely repayment of debts. Lenders typically use credit scores as one element in their decisions to offer credit to a borrower.

9 The 25th percentile represents the highest credit score of the lowest quarter of new mortgage borrowers in the state that year; the 10th percentile represents the highest score of the lowest tenth of new mortgage borrowers.

10 Qualitatively similar results showing that house price inflation accompanies declines in credit standards can be found in 2012 articles by Giovanni Dell'Arricia and others and by William Goetzmann and others.

The average 30-year fixed-rate mortgage had an interest rate of 6.1 percent from 2003 to 2006, whereas the average adjustable-rate mortgage had an interest rate of 5 percent. Thus, a $200,000 mortgage might have had

an interest payment of $12,200 per
year under a fixed-rate mortgage and
$10,000 under the adjustable-rate
mortgage. On top of that, the mortgage borrower would pay an additional
$2,300 in the first year toward paying down principal, a process called
amortization. Thus, for a standard
30-year fixed-rate mortgage, a borrower would be paying about $14,500
per year. By contrast, with an interest-only adjustable-rate mortgage, the
borrower might be able to pay $3,000
or $4,000 less annually, depending on
the premium the borrower is charged
for the interest-only feature. Borrowers who find themselves paying more
for a home than they had hoped in a
hot real estate market may be tempted
to go for the interest-only mortgage to
make the payments affordable, hoping
that their earnings will rise, or that
they can refinance, before the five- or
10-year grace period is up.
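The payment arithmetic in the preceding paragraphs can be checked directly. This is a sketch using the article's own figures: a $200,000 loan at 6.1 percent on a 30-year fixed-rate mortgage versus interest-only payments at the 5 percent adjustable rate.

```python
def annual_payment(rate, principal, years):
    """Level-payment amortization: the standard fixed-rate mortgage formula."""
    r = rate / 12   # monthly rate
    n = years * 12  # number of monthly payments
    return 12 * principal * r / (1 - (1 + r) ** -n)

principal = 200_000
fixed_total = annual_payment(0.061, principal, 30)  # interest plus principal
fixed_interest = 0.061 * principal                  # ~$12,200 first-year interest
io_payment = 0.05 * principal                       # $10,000: interest only, at the ARM rate

print(round(fixed_total))               # about 14,500 per year, as in the article
print(round(fixed_total - io_payment))  # the interest-only borrower pays thousands less
```

Before any premium for the interest-only feature, the gap is roughly $4,500 a year; the article's $3,000-to-$4,000 figure reflects that premium.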
In my 2013 working paper with Brueckner and Calem, we show that expectations of increased home prices led to more widespread use of back-loaded mortgages, including interest-only adjustable-rate mortgages (IO ARMs) and so-called option ARMs, which permitted negative amortization, allowing borrowers to pay even less for a few years.11
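Negative amortization can be sketched in a few lines. The loan terms echo the article's $200,000, 5 percent example, but the $8,000 minimum payment is an assumption of mine for illustration: because it is below the interest due, the balance grows rather than shrinks.

```python
balance, rate = 200_000.0, 0.05
minimum_payment = 8_000.0  # below the $10,000 annual interest due (assumed)

for year in range(5):
    interest = balance * rate
    balance += interest - minimum_payment  # unpaid interest is added to the principal

print(round(balance, 2))  # after five years the borrower owes more than 200,000
```

By the end of the five-year window, the borrower owes roughly $211,000, so refinancing or rising earnings become essential once full payments kick in.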
When home prices turned downward, the rate of default and delinquency turned out to be very high. We
show that default rates on these backloaded mortgages were unusually high,
even after accounting for factors such
as unemployment rates, house price
changes, and the like. This higher default rate is not surprising, in that these
products catered to home purchasers
who were stretching to be able to afford

11 Similar results are found in the 2012 article by Michael LaCour-Little and Jing Yang. For a complementary point of view on back-loaded mortgages, see the 2012 working paper by Gadi Barlevy and Jonas Fisher.


their homes and thus would be most vulnerable to an economic downturn. Moreover, as we shall explore further below, some of those who took advantage of the low payment requirements of these loans were likely investors who had bought the homes only in the hopes of further price increases and who walked away from the mortgages when house prices fell.

Finally, we show that, unlike subprime mortgages, many of these back-loaded mortgages were retained on banks' balance sheets. And the default rates of these back-loaded mortgages were in most cases worse than those for securitized mortgages. Thus, for this class of mortgages, it does not appear that lenders sold off the worst mortgages. Rather, they ate their own cooking. This behavior strongly suggests that lenders believed that these mortgages would be reasonably profitable, although this turned out not to be the case.

We can see in Table 1 that from 2003 to 2006, mortgages of lower credit quality — subprime and alt-A — ballooned from 10 percent of all mortgages originated to 39 percent. We can also see that from 2004 to 2006, IO ARMs and option ARMs similarly ballooned from 8 percent to 25 percent of all mortgages originated. These adjustable-rate mortgages were sometimes subprime and alt-A, and sometimes prime.

The outcomes can be seen in Table 2, which depicts severely distressed mortgages — those in foreclosure or with payments three months or more overdue. In 2006, when these mortgages were being made, the overall rate of severely distressed mortgages was 2.2 percent, scarcely different from 2000 or 2001, before these lower-quality mortgages had become prevalent. But by 2009, the rate had risen 7 percentage points to 9.2 percent. Moreover, notice that the distress rate of adjustable-rate prime mortgages had risen 16 percentage points, while that of fixed-rate prime mortgages had risen 4 percentage points. Severely distressed subprime mortgages overall had risen 23 percentage points.12 Thus, it is evident that lowered credit standards, as reflected in the widespread use of adjustable-rate and subprime mortgages, were a preponderant factor in the extremely high distress rates of mortgages. This also suggests that requiring lenders to keep some of the mortgages they originate on their own books rather than sell them into the securitization market — so lenders bear more of the risk of their lending decisions — may not be sufficient to prevent risky mortgage lending in a boom. Limiting the use of alternative mortgages may also need to be considered.

Finally, these risky mortgages have now effectively disappeared from the mortgage market. They expanded demand during the boom, but now they are rarer than in 2000, well before the worst of the house price boom. This has contracted the potential demand for homes, contributing to the steepness of the decline in home prices.

Flippers. Buy low; sell high. That is the basic hope of any investor, in any market. Normally, inves-

12 In this survey, respondents are asked to classify mortgages into prime and subprime; it is generally believed that alt-A mortgages are primarily classified as subprime.
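The percentage-point rises cited above can be verified against Table 2 (severe-distress rates in percent, comparing 2006 with 2009):

```python
# Severe-distress rates from Table 2, in percent.
rates_2006 = {"all": 2.2, "prime_fixed": 0.7, "prime_arm": 1.4}
rates_2009 = {"all": 9.2, "prime_fixed": 4.7, "prime_arm": 17.3}

# Rise in percentage points, rounded to the nearest point.
rise = {k: round(rates_2009[k] - rates_2006[k]) for k in rates_2006}
print(rise)  # {'all': 7, 'prime_fixed': 4, 'prime_arm': 16}
```

These match the 7-, 4-, and 16-point rises quoted in the text.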

TABLE 1
First Lien Mortgage Originations During Housing Boom
Mortgages by Major Type

Total, in billions of dollars
Year   Total     Subprime  Alt-A  Agency prime  Government  Jumbo  IO ARMs*  Option ARMs*
2003   $3,725    $310      $85    $2,460        $220        $650   NA        NA
2004   2,590     540       190    1,210         135         515    55        145
2005   2,755     625       380    1,090         90          570    418       238
2006   2,550     600       400    990           80          480    387       255
2007   2,081     191       275    1,151         116         348    295       111
2008   1,384     23        42     928           293         98     76        8
2009   1,759     4         6      1,201         451         97     8         0
2010   1,581     4         4      1,092         377         104    9         0
2011   1,420     4         4      948           294         170    0         0
2012   1,861     4         4      1,270         380         203    0         0

Percent of total
2003   100%      8%        2%     66%           6%          17%    NA        NA
2004   100       21        7      47            5           20     2         6
2005   100       23        14     40            3           21     15        9
2006   100       23        16     39            3           19     15        10
2007   100       9         13     55            6           17     14        5
2008   100       2         3      67            21          7      5         1
2009   100       0         0      68            26          6      0         0
2010   100       0         0      69            24          7      1         0
2011   100       0         0      67            21          12     0         0
2012   100       0         0      68            20          11     0         0

Source: Inside Mortgage Finance, 2013.
Notes:
Subprime: For borrowers with low credit scores.
Alt-A: For those who fail to qualify for prime mortgages but have high credit scores.
Agency prime: Originated, guaranteed, and securitized by the government-sponsored enterprises Fannie Mae and Freddie Mac.
Government: Guaranteed by the Federal Housing Administration or Department of Veterans Affairs.
Jumbo: Too large to be securitized by Fannie Mae or Freddie Mac.
IO ARMs: Interest-only adjustable rate.
Option ARMs: Adjustable rates plus the option of minimum payments that do not cover even the interest owed.
*Figures for IO and option ARMs are also included within the results listed for subprime, alt-A, agency prime, and jumbo mortgages.

tors hope to identify assets that are
for sale for less than they are intrinsically worth, buy them, and then sell
them as other buyers come to see their
intrinsic worth. For example, during
the housing bust, many homes came

to be sold at very low prices, and real
estate groups invested in these homes,
hoping to rent them for a time and
later sell them at higher prices. These
professional investors help to stabilize
markets, particularly in times of crisis.

These investors often do not require mortgages and instead pay cash. They can be identified in large part because they buy properties that are cheap relative to the rest of the home market. They will frequently buy from so-

TABLE 2
Percent of Mortgages That Became Severely Distressed

                 All        All     Prime   Prime       All       Subprime  Subprime
Year             Mortgages  Prime   Fixed   Adjustable  Subprime  Fixed     Adjustable  FHA   VA
1998             1.8        0.9     0.7     1.5         5.7       5.3       5.8         3.8   3.2
1999             1.8        0.7     0.5     1.1         7.6       7.4       7.8         3.5   2.9
2000             1.8        0.6     0.5     1.1         8.9       9.3       8.7         3.5   2.5
2001             2.3        0.8     0.7     1.4         11.9      12.7      11.0        4.4   2.9
2002             2.3        0.8     0.7     1.3         11.4      11.7      10.6        5.2   3.2
2003             2.1        0.9     0.7     1.2         8.3       8.1       8.2         5.7   3.4
2004             2.0        0.8     0.7     0.8         6.5       7.4       5.8         5.5   3.1
2005             2.0        0.8     0.8     0.8         6.3       6.2       5.9         5.9   2.8
2006             2.2        0.8     0.7     1.4         7.7       6.0       9.0         5.5   2.5
2007             3.5        1.6     1.0     4.0         14.1      8.1       20.1        5.7   2.7
2008             6.1        3.6     2.1     9.9         22.4      13.1      33.0        6.5   3.9
2009             9.2        6.7     4.7     17.3        29.2      20.9      41.2        8.9   5.1
2010             8.3        6.0     4.4     16.5        26.2      19.8      37.4        8.1   4.6
2011             7.5        5.2     4.0     13.5        23.8      18.8      33.8        8.6   4.6
2012             6.6        4.3     3.4     10.5        21.3      17.6      30.1        8.3   4.3
Avg. 1998-2006   2.0        0.8     0.6     1.2         8.2       8.2       8.1         4.8   2.9
Avg. 2008-2012*  7.5        5.1     3.7     13.5        24.6      18.0      35.1        8.1   4.5

Source: Mortgage Bankers Association, via Haver Analytics.
Note: Severely distressed mortgages are those 90 days or more past due or in foreclosure.
*Averages were calculated without factoring in 2007, largely a transition year between boom and bust.

called motivated sellers, such as homeowners who have to move because they
have taken jobs beyond commuting
distance from their current homes or
from a bank that has foreclosed on a
property.
However, because home price
increases tend to be predictable,
unsophisticated home investors may
also come in who believe that home
prices will continue to rise. If you live
in a hot real estate market in a home
worth, say, $200,000, and suddenly you
find that comparable homes nearby
are worth $300,000, you may think
to yourself that your home has earned

more money than you did by working.
Since you now have $100,000 in unexpected home equity, you may decide to
borrow against it to buy an additional
house or two, planning to make some
minor improvements and sell them in
a year or two. If home prices are rising
rapidly enough, you can make a profit
even if you bought houses that were
not especially intrinsically cheap. Of
course, you will borrow as much as you
can to limit your cash outlay, to stretch
your home equity. If home prices fall,
you may quickly walk away from the
homes, mortgages and all.
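The walk-away logic can be made concrete with a small calculation. The prices and down payment here are my own illustrative numbers, not figures from the article: with a mortgage, the investor's losses are capped at the down payment, so leverage magnifies gains while defaults truncate losses.

```python
def equity_return(price, down_payment, price_change):
    """Return on the investor's own cash for a mortgaged house, with the
    option to default: losses are capped at the down payment."""
    mortgage = price - down_payment
    sale = price * (1 + price_change)
    equity = max(sale - mortgage, 0.0)  # walk away rather than pay the shortfall
    return equity / down_payment - 1.0

price, down = 300_000, 30_000  # 10 percent down: ten-to-one leverage

print(round(equity_return(price, down, +0.10), 2))  # prices up 10%: +100% on cash
print(round(equity_return(price, down, -0.10), 2))  # prices down 10%: -100%; walk away
```

A 10 percent price rise doubles the investor's cash, while a 10 percent fall wipes out the down payment and no more, which is exactly the asymmetry that encourages walking away.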
This type of investment is particularly pernicious to the housing market because the homes often remain unoccupied, since the buyer is not a professional real estate investor and has no
easy way to rent them out. These
types of purchases exaggerate the apparent demand for homes, and thus
the market appears to have a more
unequal balance between supply and
demand, which also tends to prolong
the boom and drive prices higher.
A 2011 working paper by Andrew
Haughwout and his coauthors at the
New York Fed showed that this type of
investor became surprisingly prevalent in the later years of the housing

boom. They estimate that in the states
hit hardest by the bubble — Arizona,
California, Florida, and Nevada — as
much as 20 percent of all home purchases were made by borrowers who
already had two or more mortgaged
homes. Patrick Bayer and others,
in a careful study of investors in Los
Angeles County during the housing
boom, are able to show that there were
two types of investors — those who
bought houses relatively cheaply and
those who bought houses more or less
at the market rate. The latter tended
to come into the market as house price
inflation increased, earned rates of
return no different from others in the
market, and were statistically associated with price instability in their
markets.
BOOMS, BUSTS, AND CRISES
We have seen in recent years repeated financial crises associated with
housing booms and busts in the United
States, Europe, and Asia. Carmen
Reinhart and Kenneth Rogoff (2009)
present evidence that the most intractable recessions have been associated
with financial crises related to housing
booms and busts. They note that the
five worst financial crises post-World
War II and before the latest world crisis — Spain in 1977, Norway in 1987,
Finland and Sweden in 1991, and
Japan in 1992 — all coincided with
very large housing booms and busts.

They point out that a “massive run-up
in housing prices usually precedes a
financial crisis” (p. 217). This perspective suggests that reducing housing
booms and busts might well reduce the
magnitude of ensuing financial crises
and recessions.
Indeed, around the world, regulators have stepped up efforts to contain
housing booms. In a July 2013 article
in the Wall Street Journal, David Wessel and Alex Frangos examine efforts
in South Korea, Israel, and Canada to
use housing regulations to slow housing booms. For example, the South
Korean government and central bank
have required homebuyers in certain
neighborhoods to come up with down
payments as high as 50 percent and
limited the ratio of mortgage payments
to income. To discourage investors,
they have imposed high taxes on sales
by people who own more than one
home. In Canada, government-insured
mortgage loans have to be paid off
in 25 years instead of 30, raising the
required monthly payment. Those efforts have succeeded, at least as of this
writing, in slowing home price booms.
But in Israel, Wessel and Frangos
report, despite higher down payment
requirements, home prices continue to
rise at double-digit rates.
At the same time, we should recognize that government regulation may
be part of the problem as well. The
U.S. government has a long history of

support for homeownership, and ironically, this support may have contributed to problems in the housing market.
See Wenli Li and Fang Yang’s 2010
article for a discussion of government
support for homeownership.
FINANCIAL STABILITY AND
PROCYCLICALITY
A key question, then, for financial
stability is whether we can tone down
housing booms and busts by moderating the procyclical impact of appraisals, credit standards, and alternative
mortgage instruments. Our research
has not yet reached the stage of showing us how to optimally moderate
housing booms and busts. But we have
identified some mechanisms that appear to make boom and bust cycles
greater and therefore more dangerous
to financial stability. They may point
the way toward strategies for moderating these cycles. Indeed, as we have
seen, regulators and central banks
around the world are already taking
steps in hopes of moderating these
cycles.
Preserving financial stability may
require a tradeoff between allowing
the mortgage market to adapt freely
to changes in demand and ensuring a
stable housing market through greater
regulation of mortgages and appraisals.
But precisely where that balance lies is
beyond our current understanding and
deserves further study. BR

Business Review Q1 2014 23

REFERENCES
Barlevy, Gadi, and Jonas D. M. Fisher.
“Mortgage Choices and Housing Speculation,” Federal Reserve Bank of Chicago
Working Paper 2010-12 (2011).
Bayer, Patrick, Christopher Geissler,
and James W. Roberts. “Speculators and
Middlemen: The Role of Intermediaries
in the Housing Market,” Duke University
Working Paper (February 2013).
Blackburn, McKinley, and Todd Vermilyea. “The Role of Information Externalities
and Scale Economies in Home Mortgage
Lending Decisions,” Journal of Urban Economics, 61 (2007), pp. 71-85.
Brueckner, Jan, Paul Calem, and Leonard
Nakamura. “Subprime Mortgages and the
Housing Bubble,” Journal of Urban Economics, 71 (March 2012), pp. 230-243.
Brueckner, Jan, Paul Calem, and Leonard
Nakamura. “House-Price Expectations, Alternative Mortgage Products, and Default,”
Federal Reserve Bank of Philadelphia
Working Paper 13-36 (2013).
Coval, Joshua, Jakub Jurek, and Erik Stafford. “Economic Catastrophe Bonds,”
American Economic Review, 99 (June
2009b), pp. 628-666.

Dell’Ariccia, G., D. Igan, and L. Laeven,
“Credit Booms and Lending Standards:
Evidence from the Subprime Mortgage
Market,” Journal of Money, Credit and
Banking, 44:2-3 (March-April 2012),
pp. 367-384.
Elul, Ronel. “Residential Mortgage Default,” Federal Reserve Bank of Philadelphia Business Review (Third Quarter
2006).
Goetzmann, W.N., Y. Peng, and J. Yen.
“The Subprime Crisis and House Price
Appreciation.” Journal of Real Estate Finance and Economics 44:1-2 (January 2012),
pp. 36-66.
Haughwout, Andrew, Donghoon Lee,
Joseph Tracy, and Wilbert van der Klaauw.
“Real Estate Investors, the Leverage Cycle,
and the Housing Market Crisis,” Federal
Reserve Bank of New York Staff Report
514 (September 2011).
Keys, Benjamin J., Tanmoy Mukherjee,
Amit Seru, and Vikrant Vig. “Financial
Regulation and Securitization: Evidence
from Subprime Loans,” Journal of Monetary
Economics (July 2009), pp. 700-720.
LaCour-Little, Michael, and Jing Yang.
“Pay Me Now or Pay Me Later: Alternative Mortgage Products and the Mortgage
Crisis.” Real Estate Economics, 38 (2010),
pp. 687-732.

Lang, William W., and Leonard I.
Nakamura. “A Model of Redlining,”
Journal of Urban Economics, 33 (March
1993), pp. 223-234.
Li, Wenli, and Fang Yang. “American
Dream or American Obsession? The Economic Benefits and Costs of Homeownership,” Federal Reserve Bank of Philadelphia Business Review (Third Quarter 2010).
Malkiel, Burton G. A Random Walk Down
Wall Street, Updated, 2007, New York:
WW Norton.
Nakamura, Leonard. “How Much Is That
Home Really Worth? Appraisal Bias and
Home-Price Uncertainty,” Federal Reserve
Bank of Philadelphia Business Review (First
Quarter 2010).
Reinhart, Carmen M., and Kenneth S.
Rogoff. This Time Is Different: Eight Centuries of Financial Folly, 2009, Princeton:
Princeton University Press.
Shiller, Robert. Irrational Exuberance, Second Edition, 2005, Princeton: Princeton
University Press.
Wessel, David, and Alex Frangos. “Central
Bankers Hone Tools to Pop Bubbles,” The
Wall Street Journal (July 19, 2013); http://
online.wsj.com/article/SB100014241278873
24069104578527683704380960.html.

New Perspectives on Consumer Behavior
in Credit and Payments Markets
By Mitchell Berlin

At the Federal Reserve Bank of Philadelphia's latest
conference on consumer credit and payments, researchers
presented the results of the following seven studies on
topics including household financial decision-making, the
effects of regulation on credit card markets, and the effect
on individuals of interactions between credit and labor markets.1

STICKING TO YOUR PLAN:
HYPERBOLIC DISCOUNTING
AND CREDIT CARD DEBT
PAYDOWN
Theresa Kuchler, of New York
University’s Stern School of Business,
reported on an empirical study of individuals’ success in carrying out plans
to reduce their credit card balances.
Broadly, Kuchler had two objectives.
Her first objective was to find evidence
for present-biased behavior, in which
consumers make plans to reduce future
borrowing but systematically deviate
from their plans by acting impatiently
in the future. Her second objective was
to determine the extent to which individuals are sophisticated about their
own behavior, in the sense that they
understand that they act in a present-biased way and make borrowing decisions that reflect this understanding.

1
The seventh biennial conference on consumer
credit was hosted jointly on October 3-4, 2013,
by the Philadelphia Fed's Payment Cards Center
and Research Department. The papers presented may be found at http://www.philadelphiafed.org/research-and-data/events/2013/consumer-credit-and-payments/agenda.cfm.
Kuchler developed a simple model
of consumer borrowing behavior that
could be used to make predictions
about how different types of consumers
would behave. She tested her predictions using a remarkably detailed data
set from an online financial management service. Individuals use this
service to make plans to reduce their
credit card balances, although the
service doesn’t impose penalties if they
fail to meet those plans. Individuals
provide demographic information —
for example, age, income, and education — as well as information about
their paycheck receipts and detailed
information about their credit card
use, bank account behavior, and expenditures. Moderating concerns that
the people using a financial planning
service are not representative of the
broader population, Kuchler explained
that according to observable demographic measures, the sample is reasonably similar to the general population.

Mitchell Berlin is a vice president and economist at the Federal
Reserve Bank of Philadelphia. The views expressed in this article
are not necessarily those of the Federal Reserve. This article and
other Philadelphia Fed reports and research are available at www.
philadelphiafed.org/research-and-data/publications.

In the first part of the study,
Kuchler sought to measure present bias.
Specifically, she measured present bias
by the sensitivity of an individual’s discretionary expenditures — restaurant
and entertainment expenditures — to
the receipt of a paycheck. Intuitively,
a larger expenditure on discretionary
items as soon as a paycheck arrives
is consistent with impatient behavior, especially when this expenditure
conflicts with a prior plan to use the
income to reduce credit card balances.
She finds that many consumers’ discretionary expenditures are very sensitive
to the receipt of a paycheck, a finding
consistent with present bias. (Kuchler
explained that such behavior was also
consistent with other explanations, a
matter she addressed later.)
Kuchler argued that present-biased individuals might, nonetheless, be
fully rational and aware of their behavior (thus being sophisticated). Alternatively, they might be naïve, and simply
not understand that in the future they
are likely to act in a way that frustrates
their current plans. Her model offers
predictions about how a present-biased
but sophisticated individual would behave differently from one who was also
present-biased but naïve. Specifically,
the model predicts that very impatient
but sophisticated individuals will typically pay down less of their debt than
those who are also sophisticated but
less impatient. Intuitively, a sophisticated, impatient individual reasons
that, “I know in the future I am going
to consume more than my current plan
for future consumption. Therefore, I
can achieve a smoother consumption
path if I consume more today, which
will, in turn, reduce future consumption.” Naïve individuals don’t reason
this way because they don’t understand
that they will act in a way that frustrates their plans for the future. Accordingly, the level of impatience will
not affect the extent to which they pay
down their debt.
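Present bias of this kind is commonly formalized with the quasi-hyperbolic (beta-delta) discounting model, in which an extra discount factor beta < 1 applies to everything that is not immediate. A minimal sketch, with beta and delta values that are purely illustrative and not taken from Kuchler's paper:

```python
# Quasi-hyperbolic (beta-delta) discounting: a minimal sketch of present
# bias. The parameter values are illustrative assumptions, not estimates
# from Kuchler's study.
def discounted_value(reward, periods_ahead, beta=0.7, delta=0.95):
    """Value today of a reward arriving periods_ahead periods from now.
    beta < 1 imposes an extra penalty on any delay at all, on top of
    the standard per-period discount factor delta."""
    if periods_ahead == 0:
        return reward
    return beta * (delta ** periods_ahead) * reward

# Today (t=0), a plan to pay down $120 of debt in two periods looks
# better than paying down $100 next period: the agent plans to wait.
plan_small_soon = discounted_value(100, 1)   # ~66.5
plan_large_later = discounted_value(120, 2)  # ~75.8
assert plan_large_later > plan_small_soon

# But when "next period" arrives (t=1), the same comparison flips:
# $100 now beats $120 one period away, so the agent abandons the plan.
now_small = discounted_value(100, 0)         # 100.0
now_large_later = discounted_value(120, 1)   # ~79.8
assert now_small > now_large_later
```

The reversal between the two comparisons is exactly the "plan today, deviate tomorrow" pattern described above; a sophisticated agent anticipates the reversal, while a naïve agent does not.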
Kuchler’s empirical results confirmed her strategy for identifying
individual degrees of impatience and
also her distinction between sophisticated and naïve individuals. She
found that all individuals reduced their
credit card balances less than they
had planned but that sophisticated
individuals were more successful. She
also found that the extent to which
sophisticated individuals paid down
their debt was related to their level of
impatience, while for naïve individuals
it was not, as her theory predicted.
She concluded by considering
alternative explanations for her empirical results, notably credit constraints
or habits-driven behavior. She argued
that other plausible models of borrowing behavior are either inconsistent
with her results or else have no predictions about behavior regarding debt
repayment.
FINANCIAL CONSTRAINTS
AND CONSUMERS’ RESPONSE
TO EXPECTED CASH FLOWS:
DIRECT EVIDENCE FROM
FILING TAX RETURNS
Brian Baugh from the Ohio State
University presented the results of a
study conducted with Itzhak Ben-David
and Hoonsuk Park on household consumption behavior in response to filing
tax returns and receiving tax refunds.
Using a proprietary data set from a
financial institution that included data
on individuals’ credit card usage, as
well as information about tax filings,
the authors examined the role of credit
constraints on consumption behavior. Broadly, the authors found strong
evidence of credit constrained behavior,
as households that received refunds
increased their consumption only
modestly at the filing date but increased
consumption by a significantly larger
amount when the refund was actually
received. Furthermore, household consumption was not affected by the size
of the prior year’s refund, even though
previous refunds were good predictors
of current refunds.

The authors had anonymized data
from a financial institution on the
credit and debit card use of 500,000
individuals from July 2010 to December 2012. Ultimately, the sample size
was reduced to about 15,000 individuals, primarily because the authors
required information on the date on
which tax returns were filed. Baugh
argued that the actual filing provided
a good estimate of the household's expected refund. The authors assumed
that the filing date was reasonably
well measured by the date on which
the individual paid a fee to a tax
preparation service such as TurboTax
or H&R Block.
The authors' main findings were
that households increased consumption
only moderately at the time of filing,
but they increased consumption significantly more when the refund was actually received. Specifically, they found
that households that received refunds
increased consumption by approximately 3 percent at the time of filing,
while they increased their consumption
by two to four times that amount when
the refund was received, depending
on the precise empirical specification.
Focusing on low-income households
alone, the percentage increase in consumption at the filing date was smaller
and the percentage increase when the
refund was received was larger. The
authors found similar effects for the
probability of shopping following these
dates. They found no significant effect
on consumption by households that did
not receive a refund.
Restricting their sample to those
for whom they had two successive tax
filings, the authors then examined
whether households used the information on past tax refunds to form
expectations about future tax refunds.
The authors argue that the prior year’s
refund is a good (albeit imperfect)
predictor of the current year’s refund.
Accordingly, they divided the population into households with positive
surprises — that is, their refund was
larger than the preceding year’s refund
— and negative surprises. They found
that both those with positive and negative surprises increased consumption
when they received the refund. The
authors concluded that people’s consumption was unaffected by the prior
year’s refund, even though it is a very
good predictor. Baugh suggested that
this finding raised some doubts about
economic models in which households
form rational expectations about future consumption.
ARE YOUNG BORROWERS
BAD BORROWERS? EVIDENCE
FROM THE CREDIT CARD ACT
OF 2009
Andra Ghent of Arizona State
University presented the results of her
study conducted along with Peter Debbaut and Marianna Kudlyak on the
relative default behavior of young borrowers. One of the goals of the CARD
Act of 2009 was to limit the marketing
of credit cards to individuals younger
than 21 years old, premised on the
view that young borrowers were more
likely to get into financial difficulties.
While the authors found that the act
was largely successful in restricting
credit card access for young individuals, they also found evidence that
young borrowers were significantly less
likely to default than older individuals.
Ghent argued that their results called
into question the fundamental premise
of those sections of the act restricting credit card access — that is, that
young borrowers were poorly equipped
to manage their credit card borrowings
compared with older borrowers.
First, the authors use the Federal
Reserve Bank of New York’s Consumer Credit Panel/Equifax to evaluate whether the CARD Act had the
desired effect.2 They found that after
implementation of this law, individuals
under 21 (i) were 8 percentage points
less likely to have a card, (ii) had fewer
cards, conditional upon having a
card at all, and (iii) were 3 percentage
points more likely to have a cosigned
card. The authors concluded that the
act had successfully restricted access to
credit cards by the young.
Then the authors examined whether young borrowers actually were delinquent more often than older borrowers.
While young borrowers were more likely
to suffer minor delinquencies (less than
90 days), the authors found that young
people were actually significantly less
likely than older borrowers to be more
than 90 days delinquent. Instead, serious delinquencies followed an inverse
U-shaped pattern over a borrower’s
lifetime, increasing until age 40-44, at
which point a borrower was 12 percentage points more likely to be seriously
delinquent than a 19-year-old.

2
All data from this data set are anonymized. The researchers have no access to personally identifiable information about individuals.

Ghent noted that lower delinquency rates for young borrowers suggested
that the young were not less creditworthy. But to evaluate the effect of the
restrictions in the act, we must take
into account that prior to the imposition of the new law, young borrowers
chose whether to acquire credit — that
is, there was a selection effect. In
principle, this selection effect might go
either of two ways. While the borrowers below the age of 21 who acquired
credit cards prior to the act might
have been less capable of managing
their finances than more experienced
borrowers, they might also have been
more prudent or forward-looking than
the typical borrower. The authors use
the passage of the act as a laboratory
to identify the selection effect.
Specifically, the authors identified two groups of borrowers. Those
in Group 1 got their first credit card at
age 21 after the act was passed. Those
in Group 2 got their first card at age 21
before the act was passed; that is, they
could have legally acquired a card before age 21 but had not. The differences in behavior of these two groups help
identify the selection effect. While not
all members of Group 1 would necessarily have qualified to receive a card,
presumably some would have qualified
and would have chosen to acquire a
card had they been permitted to do so.
The authors found that individuals from Group 2 were significantly
more likely to experience serious delinquencies than those in Group 1, both
in the years immediately after they acquired their cards and also later in life.
In addition, Group 1 members were
more likely to have a mortgage at age
22 or 23 than were members of Group
2. The authors interpreted these findings as evidence that individuals who
entered the credit market early before
the passage of the act were likely to
have been relatively good credit risks
and that these borrowers were trying to establish a good credit history,
perhaps to qualify for homeownership.
Thus, the authors found no evidence
that by limiting access to young borrowers, the act was protecting borrowers who were less prudent or less
capable of managing debt than others.
FINANCIAL EDUCATION
AND THE DEBT BEHAVIOR
OF THE YOUNG
Meta Brown of the Federal Reserve Bank of New York presented the
results of her study with Wilbert van
der Klaauw, Jaya Wen, and Basit Zafar
on the effects of education on the
borrowing behavior of young individuals. Specifically, the authors examined the effects of taking courses in
mathematics, financial literacy, and
economics on credit market outcomes.
Their study exploited the fact that
states vary widely in their high school
course requirements in these three
areas and that a large number of states
had introduced new requirements
during the sample period, 1998-2012.
Brown argued that the authors found
that required courses in these three
areas had statistically and economically significant effects on the borrowing behavior of individuals in their
twenties. Mathematics and financial
education courses appeared to promote more savvy borrowing behavior,
although Brown cautioned against
drawing welfare conclusions from the
empirical results.
The authors created a data set
that compiled state-by-state changes
in required courses in high school
from 1998 through 2012. The data
set included whether a state had increased required math courses by one
year, whether a state had imposed a
new requirement that students take
at least one financial literacy course,
and whether the state had imposed a
new requirement that students take at
least one economics course. Using the
FRBNY Consumer Credit Panel/Equifax data, the authors followed the borrowing behavior of individuals born in
or after 1984 — who were thus likely
to have attended high school during
the sample period.3 They collected a
number of measures of credit market
behavior for these individuals at age
22 to 28, including whether they had
credit reports, their Equifax risk scores,
various measures of delinquency,
whether they had entered bankruptcy,
and their debt balances, including
mortgages, credit card balances, auto
loans, and student loans. The authors
also collected data on unemployment
rates and income in each individual’s
Zip code to control for economic
conditions. In addition, the authors
included various measures of educational quality for each state, such as
per capita educational expenditures.
The authors found that educational requirements had significant effects
on borrowing behavior. Brown argued
that focusing on behavior subsequent
to the introduction of a new educational requirement strengthened the
view that differences in behavior were
causally related to the educational
requirement.

3
All data from this data set are anonymized. The researchers have no access to personally identifiable information about individuals.

Qualitatively, the effects
of more required math courses and a
required financial literacy course had
similar effects along most dimensions,
with the notable difference that only
the financial literacy requirement increased the likelihood that an individual would have a credit report. Brown
suggested that having a credit report
might be an indicator of an individual’s
understanding the value of building a
credit history. Both math and financial
literacy requirements were associated
with higher credit scores, lower balances, and, for the most part, fewer
adverse credit outcomes. One notable
difference is that math requirements
were associated with a higher probability of bankruptcy. Brown suggested that this might be an indicator
of greater financial savvy, rather than
a measure of imprudent behavior, as
some prior studies have found that
households tend to forgo the option to
enter bankruptcy even when it would
appear to be economically rational.
These effects were economically
significant as well. For example, an
additional year of math was associated
with a decline in auto loan and credit
card balances of $890. Similarly, the
introduction of the financial literacy
requirement was associated with a
decline in auto loan and credit card
balances of $580.
Brown explained that the effects
of the economics course requirement
were quite different. The economics requirement was not associated
with a higher probability of having
a credit score, but it was associated
with higher average debt balances, as
well as a greater prevalence of repayment problems. Brown suggested that
an economics course might demystify
debt usage without promoting greater
financial savvy.

HOUSE PRICES, COLLATERAL,
AND SELF-EMPLOYMENT
Manuel Adelino of Duke University discussed the results of his study
with Antoinette Schoar and Felipe
Severino on the effects of higher house
prices during 2002-07 on the growth
of very small businesses. Adelino explained that there are numerous channels through which higher house prices
might affect small-business growth.
The authors sought evidence for the
collateral effect, in which higher
house prices ease credit constraints by
permitting small-business owners to
post their houses as collateral for bank
loans. Adelino argued that the authors
had indeed found compelling evidence
for this collateral channel, despite formidable empirical challenges.
The main challenge was to disentangle the collateral channel from demand-driven effects, in which stronger
demand promotes both small-business
growth and higher house prices. The
authors’ primary identifying assumption was that while higher demand
should affect both larger and smaller
firms, the collateral channel should
operate only for small firms. Since
borrowing needs for larger firms are
likely to be much larger than the value
of a house, higher house prices were
unlikely to have an appreciable effect
on larger firms’ ability to borrow. Using county-level data from the Census
Bureau that identifies the number of
employees at each establishment, the
authors found that higher house prices
were significantly associated with higher employment growth at the smallest
enterprises (one to four employees) and
that this positive effect declined monotonically with firm size, consistent with
growth at the small enterprises being
driven by the collateral channel.
The authors proceeded to use
detailed data about firm characteristics from a number of other sources,
both to lend greater plausibility to
their claim for the collateral channel
and, particularly, to rule out demand-driven effects. Adelino argued that
even for very small firms, house values
were unlikely to be an important
determinant of the ability to borrow
if the firms’ capital needs were large.
The authors used the Census Bureau’s
Survey of Business Owners Public Use
Microdata Sample, which surveys small
firms about capital outlays at their
startup, among other firm characteristics. Consistent with the authors’
hypothesis, the positive effect of higher
house prices was much stronger for
those firms with lower capital needs.
They examined whether their results
were driven by firms in the nontradable goods sector — arguably those
firms most likely to be affected by local
demand — or by firms engaged in construction — those firms most likely to
be directly affected by a local housing
boom. They found that the positive
relationship between house prices and
employment growth was not driven by
these types of firms. The authors also
found that their results held for firms
in industries that ship their goods long
distances, again addressing the concern that employment growth might be
affected by local demand.
Finally, the authors performed a
back-of-the-envelope calculation to
estimate the economic importance of
the collateral channel. Adelino estimated that the collateral channel can
account for 15 percent to 25 percent
of the increase in employment growth
due to higher house prices during the
sample period, compared with approximately 40 percent that can be assigned
to the effect of higher home prices on
household demand.
UNEMPLOYMENT INSURANCE
AND CONSUMER CREDIT
Brian Meltzer of Northwestern
University reported on the results of
a study with Joanne Hsu and David
Matsa that measured the effects of
unemployment insurance (UI) on
both household delinquency and the
supply of credit. Meltzer argued that
while other studies have examined
the various effects of unemployment
insurance — for example, the effect
on labor search or on households’
ability to smooth consumption — the
authors’ study was the first to examine
whether more generous unemployment
insurance might affect credit market
outcomes. Broadly, the authors found

evidence that more generous unemployment insurance was associated
with statistically and economically
significant reductions in household
delinquency and increases in access
to credit. The authors' approach was
to exploit the variation across states
and over time in the generosity of
unemployment benefits as a means
to identify the causal effects of UI on
credit markets. They used a number of
different data sets, covering the sample
period 1992-2011.
The main results were striking.
In particular, the authors found that
unemployment was less likely to lead
to mortgage delinquency and foreclosure in those states where unemployment insurance was more generous.
The economic effects were large. For
example, a $1,000 increase in a state's
maximum unemployment benefit was
associated with a 5 percent decline
in delinquency (compared with the
sample mean) for unemployed households in that state. Furthermore, a
$1,000 increase was associated with a
12 percent decline (compared with the
sample mean) in foreclosures for unemployed households. The authors found
similar effects when they examined the
cross-state differences in household
delinquency and foreclosure associated
with differences in extended unemployment benefits put in place during
the Great Recession.
Meltzer and his coauthors hypothesized that suppliers of credit would be
more prone to offer credit on attractive
terms in those states where household
income was stabilized through higher
unemployment insurance. Consistent with their hypothesis, the authors
found evidence of lower mortgage
spreads in those states with higher
maximum unemployment insurance
benefits. In addition, the authors examined cross-state variation in home
equity line of credit (HELOC) offers
using data from Mintel Comperemedia, a data provider that tracks credit
card offers reported by their sample of
households. The Mintel data set contains demographic information about
participating households, which permitted the authors to identify supply
effects with more precision. During
the sample period 2000-11, the authors
found that unemployed homeowners
in those states with more generous unemployment benefits were more likely
to receive a HELOC offer. In addition,
they found that all households in such
states were more likely to receive credit
card offers and that the offers were on
more generous terms, while the effects
were strongest for low-income households. Specifically, they found that for
every $1,000 of additional maximum
UI benefits, low-income households
were offered $900 in additional credit
and that interest rates were 50 basis
points lower.
BANK PROFITABILITY AND DEBIT CARD INTERCHANGE REGULATION: BANK RESPONSES TO THE DURBIN AMENDMENT
Mark Manuszak of the Federal Reserve Board presented his joint research
with Benjamin Kay and Cindy Vojtech
into the effects of the Durbin Amendment of the Dodd-Frank Act on bank
profitability. Among other provisions,
the Durbin Amendment, codified in
Regulation II, included ceilings on
interchange fees for debit card transactions for all banks with assets exceeding $10 billion. Manuszak cited industry participants who predicted that
banks would respond to the price ceiling by raising deposit account fees or
by cutting costs in other parts of their
operations. Broadly, the authors found
evidence that banks did raise deposit
account fees, although not enough to
offset the interchange income lost to the price
ceilings, but they found no evidence of
changes in operations to reduce costs.
The authors’ identification strategy was to exploit the exemption from
the interchange fee ceiling for banking
organizations with assets of less than
$10 billion, plausibly an exogenous
source of variation. Manuszak argued
that balance sheet differences between
banks above and below the $10 billion
cutoff after Regulation II took effect can be ascribed to the imposition of price ceilings.
Using data collected quarterly by
banking regulators to examine progressively broader revenue categories, the
authors found that interchange fee income — the narrowest category, which
includes both credit card and debit
card interchange income — declined
approximately 36 percent in response
to the price ceiling.4 Thus, banks did
not successfully make up for their loss
of interchange income on debit cards
by increasing interchange income on
credit cards (which were not subjected to price ceilings under the new
regulation). A broader category, other
noninterest income, fell by nearly 20
percent, suggesting that other sources
of noninterest income did not rise
enough to offset the fee ceiling.
The broadest category they
considered, total noninterest income,
was not affected significantly by the
ceiling. Manuszak explained that one
of the components of total noninterest income, deposit fees, increased by
4 percent to 8 percent. This offset
13 percent to 25 percent of the lost
interchange income. The authors
viewed this increase as evidence of
market power, with banks raising the
price of a bundled product in response
to a price ceiling on another product
in the bundle.

4 Specifically, the authors used data collected
about bank holding companies, the so-called
FR Y-9C.
Using the Federal Deposit Insurance Corporation’s Summary of
Deposits data set, the authors found
no evidence that Regulation II led to
branch closings. Nor did they find
any evidence from Call Report data of
other adjustments in operations to cut
costs in response to the lost revenue
from the ceilings; instead, the authors
found evidence of higher expenses,
perhaps an indication of higher quality,
according to the authors.
Finally, the authors examined in
more detail their assumption that the
$10 billion cutoff was actually exogenous. Informally, Manuszak argued
that while many provisions of the
Dodd-Frank Act included asset-size cutoffs — some at the $10 billion level — these provisions were imposed at many different times. Using
the actual date on which Regulation II
was imposed as the event date for the
present study significantly reduced the
likelihood that other provisions were
muddying their findings. Formally, the
authors tested for the possibility that
banks near the $10 billion cutoff might
have strategically limited asset growth
or reduced total assets to fall below the
threshold. Supporting their assumption that the $10 billion threshold was
exogenous, they found no evidence for
such behavior. BR


Research Rap

Abstracts of research papers produced by the economists at the Philadelphia Fed

Economists and visiting scholars at the Philadelphia Fed produce papers of interest to the professional researcher on banking, financial markets, economic forecasting, the housing market, consumer
finance, the regional economy, and more. More abstracts may be found at www.philadelphiafed.org/
research-and-data/publications/research-rap/. Full working papers are available at
http://www.philadelphiafed.org/research-and-data/publications/working-papers/.

Assessing DSGE Model Nonlinearities
The authors develop a new class of nonlinear time-series models to identify nonlinearities
in the data and to evaluate nonlinear DSGE
models. U.S. output growth and the federal
funds rate display nonlinear conditional mean
dynamics, while inflation and nominal wage
growth feature conditional heteroskedasticity.
They estimate a DSGE model with asymmetric
wage/price adjustment costs and use predictive checks to assess its ability to account for
nonlinearities. While it is able to match the
nonlinear inflation and wage dynamics, thanks
to the estimated downward wage/price rigidities, these do not spill over to output growth or
the interest rate.
Working Paper 13-47. S. Borağan Aruoba,
University of Maryland, Federal Reserve Bank
of Philadelphia Visiting Scholar; Luigi Bocola,
University of Pennsylvania; Frank Schorfheide,
University of Pennsylvania, National Bureau of
Economic Research, Federal Reserve Bank of
Philadelphia Visiting Scholar.
Natural Amenities, Neighborhood
Dynamics, and Persistence in the Spatial
Distribution of Income
The authors present theory and evidence
highlighting the role of natural amenities in
neighborhood dynamics, suburbanization, and
variation across cities in the persistence of the
spatial distribution of income. The authors’
model generates three predictions that they
confirm using a novel database of consistent-boundary neighborhoods in U.S. metropolitan
areas, 1880-2010, and spatial data for natural features such as coastlines and hills. First, persistent
natural amenities anchor neighborhoods to high
incomes over time. Second, downtown neighborhoods in coastal cities were less susceptible to
the suburbanization of income in the mid-20th
century. Third, naturally heterogeneous cities
exhibit spatial distributions of income that are
dynamically persistent.
Working Paper 13-48. Sanghoon Lee, University of British Columbia; Jeffrey Lin, Federal Reserve
Bank of Philadelphia.
Competition, Syndication, and Entry in the
Venture Capital Market
There are two ways for a venture capital
(VC) firm to enter a new market: initiate a new
deal or form a syndicate with an incumbent. Both
types of entry are extensively observed in the
data. In this paper, the author examines (i) the
causes of syndication between entrant and incumbent VC firms, (ii) the impact of entry on VC
contract terms and survival rates of VC-backed
start-up companies, and (iii) the effect of syndication between entrant and incumbent VC firms on
the competition in the VC market and the outcomes of incumbent-backed ventures. By developing a theoretical model featuring endogenous
matching and coalition formation in the VC market, the author shows that an incumbent VC firm
may strategically form syndicates with entrants to
maintain its bargaining power. Furthermore, an
incumbent VC firm is less likely to syndicate with
entrants as the incumbent’s expertise increases.
The author finds that entry increases the likelihood of survival for incumbent-backed start-up companies
while syndication between entrants and incumbents dampens
the competitive effect of entry. Using a data set of VC-backed
investments in the U.S. between the years 1990 and 2006,
the author finds empirical evidence that is consistent with
the theoretical predictions. The estimation results remain
robust after she controls for the endogeneity of entry and
syndication.
Working Paper 13-49. Suting Hong, Federal Reserve Bank
of Philadelphia.
A Tale of Two Commitments: Equilibrium Default
and Temptation
The author constructs a life-cycle model with equilibrium default and preferences featuring temptation and self-control. The model provides quantitatively similar answers to
positive questions, such as the causes of the observed rise in
debt and bankruptcies and the macroeconomic implications of
the 2005 bankruptcy reform, as does the standard model without temptation. However, the temptation model provides
contrasting welfare implications, because of overborrowing
when the borrowing constraint is relaxed. Specifically, the
2005 bankruptcy reform has an overall negative welfare
effect, according to the temptation model, while the effect
is positive in the no-temptation model. As for the optimal
default punishment, welfare of the agents without temptation
is maximized when defaulting results in severe punishment,
which provides a strong commitment to repaying and thus a
lower default premium. On the other hand, welfare of agents
with temptation is maximized when weak punishment leads
to a tight borrowing constraint, which provides a commitment against overborrowing.
Working Paper 14-1. Makoto Nakajima, Federal Reserve
Bank of Philadelphia.
The Perils of Nominal Targets
A monetary authority can be committed to pursuing an
inflation, price-level, or nominal output target yet systematically fail to achieve the specified goal. Constrained by the
zero lower bound on the policy rate, the monetary authority is
unable to implement its objectives when private-sector expectations stray from the target in the first place. Low-inflation
expectations become self-fulfilling, resulting in an additional
Markov equilibrium in which both nominal and real variables
are typically below target. Introducing a stabilization goal for
long-term nominal rates anchors private-sector expectations
on a unique Markov equilibrium without fully compromising
the policy responses to shocks.
Working Paper 14-2/R. Roc Armenter, Federal Reserve Bank
of Philadelphia.


Recall and Unemployment
Using data from the Survey of Income and Program
Participation (SIPP) covering 1990-2011, the authors document
that a surprisingly large number of workers return to their previous employer after a jobless spell and experience more favorable labor market outcomes than job switchers. Over 40% of
all workers separating into unemployment regain employment
at their previous employer; over a fifth of them are permanently separated workers who did not have any expectation of
recall, unlike those on temporary layoff. Recalls are associated
with much shorter unemployment duration and better wage
changes. Negative duration dependence of unemployment
nearly disappears once recalls are excluded. The authors also
find that the probability of finding a new job is more procyclical and volatile than the probability of a recall. Incorporating
this fact into an empirical matching function significantly
alters its estimated elasticity and the time-series behavior of
matching efficiency, especially during the Great Recession. The
authors develop a canonical search-and-matching model with
a recall option where new matches are mediated by a matching
function, while recalls are free and triggered by both aggregate
and job-specific shocks. The recall option is lost when the
unemployed worker accepts a new job. A quantitative version
of the model captures well the authors’ cross-sectional and
cyclical facts through selection of recalled matches.
Working Paper 14-3. Shigeru Fujita, Federal Reserve Bank
of Philadelphia; Giuseppe Moscarini, Yale University, National
Bureau of Economic Research.
Shrinkage Estimation of High-Dimensional Factor Models
with Structural Instabilities
In high-dimensional factor models, both the factor
loadings and the number of factors may change over time.
This paper proposes a shrinkage estimator that detects and
disentangles these instabilities. The new method simultaneously and consistently estimates the number of pre- and
post-break factors, which liberates researchers from sequential
testing and achieves uniform control of the family-wise model
selection errors over an increasing number of variables. The
shrinkage estimator only requires the calculation of principal
components and the solution of a convex optimization problem, which makes its computation efficient and accurate. The
finite sample performance of the new method is investigated
in Monte Carlo simulations. In an empirical application, the
authors study the change in factor loadings and emergence of
new factors during the Great Recession.
Working Paper 14-4. Xu Cheng, University of Pennsylvania; Zhipeng Liao, University of California, Los Angeles; Frank
Schorfheide, University of Pennsylvania, National Bureau of
Economic Research, Federal Reserve Bank of Philadelphia Visiting Scholar.

INSIDE: First Quarter 2014

Does the U.S. Trade More Widely Than It Appears? (page 1)
Given the importance of international trade for economic growth, why in any given year do few U.S. firms export their wares, and why are most U.S. goods not traded with most countries? Roc Armenter presents some intriguing evidence suggesting the U.S. does export most of its products to most countries, just not very often.

Location Dynamics: A Key Consideration for Urban Policy (page 9)
What determines where businesses and households locate? Location decisions can affect the economic health of cities and metropolitan areas. But as Jeffrey Brinkman explains, how firms, residents, and workers go about choosing where to locate can involve complex interactions with sometimes unpredictable consequences.

Brewing Bubbles: How Mortgage Practices Intensify Housing Booms (page 16)
Even before the Great Recession, housing market bubbles had been associated with severe financial crises around the world. Why do these booms and busts occur? Leonard Nakamura explains that part of the answer may lie with how mortgage lending practices appear to respond to rising and falling house prices in somewhat unexpected ways.

New Perspectives on Consumer Behavior in Credit and Payments Markets (page 25)
Mitchell Berlin summarizes new research on household finance presented at a joint conference sponsored by the Federal Reserve Bank of Philadelphia's Research Department and Payment Cards Center.

Research Rap (page 31)
Abstracts of the latest working papers produced by the Research Department of the Federal Reserve Bank of Philadelphia.

The Federal Reserve Bank of Philadelphia is one of 12 regional Reserve Banks that, together with the U.S. Federal Reserve Board of Governors, set and implement U.S. monetary policy. The Philadelphia Fed oversees and provides services to banks and the holding companies of banks and savings and loans in the Third District, comprising eastern Pennsylvania, southern New Jersey, and Delaware. The Philadelphia Fed also promotes economic development, fair access to credit, and financial literacy in the Third District.

Charles I. Plosser, President and Chief Executive Officer
Loretta J. Mester, Executive Vice President and Director of Research
Colleen Gallagher, Research Publications Manager
Dianne Hallowell, Art Director and Manager

Now Available:
The Federal Reserve Historical Inventory
and History Web Gateway
The Federal Reserve’s inventory of historical collections offers
students, researchers, and others online access to thousands of
documents and artifacts related to the Fed’s 100-year history
from sources across the Federal Reserve System, universities,
and private collections. To view the inventory, go to
http://www.federalreserve.gov/newsevents/press/other/other20120530a1.pdf.
More content and features will be added over time. Do you
know of materials that should be included? Information may be
submitted at http://www.federalreserve.gov/apps/contactus/
feedback.aspx.
Also available is the new Federal Reserve History Web Gateway.
Designed to encourage deeper reflection on the Fed’s role in the
nation’s economy, the gateway presents biographies, timelines,
photographs, and more that tell the story of the Fed’s purpose,
key economic events affecting U.S. history, and the people who
shaped the Fed. Go to http://www.federalreservehistory.org/.
On December 23, 1913, President Woodrow Wilson signed the
Federal Reserve Act, establishing the Federal Reserve System
as the U.S. central bank. Its mission is to conduct the nation’s
monetary policy; supervise and regulate banks; maintain the
stability of the financial system; and provide financial services to
depository institutions, the U.S. government, and foreign official
institutions. The Federal Reserve Board opened for business on
August 10, 1914, and on November 16, 1914, the 12 regional
Reserve Banks were open for business.
Congress designed the Fed with a decentralized structure. The
Federal Reserve Bank of Philadelphia — serving eastern Pennsylvania, southern New Jersey, and Delaware — is one of 12
regional Reserve Banks that, together with the seven-member
Board of Governors in Washington, D.C., make up the Federal
Reserve System. The Board, appointed by the President of the
United States and confirmed by the Senate, represents the public sector, while the Reserve Banks and the local citizens on their
boards of directors represent the private sector.
The Research Department of the Philadelphia Fed supports the
Fed’s mission through its research; surveys of firms and forecasters; reports on banking, markets, and the regional and U.S.
economies; and publications such as the Business Review.
