
Why Didn’t Canada Have a Financial Crisis?

Riding the Rails

Interview with Mark Gertler

VOLUME 17, NUMBER 4
FOURTH QUARTER 2013

COVER STORY

14 Risky Business? Insurance is boring … or at least it’s supposed to be

FEATURES

18 Directing Traffic: Development patterns and other factors have shaped transportation options in the Fifth District. Where does rail transit fit in?

22 Why Was Canada Exempt from the Financial Crisis? Canada has avoided crises for 180 years, while we have been prone to them. Can we recreate its stability here?

26 Credit Scoring and the Revolution in Debt: How a number changed lending (and got some Americans in over their heads)

28 The Future of Coal: Prospects for West Virginia’s highest-paying industry appear bleak

DEPARTMENTS

1 President’s Message/Rethinking the “Lender of Last Resort”
2 Upfront/Regional News at a Glance
6 Federal Reserve/The Last Crisis Before the Fed
10 Jargon Alert/Public Goods
11 Research Spotlight/The Case of the Missing Deflation
12 Policy Update/Allocating Airwaves
13 The Profession/The Future of History
32 Interview/Mark Gertler
38 Economic History/Water Wars
42 Around the Fed/The Power of Words
43 Book Review/Fortune Tellers: The Story of America’s First Economic Forecasters
44 District Digest/Urban Crime: Deterrence and Local Economic Conditions
52 Opinion/Down But Not Out

COVER CONCEPT: KATHY CONSTANT

Econ Focus is the economics magazine of the Federal Reserve Bank of Richmond. It covers economic issues affecting the Fifth Federal Reserve District and the nation and is published on a quarterly basis by the Bank’s Research Department. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia.

DIRECTOR OF RESEARCH
John A. Weinberg

EDITORIAL ADVISER
Kartik Athreya

EDITOR
Aaron Steelman

SENIOR EDITOR
David A. Price

MANAGING EDITOR / DESIGN LEAD
Kathy Constant

STAFF WRITERS
Renee Haltom
Jessie Romero
Tim Sablik

EDITORIAL ASSOCIATE
Lisa Kenney

CONTRIBUTORS
Jamie Feik
Charles Gerena
Santiago Pinto
Karl Rhodes

DESIGN
ShazDesign

Published quarterly by
the Federal Reserve Bank of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org
www.twitter.com/RichFedResearch

Subscriptions and additional
copies: Available free of
charge through our website at
www.richmondfed.org/publications
or by calling Research
Publications at (800) 322-0565.
Reprints: Text may be reprinted
with the disclaimer in italics below.
Permission from the editor is
required before reprinting photos,
charts, and tables. Credit Econ
Focus and send the editor a copy of
the publication in which the
reprinted material appears.
The views expressed in Econ Focus
are those of the contributors and not
necessarily those of the Federal Reserve Bank
of Richmond or the Federal Reserve System.
ISSN 2327-0241 (Print)
ISSN 2327-025X (Online)

PRESIDENT’S MESSAGE

Rethinking the ‘Lender of Last Resort’
Many people have come to think of central banks
as a necessary, almost inherent, part of a healthy
financial system. But for some functions the Fed
has adopted since it was created in 1913, there is no reason,
in principle, that they must be performed by a central bank.
An example is the function of “lender of last resort,”
a phrase often used to describe how central banks have
tended to respond to financial crises. When many market
participants withdraw funding at once, it can mean
financial distress for fundamentally solvent borrowers,
a situation often called a “run.” Temporary central
bank lending to sound institutions can, therefore, prevent
unnecessary failures.
Two articles in this issue of Econ Focus explore alternatives
to a central bank as lender of last resort. One discusses the
little-known Panic of 1914, which occurred before the Fed
had officially opened its doors. To end the crisis, the
Treasury issued fully collateralized emergency currency, and
private clearinghouses extended loan certificates to their
members. Both helped banks meet depositors’ demands.
While the Treasury played a strong role in this response, the
episode demonstrated that fast access to an asset-backed
currency — which need not come from the central government — could stem a run.
Another article in this issue explores why Canada has
been able to fend off financial crises almost entirely, while
the United States has been especially prone to them. Part of
the answer is that, from Canada’s inception, its banking
system was structured to be less vulnerable to shocks. Banks
could establish wide networks of branches to diversify their
portfolios. They were also allowed to issue new currency
backed by their own general assets to meet the demands of
depositors. Both features enabled banks to lend reserves
to one another in emergencies and expand the supply of
currency elastically.
These responses were effectively ruled out by laws in the
United States, as the articles discuss. The lack of diversification and reliable access to reserves made banks rather
vulnerable to local economic shocks and seasonal shifts in
currency demand — which often led to the very bank runs
that our currency and branching restrictions left the banking system ill-equipped to handle.
The Fed was given lender of last resort powers in order to
provide a more elastic currency to stave off panics in a way
that existing laws prevented the banking system from
doing on its own. Thus, the Fed was founded to ameliorate
the destabilizing effects of other policies.
There is a parallel to today. The Fed’s functions have
evolved, now with a much larger role in setting monetary
policy, the core mission of the Fed and other central banks.
But its lender of last resort powers remain. Many people
have argued that the Fed’s
“emergency” lending powers
under section 13(3) of the
Federal Reserve Act — which
enables the Fed to extend
broad-based loans to troubled
markets — must be preserved
to enable a response to
all manner of “runs” in the
financial system.
But one important source
of excessive risk, in my view,
is the government safety net itself, which includes the
lender of last resort. A government backstop reduces the
borrowers’ incentives to design contracts and credit market
instruments that are less vulnerable to runs. It also diminishes the incentives of lenders to monitor borrowers. That
makes crises more likely and the government’s liquidity
support more likely to be called upon. As a result, the
government’s financial “safety net” only grows over time.
Estimates by Richmond Fed researchers show that 57 percent of the entire financial sector is either explicitly
guaranteed by the government or covered by guarantees that market participants can
reasonably expect based on past statements and
actions. To the extent that the safety net results in excessive
risk-taking, the Fed’s present-day lender of last resort powers
exist to counteract instabilities created by flawed policies,
just as they did when the Fed was founded.
A better way to deal with financial instability would be to
scale back the incentives for excessive risk-taking. A smaller
government safety net would give markets greater incentive
to adopt their own protections against runs. This may not
rule out runs entirely. But history has convinced me that the
self-fulfilling nature of government backstops cannot reliably prevent runs either and can in fact cause instability. The
experiences of 1914 and Canada force one to consider that
there could be alternatives to a centralized role of lender of
last resort, some of which could conceivably be devised by
markets themselves under the right set of incentives.
In American history, we have often treated financial
system instabilities with reforms that don’t address the fundamental problem. My hope is that the Fed’s second century
will prove that policymakers are willing to take a harder look
at the true sources of instability in our financial system. EF

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND

UPFRONT

Regional News at a Glance

Bringing Home the Bacon

China’s WH Group Buys Virginia’s Smithfield Foods


Last September, shareholders of Virginia-based Smithfield Foods, the United States’ largest
pork producer, approved the company’s purchase by the Chinese company WH Group for
$4.7 billion.
The deal valued Smithfield at $7.1 billion, taking into
account assumed debt. Shareholders received $34 per
share, a 31 percent premium over the closing price the
day before the purchase was announced in May. The
acquisition was the biggest purchase ever of an
American company by a Chinese company.
WH Group, a holding company that also owns
China’s largest meat processor, purchased Smithfield
primarily to gain access to a reliable source of imports.
Pork accounts for more than three-quarters of China’s
meat consumption, and demand has multiplied as more
people move into the middle class. But the country’s
farm system is fragmented and inefficient, and processors have struggled to keep pace. At the same time,
Chinese consumers are increasingly wary of Chinese
brands after a history of food safety scandals, including
the discovery of the illegal additive clenbuterol in the
pork of a WH Group subsidiary.
The purchase is also part of WH Group’s efforts to
appeal to global investors, as was a name change in
January from Shuanghui International. WH Group is
currently owned by an assortment of private equity
firms, including the private equity arm of Goldman
Sachs, but the company started taking orders for a
$5.3 billion initial public offering in mid-April. The IPO
will be the largest in Asia since Japan Airlines raised
$8.5 billion in September 2012.

Smithfield Foods got its start in 1936 as a small meatpacking company in Smithfield, Va., the home
of “genuine Smithfield hams.”

For Smithfield, the deal is an opportunity to keep
growing despite a stagnant domestic market. Total U.S.
pork consumption has been flat for the past three
decades as per capita consumption has steadily
declined. “China is the fastest-growing and largest overseas market,” Smithfield CEO Larry Pope said in a
statement. “Increasing our sales to China is central to
our growth strategy.”
American pig farmers also stand to benefit from
increased exports; the North Carolina Pork Council
endorsed the deal for leading to “expanded overseas
sales and more opportunities for the thousands of
North Carolinians who work in the pork production
chain.” North Carolina is the United States’ second-largest pig farming state, with $2.56 billion in sales
in 2012.
But some politicians and farmers’ unions are worried
about the effects on U.S. food safety, national security,
and intellectual property. The Senate Agriculture
Committee held a hearing about the merger in July at
which senators expressed concern that the purchase
could harm the United States’ food supply, or that it
was a ploy by WH Group to appropriate Smithfield’s
expertise and technology in order to encroach on U.S.
producers’ share of other export markets.
The acquisition required federal clearance. The
Committee on Foreign Investment in the United States
(CFIUS), a multi-agency body, reviews proposed
foreign takeovers of U.S. companies for their potential
national security implications. Both Smithfield and
WH Group executives have stated that WH Group will
not seek to export Chinese pork to the United States.
In addition, WH Group has pledged that it will retain
Smithfield’s entire management team, continue to
operate all of Smithfield’s facilities, and honor the
company’s existing union contracts. CFIUS approved
the merger without conditions in early September.

Matthew Slaughter, associate dean of Dartmouth’s
Tuck School of Business, says that the deal is likely to
be good for both Smithfield and the economy as a
whole. “What often happens is that when a U.S.
company is acquired by a foreign company, the
foreign company’s connections and expertise in
other parts of the world will help pull exports from
America,” he explains.
And more generally, Slaughter says, foreign direct
investment can lead to higher productivity and wages
in the receiving country. “The preponderance of evidence for the United States and many other countries
shows that on net, cross-border foreign investment
generates large benefits for the countries that
are involved, and for a lot of the workers that are
involved too.”
Some Smithfield workers are nervous about the
change, but so far it appears to be business as usual.
WH Group’s chairman and CEO visited Smithfield’s
headquarters on their first official day as owners,
which happened to be employee appreciation day.
While the executives toured the facility, their new
employees ate burgers and barbecue (pork, of course)
on the lawn outside.
—JESSIE ROMERO

Rising Tide

Reforming Flood Insurance Causes Hardship for Homeowners
When Hurricane Betsy hit Louisiana in 1965,
it flooded thousands of homes and caused
$1.5 billion in damage ($11.1 billion in today’s dollars).
In the aftermath, Congress created the National
Flood Insurance Program (NFIP). Today the NFIP
covers about 5.6 million participants and insures
$1.29 trillion in assets. Almost exactly 40 years after
Betsy, when Hurricane Katrina slammed the Gulf
Coast, the NFIP was available to insure homeowners
— but claims from Katrina and subsequent storms
like Hurricane Sandy left the fund more than
$20 billion underwater. To make up the deficit,
Congress passed the Biggert-Waters Flood Insurance
Reform Act of 2012, which began phasing out subsidized flood insurance rates in 2013.
Subsidies were seen as necessary to encourage participation early in the NFIP’s history. They applied
largely to properties constructed before the creation
of a national flood insurance rate map in 1974.
Properties that were later remapped into higher-risk
flood zones could also be “grandfathered” into lower
rates if they had been built using the best practices
at the time. As a result, about 20 percent of NFIP
policyholders receive some form of subsidy. By
raising these rates to reflect current flood risks,
Congress hoped to put the program back on solid
financial footing. But affected homeowners balked at
the costs.


“It has had a tremendous impact,” says Tomp
Litchfield, president of the North Carolina
Association of Realtors. “I have seen rates in our area
go up by anywhere from 50 percent to well over 100
percent, or in some cases 200 percent.”
Some homeowners reported rate increases of
more than $20,000 a year. These sudden increases
trapped residents, making their longtime homes
both unaffordable and unsellable. Litchfield says he
has seen several home sales fall through because of
the uncertainty surrounding flood insurance rates.
In response to public outcry, the Senate and House
passed legislation to limit yearly increases. President
Obama signed the bill in March.
The fiscal challenges currently facing the NFIP
have been anticipated since its inception, according
to Erwann Michel-Kerjan, executive director of the
Risk Management and Decision Processes Center at
the University of Pennsylvania’s Wharton School of
Business. In a 2010 Journal of Economic Perspectives
article, Michel-Kerjan wrote that the NFIP was
“designed to be financially self-supporting, or close
to it, most of the time, but cannot handle extreme
financial catastrophes by itself.”
A key problem in providing flood insurance is that
the risks are highly correlated. In other insurance
markets, such as health or auto, the burden of risk
can be spread across a wide geographic area and
varying risk profiles, making it unlikely that all
policyholders will face catastrophic risks at the same
time. (See “Risky Business?,” p. 14.) But only homeowners in flood zones are likely to purchase flood
insurance, and catastrophic events, if they occur, are
likely to affect a large portion of policyholders at the
same time, placing a significant financial burden on
the insurer.

Members of the FEMA Urban Search and Rescue task force survey
New Orleans following Hurricane Katrina in 2005. Flood damage
claims from the devastating storm left the National Flood
Insurance Program about $17 billion in debt.

Raising rates to reflect true flood risks can
mitigate the financial risk to the insurer, as well as
address another problem: moral hazard. Insuring
against consequences can encourage greater risk-taking, and subsidies increase this danger by further
isolating policyholders from the costs of risky
behavior. In the case of flood insurance, subsidies
may encourage overbuilding in flood-prone areas.
Indeed, only about 1 percent of insured properties
are classified as “repetitive-loss properties” by
FEMA, but nearly all of them pay subsidized flood
insurance rates, and they have accounted for roughly
a quarter of all claim payments between 1978 and
2008. About 10 percent of these repetitive-loss properties have cumulative flood insurance claims that
exceed the value of the property itself.
In a 2013 report, the Government Accountability
Office (GAO) noted that even if the NFIP’s
debt were forgiven, rates would need to increase
“significantly” to build up a reserve fund for future
catastrophes.
“The financial reforms included in the [Biggert-Waters Act] could go a long way toward reducing
the financial exposure created by the program,” the
GAO concluded.
—TIM SABLIK

Making Amends

NC Offers Payments to Victims of Its Eugenics Program
North Carolina is now collecting claims from
victims of its 48-year forced sterilization program. The gathering of claims is part of a $10 million
compensation plan signed into law by Gov. Pat
McCrory last July. North Carolina is the first state to
offer compensation to victims of such programs.
About 7,600 men and women were sterilized
under North Carolina’s program, which ran from
1929 to 1977, with the last victims sterilized in 1974.
The state authorized the practice for “any mentally
diseased, feebleminded or epileptic inmate or
patient” in a public institution; in addition, social
workers could petition for the sterilization of members of the public. Sterilization could be done in the
best interest of the patient — or “for the public
good.” This phrase, combined with vague designations such as “feebleminded,” led to sweeping
implementation. Victims included children as young
as 10, illiterate teenagers, rape victims, and the poor.
North Carolina was not alone in its implementation of “eugenics” — a widespread movement that
believed certain conditions should be eliminated
from the population by sterilizing anyone who might
pass them on. More than 60,000 people suffered
under such programs in 32 states.
Why is North Carolina the only state, thus far,
offering compensation? The reason may be very
simple — its victims are likely to still be alive. Most
states abolished their eugenics practices after World
War II, while North Carolina sterilized 70 percent of
its victims after 1945. This was partly because in the
late 1940s, North Carolina began using sterilization
as a way to combat poverty, which led to an increase
in victims who did not reside in state institutions.
In 2002, North Carolina became one of the first
states to formally apologize for its eugenics program,

and in 2011, a gubernatorial task force on eugenics
was created. In 2012, the task force submitted recommendations that became the basis for the final
compensation program.
The $10 million will be distributed evenly among
all claimants. In order to be eligible, claimants must
have been alive on June 30, 2013, and must prove they
were involuntarily sterilized by the state (including
minors and incompetent adults who were sterilized
with parental/guardian consent). The N.C. Office of
Justice for Sterilization Victims was created to assist
with this process. All claims must be received by June
30, 2014, with payments expected to be made on June
30, 2015.
Information packets were mailed to 800 potential
claimants in November, though public information
officer Chris Mears says it is still too early to estimate
how many will file claims. According to a Washington
Post article in December, unnamed state officials estimated around 200 claimants, which would mean a
one-time, tax-free payment of $50,000 per person.
—LISA KENNEY

Lowering the Bar for Beauticians

SC Weighs Proposal to Reduce Cosmetology Training
In South Carolina, it takes 1,500 hours of training
in a state-approved beauty school to become a
licensed cosmetologist. It takes only 200 hours of
training to become an emergency medical technician.
The state’s Department of Labor, Licensing and
Regulation (DLLR) compared those requirements in
late 2013 after Gov. Nikki Haley asked state agencies
to evaluate the effects of their rules and regulations
on economic growth. Among many recommendations that emerged from that review, the DLLR
stated that South Carolina should reduce the hours of
training required to become a licensed cosmetologist.
The DLLR supported its recommendation by making a classic barriers-to-entry argument: Reducing the
required training would improve economic development by making it easier and cheaper for people to
obtain jobs as cosmetologists. According to testimony
by industry representatives before a state legislative
subcommittee, students must spend $16,000 to
$20,000 to obtain the necessary training.
Economists have long hypothesized that trade and
professional associations lobby for licensing regulations to erect occupational barriers to entry. These
barriers raise wages for licensed providers, primarily
by limiting competition and improving quality.
In a 2009 study, economists Morris Kleiner of the
University of Minnesota and Alan Krueger of
Princeton University attempted to measure the influence of occupational licensing on the labor market.
They agreed that occupational licensing “serves as a
means to enforce entry barriers.” They further found
that licensing in the United States is associated with


wages that are about 14 percent higher for workers
who are licensed.
Kleiner and Krueger also noted that licensing laws
have proliferated significantly since 1950. According
to the Council of State Governments, state licensing
laws covered less than 5 percent of U.S. workers in the
early 1950s. That share increased to at least 20 percent by 2000, according to data from the Census
Bureau and the Department of Labor. In a combined
measure of all government licensing, a Westat survey
found that approximately 29 percent of U.S. workers
needed licenses to hold their jobs in 2008.
“The occupational association has a significant
ability to influence legislation and its administration,
especially when opposition to regulatory legislation is
absent or minimal,” they wrote. Today, most states’
licensing requirements for cosmetologists are similar
to those in South Carolina. Cosmetologists and
owners of beauty schools have convinced state
governments that these requirements are necessary
to protect the public, but the DLLR disagrees.
“Most people style their own hair every day and
commercial hair dyes are sold to the public for home
use,” the DLLR noted in its report to Haley’s
Regulatory Review Task Force. “Nail technicians
essentially paint fingernails and toenails and apply
artificial nails. Estheticians practice skin care. These
are functions that many people perform at home
without any training.”
The South Carolina Association of Cosmetology Schools
did not accept invitations to comment for this article.
—KARL RHODES


FEDERAL RESERVE

The Last Crisis Before the Fed
BY TIM SABLIK

On the eve of World War I, the United States averted a financial crisis

“You gentlemen are to form
the bulwark against financial
disaster in this nation,”
Treasury Secretary William Gibbs
McAdoo told the members of the first
Federal Reserve Board as they took
their oath of office on Aug. 10, 1914.
The seven men — including McAdoo,
who served as the first chairman of the
board — would not have to wait long
for their first test. Less than two
months earlier, Austria’s Archduke
Franz Ferdinand and his wife were
assassinated by a Bosnian-Serb nationalist, plunging Europe into war.
The United States would also be
swept up in the conflict, but its first
battles were waged in financial markets.
European powers needed money to
finance fighting, and that meant gold.
At the time, the United States was a
debtor nation and a minor financial
power, but the warring European
nations could no longer trade with each
other. They quickly began selling their
holdings on the New York Stock
Exchange, converting dollars to gold.
In June and July, the United States had
nearly $70 million in net gold exports.
The effect of several European nations
calling in their debts simultaneously
created a significant external drain on
U.S. gold reserves, threatening to place
constraints on banks’ ability to lend
domestically.

The members of the first Federal Reserve Board are sworn in on Aug. 10, 1914.
Treasury Secretary William G. McAdoo is seated in the center of the front row.

It would have been a golden opportunity for the nascent Federal Reserve
to save the day. According to 19th century British economic writers Henry
Thornton and Walter Bagehot, a lender
of last resort could counter such a
threat by raising interest rates to stem
the outflow of gold to foreign nations
while lending freely to sound financial
institutions to satisfy domestic demand
for money. The Federal Reserve System
had the capacity to do just that, but
there was one problem: It wasn’t
actually up and running yet.

Panics and Reform
The tool that would rescue America in
1914 would come from the previous
banking crisis, the Panic of 1907. In
October of that year, a loss of confidence in certain New York trusts
prompted runs on those institutions.
Trusts held assets for clients, like banks,
but they were subject to fewer regulations and so were able to offer higher
returns and engage in more speculative
investments. The run on trusts quickly
spread to banks and became a full-fledged panic. While a crisis sparked by
trusts was new, banking panics in the
United States were not: Between 1863
(when the National Banking Act was
passed) and 1913 (when the Federal
Reserve Act was passed), there were six
major banking panics. The National
Banking Act had replaced competing
currencies issued by state-chartered
banks with a national currency backed
by government bonds. Prior to the Act,
there were thousands of state bank
notes in circulation, creating a nightmare for consumers to verify the value
and authenticity of any currency used

for trade. (See “The Counterfeiting Weapon,” Region Focus, First Quarter 2012.)

National bank notes helped standardize money in the United States, but they
possessed their own shortcomings. The new notes were highly inelastic, meaning
the banking system could not easily expand its supply of notes in response to
increased demand. Since the notes were backed by government bonds only, banks
needed to raise additional capital in order to increase their stock of
currency. Even assuming they could do so, the process of getting new notes was
slow. According to a May 2013 paper by Jon Moen, a professor of economics at
the University of Mississippi, and Ellis Tallman, a professor of economics at
Oberlin College and senior economic advisor at the Cleveland Fed, “National
bank note issues took nearly three weeks from request to delivery.”

Banks attempted to accommodate sudden demand for currency through
clearinghouses. These acted as something like a central bank for members: The
clearinghouses could issue loan certificates to them based on the value of
various types of assets that a bank held, not just government bonds. These
certificates could be used to pay obligations to other member banks, which
freed up currency to pay depositors. But this solution was imperfect because
the clearinghouse certificates were not currency and therefore could not be
issued to the general public. Clearinghouses also required the cooperation of
their members, making them unreliable lenders of last resort.

Describing each national bank as an “isolated island of money,” McAdoo
recounted the problems of that era in his autobiography: “There was no
certainty that any bank could depend on its neighbors for aid; consequently,
at the first sign of panic every one of them made a frantic effort to call in
loans and get together as much cash as possible — a procedure which invariably
made matters worse instead of better.”

Additionally, some financial institutions lacked access to clearinghouse
funds, as was the case with New York trusts in 1907. In response to these
shortcomings, there was a strong push for financial reform after 1907. Some
legislators wanted to create an asset-backed currency, which would have
allowed banks to issue currency backed by assets other than government bonds,
making the money supply more elastic. Others wanted a central bank to oversee
an elastic currency for the entire financial system. Unable to reach
agreement, Congress passed a temporary measure: the Aldrich-Vreeland Act of
1908. It allowed the Treasury to act as a lender of last resort during a
crisis via a reserve of $500 million in emergency currency. National banks
could obtain this currency by pledging various types of bonds or commercial
paper as collateral.

Although the Act addressed some of the shortcomings of clearinghouses, it was
a temporary fix. It would expire on June 30, 1914. Many contemporary observers
doubted that it would succeed at preventing panics should the need arise
during its short lifespan. Some were even more critical. Writing in the
Journal of Political Economy after the bill’s passage, University of Chicago
economist J. Laurence Laughlin called it “a Pandora’s box of unknown
possibilities for evil.”

The legislation established a committee to develop a more permanent solution,
ultimately the Federal Reserve Act, which was signed into law on Dec. 23,
1913. To provide time for setting up the new central bank, Congress extended
the Aldrich-Vreeland Act by a year to June 30, 1915. It would prove to be a
prescient decision.

McAdoo’s War
The crisis building in the summer of 1914 had the potential to be far worse
than the Panic of 1907. That panic had ended in part because gold inflows from
European investors looking to capitalize on low U.S. stock prices eased
liquidity constraints. But in the summer of 1914, war-torn Europe wasn’t
sending gold, it was demanding it. The major European stock markets shut down
at the outset of hostilities. Once Great Britain entered the war, London, the
financial center of the world, closed its doors, throwing the foreign exchange
market into chaos. European investors scrambled to draw funds from the one
major market still open: America. There was no guarantee the United States
would be up to the task, however.

“The United States was definitely not a financial power,” says William Silber,
a professor of economics at New York University’s Stern School of Business and
author of a book about the 1914 crisis. “The instability of the American
banking system factored into the minds of European investors, and given that
there were these huge gold drains in 1914, that led to speculation that the
Treasury might be forced to suspend convertibility.”

The New York Stock Exchange fell 6 percent on July 30, the biggest one-day
drop since March 14, 1907, as European nations rushed to withdraw funds before
such a suspension occurred. This posed a double problem. If such sales had
occurred domestically, then the money would have remained in the American
banking system to be loaned out to investors interested in buying, helping to
arrest the decline. But in this case, all of the money was flowing out of the
United States, draining the funds that banks could lend. Writing in 1915,
Harvard economist O.M.W. Sprague, a leading scholar of financial panics,
described the situation: “No banking system in a debtor country could be
devised which would be able to endure the double strain which was
imposed upon the banks of the United States by the wholesale dumping of securities by foreign investors on the New
York market. To supply gold to the full amount of the
purchase price and at the same time to grant loans to enable
purchasers to carry the securities was soon seen to be a
manifest impossibility.”
McAdoo was not content to sit idly as a financial crisis
loomed. He had always been drawn to the big stage of public action. As a young Tennessee lawyer, he embarked on a
quest to electrify the streetcar system in Knoxville.
Although the venture proved unprofitable and ultimately
bankrupted him, he was undeterred. Moving to New York,
he oversaw the successful construction of a railroad tunnel
beneath the Hudson River between New York and New
Jersey. The urge to leave a lasting public legacy was an
integral part of his character.
“I had a burning desire to acquit myself with distinction
and to do something that would prove of genuine benefit to
humanity while I lived,” he wrote. As he watched the U.S.
financial system slip toward panic in the summer of 1914,
that same instinct compelled him to take the stage once
more.
“He wanted to make the United States a financial superpower,” says Silber. In his book, he argues that McAdoo saw
the crisis as an opportunity to usurp London’s place as the
financial capital of the world.
Doing so would require improvisation, however. In early
August, the nominees to the Fed’s Board of Governors were
still locked in congressional hearings, and the 12 district
banks had yet to be chartered. Although McAdoo’s preference was to accelerate the process for opening the banks,
Benjamin Strong, a prominent banker who would become
the first president of the New York Fed and an early system
leader, argued in favor of delay. Strong worried the district
banks would not have enough gold reserves to meet
demands, and failing to pay out gold would do irreparable
harm to the reputation of the new central bank. McAdoo
would have to find another way.
He spoke with the governing board of the New York
Stock Exchange on the morning of July 31 and persuaded
them to suspend trading. The exchange would remain closed
until Dec. 12, 1914, an unprecedented length of time. While
the action halted the foreign gold drain, it also created a
domestic problem. The assets of many banks were held in
the stock exchange. Banks were cut off from a major source
of their liquidity, impairing their ability to conduct ordinary
operations. McAdoo’s answer to the foreign threat had sown
the seeds for a domestic banking crisis.

Chart: Aldrich-Vreeland Currency in Circulation, in millions of dollars, August 1914 through June 1915.
SOURCE: Comptroller of the Currency, Annual Report, Dec. 6, 1915

The Home Front
Without the Fed to provide liquidity to sound institutions,
McAdoo had to turn to the Aldrich-Vreeland Act. Loans
under the Act would not be bailouts, as any bank seeking
emergency currency would have to put up full collateral and
pay increasing interest to ensure the funds would be retired
quickly after the crisis passed.
McAdoo had actually invoked the Act to offer emergency
loans the previous summer, when legislation to reduce tariffs
prompted a decline in stock prices as businesses worried
about greater foreign competition. Although no banks
applied for the emergency currency, the stock market reacted favorably to the announcement. Almost exactly one year
later, McAdoo made a similar announcement: “There is in
the Treasury, printed and ready for issue, $500,000,000 of
currency, which the banks can get upon application under
that law.”
This time, banks were keenly interested in obtaining the
currency, but there were some problems. The Act allowed
national banks to apply for the emergency loans only if they
had already issued national bank notes equal to at least 40
percent of their capital. The restriction was intended to prevent overuse of the currency, but it meant that many major
banks could not participate at all. For example, National
City Bank in New York had $4 million bank notes in circulation in 1913 — only 16 percent of its capital. Additionally,
state-chartered banks and trusts could not borrow under
Aldrich-Vreeland, mirroring the lack of access that had
escalated the Panic of 1907.
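A back-of-the-envelope calculation (an illustration based on the two figures quoted above, not numbers reported in the article) shows how far short National City Bank fell of the threshold:

\[
\text{capital} \approx \frac{\$4\ \text{million}}{0.16} = \$25\ \text{million},
\qquad
0.40 \times \$25\ \text{million} = \$10\ \text{million}.
\]

In other words, the bank would have needed roughly $10 million of national bank notes in circulation, about two and a half times its actual issue, to qualify under the unamended rule.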
Recognizing the potential problem, McAdoo visited
Congress the same day he invoked the Act, asking legislators
to remove the restrictions.
“If depositors thought that certain institutions didn’t
have the currency, there might have been a run on those
institutions. So the fact that the major New York banks
could not have qualified for Aldrich-Vreeland money could
have been an impediment,” says Silber.
Once Congress amended the Act, the emergency notes
flowed to banks quickly (see chart). Just one week after
McAdoo’s announcement, banks had requested $100 million in Aldrich-Vreeland currency, and large trucks delivered
bags full of the preprinted notes to Treasury offices around
the country. But while the amendment opened access to
non-national banks, it did so with a caveat: Carter Glass,
chairman of the House Committee on Banking and
Currency and co-author of the Federal Reserve Act, added a
requirement that state banks and trusts become members of
the Federal Reserve System to obtain Aldrich-Vreeland
currency. The change would ultimately dissuade those institutions from signing up for the emergency currency, as they
were not interested in the regulations and costs associated
with being Fed system members.

Fortunately, they still had access to clearinghouse loans.
After the experience of 1907, the clearinghouses had
expanded to include trusts as members. In a March 2013
paper with Margaret Jacobson, a research analyst at the
Cleveland Fed, Tallman found that financial institutions
borrowed nearly as much from the clearinghouses as they
did through Aldrich-Vreeland. In 1914, approximately
$385.6 million Aldrich-Vreeland notes and $211.8 million
clearinghouse loan certificates were issued.
“The combination of clearinghouse loan certificates and
emergency currency formed a composite good that was a
more complete substitute for legal money which was thus
able to generate a temporary increase in the monetary base,”
Jacobson and Tallman wrote.
In the three months after the start of the crisis, the
money supply increased at an annual rate of 9.8 percent; in
contrast, the first three months of the Panic of 1907 saw the
money supply contract at a rate of nearly 11.6 percent. Since
they couldn’t be used as cash, clearinghouse loan certificates
alone did not effectively increase the money supply in previous crises.
“In 1914, Aldrich-Vreeland served that purpose,” says
Tallman. “Banks used it to give cash to their depositors.”
Thanks to the Aldrich-Vreeland notes, banks were able to
continue operations and honor all requests for withdrawals.
The emergency notes peaked at the end of October and
quickly declined as the threat of crisis subsided.

Lessons of 1914
With the help of the Aldrich-Vreeland Act, McAdoo was
able to avert a costly panic and improve America’s standing
as a world financial power. By 1915, New York was receiving
international loan requests, while London was embroiled in
war. Many had thought that a central bank would be
necessary for such a feat. Upon voting to amend the Aldrich-Vreeland Act in August 1914, Carter Glass had remarked,
“If the Federal Reserve System were fully organized there
would be no earthly necessity for the action proposed here
today.” But when tested during the Great Depression, the
Fed did not perform as well as Glass had envisioned. While
there were many elements that played into the severity of
the Depression, many economists point to the Fed’s failure
to provide adequate liquidity to the system as a contributing
factor because it allowed the money supply to contract.
Ultimately, it is difficult to say how much of Aldrich-Vreeland’s success was due to its design versus McAdoo’s
resourcefulness. Had McAdoo not pushed to amend the Act
at the outset of the crisis or had he not been as quick to take
action, things might have turned out much differently. But
the overall success in 1914 still points to other roads the
United States could have taken to solve the problem of
banking panics. Writing about the founding of the Fed,
Elmus Wicker, an emeritus professor of economics at
Indiana University, observed that “had the experience of the
1914 banking crisis been available earlier, the question of
panic prevention would have been resolved without the
necessity for a central bank.”
Other countries mitigated panics with systems very
different from the one the United States ultimately adopted
(see “Why Was Canada Exempt from the Financial
Crisis,” p. 23). A central bank fulfills many other functions in
addition to panic prevention, but if panics were the only
problem, the success of Aldrich-Vreeland suggests that there
may have been alternatives to the Fed.
EF

READINGS
Bordo, Michael D., and David C. Wheelock. “The Promise and
Performance of the Federal Reserve as Lender of Last Resort
1914-1933.” National Bureau of Economic Research Working Paper
No. 16763, February 2011.

Silber, William L. When Washington Shut Down Wall Street:
The Great Financial Crisis of 1914 and the Origins of America’s
Monetary Supremacy. Princeton, N.J.: Princeton University Press,
2007.

Jacobson, Margaret M., and Ellis W. Tallman. “Liquidity Provision
during the Crisis of 1914: Private and Public Sources.” Federal
Reserve Bank of Cleveland Working Paper No. 13-04, March 2013.

Sprague, O.M.W. “The Crisis of 1914 in the United States.”
American Economic Review, Sept. 1915, vol. 5, no. 3, pp. 499-533.

Moen, Jon, and Ellis W. Tallman. “Close but not a Central Bank:
The New York Clearing House and Issues of Clearing House Loan
Certificates.” Federal Reserve Bank of Cleveland Working Paper
No. 13-08, May 2013.

Wicker, Elmus. The Great Debate on Banking Reform: Nelson Aldrich
and the Origins of the Fed. Columbus, Ohio: The Ohio State
University Press, 2005.

The Economic Brief series includes web-exclusive essays
based on research by Richmond Fed economists.
Take a look at our April Economic Brief:

The First Time the Fed Bought GSE Debt
To access the Economic Brief and other research publications,
visit http://www.richmondfed.org/publications/research/

JARGON ALERT

Public Goods
BY TIM SABLIK

In the centuries before radar and GPS, lighthouses
guided ships safely through dangerous waters. Today,
they exist mostly as relics of the past, providing scenic
backdrops for postcards and photos. But lighthouses have
also fulfilled an important role in economics textbooks:
illuminating the concept of public goods.
There are two qualities that set public goods apart from
other goods. They are “nonrival,” meaning their use or consumption by one party does not inhibit their use or
consumption by another, and they are “nonexcludable,”
meaning that it is impossible (or too costly) to prevent any
consumers from using them. In the case of lighthouses,
one ship captain can make use of the light to avoid
danger without inhibiting other captains from doing the
same. Additionally, once a lighthouse is constructed, it is
impossible to block any ship on
the water from using its light.
Other textbook examples of
public goods include fireworks
displays, national defense, and
environmental quality.
Nonexcludability can create
a “free rider” problem. Imagine
there is an entrepreneur who
wants to build a new lighthouse.
He knows the lighthouse provides a valuable service to ship
captains, and he asks each captain to contribute to its
construction. The captains want to see the lighthouse built,
but they also know they can enjoy the benefits of the
completed lighthouse whether or not they paid for it. This
means they can choose to contribute nothing and hope to
“free ride” on the generosity of others. But if enough of the
captains think this way, then the entrepreneur will not raise
sufficient funds, and the lighthouse won’t be built. This has
led many economists to conclude that public goods represent a form of market failure that the government can
correct by providing them through tax revenue.
Paul Samuelson, who provided the modern economics
definition of public goods in a 1954 article, wrote in his
seminal textbook: “A businessman could not build [a lighthouse] for a profit, since he cannot claim a price from each
user. This certainly is the kind of activity that governments
would naturally undertake.”
But in the decades that followed, economists began to
challenge the assumption that public goods could only be
provided by the public sector. In a 1974 paper, Ronald Coase
investigated the history of lighthouses in England. He
discovered that, contrary to common assumption, many of
the lighthouses had been built and maintained by private
individuals. These individuals raised money for the lighthouses by collecting a fee from ship captains at ports. This is
an example of what economists would later call “tying”;
that is, lighthouse owners were able to tie the use of the
public good (the lighthouse) with the use of another good
for which private property rights are assigned (the port).
Any captain who refused to pay for the lighthouses could
easily be excluded from the port. Lighthouses in England
continue to be funded the same way today.
Changes in technology can also make it viable to
privately provide goods that once seemed nonexcludable.
When TV debuted, it was seen as a public good. Anyone
with a receiver in range of the signal could enjoy the broadcast, making it impossible to charge for TV and exclude
those who refused to pay.
But as technology improved,
private cable companies were
able to exclude nonpayers
by requiring proprietary cable
boxes to descramble their
signal.
Not all economists agree
that public goods should be
provided privately even if it is
feasible to do so, however.
Because such goods are also
nonrival, it is in theory costless to provide them to any number of consumers. In the case of TV broadcasts, Samuelson
argued that it was not in the best interest of society to
exclude any individuals from watching programs, since
doing so would only diminish society’s overall happiness.
But other economists countered that providing public
goods always entails costs. Economist Jora Minasian argued
in a 1964 article that TV broadcasters must determine
which programs to provide with finite resources. Making
that choice efficiently requires some sort of market pricing
system to determine the programs that will generate the
most utility for viewers. Minasian concluded that “the
theory of public goods is of little help in distinguishing
those goods that are best provided via community action
from those that should be left to individual decisions and
preferences.”
Research conducted by Coase, Minasian, and many
others during the 1960s and 1970s revealed that there were
in fact fewer examples of truly public goods than economists
initially thought. Rather, the public or private provision of
any good involves costs and benefits, and it may not always
be immediately clear which solution results in the best
outcome. Additionally, those tradeoffs can change over time
as technology improves.
EF


RESEARCH SPOTLIGHT

The Case of the Missing Deflation
BY TIM SABLIK

“The Phillips Curve is Alive and Well: Inflation and the NAIRU during the
Slow Recovery.” Robert J. Gordon. National Bureau of Economic Research
Working Paper No. 19390, August 2013.

In the late 1950s and early 1960s, economists observed that inflation and
unemployment tended to move in opposite directions, a relationship known as
the Phillips curve. Today, economists use a revised version of the Phillips
curve, called the New Keynesian Phillips curve, to forecast inflation.
Although there are different versions of the New Keynesian Phillips curve, a
common one relies on recent inflation and the gap between current
unemployment and the natural rate of unemployment (or NAIRU) to predict the
likely path of future inflation. Simply put, low recent inflation plus high
unemployment equals low inflation or even deflation in the near future.
Deflation is a particularly troubling prospect for central bankers, since
expectations of future deflation can cause consumers to delay spending,
compounding the problem and creating a deflationary spiral. In early 2009,
the threat of such deflation looked very real to many, but it never
materialized. Where did the deflation go?

In a recent working paper, Robert Gordon of Northwestern University argues
that “the puzzle of missing deflation is in fact no puzzle.” Gordon presents
a modified Phillips curve to show that the deflationary pressures of the
2007-2009 recession were not as great as the standard model predicted. This
“triangle” model relies on three main variables to forecast inflation:
inertia, demand, and supply. Inertia refers to expectations of future
inflation; people generally expect tomorrow’s inflation to be similar to
today’s. Demand refers to the gap between current employment and the NAIRU,
similar to the standard Phillips curve. Supply shocks, the final component,
include sudden changes in energy prices or in productivity growth. The New
Keynesian Phillips curve does not explicitly include such shocks. Gordon
argues that this limits the ability of the model to explain the movements of
inflation and unemployment. Depending on the combination of supply and demand
shocks, the relationship between unemployment and inflation can be either
negative (as it was in the 1960s, when unemployment was falling and inflation
was rising) or positive (as it was in the 1970s, when they rose together).
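In stylized notation (a simplified sketch for orientation, not Gordon’s exact specification, which uses longer lag structures and several distinct shock series), the contrast between the two approaches looks roughly like this:

\[
\pi_t = \pi_{t-1} - \gamma\,(u_t - u_t^{N}) + \varepsilon_t
\qquad \text{(standard curve: inertia and demand only)}
\]
\[
\pi_t = a(L)\,\pi_{t-1} + b(L)\,(u_t - u_t^{N}) + c(L)\,z_t + \varepsilon_t
\qquad \text{(“triangle” model: inertia, demand, and supply)}
\]

Here \(\pi_t\) is inflation, \(u_t - u_t^{N}\) is the gap between unemployment and the NAIRU, \(z_t\) stands in for supply-shock variables such as energy prices and productivity growth, and \(a(L)\), \(b(L)\), \(c(L)\) are lag polynomials that let past values matter.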
Gordon compares the forecasting performance of the New Keynesian Phillips
Curve and his triangle model in two simulations. In the first test, he uses
data from 1962 through 1996 to simulate forecasts for the first quarter of
1997 through the first quarter of 2013. The standard model predicts much
higher inflation than actually occurred, while the triangle model predicts an
inflation pattern very close to reality. Gordon then repeats the simulation
using data from 1962 through 2006, forecasting inflation for 2007 through
2013. This time, the standard model predicts ever-increasing deflation after
2007, while the triangle model again forecasts inflation very close to actual
observed values.

What explains the different predictions of the two models? Gordon points to
the role of supply shocks and the longer lags in the triangle model. “While
the high unemployment rate pushed the inflation rate down in 2009-2013, the
inflation rate was pushed up by higher energy prices and declining
productivity growth,” he writes. Because the New Keynesian Phillips Curve did
not include such explicit supply shocks, it incorrectly predicted deflation.

Still, Gordon’s initial model is not a perfect match: It forecasts inflation
that is too low for 2012-2013. He hypothesizes that this may be due to the
different inflationary pressures exerted by short-term versus long-term
unemployment. Some research has suggested that workers who have been
unemployed for six months or more may put less downward pressure on prices
and wages because they have less impact on the labor market. Employers may
view them as “unemployable,” either because their long absence from the
workforce signifies some hidden negative quality or because their marketable
skills have eroded during that period. The 2007-2009 recession was notable
for the dramatic increase in long-term unemployment, which could have
influenced the lack of deflation. Gordon tests this hypothesis by rerunning
his simulations with the triangle model using only short-term unemployment in
his employment gap measure. He finds that this specification more closely
predicts actual inflation through 2013. It supports the view that deflation
was less severe because a significant portion of the unemployment during the
recession was long-term.

In Gordon’s model, the elevated long-term unemployment also has implications
for NAIRU. He finds that NAIRU may have shifted from 4.8 percent in 2006 to
6.5 percent in 2013. “There may be less slack in the U.S. labor market than
is generally assumed, and it may be unrealistic to maintain the widespread
assumption that the unemployment rate can be pushed down to 5.0 percent
without igniting an acceleration of inflation,” he writes. Gordon concludes
that Phillips curve models should include both demand and supply shocks in
order to appropriately forecast and explain inflation behavior. EF

POLICY UPDATE

Allocating Airwaves
BY TIM SABLIK

The 21st century has witnessed the decline of broadcast media and the rise of wireless communication.
In the 1960s, nearly all TV-owning households in
the United States relied solely on over-the-air broadcast
transmissions; today, only about 7 percent do. In contrast,
data traffic in the United States from smartphones and
other wireless devices ballooned from 388 billion megabytes
per year in 2010 to nearly 1.5 trillion megabytes in 2012,
nearly a fourfold increase. The spectrum currently allocated to broadcast TV is highly desired by mobile providers
because of its ability to carry signals over long distances
and penetrate obstructions like buildings. In its 2010
National Broadband Plan, the Federal Communications
Commission (FCC) set a goal of making slightly over
40 percent of that spectrum available for new uses through
a new “incentive auction” process.
That auction was scheduled to take place this year but
was delayed until mid-2015 due to its complexity. While the
FCC has conducted nearly 100 spectrum auctions since
1994, they were mostly conventional “one-sided” auctions —
participants bid on a predetermined supply of spectrum.
The incentive auctions will be “two-sided” — one auction to
determine supply and one to determine demand. In the
supply auctions, better known as “reverse” auctions, TV
licensees will place bids signaling the amount of money
they would accept either to cease broadcasting or to share
spectrum with another station. TV stations also have the
option to continue broadcasting. The FCC will then move
the spectrum allocations of the remaining TV stations to
create a continuous band of free spectrum to offer in the
demand (or “forward”) auctions.
The primary challenge with this new approach is coordinating both auctions. In order to pay for the spectrum
offered by stations in the reverse auction, the FCC must
raise enough money in the forward auction. At the same
time, the FCC does not know how much supply it has to
offer in the forward auction until it conducts the reverse
auction.
Although the FCC will not announce the official rules for
its auction until later this year, economists have suggested a
few solutions to the coordination challenge. One approach
would be to conduct both auctions simultaneously using a
descending clock auction format. The FCC would set an
initial price and check which participants are willing to sell
(in the case of the reverse auction) or buy (in the case of the
forward auction) at that price. The price would then move
down or up in regular intervals until there are no participants left in the auction. The FCC could use this data
to construct supply and demand curves and calculate the
optimal reallocation of spectrum.
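As a toy illustration of the clock mechanics described above (a hypothetical sketch, not the FCC’s actual auction rules; the station names, reserve prices, bidder values, and stopping conditions are invented for the example), paired clocks might look like this in code:

# Toy sketch of paired clock auctions: a descending "reverse" clock for TV
# stations offering to give up spectrum, and an ascending "forward" clock for
# wireless bidders buying it. All values and rules here are hypothetical.

def reverse_clock(station_reserves, start_price, step):
    """Tick the offer price down and record which stations are still willing
    to sell at each price (a station drops out once the price falls below its
    reserve)."""
    price = start_price
    path = []
    while price > 0:
        still_selling = [s for s, reserve in station_reserves.items()
                         if price >= reserve]
        path.append((price, still_selling))
        if not still_selling:       # every station has dropped out
            break
        price -= step               # clock ticks down
    return path

def forward_clock(bidder_values, start_price, step, licenses_available):
    """Tick the asking price up until demand no longer exceeds the number of
    licenses on offer; returns the clearing price and remaining bidders."""
    price = start_price
    while True:
        demanders = [b for b, value in bidder_values.items() if value >= price]
        if len(demanders) <= licenses_available:
            return price, demanders
        price += step               # clock ticks up

# Hypothetical example: three stations' minimum acceptable payments and three
# wireless bidders' willingness to pay, each for one generic license.
stations = {"WAAA": 120, "WBBB": 90, "WCCC": 60}
bidders = {"CarrierX": 150, "CarrierY": 110, "CarrierZ": 70}

supply_path = reverse_clock(stations, start_price=150, step=10)
clearing_price, winners = forward_clock(bidders, start_price=50, step=10,
                                        licenses_available=2)
print(supply_path[-1], clearing_price, winners)

Comparing the two price paths is, in effect, tracing out the supply and demand curves the article mentions; the real process adds many complications, including the repacking of remaining stations described above.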


The advantage of the more complex two-sided auction is
that it allows for the new spectrum band plan to be market-determined. In previous auctions, like the 2008 auction for
spectrum freed up by the nationwide switch from analog to
digital TV, the FCC split available spectrum into blocks of
varying size and geographic coverage.
In a 2013 paper, University of Maryland economics
professor Peter Cramton found that prices in the 2008
auction were significantly higher for blocks with larger
geographic coverage. Wireless companies were mostly
interested in assembling continuous coverage, he argued,
and while bidders could assemble such coverage from small
licenses, that carried greater risk. The bidder might fail to
acquire all the necessary pieces for the desired package or be
forced to pay higher prices to holdouts on key licenses. The
incentive auctions could mitigate these problems by offering
generic licenses in the initial forward auction, allowing
bidders simply to signal the quantity and distribution of
spectrum they desire, leaving the assignment of specific
frequencies for later.
The FCC has said that its role as an auction facilitator
will help bidders overcome the costs of negotiating with
hundreds of license holders, but not everyone agrees it is the
best solution.
“The system is extremely rigid because of the nature of
the rights that have been assigned,” says Thomas Hazlett, a
professor of law and economics at George Mason University
who contributed to the National Broadband Plan.
“Those rights are not spectrum ownership rights, but rather
very truncated rights to do particular things.”
Hazlett argues that even if TV licensees were willing to
sell their holdings to wireless companies in the market,
those companies could only use the new spectrum for TV
broadcasting because of the way the licenses were originally
structured. Hazlett applauds the FCC’s decision to offer
flexible-use licenses in the incentive auctions, giving buyers
more control over how the spectrum is used in the future,
but he would take it one step further. The FCC has the
authority to issue overlay licenses to the TV band, which
would allow TV broadcasters to continue broadcasting if
they want, but would also grant rights for other uses, allowing them to sell their licenses freely to non-broadcasters
outside of an FCC auction process.
The FCC considered using overlays in its National
Broadband Plan but dismissed them as too costly for bidders
to negotiate with licensees. But Hazlett argues that the
incentive auctions also entail costs that the FCC did not
consider, such as administrative and legislative delays.
“It’s an economic problem we face,” he says, “not an
engineering one.”
EF

THE PROFESSION

The Future of History
BY JESSIE ROMERO

In the mid-2000s, when economic stability seemed like
it was here to stay, a well-regarded economist applied
for a National Science Foundation grant to study
economic crises. The application was rejected because, in
the words of one referee, “We know those things don’t
happen anymore.”
That referee was soon proven wrong, of course, and his
comment illustrates what some see as a serious problem: the
waning influence of economic history as a discipline, which
seems to have left the economics profession without the historical context it needs to interpret current events. Many
economic historians feel, as Robert Whaples of Wake Forest
University wrote in a 2010 article, that it is “a neglected field,
a field on the margins.”
But the perception of neglect is nothing new — and
might not be accurate, according to Price Fishback, an
economic historian at the University of Arizona and executive director of the Economic History Association (EHA).
“We’ve been saying that economic history is on the decline
ever since I’ve been in the field.” (Fishback completed his
Ph.D. in 1983.) “But the field looks pretty stable to me.”
Certainly, there are worrying signs. In the 1960s and
1970s, it was common for university faculties to have at least
one economic historian, and few economics students could
escape graduate school without studying economic history.
Today, economic history has all but vanished from graduate
school curricula. And job opportunities for economic
historians are slim. Out of 256 recent job postings on the
American Economic Association’s website, only nine listed
economic history as a preferred specialty. “It is a small
market for economic historians,” admits Fishback.
“Everybody who goes out on the job market as an economic
historian typically goes out as something else as well.”
Economic historians are disappearing from history
departments as well. Between 1975 and 2005, the number of
history departments in the United States with an economic
historian fell from 55 percent to 32 percent, despite the fact
that the number of history professors overall more than
doubled.
In part, the shift reflects the increasing importance of
mathematics in economics, Fishback says. “When I started
grad school back in the 1970s, there were people who were
taking calculus courses at the last minute. These days you
pretty much have to be a math minor to enter an economics
Ph.D. program.”
The specialty also was changed by the “cliometrics revolution” that began around 1960. Cliometrics is the
application of rigorous quantitative techniques and economic theory to historical analysis. (“Clio” was the Greek muse
of history.) Exemplified by the research of future Nobel


laureates Robert Fogel and Douglass North into topics such
as railroads, slavery, and the importance of institutions,
cliometrics quickly became dominant.
But there were unintended consequences: Because
economic history was now using the same tools as other
specialties, separate courses were deemed unnecessary and
economic historians were no longer considered a distinct
group. Cliometrics also made economic history less accessible to historians who lacked formal economics training.
At the same time, the use of quantitative approaches has
spurred new interest in economic history, says Philip
Hoffman, an economic historian at the California Institute
of Technology and the current president of the EHA.
“Economic theory has helped revive economic history.
There is fascinating research being done by people outside
of economic history but who use historical data.” And
Fishback notes that work that might be categorized as
economic history is increasingly published in top-tier mainstream journals.
Lately, interest in economic history has been especially
high as policymakers and economists have tried to understand the financial crisis and recession. And economists with
historical expertise have been prominent in policymaking.
Christina Romer of the University of California, Berkeley,
who chaired the Council of Economic Advisers in 2009-2010, has written extensively about the Great Depression,
as has former Fed Chairman Ben Bernanke.
Many people also believe that reinstating economic
history courses in graduate programs could help economists
recognize the next crisis sooner. As Kevin O’Rourke of
Oxford University wrote on the economics website VoxEU,
“Historical training would immunise students from the
complacency that characterised the ‘Great Moderation.’
Zoom out, and that swan may not seem so black after all.”
Of course, not everyone agrees that more training in
history is the cure for what (if anything) ails economics.
As Harvard University economist Edward Glaeser wrote in a
chapter for the 2012 book What’s the Use of Economics, knowledge of history is important for economic policymaking, but
graduate school isn’t necessarily the place to impart it.
“We should trust Ph.D. programs to deliver competent
researchers and hope that later experience provides the
judgment that the wider world demands.” Others believe
that the more important curriculum change is greater study
of credit markets and financial frictions.
Either way, the financial crisis and recession are starting
to recede into history themselves. But even if the current
vogue for economic history proves to be a blip, the economy
will continue to present questions that cannot be answered
fully without turning to the past.
EF

COVER STORY

BY JESSIE ROMERO


In ancient Babylon, around 1800 B.C., merchants transporting their goods to markets in the Mediterranean and Persian Gulf had to worry about thieves, pirates, and sinking ships. So they
developed a system to share the risk of transport with their
investors: Merchants paid a premium above the going interest
rate in exchange for a promise that the lender would cancel the
loan in the event of a mishap.
By collecting premiums from many merchants, an investor could afford to cover
the losses of a few.
Nearly 4,000 years later, the basic
model of insurance hasn’t changed much,
although its size and scope have increased
dramatically. Today, there are more than
3,700 insurance companies in the United
States alone, selling insurance on everything from crops to vacations to fantasy
football teams. Insurance premiums
(excluding health insurance) totaled $1.1
trillion in 2012, about 7 percent of GDP.
It’s also possible to buy insurance on
insurance itself. This practice, known as
reinsurance, helps insurance companies
limit their exposure to risk and free up capital for other uses. But it also increases the
interconnectedness of the insurance industry, which, in the wake of the financial
crisis, has some regulators concerned
about the potential for systemic risk.
Those concerns are exacerbated by the
recent trend of insurance companies purchasing reinsurance from companies they
own — creating a so-called “shadow insurance” industry. Are tools intended to help
insurance companies manage their risks
actually making the industry as a whole
more risky?

Insurance and the Economy
In general, there are two types of insurance
companies apart from health insurers:
property/casualty companies and life
insurance companies. Property/casualty
companies sell products designed to
protect consumers and businesses from
financial loss due to damage or liability.
Life insurance companies sell life, disability, and long-term care insurance, as well as
annuities and other financial products that

provide individuals with an income stream
during retirement.
Although property/casualty companies
far outnumber life insurance companies —
there are more than 2,700 of the former in
the United States, compared with about
1,000 of the latter — by most measures the
life insurance sector is much larger. Life
insurance accounts for 58 percent of written
premiums, and life insurance companies
hold $5.6 trillion in assets, compared with
the $1.6 trillion held by property/casualty
companies. Many insurance liabilities are
long term in nature, but companies must also
be able to pay out claims quickly and sometimes unexpectedly; they thus tend to invest in stable, liquid assets.
About 70 percent of property/casualty insurers’ assets and
54 percent of life insurers’ assets are invested in bonds.
(See chart.) That makes them a major source of funding for
corporations, state and local governments, housing, and the
federal government. For example, life insurance companies
own 18 percent of all outstanding foreign and corporate
bonds.
Insurance companies have a lot to invest because of
“float,” which is money that has been collected in premiums
but not yet paid out in claims — or “free money,” in the
words of Warren Buffett, whose company Berkshire
Hathaway owns GEICO as well as several other smaller
insurance companies. Particularly in the property/casualty
sector, float is the primary source of profit; many companies
show a loss on underwriting, meaning that they collect less
in premiums than the total of their current expenses and
expected future payouts. State Farm, the largest insurer in
the United States, incurred an underwriting loss in nine of
the past 12 years, while still earning billions in net profit.
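A back-of-the-envelope example, with invented figures, shows how that arithmetic can work: an insurer can spend more than it collects in a given year and still come out ahead on the income its float earns.

# Invented figures, purely to illustrate the float arithmetic described above.
premiums = 1_000      # $ millions collected this year
claims = 980          # $ millions paid or set aside for claims
expenses = 80         # $ millions of operating costs
float_held = 2_500    # $ millions collected earlier but not yet paid out
yield_rate = 0.04     # assumed return earned by investing the float

underwriting_result = premiums - claims - expenses   # -60: an underwriting loss
investment_income = float_held * yield_rate          # +100 earned on the float
print(underwriting_result, underwriting_result + investment_income)   # -60 40.0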

Insuring the Insurers
State Farm and other property/casualty companies will
insure your home against the risk of damage from hail,
lightning, wind, or fire, but they won’t insure against flood
damage. That’s because insurers depend on the “law of large
numbers” to limit their exposure to risk. The law of large
numbers is a statistical rule stating that the larger the
number of individual risks, the more likely it is that the average outcome will equal the predicted value. For example,
flipping a coin 20 times is more likely to yield results close to 50-50
than flipping it twice. So even if it’s impossible to predict
when lightning will strike a single home, it is possible to
determine the average likelihood of a lightning strike across
many homes. By selling a large number of policies, insurers
are able to calculate with some confidence how much they
are likely to pay out to the entire pool, and set their premium levels accordingly. That’s not the case with a flood or
other catastrophic event, which could cause an unpredictable amount of damage to many homes within the same
geographic area at the same time.
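A small simulation makes the point concrete. The 1 percent loss probability and $200,000 claim size below are invented, not industry figures; the sketch only shows why a large pool of independent risks produces a predictable average loss while a correlated one does not.

# A toy simulation of the law of large numbers in insurance, using invented numbers.
import random

random.seed(1)
P_LOSS, CLAIM = 0.01, 200_000   # assumed chance and size of a claim per home

def avg_independent_loss(n_homes):
    """Average payout per policy when each home's risk is independent."""
    total = sum(CLAIM for _ in range(n_homes) if random.random() < P_LOSS)
    return total / n_homes

for n in (20, 1_000, 100_000):
    print(f"{n:>7} independent homes: average loss per policy ${avg_independent_loss(n):,.0f}")
# The expected loss is $2,000 per policy; only the large pools land near it.
# A flood that hits every home at once behaves like a single coin flip:
# the pool pays either $0 or $200,000 per policy, no matter how big it is.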

Sometimes the law of large numbers isn’t
enough protection, as proved to be the case
in 2005 when hurricanes Katrina, Rita, and
Wilma — three of the top 10 most expensive
hurricanes in U.S. history — all struck the
southeast United States within a few months
of each other, causing $80 billion in insured
losses. (The National Flood Insurance
Program, which is run by the federal government, paid out an additional $18 billion.)
To help hedge the risks of such large claims,
property/casualty insurers buy insurance for
themselves — a practice known as reinsurance. (Life insurance companies also
purchase reinsurance, but property/casualty companies
make up the majority of the market.) Reinsurers covered
about 45 percent of the losses resulting from the 2005
hurricane season, and 60 percent of the losses related to
the Sept. 11, 2001, terrorist attacks. “Reinsurance quite
literally makes the property/casualty market possible,” says
Tom Baker of the University of Pennsylvania Law School.
In a reinsurance contract, the company that wishes to
purchase reinsurance is called the cedent. The cedent pays a
premium to cede certain risks to the reinsurer. In exchange,
the reinsurer promises to pay some portion of the cedent’s
losses, or to pay for losses once a certain threshold is
reached. The cedent is then allowed to claim a credit on its
financial statements for the ceded risks, either as an asset or
a reduction in liabilities. This enables primary insurers to
write more policies and to take on risks that they might not
otherwise insure. That trickles down to consumers and businesses in the form of cheaper policies and more insurance
for new or untested ventures.
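In code, the two arrangements look something like the sketch below. The ceded share and the retention level are invented; "quota share" and "excess of loss" are the standard industry names for these treaty types.

# Hypothetical split of a loss between the cedent and its reinsurer under the
# two arrangements described above; the ceded share and retention are invented.

def quota_share(loss, ceded_fraction=0.40):
    """Reinsurer pays a fixed share of every loss (a quota share treaty)."""
    reinsurer = loss * ceded_fraction
    return loss - reinsurer, reinsurer        # (cedent's share, reinsurer's share)

def excess_of_loss(loss, retention=50.0):
    """Reinsurer pays only the portion of a loss above the cedent's retention."""
    reinsurer = max(0.0, loss - retention)
    return loss - reinsurer, reinsurer        # (cedent's share, reinsurer's share)

for loss in (30.0, 200.0):   # $ millions
    print(f"loss {loss}: quota share {quota_share(loss)}, excess of loss {excess_of_loss(loss)}")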
Reinsurers manage their risks by writing policies for
companies all over the world, since the risk of an earthquake
in New Zealand is uncorrelated with the risk of a hurricane
in the United States. Reinsurers also are located all over the
world: In 2011, U.S. insurance companies purchased reinsurance from nearly 3,000 reinsurers domiciled in more than
100 foreign jurisdictions. Still, the market is dominated by a small number of large companies; the 10 largest nonlife reinsurers account for about half of global premiums. And the majority of reinsurers tend to be located in just a few countries: Germany, Switzerland, the United States, and especially Bermuda, home to 16 of the world’s 40 largest reinsurers, where less stringent regulatory and capital requirements make it relatively easy to set up a reinsurance company.

[CHART: Insurance Companies Asset Allocation. Property/Casualty Insurers ($1.6 trillion) and Life Insurers ($5.6 trillion), by asset category: Mortgage-backed Securities; State & Municipal Bonds; Federal Government Bonds; Corporate, Foreign & Other Bonds; Real Estate & Loans; Equities; Cash & Other Investments.
NOTE: Life insurers’ assets include both general account and separate account assets.
SOURCES: Federal Reserve Bank of Chicago; American Insurance Association; Econ Focus calculations]

Is Reinsurance Risky?
Since the 2008 financial crisis, the insurance industry has
come under new scrutiny from both U.S. and international
regulators. That’s largely because American International
Group, the second-largest insurance company in the United
States, received a government bailout in 2008. AIG didn’t
get into trouble through its traditional insurance business —
the problems stemmed from its derivatives and securities
lending operations — but the company’s near-collapse
underscored the role that large nonbank institutions play in
the financial sector. Since the crisis, both AIG and
Prudential have been designated “systemically important
financial institutions” by the Financial Stability Oversight
Council, making them subject to additional supervision.
Prudential is the third-largest U.S. insurance company, and is
closely connected to capital markets through its derivatives
and securities lending portfolios, among other activities.
(See “First Designations of ‘Systemically Important’ Firms,”
Econ Focus, Third Quarter 2013.)
Regulators also are looking specifically at reinsurance.
For example, the Federal Insurance Office, a new division of
the Treasury Department that was created by the Dodd-Frank Act, has been charged with preparing a report for
Congress on the “breadth and scope” of the reinsurance
industry. (The report was due in September 2012 but has not
yet been completed.) The International Association of
Insurance Supervisors (IAIS), which has developed a framework for identifying “global systemically important
insurers,” is considering developing a separate methodology
for reinsurers.
The potential for systemic risk stems from the degree of
interconnectedness created by reinsurance. If an insurance
company writes a policy and purchases reinsurance for that
policy, it still carries the risk of whether the reinsurance
counterparty will pay when it is supposed to. And the reinsurer might purchase reinsurance itself, which creates
additional counterparty risk, explains Anna Paulson, the
director of financial research at the Chicago Fed. “Part of
the issue has to do with opacity and being able to see who
ultimately bears the risk,” she says.
Still, research suggests that while the failure or insolvency of a major reinsurer could lead to a crash within the
insurance industry, the damage would be unlikely to spill
over to the rest of the economy, as J. David Cummins and
Mary Weiss of Temple University concluded in a 2010 working paper. (Cummins and Weiss do note that the risk
increases if reinsurers are heavily involved in noninsurance
activities, such as derivatives trading or asset lending.) The
IAIS reached the same conclusion in a 2012 report, noting
that in reinsurance the payments are strictly tied to the
occurrence of an insured event. Unlike in banking, there is
no overnight lending, and there are no payments or cash
calls on demand, either of which might spark a run on reinsurance. (Financial institutions that rely on short-term loans,
such as overnight loans, to fund longer-term assets face significant liquidity risks if their counterparties become
unwilling to provide or roll over the loans.) Between 1980
and 2011, 29 reinsurance companies failed with minimal
impact on the broader insurance industry.
“Imagine these institutions are running a marathon,” says
Etti Baranoff, a professor of insurance and finance at
Virginia Commonwealth University. “The banks are holding
hands, so if one falls down, it pulls the others down with it.
But the reinsurers are just running beside the insurance
companies. If one of them falls down, the insurer might
run a little slower, but he could still make it to the
finish line.”

A New Shadow Industry?
While traditional reinsurance might not be of great concern
to regulators, the same is not true of “captive” reinsurance, a
vehicle used primarily by life insurance companies that has
become popular over the past decade. A captive reinsurance
company is a wholly owned subsidiary of a primary insurance company, usually domiciled in a different state or
offshore. The primary insurer cedes a block of policies to the
captive, which often has lower capital and collateral requirements than its parent company. As with third-party
reinsurance, this reduces the liabilities on the books of the
parent company and allows it to make other use of the
capital it had set aside for those liabilities, such as paying
dividends to shareholders or issuing securities. The
difference is that the amount of risk hasn’t been reduced.
“Normally, with reinsurance, you’re actually transferring risk
off of your balance sheet and out of the consolidated organization. But captive insurance often doesn’t provide the
benefit of a risk transfer. The risk stays within the consolidated organization,” says Paulson.
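A stylized balance-sheet comparison, with invented liability figures, captures the distinction: ceding to an outside reinsurer shrinks the risk the consolidated group holds, while ceding to a wholly owned captive does not.

# A stylized consolidated view, with invented figures: ceding to an outside
# reinsurer moves risk off the group's books; ceding to a wholly owned
# captive leaves the consolidated group holding the same risk.

parent_liabilities = 100.0   # $ billions of policy liabilities before ceding
ceded = 30.0                 # $ billions ceded to a reinsurer

def consolidated_risk(ceded_to_captive):
    """Risk retained by the consolidated group after ceding."""
    parent_retained = parent_liabilities - ceded
    captive_assumed = ceded if ceded_to_captive else 0.0   # a captive stays inside the group
    return parent_retained + captive_assumed

print("Third-party reinsurance:", consolidated_risk(False))   # 70.0
print("Captive reinsurance:    ", consolidated_risk(True))    # 100.0, unchanged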
Captives have been used by noninsurance companies
since the 1960s as a way to self-insure against risks that
might be very expensive to insure through a third party; oil
companies, for example, have used captives to insure
themselves against environmental claims. Because the
parent company is the only one at risk, capital requirements
are relatively low, which makes them cheap to set up. And
the parent company can claim significant tax deductions for
the premiums it pays to its new subsidiary.
Life insurers got into the game in the early 2000s, after
the National Association of Insurance Commissioners
issued new guidelines for state regulators that required life
insurers to hold much higher reserves on certain term and
universal life policies. But if an insurance company could
cede some of those policies to captives, it could take credit
for reinsurance and reduce its required reserves. Around the

same time, states began changing their rules to allow life
insurers to establish captives, led by South Carolina in 2002.
The practice grew quickly. In 2002, companies with captives ceded them 2 cents of every dollar insured. By 2012,
they ceded 25 cents of every dollar insured. Over the same
period, the total amount of “shadow insurance” grew from
$11 billion to $364 billion, according to research by Ralph
Koijen of the London Business School and Motohiro Yogo
of the Minneapolis Fed.
States such as South Carolina, Utah, Hawaii, and especially Vermont, which is the largest captive domicile in the
United States, began courting captive insurers in an effort to
compete with Bermuda for the business travel, white-collar
jobs, and tax revenue they create. Since 2005, the captive
industry has contributed more than $100 million to South
Carolina’s economy, according to the South Carolina
Department of Insurance. “South Carolina has a strong
interest in economic development and job growth and the
captive sector does just that,” the department said via email.
More than 30 states and Washington, D.C., currently
advertise themselves as captive domiciles, although they
differ in the types of risks they allow companies to reinsure.
In June 2013 North Carolina became the most recent state to
allow captives.
Critics of captive insurance in the life insurance industry
are concerned that this competition will lead to a regulatory
“race to the bottom.” State insurance regulations generally
treat captives much more leniently, since regulators aren’t
concerned about the effects of the captive’s solvency on the
state’s consumers. And one way for a state to lure more captives is to have lower capital formation and reserve
requirements than its neighbor. But those concerns aren’t
well founded, according to the South Carolina Department
of Insurance, since states can also compete on factors such
as the cost of doing business, the prevalence of professional
service firms in the state, or the experience of the state
insurance department’s staff.
The bigger concern of critics, however, is that the use of
captive insurance makes life insurers appear healthier than
they are, by allowing them to increase their capital buffers
without actually transferring risk or raising new capital.
According to Koijen and Yogo’s research, accounting for
captive reinsurance reduces the median company’s risk-based capital by 53 percentage points, or three ratings
notches. (The credit rating agency A.M. Best assigns insurance companies one of 16 ratings, from A++ to S.) Expected
losses for the industry increase by between $19 billion and

$61 billion; because states operate guaranty funds in the
event of an insurance company insolvency, those losses could
potentially be borne by taxpayers and other insurance
companies. And because the new state laws allow captives to
keep their financial statements confidential, it is difficult for
consumers, shareholders, and regulators to find out how
much an insurer relies on captive reinsurance.
Benjamin Lawsky, New York’s superintendent of financial services, recently called for a moratorium on the
formation of new captive reinsurance companies. Dave
Jones, California’s insurance commissioner, told the New
York Times that California would not allow captives to form
in the state because it was “concerned about systems that
usher in less robust financial security and oversight. ... We
need to ensure that innovative transactions are not a
strategy to drain value away from policyholders only to provide short-term enrichment to shareholders and investment
bankers.” The University of Pennsylvania’s Tom Baker is
more blunt: “Anytime a company is setting up its own
captive, hold on to your wallet.”
But captives also provide real benefits to their parent
companies and, by extension, to consumers, and there could
be costs to eliminating them. “Through captives, insurers
are able to … avoid credit downgrades and reductions in
the availability and affordability of some life insurance
products,” the South Carolina Department of Insurance said
via email.
Koijen and Yogo also note that captive reinsurance
reduces financial frictions for the companies that use it,
which may lower their marginal cost and thus increase the
supply of insurance to consumers. They estimate that eliminating captives could raise marginal cost by 17.7 percent for
the average company and reduce the amount of insurance
underwritten annually by $21.4 billion, from its current level
of $91.5 billion. “There are pluses and minuses of captives,
and they need to be analyzed together,” says Paulson.
“Ultimately the debate is, have we found an appropriate
balance? Do we collectively have enough insight into what’s
going on?”
From ancient Babylon to modern Bermuda, insurance
has evolved to meet the needs of consumers and corporations — and of the insurers themselves. Captive insurance
and reinsurance might be innovations that increase the
efficiency and profitability of the industry, or they might
cause significant harm to the financial sector, or both.
Either way, they will not be the last innovations debated by
regulators, economists, and policymakers.
EF

READINGS
Cummins, J. David, and Mary A. Weiss. “Systemic Risk and the
U.S. Insurance Sector.” Temple University Working Paper,
Sept. 14, 2010.
Koijen, Ralph S.J., and Motohiro Yogo. “Shadow Insurance.”
National Bureau of Economic Research Working Paper No. 19568,
October 2013.

International Association of Insurance Supervisors. “Reinsurance
and Financial Stability.” IAIS Policy Paper, July 19, 2012.
McMenamin, Robert, Anna Paulson, Thanases Plestis, and Richard
Rosen. “What Do U.S. Life Insurers Invest In?” Federal Reserve
Bank of Chicago Chicago Fed Letter No. 309, April 2013.


New development hugs the tracks along the LYNX light rail line in Charlotte, N.C.
PHOTOGRAPHY: JAMES P. KARNER

Development patterns and other factors have shaped
transportation options in the Fifth District.
Where does rail transit fit in?

BY CHARLES GERENA

It’s the middle of the morning rush in Charlotte, N.C.
Within earshot of the roar of traffic from Interstate
485, an electric train quietly pulls into the last stop on
the LYNX, the Queen City’s light rail line.
The silver and blue train fills up quickly. Some passengers
are dressed in business attire, making their way to jobs less
than half an hour away in Uptown Charlotte. Others are
heading to the courthouse or one of the other government
offices near the central business district.
People have their reasons for riding the LYNX — or any
other train — instead of joining the masses on the interstate.
For policymakers who envision the economic and environmental benefits of rail transit, the challenge is in expanding
ridership beyond this customer base. They believe it is
worth the investment of taxpayer money to expand transit
service over the long term and attract more of the so-called
“choice riders” who can be enticed into leaving their cars
at home.

In recent decades, that has usually meant building light
rail systems with streetcars or two-car trains. These systems
typically carry fewer passengers than a subway and travel at
grade level over a semi-exclusive right of way. They usually
run on electricity, so trains whir by like a spaceship.
Baltimore built a 30-mile light rail line in the 1990s to
connect the city’s downtown to the surrounding suburbs.
It took more than a decade for the Fifth District to develop
additional light rail options: the LYNX in 2007 and the Tide
in Norfolk, Va., in 2011. While it’s too early to judge the
success of either effort — especially since each is much
smaller and younger than Baltimore’s system — both have
managed to attract a growing number of passengers.
Thus far, however, the LYNX’s ridership growth has
been outpaced by the growth in population in the surrounding city and metropolitan area (see adjacent chart). This
record highlights the challenges of introducing a rail system
into a metro area with a widely dispersed population that
has been traditionally served by automobiles and buses.
“The one thing about the South that is especially
challenging is that we don’t have dense development,” says
Stephen Billings, assistant professor of economics at the
University of North Carolina at Charlotte. Billings has
studied the impact of the LYNX on surrounding property
values. “Density is a huge component of the success of
transit. If you look at the transit investments throughout the
South, they have more limited impacts because they just
can’t serve as many people for a given route.”
Critics of mass transit believe federal funding encourages
policymakers to improve a regional transportation system
using the most expensive rail options, even when they aren’t
the most appropriate given the region’s development
patterns. Instead, more time and money should be spent on
improving bus service and modernizing roads.

Many Modes, Many Reasons
The choice of transportation mode partly depends on how
one values time. For example, rather than drive 10 minutes
to a downtown campus, a college student may take 30 minutes to walk to a bus stop, wait for a bus, and travel to school
to avoid paying hefty parking fees. A banking executive may
not mind being stuck in traffic for 45 minutes because the
evening commute provides an opportunity to unwind.
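One rough way to make that trade-off concrete is a "generalized cost" comparison that puts a price on each traveler's time. The travel times, fares, parking fees, and values of time below are invented for illustration.

# An illustrative door-to-door comparison for the commuters described above.
# All of the times, fees, and values of time are invented assumptions.

def generalized_cost(minutes, out_of_pocket, value_of_time_per_hour):
    """Money cost plus the traveler's own valuation of the time spent."""
    return out_of_pocket + (minutes / 60.0) * value_of_time_per_hour

student_drive = generalized_cost(10, 12.00, 10.0)   # short drive but a hefty parking fee
student_bus = generalized_cost(30, 1.50, 10.0)      # slower, but nearly free
exec_drive = generalized_cost(45, 4.00, 5.0)        # a long drive costs little when time is valued cheaply

print(f"student: drive ${student_drive:.2f} vs. bus ${student_bus:.2f}")   # $13.67 vs. $6.50
print(f"executive: drive ${exec_drive:.2f}")                               # $7.75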
Unforeseen circumstances can also dictate one’s choice
of transportation. David Hartgen, a transportation consultant and senior fellow at the libertarian Reason Foundation,
believes that mass transit primarily offers mobility to those
who find themselves with no other means of getting around.
This captive market changes over time.
“Most transit systems have 30 to 40 percent turnover in
ridership every year,” says Hartgen. “Fixed systems don’t
work nearly as well in that kind of churning market environment. Bus systems are much more flexible."
Finally, and most important, transportation preferences
depend on how real estate development has occurred in a
metropolitan area. Generally, the more people who choose
to live and work along corridors, the better high-capacity
transit options like trains perform and the worse automobiles and interstates perform. If residential and commercial
development is spread out and not in clusters that can be
linked together, then rail transit has a harder time getting
people out of their BMWs and Darts.
Development patterns have shaped the transportation
options available in the Fifth District, says Adie Tomer, a
senior research associate and associate fellow at the
Brookings Institution. Tomer studies transportation infrastructure in metropolitan areas. “For centuries, the
Southeast had a more rurally driven economy and more
land-intensive industry than its northern neighbors,” he
says. The region became more industrialized at the same
time that the automobile started influencing the development of its metro areas. Also, land was plentiful, making it
“very easy to institute sprawling development” that favors
automobile travel.

As a result, buses blanket the sprawling metro areas south
of the Mason-Dixon Line. Bus routes are not fixed, which
enables transit operators to respond to shifting population
patterns. In contrast, higher capacity, fixed-route subways
like the Metro in Washington, D.C., and commuter rail
systems like the MARC in Baltimore link together the more
densely populated metros north of the Mason-Dixon.
At one time, it wasn’t government transit agencies that
responded to changes in transportation needs. Most urban
transit systems were privately owned and operated, from the
days of horse-drawn railcars in the mid-1800s to the advent
of electric streetcars in places like Richmond in the late
1880s to the bus systems that replaced streetcars after World
War II. Government involvement primarily came in the
form of awarding exclusive franchises to private operators in
exchange for some oversight.
Over the years, private operators went bankrupt or sold
out to state and local governments as the interstate highway
system and the population flight to the outer suburbs eroded ridership on buses and trains in inner cities. When
Congress passed the Urban Mass Transportation Act of 1964
and a similar funding bill in 1970, billions of dollars became
available to help cover the cost of mass transit systems, primarily capital expenses. This supported state and local
governments as they took over private operators.
The influx of federal money had another unintended
consequence: It encouraged governments to favor transit
projects with higher capital costs, namely rail lines. In contrast, bus lines have lower capital costs.
Light rail has been favored over buses for another reason,
according to Randall O’Toole, who has studied transportation issues at the free-market-oriented Cato Institute.
O’Toole says policymakers have expanded transit services
beyond urban neighborhoods where people have traditionally used them in order to justify taxing suburbanites for
transit. But ridership has not increased as quickly as service
has expanded, pushing down the number of passengers
transported per vehicle hour (see chart on next page). Rail
advocates have argued that in order to attract choice riders
who don’t ride the bus, governments need to build fixed
route systems like bus rapid transit and light rail that are
grade-separated from traffic, have covered stations, and are

served by shiny new vehicles.

[CHART: Growth in Population vs. Light Rail Ridership in Charlotte, N.C. (Percent Change from 2007 to 2012): MSA Population, 13.4%; City Population, 16.8%; LYNX Monthly Boardings, 5.4%.
NOTE: Monthly boardings are the number of unlinked passenger trips for the month of December.
SOURCES: National Transit Database, Charlotte Chamber of Commerce, BEARFACTS-Bureau of Economic Analysis]

UNC Charlotte’s Stephen Billings agrees that while buses are more flexible and cost-effective, there is a cultural bias against them. “Buses are considered an inferior type of transportation. They have a perception of being dirty or dangerous,” he notes.
Also, bus service isn’t as efficient in metropolitan areas that are less dense, since longer and more frequent trips through neighborhoods are required to provide adequate service. “If it’s more than a 15-minute wait between buses or stops are more than half a mile away,” says Billings, “that is a big deterrent to taking a bus.”
The flexibility of a bus system also means that stops — and paying customers — can be moved out of a neighborhood. As a result, businesses may be less willing to invest in new development along a bus route. In contrast, “if you have a light rail line, they know it’s not going to change,” says Billings.

[CHART: U.S. Mass Transit Usage, passenger trips per hour, 1991-2012.
NOTE: Passenger trips/hour refers to unlinked passenger trips per vehicle revenue hour, a count of the number of people who boarded a mass transit system for every hour that the system was in service.
SOURCE: National Transit Database]

Transit as Economic Driver
The promise of rail transit as an economic driver is one of the reasons Charlotte and Norfolk developed their light rail systems, even though both cities are significantly less dense than Baltimore or Washington, D.C. (see adjacent chart). Policymakers hoped that trains would spur new residential and commercial development.

[CHART: Densities of 5E Cities with Rail Transit. Population Per Sq. Mile and Housing Units Per Sq. Mile for Norfolk, D.C., Baltimore, and Charlotte.
SOURCE: U.S. Census Bureau]

Recent research has indicated a positive relationship between a stop on a transit line and surrounding land values in some cases. Billings points to the potential of agglomeration economies, whereby a certain level of density results in real increases in economic activity. For example, a young couple may view the combination of restaurants, apartments, and a light rail stop within walking distance as an attractive option. “The question is does [rail transit] investment spur enough concentrated development that leads to substantially more?”
Indeed, the effects of mass transit on development have been found to be relatively modest and limited by distance. Furthermore, land-use regulation usually has to be changed first to support transit-oriented development.
Norfolk was in a unique position to encourage economic development along the 7.4 miles of its Tide light rail line. Urban renewal efforts of the 1950s and 1960s left a blank slate from which to redevelop most of its downtown, including empty lots around the Tide’s stations. Officials have rezoned that land to support denser, pedestrian-friendly development.
In Charlotte, city planners worked with officials in Mecklenburg County and six town councils to create special zoning districts around the stations on the first leg of the LYNX light rail system, the Blue Line. Each new development in a district must meet a minimum level of density and be walkable and attractive. At the same time, the city upgraded sidewalks, installed new light fixtures, and improved roadways in the districts.
So far, investors have ponied up $288 million to build residential, retail, and office space around the Blue Line’s stations from 2005 to 2013, while another $522 million of development is under construction. Just a quick glance out the window of the LYNX confirms this rush of activity.
The view changes quite a bit as you travel the 9.6-mile length of the Blue Line, however. Most of the development has occurred around the seven northernmost stations in Charlotte’s central business district and South End, a revitalized industrial section of the city flanked by stately 19th and 20th century homes. Modern condos and upscale restaurants hug the train tracks, separated only by a black fence and a paved walking path. In contrast, not much new development has occurred on the south end of the Blue Line. The LYNX shares the tracks with freight trains and is surrounded primarily by residential neighborhoods and clusters of industrial and low-density commercial development.
Billings published a paper in November 2011 that compared residential and commercial development along the Blue Line with development activity along alternative alignments that weren’t selected. He found that while the presence of the light rail line had a small impact, “it’s definitely not as big an impact as it first looks.”
The LYNX may look good when you compare property
values and the amount of new development along the Blue
Line with the rest of the city. But its route “was picked for a
reason — it was an area that had potential,” says Billings.
“Maybe all we’re seeing is people investing in a place that was
doing well anyway, and if you hadn’t invested in light rail,
it would have been the same story."
The Institute for Transportation and Development
Policy, a nonprofit that works with cities to develop transit
systems, recently released a report that examined the development potential of streetcar, light rail, and bus rapid transit
systems in 13 cities, including Charlotte. The report found
that the marketability of the land along transit corridors and
government support were the most important determinants
of development.
“Some transit corridors were able to stimulate really high
levels of development and other corridors stimulated almost
none,” says Annie Weinstock, a co-author of the report.
“It’s not like you build mass transit and then you have development. There are a lot of things that have to come into
play.”
Directing growth along corridors and clustered around
stops on a light rail line requires a lot of work, especially in
the short term. And not every lever that steers economic
development is under the control of government planners.
For example, banks have to be willing to fund transit-oriented development projects. So, expectations should be
set accordingly.
“Too often people expect a mass transit investment to do
a lot more than it is designed to do,” explains Weinstock.
“It can have other benefits in terms of linking communities,
changing the character of a street, and helping to stimulate
transit-oriented development. But the main thing that mass
transit does is provide a better and shorter trip for the most
people possible.”
If rail transit provides a viable alternative for the millions
of people who can’t drive to work, it could be an economic
driver in another way. It can help reduce labor market
frictions by connecting workers in or near a metro area’s
urban core with the employers in the suburbs who need
their skills.
A 2012 study by Adie Tomer at Brookings found that 72 of
the nation’s 100 largest metropolitan areas have more jobs in
the suburbs than in their central cities. Yet only 64 percent
of suburban jobs — and only 52 percent of jobs in southern
suburbs — are accessible to mass transit.

Transit as Traffic Decongestant
Finally, by offering alternatives to driving, rail transit promises to help relieve traffic congestion in a metropolitan area.
In turn, this can have environmental benefits and reduce
parking and travel delays.
The key is to draw a sufficient number of drivers off of
roads. Buses and trains consume their share of fossil fuels —
even electric ones do so indirectly — so they have to carry
enough people to generate a lower amount of pollution per
commuter than individuals traveling by themselves on
interstates.
Some transportation researchers aren’t convinced that
transit projects can reduce congestion. Erick Guerra, an
assistant professor of city and regional planning at the
University of Pennsylvania, points to the same problem that
arises when roads are expanded to serve densely populated
areas. As you improve travel conditions, the freed up capacity fills up quickly. “Someone leaves for their commute at 7
a.m. instead of 6:30 a.m. because the road is less congested,”
says Guerra. “It winds up getting as congested as it was
before.”
Congestion on the interstates that parallel Charlotte’s
and Norfolk’s light rail lines continues to be a problem.
Upon first glance at traffic counts at various points, one
wouldn’t see much change. Of course, there is no telling
whether those counts would have gone much higher in the
absence of light rail.
Guerra believes a better alternative to mass transit is
better management of traffic via congestion pricing of
roads. “Even though drivers are spending a lot on their cars,
they are not spending anywhere near the cost of the land
that they are traveling on,” he notes. Current user fees
barely cover road maintenance, so a lot of the money comes
from general taxes that everyone pays. The problem with
that approach is “if you drive on local roads 100 miles a day,
you’re paying the same amount for those roads as someone
who doesn’t drive at all.”
It may sound like Guerra and other researchers are
against rail transit in general. In fact, recent research has
indicated that transit is neither the cure-all nor the debacle
it is often portrayed to be. Rather, transit is an option that
can make a difference, if it is developed in the right
place and part of a comprehensive effort to improve the
accessibility and efficiency of a region’s transportation
infrastructure.
EF

READINGS
Billings, Stephen. “Estimating the Value of a New Transit Option.”
Regional Science and Urban Economics, November 2011, vol. 41, no. 6,
pp. 525-536.
Cervero, Robert, and Erick Guerra. “Urban Densities and Transit:
A Multi-dimensional Perspective.” University of California
Institute of Transportation Studies Working Paper No. 2011-06,
September 2011.

Hook, Walter, Stephanie Lotshaw, and Annie Weinstock.
“More Development for Your Transit Dollar: An Analysis of 21
North American Transit Corridors.” Institute for Transportation
and Development Policy, September 2013.
Tomer, Adie. “Where the Jobs Are: Employer Access to Labor by
Transit.” Brookings Metropolitan Opportunity Series, July 2012.


Canada has avoided crises for 180 years, while we have
been prone to them. Can we recreate its stability here?
BY RENEE HALTOM

As the worst financial crisis in generations hit the
United States in 2007 and 2008, Canada was a pillar
of resilience.
No Canadian financial institutions failed. There were no
government bailouts of insolvent firms (just a couple of lending programs to address market volatility relating to
problems in the United States). Canada was the only G-7
country to avoid a financial crisis, and its recession was
milder than those it experienced in the 1980s and early
1990s. For the last six years, the World Economic Forum has
ranked Canada first among more than 140 countries in banking stability.
It’s not just one-time luck. If you define “financial crisis”
as a systemic banking panic — featuring widespread suspensions of deposit withdrawals, bank failures, or government
bailouts — the United States has experienced 12 since 1840,
according to a recent study by Charles Calomiris, professor
of finance and international affairs at Columbia University,
and Stephen Haber, professor of history and political
science at Stanford. That’s an average of one every 14 and a
half years. Canada has had zero in that period. Its largely
export-driven economy has seen more than its share of
recessions, and even some notable bank failures, but it has
almost completely avoided systemic problems. Even during
the Great Depression, when more than 9,000 of our banks
failed, Canada lost a grand total of one — to fraud.
One might suspect that it’s because Canadian financial
institutions tend to be more tightly regulated; they have
higher capital requirements, greater leverage restrictions,
and fewer off-balance sheet activities. But Canada’s financial
system was largely unsupervised until the late 1980s. In a
period in which both Canada and the United States had virtually no official supervision or regulation of bank
risk-taking — from the 1830s to the advent of the Fed in 1913
— America experienced no fewer than eight systemic banking crises, while Canada had only two short-lived episodes in
the 1830s relating to problems here. That suggests regulation
alone can’t explain Canada’s stability.
All the while, Canadian banks provide ample credit to
the economy. According to the World Bank, Canada ranks in
the middle among high-income countries in the provision of


credit, with bank lending as a percent of GDP averaging
95 percent over time, compared with 52 percent here.
Canada has seemingly found a way to balance the provision of credit with the containment of risk. The question is,
would adopting some of its characteristics produce the same
success here?

What’s Different in Canada?
The financial systems of Canada and the United States provide the same basic services. The striking difference is in
how they are provided.
America has one of the world’s more fragmented financial systems, with almost 7,000 chartered banks and a legion
of regulators. Depending on its charter, an American bank
can be regulated by the Fed, the Federal Deposit Insurance
Corporation, the Office of the Comptroller of the Currency,
or state regulators — and that’s just the list for banks. By
contrast, Canada has just 80 banks, six of which hold 93 percent of the market share, according to the International
Monetary Fund. It has one overarching financial regulator,
the Office of the Superintendent of Financial Institutions
(OSFI), which oversees all manners of financial firms: banks,
mortgage lenders, insurance companies, pension funds, and
more. (Securities markets are regulated by Canada’s 13
provincial and territorial governments, but their regulations
are largely harmonized.)
What explains these differences? The answer requires a
bit of onion peeling. A financial system’s structure is, in part,
a response to regulation. But regulation is an evolutionary
process; policymakers tend to tweak regulatory rules and
procedures in response to financial crises or major bank failures. So to truly understand a country’s financial landscape,
you have to go back — all the way back — to its beginning.
Financial regulation in a new world typically starts with one
question: Who has the authority to charter banks?
This seemingly small choice sets off a chain reaction,
according to Michael Bordo and Angela Redish,
Canadian economists at Rutgers University and the
University of British Columbia, respectively, and Hugh
Rockoff, a monetary expert also at Rutgers. They’ve
studied the differences between Canada and the United

States in several papers dating back to the 1990s.
They argue that the states here prohibited banks from
branching, while Canada did not. These differences don’t
exist today; American and Canadian banks alike are free to
establish branches virtually anywhere they are economically
viable. But for most of U.S. history, up to 1994, most states
had some form of restrictions that prohibited branching
across state lines, and within states in some cases. The result:
a lot of U.S. banks. At the peak almost 100 years ago, there
were 31,000 individual institutions, virtually one distinct
bank for every city and town, and almost no branches.
Many economists have argued that this “unit banking” in
the United States made banks more fragile. For one thing,
banks were rather undiversified. “Their assets would be
mostly local loans, mortgages on farms or farm machinery,
depending on whatever crop was grown in the area. If the
price of wheat fell, loans depending on those local crops
could be in trouble,” says Rockoff. A single bad harvest was
liable to set off a wave of local failures, tightening credit to
the entire region.
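A small simulation, with invented failure probabilities, illustrates the diversification argument: a bank whose loans all ride on one local harvest fails far more often than one whose loans are spread across many regions.

# A stylized simulation of the diversification argument, with invented numbers:
# a bank tied to one region fails whenever that region's crop fails, while a
# branched bank rarely sees enough regions fail at once to sink it.
import random

random.seed(7)
P_BAD_HARVEST = 0.10   # assumed chance a region's crop fails in a given year
TRIALS = 50_000

def bank_fails(n_regions, ruin_share=0.4):
    """The bank fails if more than ruin_share of its regions have a bad harvest."""
    bad = sum(random.random() < P_BAD_HARVEST for _ in range(n_regions))
    return bad / n_regions > ruin_share

for n in (1, 10, 50):
    failure_rate = sum(bank_fails(n) for _ in range(TRIALS)) / TRIALS
    print(f"bank with loans in {n:>2} region(s): fails in {failure_rate:.2%} of years")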
Unit banking aggravated other unstable features of
U.S. banking. Regional shocks often produced bank runs,
and the rush of deposit withdrawals would drain banks of
cash. It was difficult to issue more currency because all notes
had to be backed by government bonds, which were costly
and slow to obtain. Addressing this problem is one reason
the Fed was created in 1913, to expand the currency supply
quickly in times of need.
Canadian banks solved the bank run problem with no
central bank. Scholars have chalked this up to a few things.
First, its banks were inherently less risky because diversification helped them absorb shocks. Second, its banks could
respond to depositors’ demand for cash by printing their
own currency backed by general assets. Third, the system’s
high concentration facilitated coordination in emergencies.
The Canadian Bankers Association, a private consortium of
banks, established a fund to honor notes issued by failed
banks and arranged takeovers of failing banks when our
country was enduring the panics of 1893 and 1907. As a
result, note holders and depositors rarely experienced losses.
Competing banks had an incentive to prevent such losses
because, in a highly concentrated banking system, a single
failure would be bad for everybody. In exchange for support,
they policed each other to prevent excessive risk-taking.
“People were fairly confident that something would be
worked out, so Canada didn’t get the panicky bank runs that
we did in the United States,” Rockoff says. American banks
tried the same with private clearinghouses and coinsurance
schemes, but these efforts often failed; the banks’ interests
sometimes proved too diffuse to provide confidence that
panics would be averted. (The Canadian government did
backstop the banking system on some occasions, mostly
through regulatory forbearance, Redish says. At the request
of farmers, it loosened collateralization requirements on
note issuance in 1907 and itself issued additional notes in
1914. Some scholars have also argued that banks were

insolvent during the Depression but avoided runs because of
an expected backstop by the government.)
From its beginning, Canada’s banking system was structured to be less vulnerable to shocks and thus did not give
rise to the need for a central bank to achieve stability.
By contrast, the Fed was created to offset vulnerabilities in
the American banking system.

Political Roots of Instability
The Fed’s founders didn’t address branching restrictions,
however, even with full understanding that small, vulnerable
banks were part of the core problem. In 1910, the financial
scholar O.M.W. Sprague of Harvard University, studying
on behalf of the congressionally established National
Monetary Commission, concluded that unit banking was
“a deep-seated cause of weakness in the financial system”
and the single most important difference between banking
in the United States and other countries, almost all of which
allowed branching. But if unit banking was known to be such
a problem, why was it allowed to persist?
According to the recent study by Calomiris and Haber,
set out in their 2014 book Fragile By Design, united factions
with an interest in keeping banks small succeeded in shooting down attempts at branching liberalization until the
1980s. They argued that the unique structure of the U.S.
political system allows popular interests to sway policy more
than in other countries. The U.S. Constitution gave all functions not explicitly handed to the federal government, such
as regulatory policy, to the states. Interests needed only to
win legislative fights at the local level, which was a far easier
task than in today’s relatively more federalized system,
Calomiris and Haber contended. Thus, they argued that the
origins of a country’s financial stability — or lack thereof —
are mainly political.
Small farmers opposed branching because it would allow
banks to take credit elsewhere after a bad harvest. Small
banks wanted protection from competition. And many
others opposed any signs of growing power concentrated in
any one institution — or bank. “Even in recent years, there
was a feeling that local community banks were doing something really good and should be protected or encouraged in
some way,” Rockoff says.
As the financial system evolved, branching was defeated
at every turn. The first attempts at creating a central bank —
in 1791 and 1816 — temporarily established a dual system of
both state- and nationally chartered banks. But fears about
the concentration of power, including opposition to branching, led to the charters of both central banks not being
renewed. After 1830, several states experimented with “free
banking,” which allowed individuals to establish banks anywhere, but free banks were still prohibited from branching.
National banks were created to fund the Civil War by issuing
notes backed by government bonds but were forced to
honor state branching limitations. The political infeasibility
of branching meant the Fed’s founders, despite Sprague’s
conclusions, barely even discussed it as a realistic option.
North of the border, the balance of power was different.
“In Canada those same groups existed, and they tried the
exact same things as in the U.S., but they didn’t succeed,”
Calomiris says.
The architects of Canada’s Constitution had precisely
the opposite objective from those of the U.S. Constitution:
After the French population in Quebec staged a revolution
in 1837-1838, “the British realized they had to build a set of
institutions to make it hard for the people who hated
their guts to create disruptions,” Calomiris says. Canadians
weren’t as fearful of the concentration of power; their independence came in 1867 through legislation, not revolution.
The first Canadian banks were established by Scots, who
mimicked Scotland’s branched system. In addition, Canada’s
export-based economy was better served by a national system that could help products move not from city to city, but
from country to country.
The Canadian constitution gave the regions equal weight
in the upper house of the legislature, much like the U.S.
Senate, to dilute the French influence in Quebec. Population
determines representation in the lower house, much like the
U.S. House of Representatives, creating incentives for centrist parties that cater to the median voter. Laws passed by
the lower house can be overruled by the Senate, whose seats are filled by appointment by the governor general on behalf of the Queen and held until age 75.
Many times, the Canadian government defeated populist
measures that would have changed the banking system. One
law passed in 1850 tried to replicate U.S. free banking,
including the requirement that notes be backed by government bonds to encourage government bond purchases. But
the legislature refused to end branching, and the free banks
simply weren’t viable in comparison. Few free bank charters
were ever issued. In response to the episode, provisions were
included in the 1867 constitution to ensure that banking
policy was made at the national level.

Domino Effects
Not only did branching restrictions persist in this country,
but new laws served to protect small banks. Many such laws
were enacted after the Depression, when a third of the
nation’s banks, most of them small, failed. Federal deposit
insurance, created in 1933, originally applied only to small
banks. It was added to the Glass-Steagall Act of 1933 at the
last minute to gain support from Henry Steagall, the powerful representative from agrarian Alabama. The Act was the
culmination of no fewer than 150 attempts over the previous
50 years at passing a federal deposit insurance system for
small banks, Calomiris and Haber argue. Other bank restrictions in Glass-Steagall — like Regulation Q's ceilings on the
interest rates that banks could pay depositors, and rules
prohibiting banks from securities dealing — were intended
to prevent excess speculation but also served to keep
banks small.
That had a side effect: The shadow banking system took
off. “Regulations limited the amount of credit that the
commercial banking system could extend to industry, so
instead it was provided through other financial markets —
the stock and bond markets, investment banks, and others,”
Rockoff says. With shadow banking came a disparate set of
nonbank regulators, such as the Securities and Exchange
Commission and others, helping to explain the relatively
fragmented regulatory system we have today.
Canada also suffered during the Great Depression when
its money supply plummeted. “The political situation was as
dire in Canada as it was in the United States; the government
has to do something in the depths of a depression,” Redish
says. The prime minister launched the Royal Commission on
Banking and Currency to consider a central bank. Some
scholars, including Redish and Bordo, have argued that
there was no economic necessity for a central bank. Instead,
it was seen as something that could be done in the national
interest — meeting the political demand for reform — that
wouldn’t do much harm. The Bank of Canada opened its
doors in 1935.
Though deposit insurance finally stemmed bank runs in
the United States, it wasn’t instated in Canada until 1967.
But overall, says Redish, “there was very little regulation of
the Canadian banking system until 1987. There were two
bank failures in the early 1980s that kind of woke everybody
up; they were the first bank failures in 60 years.” By comparison, the United States had 79 bank failures in the 1970s
alone. The precursor to OSFI, Canada’s current regulator,
had just seven bank examiners in 1980, compared with thousands of examiners here. When OSFI was established in
1987, it encompassed most financial activity, including off-balance-sheet activities.
When the economy changed in the decades preceding
the 2007-2008 financial crisis, the financial systems of
Canada and the United States were structured to respond
differently. This was especially true during the inflationary
1970s. With interest rates on deposits capped here, investors
sought protection from inflation elsewhere, such as money
market mutual funds that allowed check writing and other
deposit-like features. Deregulation moved additional funds
out of the banking system.
In Canada, the reverse happened; after walls between
securities brokerage and banking were removed in 1987,
banks absorbed securities brokerages, mortgage lending,
and other activities that occur outside of the banking sector
in America. According to a June 2013 study by Bank of
Canada economists Toni Gravelle, Timothy Grieder,
and Stéphane Lavoie, shadow banking activities are about
40 percent the size of Canada’s economy, compared with
95 percent in the United States. Not only is a significant portion of that activity in Canada undertaken by banks, but 60 percent is regulated and explicitly guaranteed by the
government, for example, through insurance or access to a
lender of last resort. Canada’s bank regulations and charters
— all of them — are revised every five years, an attempt to
help regulation adapt to innovation and emerging risks.
Nowhere did Canada’s structural and regulatory
differences manifest themselves more clearly than in mortgage finance. Canadian banks tend to hold on to mortgages
rather than selling them to investors. Fewer than a third of
Canadian mortgages were securitized before the financial
crisis, compared to almost two-thirds of mortgages in the
United States. Some have argued that this, combined with
tight regulatory standards, gives Canadian banks stronger
incentive to make those mortgages safe. Fewer than 3 percent of Canadian mortgages were classified as subprime
before the crisis, compared with 15 percent here. In Canada,
banks can’t offer loans with less than 5 percent down, and
the mortgage must be insured if the borrower puts less than
20 percent down. Mortgage insurance is available, moreover,
only if the household’s total debt service is less than 40 percent of gross household income. Not only did Canada have a
much smaller housing boom than the United States did, but its mortgage delinquencies barely rose above the historical average of less than
1 percent. At the peak, 11 percent of American mortgages
were more than 30 days overdue.
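Those underwriting rules boil down to a few simple threshold checks. The sketch below is only an illustration of the logic described above; the function, the sample figures, and the simplified treatment of the thresholds are illustrative rather than drawn from the regulations themselves.

# A stylized sketch of the Canadian mortgage rules described above.
# The function and thresholds are illustrative simplifications, not the
# actual regulatory text.

def canadian_mortgage_check(price, down_payment, annual_debt_service, gross_income):
    """Return a rough eligibility verdict under the rules described above."""
    down_share = down_payment / price
    tds_ratio = annual_debt_service / gross_income  # total debt service ratio

    if down_share < 0.05:
        return "ineligible: less than 5 percent down"
    if down_share < 0.20:
        # Insurance is mandatory here, and available only below a
        # 40 percent total debt service ratio.
        if tds_ratio < 0.40:
            return "eligible, but mortgage insurance required"
        return "ineligible: insurance unavailable above 40 percent debt service"
    return "eligible without mandatory insurance"

# Example: 10 percent down on a C$300,000 home, C$30,000 in annual debt
# service on C$90,000 of gross income.
print(canadian_mortgage_check(300_000, 30_000, 30_000, 90_000))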
The lesson is not that shadow banking is bad, Rockoff
says, nor that regulations and a lender of last resort are a
panacea. It’s that if you have two parallel banking systems
within a country, and one is regulated but the other has only
vague constraints, it’s clear where the risks will gravitate. At
the same time, it’s hard to use regulation to bring risk into
the fold. “I think next time around we’d just find problems
somewhere else,” Rockoff says. The better solution, he says,
is to align private incentives against excessive risk-taking.
For much of Canada’s history, that has occurred naturally
because banks monitored each other in exchange for the
implied promise of mutual support in crises. Overall, monitoring has been made more feasible by the fact that Canada's system includes only a small number of players.
Branching was finally made inevitable after the 1980s by
globalization and technological innovation, which made the
geographic boundaries of banks less relevant. The final U.S.
restrictions were repealed by the Riegle-Neal Interstate
Banking and Branching Efficiency Act in 1994. But by then,
our fragmented system was already in place.

Working With the System We Have
It can be tempting to look at the outward characteristics of
another country’s stable financial system and conclude that
its regulations or structure will produce the same stability
here. But doing so may not address the fundamental sources
of instability and could create new problems. “It’s very hard
to imitate success once you realize that success is based
on political institutions with deep historical roots,”
Calomiris says.
Moreover, there may be ways in which our financial
system outperforms Canada’s. Critics claim that Canada’s
tightly regulated system is slower to innovate and fund
entrepreneurs. And because there are only a few large banks,
the failure of one could be difficult for the financial system
to weather.
As for the way policy is made here, there are important
cultural reasons for it. “If you went to Americans right now
and said, ‘We can fix our problem; let’s just change the 17th
Amendment so we no longer have a popularly elected
government,’ I don’t think you’d find many takers,”
Calomiris says. He is quick to point out that a less representative government does not produce greater stability; he and
Haber overwhelmingly found that democracies outperform
autocracies in financial stability. Instead, they emphasize
that stability tends to prevail in democracies in which policy
is made with an eye toward overall stability rather than
popular interests.
Democracies do tend to take constructive steps when
financial problems affect the median voter. That happened
when President Carter nominated Paul Volcker to the Fed
chairmanship in 1979 to end rampant inflation. But households could understand inflation and felt directly that it
harmed them. The challenge today is that banking is
nuanced; on that topic, it is harder to create an informed
electorate.
There have been many proposed explanations for why
our financial system proved much less resilient than
Canada’s in 2007 and 2008, from insufficient regulation, to
lax mortgage lending, to our history of government rescues.
The longer lens of history shows, however, that any one
explanation for financial instability — and therefore any one
regulatory attempt to fix it — may be too simple.
Even if unit banking is a relic of the past, it is still with us
through its effects on the evolution of the U.S. financial
system — just as reforms today will determine the shape and
stability of the financial system of the future.
EF

READINGS
Bordo, Michael D., Angela Redish, and Hugh Rockoff.
“Why Didn’t Canada Have a Banking Crisis in 2008 (Or In 1930,
Or 1907, Or…)?” National Bureau of Economic Research Working
Paper No. 17312, August 2011.
Bordo, Michael D., and Angela Redish. “Why Did the Bank of
Canada Emerge in 1935?” Journal of Economic History, June 1987,
vol. 47, no. 2, pp. 405-417.

Calomiris, Charles W., and Stephen H. Haber. Fragile By Design:
The Political Origins of Banking Crises and Scarce Credit. Princeton,
N.J.: Princeton University Press, 2014.
Gravelle, Toni, Timothy Grieder, and Stéphane Lavoie.
“Monitoring and Assessing Risks in Canada’s Shadow Banking
Sector.” Bank of Canada Financial System Review, June 2013,
pp. 55-63.
How a number changed lending (and got
some Americans in over their heads)
BY DAVID A. PRICE

hether you’re applying for a mortgage, signing
up for a credit card, or thinking about how to
finance a small business, you’ll quickly come
face to face with one of the transformative developments
of the digital age: the credit score. It doesn’t enjoy the
glamorous image of the social network or the smartphone
— but much as those tools have spread access to information, the credit score has been a powerful catalyst in broadening access to credit.
Credit scoring is a process for analyzing a borrower’s
credit risk — the likelihood of repaying the loan — using a
computer model and expressing that risk as a number.
Creators of scoring models statistically analyze the records
of a large number of consumers, perhaps more than a million, and determine the extent to which different factors in
those records were associated with loan repayment or
default. Those factors then become the basis for calculating
scores of future borrowers or prospective borrowers.
Lenders use the scores to solicit customers (for example, to
select individuals to target with credit card offers), to decide
whether to grant credit, and to determine the interest rate
that a borrower will be offered. Studies have found that
credit-scoring systems outperform the judgment of human
underwriters, on average; moreover, they do so at a fraction
of the cost.
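In spirit, though not in the detail of any commercial model, the exercise resembles fitting a statistical model of default on past borrower records and then rescaling the fitted odds into a number. The sketch below uses made-up data, two made-up borrower attributes, and an arbitrary scoring scale purely to illustrate that logic.

# Toy illustration of the scoring logic described above: fit a model of
# default on past borrower records, then express each borrower's fitted
# odds of default as a number. The data, the two attributes, and the
# scoring scale are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
utilization = rng.uniform(0, 1, n)          # share of credit lines in use
late_payments = rng.poisson(1.0, n)         # past late payments
# Simulated "truth": default risk rises with both attributes.
logit = -3.0 + 2.5 * utilization + 0.8 * late_payments
default = rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))

# Fit a logistic regression by simple gradient descent (numpy only).
X = np.column_stack([np.ones(n), utilization, late_payments])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - default) / n

def score(util, late, base=600, scale=50):
    """Map the fitted log-odds of default onto an arbitrary score scale."""
    log_odds_default = w[0] + w[1] * util + w[2] * late
    return base - scale * log_odds_default   # safer borrowers get higher numbers

print(round(score(0.2, 0)), round(score(0.9, 4)))  # low-risk vs. high-risk applicant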
Perhaps because credit scoring is a process innovation,
rather than a product that is highly visible to consumers in
its own right, its role in the growth of credit has been little
heralded. Without it, however, today’s financial system in
many ways would be unrecognizable.


Emergence of Credit Scoring
Credit scoring has its roots in local credit bureaus that started in the 1830s to help firms decide whether to extend trade
credit — that is, credit to a business customer, such as a
retailer buying on credit from a manufacturer or wholesaler.
Until then, the nature and scale of trade in the United States
had made it reasonable for suppliers to rely on letters of
recommendation in determining a customer’s creditworthiness. When credit bureaus came into the picture, they
offered hard data on the customer’s payment record shared
by other businesses in the community. Companies manually
evaluated the information from credit-bureau files together
with the contents of the customer’s credit application.
Consumer credit bureaus followed later in the century,
many of them established by groups of merchants or an
area’s chamber of commerce. A handful of them banded
together in 1906 to share information, forming an organization known as Associated Credit Bureaus. To meet
increasing demand for credit information on out-of-town
and out-of-state consumers, bureaus joined the association
at a rapid pace; according to a 2005 history by Robert Hunt
of the Philadelphia Fed, its membership grew from fewer
than 100 in 1916 to around 800 in 1927 and 1,600 in 1955.
The ensuing decades saw consolidation among the credit
bureaus. Additionally, in the 1950s, a new tool with possibilities for the credit information industry arrived: the
computer. All the ingredients for automated credit scoring
were now in place.
“What made credit scoring possible was three things,”
says Fed economist Glenn Canner. “One, you needed data.
The data became widely available in the 1950s through the
emergence of larger credit bureaus. Two, you needed computing power. And three, you needed someone with bright
ideas.”
The bright ideas came from William Fair and Earl Isaac,
an engineer and a mathematician who had worked together
at Stanford Research Institute (SRI), a think tank,
where they created mathematical models on computers for
the military. They came to believe that the combination of
the new digital machines and their own mathematical talents could be the basis for a profitable consulting firm
serving the private sector. In 1956, investing $400 apiece,
they left SRI to start Fair Isaac Corporation.
As they explored various directions for the business,
inspiration struck, and they began trying to interest consumer credit companies in the concept of credit scoring.
They sent letters in 1958 to the 50 largest consumer lenders
in the country and received only one reply. But a single client
was all they needed to show the value of their idea; that year,
a finance company, American Investments, had them create
the first credit-scoring system.
For the next decade, use of credit extended by retailers
continued to dominate over use of general-purpose charge
cards and credit cards. Accordingly, national department
store chains, rather than banks, led in the adoption of the
new technology. “Unsecured consumer credit from financial
institutions did not appear in any significant amount in the
United States until the late 1960s,” says Richmond Fed
economist Kartik Athreya.
By 1978, however, banks and retailers held around the
same amounts of revolving credit, according to the
Philadelphia Fed study — and by 1993, revolving credit balances at banks totaled more than three times the balances at
retailers. Credit card issuers and major automobile lenders,
who needed a reliable measure of credit quality for a nationwide pool of customers, began relying on credit scores
during this period.

A final step in spurring widespread adoption of credit
scoring was its embrace by the behemoths of the mortgage market, Freddie Mac and Fannie Mae, which began
requiring credit scores for their new automated mortgage
underwriting systems in the mid-1990s.

Changing Consumer Lending
Today, all three major consumer credit-reporting agencies —
Experian, Equifax, and TransUnion — use credit-scoring
models created for them by the company that Fair and Isaac
founded, now known as FICO. The scores that the agencies
report may include not only generic scores (that is, scores
not keyed to a particular type of financial product), but also
educational scores (the ones provided to consumers) and
industry-specific scores for auto loans or bank cards. In
addition to FICO-based scores, the agencies also offer
VantageScores, which are calculated using models created by
VantageScore Solutions, a FICO rival that the agencies
jointly own. (FICO scores are generally on a scale from
300 to 850; the VantageScore scale is 501 to 990.)
The now-universal use of credit scores in consumer lending has had a number of effects. In addition to the obvious
one, faster and cheaper processing of applications, it has
changed the way credit is priced. Several studies have found
that the rise of credit scoring has been associated with an
increase in the dispersion of interest rates — that is, an
increase in the variations in rates charged to different consumers. One of these studies, published in American
Economic Journal: Macroeconomics in 2012 by Athreya, Xuan
Tam of the City University of Hong Kong, and Eric Young of
the University of Virginia, found that the variance in credit
card interest rates more than tripled from 1983 to 2004.
Researchers attribute this trend to the improved ability of
lenders to distinguish borrowers with different levels of
credit risk; risky borrowers are paying more interest on their
balances and safe borrowers are paying less. Within the
credit card industry, the value of this information has been
further increased by regulatory changes that have reduced
restrictions on credit card rates.
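The mechanics behind that dispersion are straightforward: a lender that can estimate each borrower's default probability will set a different break-even rate for each. The back-of-the-envelope sketch below uses invented default probabilities and an assumed 5 percent cost of funds, not figures from the studies cited here.

# Rough break-even pricing: the lender covers its funding cost only if
# expected repayment, (1 - p) * (1 + r), matches 1 + cost_of_funds.
# The default probabilities and the 5 percent funding cost are invented.
def break_even_rate(p_default, cost_of_funds=0.05):
    return (1 + cost_of_funds) / (1 - p_default) - 1

for p in (0.01, 0.05, 0.15):
    print(f"default prob {p:.0%}: break-even APR about {break_even_rate(p):.1%}")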
The question of whether this is good or bad is up for
grabs. In one sense, risk-based pricing is more equitable,
rewarding consumers for responsible management of their
finances. At the same time, there are distributional implications that may be troubling to some. (A weak credit history
may also prove costly in the homeowners and auto insurance
markets, where companies look at “insurance scores” based
on credit information to estimate a consumer’s risk of loss.)

Credit scoring may also affect other terms of a loan. Such
effects were highlighted in a recent study of a large auto
finance company by Liran Einav and Jonathan Levin of
Stanford University and Mark Jenkins of the University of
Pennsylvania. In a 2013 article in the RAND Journal of
Economics, they looked at the adoption of credit scoring by
the unnamed company, which specializes in lending to consumers with low incomes or poor credit records; they found
that with credit scores, the company offered higher loans to
low-risk borrowers and required larger down payments from
high-risk borrowers. The result: an increase in profits of over
$1,000 per loan, on average.
With the ability to distinguish risk levels of borrowers
has come a greater amount of consumer lending overall. The
more lenders know about borrowers, the more confident
they are in their ability to price credit profitably. Juan
Sánchez of the St. Louis Fed noted in a 2010 working paper
that the unsecured credit balances of consumers — credit card balances and other credit lines, but not including secured debt — rose as a share of income from 2.6 percent in 1983 to 4.2 percent in 2004, an increase of more than 60 percent. Even these
figures, moreover, do not include the dramatic rise in home
equity lines and cash-outs from mortgage refinancings
during the later years of the period.
One might assume that if lenders have more information
about borrowers, the result will be fewer defaults. But that
has not been the case: The widespread adoption of credit
scoring by the financial services industry coincided with
a rapid rise in consumer bankruptcies. Bankruptcy filings
increased more than fivefold from 1983 to 2004, and far faster
than the growth of consumer credit. Not only have bankruptcies become more frequent, they have become larger.
Those changes are no accident, researchers have found.
Better information, by enabling greater access to large
amounts of credit, appears to be giving more borrowers the
rope, so to speak, with which to hang themselves. The study
by Athreya, Tam, and Young estimated that the availability
of more information about borrowers accounts for around
46 percent of the increase in bankruptcies.
Like other developments of the digital revolution, credit
scoring can prove either helpful or damaging from one
person to another, from one situation to another. “There has
been a democratization of credit,” says Canner. “It’s true
that more people will go bankrupt. On the other hand, more
people will have had access to credit to do lots of things that
are very productive for them, including going to school and
starting businesses. There’s a lot of upside.”
EF

READINGS
Athreya, Kartik, Xuan S. Tam, and Eric R. Young. “A Quantitative
Theory of Information and Unsecured Credit.” American Economic
Journal: Macroeconomics, July 2012, vol. 4, no. 3, pp. 153-183.
Board of Governors of the Federal Reserve System. Report to the
Congress on Credit Scoring and its Effects on the Availability and
Affordability of Credit. August 2007.

Hunt, Robert M. “A Century of Consumer Credit Reporting in
America.” Philadelphia Fed Working Paper no. 05-13, June 2005.
Miller, Margaret J. (ed.), Credit Reporting Systems and the International
Economy. Cambridge, Mass.: MIT Press, 2003.

Prospects for West Virginia’s
Highest-Paying Industry
Appear Bleak
BY KARL RHODES

Forty years ago, President Richard Nixon announced
Project Independence, a series of steps to make the
United States energy self-sufficient by 1980. At the
top of the list, Nixon wanted factories and utilities to burn
less oil and more coal.
Four years later, President Jimmy Carter declared “the
moral equivalent of war” on the energy crisis and echoed
Nixon’s call for “the renewed use of coal.” Oil and natural gas
supplies, Carter said, were “simply running out.”
Fortunately, the nation did not run out of oil and natural
gas — quite the opposite. In recent years, U.S. producers
have learned how to economically extract vast new supplies,
mostly from hydraulic fracturing (fracking), a process that
breaks apart shale to release oil and natural gas.
Now the U.S. Energy Information Administration (EIA)
is predicting that the United States will become a net
exporter of liquefied natural gas by 2016 and dry natural gas by 2019. EIA also notes that U.S. crude oil production has
increased 30 percent from 2008 through 2012, causing net
imports of petroleum and other liquid fuels to decrease
significantly. Energy independence, defined as becoming a
net exporter of energy, will arrive in 2023, according to
energy consultant and author Philip Verleger Jr.
Verleger and other analysts who are forecasting energy
independence cite burgeoning supplies of domestic oil and
natural gas. They barely mention the nation’s vast coal
reserves. To be sure, dramatic predictions regarding fossil
fuel production and consumption have been notoriously
wrong over the years. But the U.S. coal industry seems to be
facing unprecedented economic and regulatory challenges,
including substantially lower natural gas prices, more stringent environmental standards for coal-fired power plants,
and higher costs for mining methods that are common in
West Virginia.
Tony Yuen, a global energy strategist for Citi, is pessimistic about the future of coal — especially thermal coal
mined in West Virginia. “If electricity demand is basically
flat, and renewables are continually rising, then something
has to give,” he says, and coal already is losing ground.
[Photo: A Norfolk Southern train hauls coal across Dryfork Bridge near Bluefield, W.Va.]

The days when presidents from both major parties recommended burning more coal are long gone — primarily because coal-fired power plants emit about twice the carbon dioxide of gas-fired plants. Federal regulations have made
it more costly to mine coal in recent years, and the
Environmental Protection Agency (EPA) has proposed CO2
emission standards that would make it nearly impossible for
electric utilities to build profitable coal plants in the United
States. In an effort to extend that policy throughout the
world, the Treasury Department announced in October 2013
that it was “ending U.S. support for multilateral development bank funding for new overseas coal projects except in
narrowly defined circumstances.” The announcement was
largely symbolic, since the United States has no such veto
power, but it put developing nations on notice that they
should not attempt to replicate the United States’ coal-fired
path to industrialization.

West Virginia Gold
Coal fueled the industrial revolution, and in the United
States much of that coal came from West Virginia. For heating buildings and fueling steam engines, burning coal was
much more efficient than burning wood. Also, metallurgical
(met) coal helped produce the steel that supported the
nation’s flourishing factories and soaring skylines.
There is some overlap between thermal coal and met
coal, but met coal generally has higher carbon content and
fewer impurities. Met coal is primarily used to produce coke
(nearly pure carbon), a key ingredient for making steel. West
Virginia continues to be a major producer of met coal.
By the early 20th century, coal barons were making vast
fortunes in West Virginia, and by mid-century, the labor
movement was beginning to spread more of that wealth to
miners and other residents of the state. Coal mining remains
West Virginia’s highest-paying industry, but coal mining
jobs in the state have plummeted from an all-time high
of 130,457 in 1940 to a 100-year low of 14,281 in 2000.
(See chart.) Technological improvements accounted for the
vast majority of that 88 percent decline.
During the 1960s and 1970s, the environmental movement began to affect the coal industry. The Clean Air Act
clamped down on six common pollutants — particulate matter, sulfur oxides, nitrogen oxides, carbon monoxide,
ground-level ozone, and lead. The new emission standards
initially helped West Virginia’s thermal coal producers
because West Virginia coal generally burned cleaner than
coal from the Illinois Basin. But as more coal plants added
scrubbers to remove these pollutants from their emissions,
West Virginia lost much of its “clean coal” competitive
advantage.
In the years that followed, the coal industry passed
through boom and bust cycles. Most recently, downward
trends in domestic demand were more than offset by a surge
in exports that helped keep West Virginia’s unemployment
rate below the national average during the recession of
2007-2009. As the U.S. economy slumped, China was swinging from a net exporter to a net importer of coal. At the same
time, there were massive floods in Queensland, Australia, which produced far more met coal than any other region.

[Chart: Coal Mining Jobs in West Virginia, 1880-2000. NOTE: Employment numbers do not include independent contractors. SOURCE: West Virginia Office of Miners' Health, Safety and Training]
“You had a classic price spike,” recalls Jeremy Sussman, a
coal analyst for Clarkson Capital Markets in New York.
“Companies at the peak were selling metallurgical coal
for $400 a ton. It was something they had never really
witnessed.”
The Dow Jones U.S. Coal Index soared to its all-time high
of 741 in June 2008, but it plunged to 111 by that November
and has stayed below 200 since May 2012. (See chart.) Clearly the market is not expecting another domestic
coal boom, but the market also has demonstrated that
prospects for the industry can change quickly. Even so,
energy analysts are pessimistic about the long-term outlook
for West Virginia coal.
“You won’t see metallurgical coal prices like you did
before the financial crisis, but you will see metallurgical coal
prices where companies can make a healthy margin,”
Sussman says. “I could easily envision a scenario where,
in the not-too-distant future, West Virginia is producing
almost all metallurgical coal, and only the absolute lowest-cost thermal mines survive."

Energy Mix
Carter’s “moral equivalent of war” speech traced 200 years
of energy consumption — from wood to coal to oil and
natural gas and back to coal.
But fracking has shaken up the energy mix. The availability of natural gas has increased dramatically, and prices
have come down accordingly. Some environmentalists have
claimed that fracking has contaminated ground water. (See
“The Once and Future Fuel,” Region Focus, Second/Third
Quarter 2012.) But few, if any, peer-reviewed studies have
reached that conclusion.
The federal government is researching contamination
claims but doing nothing to slow down the fracking frenzy,
Yuen notes. To the contrary, the Department of Energy
(DOE) has been approving more export terminals for liquefied natural gas, and the Obama administration’s Climate
Action Plan relies heavily on burning less coal and more
natural gas. “It looks like the government is supportive of
natural gas development,” Yuen says, “and the whole of
natural gas development right now is based on the ability to
drill and hydraulically fracture.”
[Chart: Dow Jones U.S. Coal Index, daily closing values, Jan. 2, 2002, through Jan. 2, 2014. NOTE: The index measures the market value of publicly traded coal companies in the United States; the shaded area indicates the recession of 2007-2009. SOURCE: Dow Jones]

In the long run, natural gas will fare better than coal
under more stringent CO2 standards, but the more immediate problem for West Virginia’s thermal coal is the relative
price of coal and natural gas. After spiking above $13 per million Btu in 2008, the price of natural gas generally has ranged
from $2.50 to $5.
“At $3 gas or below, coal that is produced west of the
Mississippi can compete with natural gas, but coal east of
the Mississippi just absolutely cannot compete,” Sussman
says. During a few weeks in 2012, the gas price dipped below
$2, and electric utilities generated more power from natural
gas than coal for the first time in U.S. history. Since then, the
gas price has recovered to $4.50, but that’s still not high
enough to put the majority of West Virginia’s thermal coal in
the money, and no one is predicting a large increase anytime
soon. “Assuming the status quo for regulations and so on, it’s
tough to imagine natural gas prices getting anywhere close
to the levels where they were before fracking,” Sussman says.
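The $3 threshold Sussman describes is, at bottom, a comparison of fuel costs per megawatt-hour. The sketch below uses stand-in heat rates and an assumed delivered price for Appalachian coal, not figures from the article, to show why cheap gas pushes Eastern coal out of the dispatch order.

# Rough fuel-cost-per-megawatt-hour comparison. The heat rates (fuel burned
# per MWh) and the delivered Appalachian coal price are illustrative
# assumptions, not figures from the article.
GAS_HEAT_RATE = 7.0     # million Btu per MWh, efficient combined-cycle plant
COAL_HEAT_RATE = 10.5   # million Btu per MWh, typical older coal unit
COAL_PRICE = 3.25       # dollars per million Btu, assumed delivered Appalachian coal

def fuel_cost_per_mwh(heat_rate, fuel_price):
    return heat_rate * fuel_price

coal_cost = fuel_cost_per_mwh(COAL_HEAT_RATE, COAL_PRICE)
for gas_price in (2.0, 3.0, 4.5):
    gas_cost = fuel_cost_per_mwh(GAS_HEAT_RATE, gas_price)
    cheaper = "gas" if gas_cost < coal_cost else "coal"
    print(f"gas at ${gas_price}/MMBtu: gas ${gas_cost:.0f}/MWh vs. "
          f"coal ${coal_cost:.0f}/MWh -> {cheaper} dispatches first")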
As a result, electric utilities are shutting down old, inefficient coal plants and increasing production at gas plants.
Yuen expects the coal shutdowns to peak in 2015, but in that
year alone, there could be 15 plant retirements, he says. A lot
of those old plants are in Pennsylvania, New Jersey,
Maryland, and the Southeast, where power companies tend
to use West Virginia coal.
“As long as the natural gas price stays low, it can hide
many sins, but gas prices will go up sooner or later,” says
Howard Herzog, senior research engineer in the MIT
Energy Initiative in Cambridge, Mass. Herzog believes the
United States should continue developing “clean coal”
technologies that would create a more robust electricity-generation system by keeping coal in the energy mix.
He cites the example of New England: “We are at the end
of the gas pipeline, and we were never big on coal anyway,
and a lot of our coal plants went away,” he says. “We have
some nuclear, but we are very dependent on natural gas. Two
winters ago, this was no problem because it was a very mild
winter. Then we had a pretty cold winter last year, and the
price of natural gas to utilities went up” — driving wholesale
electricity prices four times higher than they were during
the previous winter.
Population centers in the Mid-Atlantic region might
get caught in the same trap if they become too reliant on
natural gas. West Virginia coal has served Mid-Atlantic utilities for many years, but coal transportation costs are higher
than natural gas transportation costs via pipeline, “especially now that the heart of Pennsylvania has one of the largest
gas fuel reserves in the world,” Yuen says.
Nuclear power also competes with West Virginia coal,
and nuclear plants emit almost no CO2, but no one has built
a new nuclear plant in the United States in more than 30
years, and utilities have been closing down some older
reactors. A few new units are under construction, but the
Fukushima disaster has put a damper on nuclear power —
much like the Three Mile Island accident did in 1979, less
than two years after Carter’s speech.

Bleak Prospects
The proposed CO2 standards do not apply to existing coal
plants, so the United States will continue burning coal for a
long time. About 40 percent of the nation’s electricity still
comes from coal, and EPA officials have indicated that proposed CO2 standards for existing coal plants — due out in
2014 — will be far less stringent than proposed requirements
for new plants.
EPA officials realize that coal plants remain a necessary
part of the electricity-generation mix, Sussman says, but
West Virginia’s thermal coal has lost key competitive advantages. Thermal coal supplies from the Illinois Basin and
Wyoming’s Powder River Basin are significantly cheaper
because open-pit mining is less costly than the underground
mining and mountaintop removal that are common in West
Virginia. Also, West Virginia mining accidents in 2006 and
2010 prompted new regulations that have made underground mining and mountaintop removal even more
expensive relative to open-pit mining.
Sussman estimates the gap by comparing Alpha Natural
Resources, the largest Eastern producer, with Arch Coal, a
large producer in the Powder River Basin. From 2007 to
2012, Alpha’s cost per ton increased 64 percent, while Arch’s
cost per ton increased only 37 percent, he says. “The majority of that difference can be attributed to increased mining
regulations” that affect Eastern coal mining more than
Western coal mining.
East of the Mississippi River, “low-cost regions, like parts
of Pennsylvania and parts of Illinois, will be just fine,”
Sussman predicts. “Higher-cost regions, which unfortunately would encompass West Virginia, are going to have a much
more difficult time.”
West Virginia remains, however, a large producer of metallurgical coal. West Virginia’s production is currently split
roughly 50-50 between thermal and met coal, but nearly
two-thirds of the state’s coal sales come from met coal.
The state has exported more met coal in recent years, but
global demand for U.S. coal softened in 2013. China continues to drive worldwide demand for met coal, and West
Virginia has exported more coal to China in recent years, but
Yuen expects that trend to decline as well.

“A lot of steel-making happens in China these days, but
there is an economic transition from this investment-led
growth model in China toward something a little more consumer-led,” he says. “Then the demand for steel may not
really be there as strong as what other people expect.”

Clean Coal
The future of West Virginia’s thermal coal may hinge on new
technologies for carbon capture and sequestration (CCS) —
a way to capture CO2 from coal plants and inject it into rock
formations far below the earth’s surface.
“The U.S. appears to have considerable capacity for carbon sequestration located suitably in relation to coal plants,”
wrote Ernest Moniz in 2007, while he was director of the
MIT Energy Initiative. (He has since become secretary of
energy.) Carbon capture, however, presents a bigger challenge. “A dramatic cost reduction is needed, and this calls for
a large-scale research program that emphasizes new concepts and scales the promising ones to commercial
demonstration,” Moniz wrote. “If this is not accomplished,
coal would eventually be squeezed out of the U.S. electricity
mix by future stringent CO2 emission constraints.”
Six years later, the CO2 constraints are imminent, but
only one large-scale coal plant with CCS technology is under
construction in the United States. Mississippi Power is
building a 582-megawatt plant in Kemper County, Miss., but
the project has been plagued by delays and cost overruns
despite a DOE grant of $270 million and another $133 million in tax credits.
MIT’s Herzog advocates a market-based approach
instead of relying on subsidies from the DOE and “command-and-control” standards from EPA. “The only way you
are going to get markets that are big enough and pay enough
to make these CCS projects commercial is through climate
policy, and the climate road we are on now — new-source performance standards — gives gas a free ride," he argues.
“If gas gets a free ride and coal takes a hit, it just widens that
gap.” Instead, Herzog believes the government should tax all
CO2 emissions. This approach would affect coal plants
about twice as much as gas plants, but it could narrow the
regulatory gap compared with a system that requires expensive CCS for new coal plants and nothing for new gas plants.
CCS projects start to look attractive if carbon is taxed
between $50 and $100 per ton of CO2, according to Herzog.
“The question is how would the competitors do — such as
nuclear and the like. It’s hard to tell, but there is a strong
feeling in the CCS community that we would see modern
coal plants.”
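The arithmetic behind the "twice as much" point is simple. Using rough, commonly cited emission intensities, about one ton of CO2 per megawatt-hour for coal and a bit under half that for an efficient gas plant (approximations, not figures from the article), a tax in Herzog's range adds roughly the following to generation costs.

# Back-of-the-envelope cost of a carbon tax per megawatt-hour generated.
# The emission intensities are rough approximations, not figures from
# the article.
COAL_TONS_CO2_PER_MWH = 1.0
GAS_TONS_CO2_PER_MWH = 0.45   # efficient combined-cycle plant

for tax in (50, 100):  # dollars per ton of CO2
    coal_hit = tax * COAL_TONS_CO2_PER_MWH
    gas_hit = tax * GAS_TONS_CO2_PER_MWH
    print(f"${tax}/ton tax: coal +${coal_hit:.0f}/MWh, gas +${gas_hit:.0f}/MWh")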
Herzog concedes, however, that the politics of carbon
taxation are extremely difficult. “It’s internal U.S. politics
and also international politics,” he says. “You have China just
building coal plants without CCS. If you can’t get them into
the fold, then you start to have issues of international competitiveness.”
A sustained period of cheap and abundant natural gas
would give many U.S. industries a new competitive advantage, but the economic consequences for thermal coal
mining in West Virginia would be dire. Researchers at West
Virginia University’s College of Business and Economics
expect overall coal employment in the state to decline gradually through 2018, but private sector analysts anticipate far
more severe job losses.
“If gas prices stay where they are, Central Appalachian
production will be down somewhere between 25 percent and
40 percent five years from now,” Sussman says. If productivity continues to decline due to increased regulation and
coal-seam depletion, coal mining employment in West
Virginia likely will decline substantially as well. “The companies have to focus on low-cost production, and you don’t
lower your cost by hiring more miners.”
EF

READINGS
“Annual Energy Outlook 2013 with Projections to 2040.”
U.S. Energy Information Administration, April 2013.

Verleger, Philip K. Jr. “The Amazing Tale of U.S. Energy
Independence.” The International Economy, Spring 2012, p. 8.

“Energy 2020: Independence Day.” Citi Global Perspectives and
Solutions, February 2013.

“West Virginia Economic Outlook 2014.” Bureau of Business and
Economic Research, West Virginia University College of Business
and Economics.

INTERVIEW

Mark Gertler

Mark Gertler, one of the most cited researchers in macroeconomics, has spent much of his career looking at how conditions in financial markets affect the real economy — Main Street. In doing so, he has shed light on one of the curious properties of modern economies: Setbacks to an economy that seem relatively minor in the overall scheme of things can nonetheless lead to large negative effects across the system. His work on these issues, with collaborators, innovatively combined elements of microeconomics, banking and finance, and business cycle theory.

Gertler met one of those collaborators in the early 1980s when he was an economics Ph.D. student at Stanford. Gertler and a new junior professor, one with an outsized interest in the Great Depression, took a liking to one another and became frequent co-authors. Among the concepts that emerged from their partnership was that of "financial accelerators" — mechanisms that could cause a short-lived shock to financial conditions to translate into persistent fluctuations in the economy. Later, in 2007 and 2008, Gertler's collaborator, Ben Bernanke, would put the lessons of their work to practical use as Bernanke led the formulation of the Fed's responses to the financial crisis.

Apart from an advisory role at the New York Fed and a one-year stint as a visiting scholar there, Gertler himself has never walked the well-trodden path between university economics departments and positions in the Fed, the White House, the Treasury Department, and elsewhere in government. He has spent most of his career at New York University, where — in addition to his research and teaching — he led an aggressive long-term effort as department chairman to upgrade the school's status within the discipline. Formerly "a solid small-market team," in the words of the New York Times Magazine, NYU became, in the eyes of many, a top-tier department.

In addition, Gertler performed a signal service to the Richmond Fed by serving on the dissertation committee of its future president, Jeffrey Lacker, when Lacker was a doctoral student at the University of Wisconsin-Madison.

David A. Price interviewed Gertler at his office at Columbia University, where he is visiting for the academic year, in December 2013.

➤ Present Position
Henry and Lucy Moses Professor of Economics, New York University (on leave); Wesley Clair Mitchell Visiting Professor of Economics, Columbia University
➤ Selected Previous Positions
Visiting Professor, MIT (2002); Visiting Professor, Yale University (1997); Visiting Professor, Princeton University (1993); Assistant Professor, Associate Professor, and Professor, University of Wisconsin (1981-1989); Assistant Professor, Cornell University (1978-1981)
➤ Education
B.A. (1973), University of Wisconsin; Ph.D. (1978), Stanford University
➤ Selected Publications
"The Financial Accelerator and the Flight to Quality," Review of Economics and Statistics, 1996 (with Ben Bernanke and Simon Gilchrist); "Agency Costs, Net Worth, and Business Fluctuations," American Economic Review, 1989 (with Ben Bernanke); numerous other articles in such journals as the Journal of Money, Credit and Banking, Journal of Monetary Economics, Journal of Political Economy, and Quarterly Journal of Economics

EF: How did you become interested in economics in
general and macroeconomics in particular?
Gertler: When I was an undergraduate, like most undergraduates in my day, I was interested in law school. But I
realized in my junior year that my heart wasn’t completely in
it. I happened to take intermediate macroeconomics and I
had a great teacher, Don Nichols. He was inspiring. What I
liked about macroeconomics was that it was math applied to
real-world problems. It was interesting to see how you could
set up a model, shift some curves around, and possibly do
some good with it in terms of economic policy.
It just seemed like a nice combination of mathematics, in
which I was interested, and something that seemed socially
useful. I found it both interesting and relevant, so I figured
maybe it was my calling.
EF: Is there anything you’ve learned from the Great
Recession about the role of finance that you weren’t
aware of before?
Gertler: I liken the crisis to 9/11; that is, there was an inkling
that something bad could happen. I think there was some
sense it was going to be associated with all the financial
innovation, but just like with 9/11, we couldn’t see it coming.

When we look back, we can piece everything together and make sense of things, but what we didn't really understand was the fragility in the shadow banking system, how it made the economy very vulnerable. I always think of the Warren Buffett line, "You don't know who's naked until you drain the swimming pool." That's sort of what happened here.

I think when we look back on the crisis, we can explain most of what happened given existing theory. It's just we couldn't see it at the time.
EF: What should policymakers have done differently in
the run-up to the crisis?
Gertler: Perhaps the biggest mistake involved regulation in
the subprime lending market. We all thought homeownership sounded like a very appealing idea, but getting
everybody into the housing market involved lowering lending standards, which meant risky mortgage lending. Second,
we let a largely unregulated intermediary sector grow up outside the commercial banking sector. The biggest mistakes
probably involved too much deregulation.
EF: What do you think is the best explanation for the
policies that were pursued?
Gertler: At the time, I think it was partly unbridled belief in
the market — that financial markets are competitive
markets, and they ought to function well, not taking into
account that any individual is just concerned about his or her
welfare, not about the market as a whole or the exposure of
the market as a whole. And so you had this whole system
grow up without any outside monitoring by the government.
It just had individuals making these trades and making these
bets; nobody was adding everything up and understanding
the risk exposure. And there was this attitude that we ought
to be inclusive about homeownership — that was going on
as well.
Plus, complacency set in. We had the Great Moderation
of the 1980s and 1990s, and we all thought we’d solved the
major problems in macroeconomics. There were some
prominent macroeconomists saying, “Look, we shouldn’t be
wasting our time on these conventional issues; we’ve already
solved them.” That led to most people just being asleep at
the wheel.
EF: Do you think that monetary policy should have
been different during this period?
Gertler: It’s possible that short-term interest rates contributed to the growth of the subprime market, because
there were a number of borrowers taking variable rate mortgages, but I think that consideration was second order,

EF: Speaking of interest rates, would you say the low
interest rates today are a result of monetary policy
levers that are being adjusted in Washington, or do they
simply ratify conditions in the real economy?
Gertler: I think it’s a little bit of both. The economy is
weak. The natural rates of interest are low, and they’re
arguably negative now, so the Fed has pushed down short-term rates as far as it can; we're at the zero bound, or just about.
As for longer-term rates, I think they’re influenced both by
where the economy naturally is and by policy. I think there’s
an expectation that three to four years from now the economy will recover, pushing future short rates up, which puts
upward pressure on long rates. On the other hand, we’ve had
a lot of quantitative easing, which puts downward pressure
on long rates, so I say for longer-term rates it’s both policy
and the natural forces of the economy at work.
EF: Along with Ben Bernanke and Simon Gilchrist, you
helped to develop the concept of financial accelerators,
linking financial market conditions with those of the
real economy. Can you explain what you found?
Gertler: I think the way we got started was that I had done
some earlier work with Bernanke, and we were interested in
understanding why there was such a sharp contraction in the
Great Depression and why it was so persistent. We were
drawn to a theory originally put forward by Irving Fisher in
1933, the debt-deflation theory. Fisher argued that the deflation at the time increased the real debt burden of borrowers,
and that led to a reduction in their spending, which put
downward pressure on the economy, and further deflation,
and so on. What we saw in that was a kind of feedback
mechanism between the real sector and the balance sheets in
the financial sector that amplified the cycle.
That’s what we wanted to capture with the financial
accelerator, that is, the mutual feedback between the
real sector and the financial sector. We also wanted to
capture the primary importance of balance sheets —
when balance sheets weaken, that causes credit to tighten,
leading to downward pressure on the real economy,
which further weakens balance sheets. I think that’s what
one saw in the financial crisis.
So we were inspired by Fisher’s
debt-deflation theory, and we were
trying to formalize that idea using
modern methods. Then we found
some other implications, like the role
of credit spreads: When balance
sheets weaken, credit spreads
increase, and credit spreads are a natural indicator of financial distress.
And again, you saw something similar
in the current crisis — with a weakening of the balance sheets of financial
institutions and households, you saw
credit spreads going up, and the real
economy going down.
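The loop Gertler describes can be caricatured in a few lines. The toy recursion below is not the Bernanke-Gertler-Gilchrist model; it is an invented illustration of the mechanism he sketches, in which weaker balance sheets widen credit spreads, wider spreads depress output, and lower output weakens balance sheets further, so a one-time shock keeps echoing.

# Invented toy loop, not the Bernanke-Gertler-Gilchrist model: a one-time
# hit to borrower net worth widens credit spreads, which lowers output,
# which erodes net worth again, so the initial shock is amplified and
# drawn out over several periods.
def toy_accelerator(shock=-0.10, periods=8, spread_sens=0.5, output_sens=0.6,
                    feedback=0.6):
    nw, path = shock, []
    for _ in range(periods):
        spread = -spread_sens * nw            # weaker balance sheets -> wider spreads
        output = -output_sens * spread        # wider spreads -> lower spending and output
        nw = feedback * nw + output           # lower output -> weaker balance sheets next period
        path.append(round(nw, 3))
    return path

# With the credit channel on, the initial hit decays slowly; shut the
# channel off (spread_sens=0) and the same shock dies out much faster.
print(toy_accelerator())
print(toy_accelerator(spread_sens=0.0))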
I didn’t speak to Bernanke a lot
during the height of the crisis. But
one moment I caught him, asked him
how things were going, and he said,
“Well, on the bright side, we may
have some evidence for the financial
accelerator.”

EF: That sounds like gallows humor, and not —

Gertler: No, not enthusiasm, no. [Laughs.] Not enthusiasm at all. He would have been happy to find the theory completely wrong.

EF: When you and Bernanke started this work, were you drawn to the Great Depression as a subject because it's like Mount Everest for mountaineers, or what was the attraction?

Gertler: It was Bernanke who was originally inspired to work on the Depression, and his motivation was that if you're interested in geography, you study earthquakes. I got really interested in it through him. At the time we started working together, you were starting to see financial crises around the globe, some in emerging markets, and then also the banking crises in the late 1980s in the United States. That made us think, wow, maybe this stuff is still relevant. Maybe it's not just a phenomenon of the Great Depression.

EF: How did you get to know each other?

Gertler: We had a mutual friend, Jeremy Bulow. Jeremy was a student at MIT, where Bernanke studied, but he would spend time at Stanford, where I studied. In the early 1980s, Bernanke was coming to Stanford as I was leaving. Jeremy had actually sublet his house from Bob Hall; it was a rather huge house, so he needed roommates. He invited Bernanke and his wife and me to sublet the house with him, which we did, and that's how I got to know Bernanke.

EF: The Fed, as you know, has been buying and selling private securities on a significant scale since the financial crisis. You've suggested that once a crisis calms down, the buying and selling of private securities should be carried out by the Treasury Department rather than the Fed. Why is that?

Gertler: There are politics involved in the holding of private securities, and you'd like to keep the Fed as independent of politics as possible. On the other side of the coin, the Fed is the only agency in Washington that can respond quickly to a crisis. In this case, the mortgage market was collapsing, the mortgage-backed securities market was collapsing, and so I think it was important for the Fed to go in and act as a lender of last resort, as it did. But then, as time passes, this job should be taken over by a political entity, that is, by the Treasury. That's what happened in the savings and loan crisis; we set up the Resolution Trust Corporation that acted like a public financial intermediary. Right now, the Fed is acting like a public financial intermediary, and I think that for political reasons, the Treasury is unwilling to assume responsibility for the mortgage portfolio.

So I think it was entirely appropriate for the Fed to get into that market, because it had to fulfill its responsibilities as a lender of last resort, but now it would be better for the Treasury to take it over.

EF: As a result of its asset purchase programs, the Fed now has about $2.4 trillion in excess reserves from depository institutions. But since the Fed now pays interest on reserves, the money doesn't flow into the real economy. Have policymakers found a free lunch?

Gertler: The way I think about it is that we had a collapse of the shadow banking system, a drastic shrinkage of the shadow banking system. What were shadow banks doing? They were holding mortgage-backed securities and issuing short-term debt to finance them. What's happened is that that market has moved to the Fed. The Fed now is acting as an investment bank, and it's taking over those activities. Instead of Lehman Brothers holding these mortgage-backed
securities, the Fed is. And the Fed is
issuing deposits, if you will, against
these securities, the same way these
private financial institutions did. It’s
easier for the Fed, because it
can issue essentially risk-free
government debt, and these other
institutions couldn’t. I don’t think
there’s any free lunch going around,
other than that it’s easier for the
Fed to borrow in a crisis than it is for
a private financial institution.


EF: Does the fact that the quantity of reserves is so high
matter for how the economy is going to perform in the
future?
Gertler: It’s possible, as interest rates go up, that the Fed
could take some capital losses, as private financial institutions do. But the beauty of the Fed is it doesn’t have to mark
to market; it can hold these assets until maturity, and let
them run off. So I’m in a camp that thinks there’s been probably a little too much preoccupation with the size of the
balance sheet. It could be a problem if the economy continues to grow slowly, and the balance just keeps growing
without bound, but I don’t think we’re quite there yet.
EF: Your work with Jordi Gali and Richard Clarida in
the 1990s helped to reorient the debate on the Great
Inflation and the Great Moderation. The role of monetary policy in these episodes seems self-evident now,
looking back. But it wasn’t then, was it?
Gertler: I certainly think there was the notion going around
that the Fed was highly accommodative in the 1970s, and
then Volcker and Greenspan changed that with more focus
on inflation. What we did was fairly simple; we basically
used the Taylor Rule analysis as just a way to say sharply what
was going on.
So I think what we did was kind of straightforward. We
just happened to be at the right place at the right time.
The Taylor Rule apparatus was there, and the econometrics
techniques of Lars Hansen’s that we used were there.
We were in a good position to say something.
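For readers who want the apparatus spelled out, a Taylor-type rule links the policy rate to inflation and the output gap. A stylized version, shown here purely as an illustration (these are Taylor’s original 1993 coefficients, not the forward-looking specification Clarida, Gali, and Gertler estimated), is

    i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,x_t

where i_t is the nominal federal funds rate, r^{*} the equilibrium real rate, \pi_t inflation, \pi^{*} the inflation target, and x_t the output gap. In this framework, an estimated response to inflation of less than one-for-one is a sharp way of saying that policy was accommodative.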
EF: There was a perception in the 1970s that the price
pressure was coming from negative shocks, namely oil
price shocks. Did you feel you were swimming upstream
to some extent in telling your story that it was monetary
policy?
Gertler: Not really. I think the conventional wisdom of the
time was that oil shocks, and this goes back to Friedman,
had put on transitory pressure, but that you needed monetary policy to accommodate it and make it persistent.
We were able to use this really simple setup to clearly show

what was going on, but I think the
ideas were certainly floating around
at the time.
EF: What do you think are the
most important questions about
the role of finance in the macroeconomy that are still open at the
moment?

Gertler: I think that the basic
questions are still open. The first is,
what do we do ex ante before a crisis? How should regulation
be designed? That’s a huge question that we still haven’t figured out. For example, what’s the optimal capital ratio for a
financial institution? And, second, how far should the regulatory net be spread to cover every systemically relevant
financial institution? How do we figure out which ones are
and which aren’t? When we lay down a regulation, how do
we figure out whether some financial institutions are going
to get around it?
Then what do we do ex post? When we intervene, we
want to intervene to help prevent a crisis from creating a
recession or depression, but on the other hand, there’s an
issue of moral hazard. Just knowing we are going to intervene is going to make some financial institutions take more
risk. I think those questions still largely haven’t been
answered.
EF: As you know, Congress addressed many of those
questions in the Dodd-Frank Act. Are there aspects of it
that you think are particularly ill advised or well
advised?
Gertler: The first order thing is we needed to do something
like Dodd-Frank. If we had gone through this crisis, one
where we bailed out many large financial institutions, and
then left it at that, it would have been laying the seeds for
the next crisis. So something like Dodd-Frank was desperately needed. There was no simple way to do it cleanly, and
there’s still a long way to go, but it was an important first step.
EF: You mentioned capital requirements. Is there a
sense that capital requirements and related requirements in places like Basel III are chosen in a way that
isn’t firmly grounded in empirical work?
Gertler: I’m reminded of a comment Alan Blinder makes.
There are two types of research: interesting but not important, and incredibly boring but important. And figuring out
optimal capital ratios fits in the latter category.
The reality is that we don’t have definitive empirical work,
and we don’t have definitive theory that gives us a clear answer.
EF: Moving to another side of your work, you reportedly persuaded the president of NYU, John Sexton, in
the early 2000s to make a major bet on NYU’s economics
department. Is that what happened, and if so,
how did you make the case?

Gertler: It’s nice to tell the story that way, but let me set the
record straight: It came from Sexton. He saw that our
department had been doing well, and there were a number of
people who contributed to our department doing well in
recruiting over the years. Jess Benhabib played an important
role. I was also involved. Douglas Gale was another; he was
chairman before me.
Because our department had been doing well, and
because Sexton was looking to make a splash, he turned to
economics. He figured economics was a high-profile field
and we’d shown good judgment in our hiring. He also figured
out that economics is relatively cheap because we have so
many students.

EF: Having reached the decision to invest in economics,
what guidance did he give you to build up the department?

Gertler: Just to be aggressive. We were lucky Tom Sargent
came along. Nobody could believe it at the time; usually,
when you recruit, the batting average isn’t very high, and it’s
lower the greater the stature of the person you’re going after.
But Sargent had expressed some interest. He had offers at
the time from MIT and Chicago, and we thought there was
no way he was interested in us, but he kept telling us he was.
Sure enough, it worked out, and that was probably the key.
Then we had a number of other good people come.

EF: What were the biggest challenges in attracting talent
to an economics department that wasn’t yet in the top tier?

Gertler: In my own case, what attracted me to NYU in 1990
was that they had a couple of really good researchers, Boyan
Jovanovic and Jess Benhabib. I looked at them and said,
“Well, these guys are very successful, so even though it’s not a
top-ranked department, I could come here and do well.”
Part of the recruiting strategy was to play off of New
York. The city was very attractive to Europeans and South
Americans. You look at our department and see there’s a
large mixture of people from these countries. And then of
course when Sargent came in 2002, that kind of changed
things. [Sargent received the Nobel Prize in economics in
2011 with Christopher Sims of Princeton University.]
Another thing we did at NYU is we were very eclectic.
We didn’t want to be in one camp or the other; we just
wanted people who were good, and whose work everybody
would read. I think a number of other departments, if I may
say, are following that style. When I first came out, you had
“freshwater” economists from the Midwest — Minnesota,
Chicago, and so on; you had “saltwater” economists on the
East Coast. If you look at the field now, those distinctions
have just blurred, and I would say our department was one of
the first to make a strong effort to blur that distinction. You
have an honest competition of ideas.

EF: Was there a time, as this was unfolding, when you
realized that people seemed to be looking at NYU
differently?

Gertler: I found the most interesting barometer was the
graduate students, when the quality of graduate students we
were drawing really improved. Now the faculty jokes, but it’s
not completely joking, that they’re not sure they could even
get accepted into our department now. I would say that the
clearest signal was our ability to attract graduate students.

EF: What do you think about the role of blogs in facilitating
or hindering communication among economists?
And between economists and non-economists?

Gertler: I occasionally read the blogs, but more for entertainment
than to learn something. When they’re describing
different opinions about the economy and what might be
going on, I find that kind of interesting. As a place to have
scientific debates, I’m not so sure.

EF: With regard to your influences, you mentioned
Professor Nichols at the outset. Were there others who
were strong influences on you in your development as an
economist?

Gertler: There was a spectacular group of macroeconomists
in the cohort ahead of me. I think there were three in particular
who had a lot of influence, namely Tom Sargent, Bob
Hall, and John Taylor. The common denominator of the
three is they all engaged in significant debates in macroeconomics;
they all asked significant questions. And they all in
their work used a mix of state-of-the-art theory and empirical
methods. For me, they were very good role models.
Then, I’ve been fortunate to have, throughout my career,
excellent co-authors. Early on, I met Rao Aiyagari when I
was an assistant professor at Wisconsin, and he really
educated me as to the developments and methodology
coming out of Minnesota, which I had totally missed out on
in my Ph.D. training. Then I also associated with Ben
Bernanke, and of course that was a great experience. For me,
working with Bernanke highlighted most of all the importance
of asking good questions and backing up the answers
with data.
EF




ECONOMIC HISTORY
Water Wars
BY JESSIE ROMERO

“Whiskey is for drinking,
and water is for fighting.”
It’s a saying often heard
in the arid American West, where
precipitation in some states averages
as little as five inches per year, and
multiple states may depend on a
single watershed to supply their
homes, farms, and industry. But over
the past two decades, water wars have
become a staple of politics in the relatively water-rich Southeast as well.
In the Fifth District alone, competition for water has pitted Maryland
against Virginia, Virginia against
North Carolina, and North Carolina
against South Carolina. Farther south,
Georgia, Alabama, and Florida have
been battling over the Apalachicola-Chattahoochee-Flint river basin since
1990, a dispute that also affects South
Carolina.
Historically, the South’s rivers and
lakes have provided ample water
to satisfy the needs of both city
dwellers and farmers, fishermen and
manufacturers. But the region’s rapid

[Photo: Western gold miners, such as these in El Dorado, Calif., circa 1850, built elaborate ditch systems throughout the countryside to divert water for panning gold and operating mines. PHOTOGRAPHY: LIBRARY OF CONGRESS]

economic development, combined
with a series of droughts beginning in
the 1990s, has increased the tensions
among the various interest groups.
The result has been a series of prolonged and expensive lawsuits. As
population growth and climate change
place new demands on the country’s
water supplies, states and metro areas
may need to develop new solutions to
allocate an increasingly scarce
resource.

Go West, Young Man
In 1845, journalist John O’Sullivan
wrote that it was Americans’ “manifest destiny” to migrate westward,
carrying the “great experiment of
liberty” to the Pacific Ocean. Millions
of Americans heeded his call in the
decades that followed, as gold was
discovered in California, the
Homestead Act gave free land to new
settlers, and the Transcontinental
Railroad connected the coasts.
Between 1860 and 1920, the population of California grew from 380,000
to nearly 3.5 million.
All those people needed water, and
miners, farmers, and city officials
competed fiercely to divert water
from the region’s rivers and streams.
Sometimes those competitions
turned violent. In 1874, for example, a
Colorado man named Elijah Gibbs got
into a fight with a neighboring rancher, George Harrington, about drawing
water from a nearby creek. Later the
same night, someone set fire to one of
Harrington’s outbuildings. When he
went out to investigate, he was shot
and killed. The killing led to a yearlong feud known as the Lake County
War that took the lives of several
more men, including Judge Elias Dyer,
who was shot in his own courtroom.
In the early 1900s, the farmers and
ranchers of Owens Valley, in eastern
California, were supposed to be the
beneficiaries of a federal irrigation


project that would bring the Owens River to their land. But
more than 200 miles away, officials in Los Angeles realized
that the city couldn’t grow unless it found a new source of
water, so they began buying up land and water rights in
Owens Valley — using quite a bit of bribery and deception,
according to many accounts. By 1913, Los Angeles had completed building an aqueduct that diverted nearly all of the
Owens River to the San Fernando Valley, and just a decade
later the Owens Lake had dried up. Owens Valley residents
twice blew up sections of the aqueduct to protest the loss of
their water, but the aqueduct was repaired, and Los Angeles
grew into the second-largest city in the United States.
Less violent but no less notorious is the ongoing battle
for water from the Colorado River, which supplies water for
30 million people in seven different states and in Mexico. In
1922, after years of disagreement, then-Secretary of
Commerce Herbert Hoover negotiated the Colorado River
Compact. The compact divided the states into the Upper
Division (Colorado, New Mexico, Utah, and Wyoming) and
the Lower Division (Arizona, California, and Nevada) and
apportioned the water equally between the two divisions.
The compact was controversial from the start: Arizona
refused to ratify it until 1944, and even called out the
National Guard in 1934 in an unsuccessful attempt to block
California from building a dam. Over the years, numerous
lawsuits have been filed by tribal groups, environmental
organizations, and the states themselves, and every few years
the states have had to renegotiate certain details of the compact. (During the 2008 presidential election, John McCain
was leading in Colorado until he said in an interview that the
compact should be changed to give the Lower Division
states more water — infuriating Colorado politicians and
perhaps costing him the state’s electoral votes.) Recently, it
has become clear that the compact was signed during a period of unusually heavy rainfall, making the current
appropriations unrealistic. That fact, combined with rapid
population growth and more than a decade of severe
drought, has left federal and state authorities scrambling to
manage the existing supply and uncertain about how the
water will be allocated in the future.

Oysters and Office Parks
The first water war in the Fifth District predates the existence of the United States. In 1632, King Charles I of
England granted all of the Potomac River to the colony of
Maryland, giving it access to the river for transportation,
fishing, and, most lucratively, oyster dredging. Virginia was
somewhat mollified by getting rights to part of the
Chesapeake Bay in exchange, but the truce didn’t last for
long, and for more than three centuries there was periodic
violence between oyster dredgers, fishermen, and the state
governments. As recently as 1947, the Washington Post
wrote about the fights between Marylanders and Virginians:
“Already the sound of rifle fire has echoed across the
Potomac River. Only 50 miles from Washington men are
shooting at one another. The night is quiet until suddenly

shots snap through the air. Possibly a man is dead, perhaps a
boat is taken, but the oyster war will go on the next night and
the next.”
By the end of the 20th century, Northern Virginia’s economy was booming and the region depended on the Potomac
River to power its looming office towers and hydrate its rapidly increasing population. Between 1993 and 2003, water
withdrawals from the Potomac by the Fairfax County Water
Authority, which serves Northern Virginia, increased 62 percent, compared to an increase of 19 percent for the D.C.
metro area as a whole.
In 1996, Virginia wanted to build an additional withdrawal pipe, but Maryland denied the request because it was
concerned about the effects of Virginia’s sprawl on the
region. Virginia spent several years filing administrative
appeals with Maryland’s Department of the Environment,
to no avail, and finally filed a complaint with the U.S.
Supreme Court in 2000. (The Court has original jurisdiction
over lawsuits between states.) The court ruled in Virginia’s
favor in 2003, granting it equal access to the river, and
Northern Virginia’s growth has continued unabated.

On Second Thought, Young Man, Go South
It’s not only Northern Virginia that is growing. In the South
as a whole, the population has more than doubled over the
past 50 years, growing about 30 percent faster on average
than the nation as a whole. Just since 2001, the population of
the Southeast has grown twice as fast as the Northeast.
Today it is the largest population region in the country, with
60 million people.
Many factors have contributed to that growth — the
advent of air conditioning, for example, made the hot climate tolerable — but a major draw has been jobs, especially
in manufacturing. First, textile and furniture manufacturing
companies moved from the Northeast to the South in search
of cheaper labor. As those industries moved overseas in
search of even cheaper labor, the region started attracting
automobile manufacturers from the Midwest and from
overseas. Most recently, a cluster of aerospace manufacturing companies has formed in South Carolina, and numerous
advanced manufacturing firms have located around
Charlotte, N.C.
Over the past three decades, Charlotte also has become
the second-largest financial center in the country. The population more than doubled between 1980 and 2011, and from
2000 to 2010 Charlotte was the fastest-growing city in the
country, with population growth of more than 64 percent,
compared to less than 10 percent in the country as a whole.
That growth has placed serious demands on the Catawba
River, which supplies more than 30 cities in the Carolinas
with drinking water. The Catawba River begins in the Blue
Ridge Mountains in North Carolina and turns into the
Wateree River in South Carolina before reaching the
Atlantic Ocean. In 2007, North Carolina’s Environmental
Management Commission approved the diversion of
10 million gallons of water per day from the Catawba to two
Charlotte suburbs, in addition to the 33 million gallons that
were already being diverted for the city. (Industrial users in
the area, including Duke Energy, withdraw an additional
40 million gallons per day.) The transfers reduced the
amount of water available for downstream users in South
Carolina, which sued to stop them. The U.S. Supreme Court
ruled on procedural matters early in 2010, and the states
eventually reached a settlement later that year. (The settlement
laid out ground rules for future water transfers but did
not limit current transfers.)
In the 1980s, North Carolina was on the opposite side of
a dispute with Virginia over the water in Lake Gaston, which
straddles the North Carolina-Virginia border. At that time,
Virginia Beach did not have an independent source of freshwater
and bought surplus water from Norfolk. In 1982,
concerned about the reliability of that surplus, city officials
decided that the city needed to find its own water and set
out to build a 76-mile pipeline from Lake Gaston. North
Carolina sued, Virginia Beach countersued, and over the
next 15 years, the states fought about the effects of the
pipeline on Lake Gaston’s striped bass population, the definition
of the word “discharge,” and alleged collusion
between federal agency officials and North Carolina officials.
The case eventually reached the U.S. Court of Appeals
for the D.C. Circuit, where more than 40 states’ attorneys
general and the Justice Department filed friend-of-the-court
briefs in support of North Carolina’s right to block the
pipeline. Still, the court ruled in Virginia Beach’s favor, and
today the city is powered by 60 million gallons per day of
Lake Gaston water.
Perhaps the most contentious water fight in the South is
occurring outside the Fifth District, among Georgia,
Alabama, and Florida. Known as the “tri-state water war,”
the dispute is over the Apalachicola-Chattahoochee-Flint
basin, which begins in northwest Georgia, flows along the
border with Alabama, and empties into the Apalachicola
Bay in Florida. In 1956 the Army Corps of Engineers
completed the Buford Dam on the Chattahoochee River,
creating Lake Lanier in northwest Georgia. Since 1990, the
three states have been involved in multiple lawsuits and
failed negotiations over how the Corps should allocate the
lake’s water. Georgia wants the water for booming Atlanta;
Alabama is worried about Atlanta getting more than its fair
share; and Florida is concerned that reduced water flows will
hurt the oysters in the Apalachicola Bay. The dispute
appeared close to resolution in 2011, after the U.S. Court of
Appeals for the 11th Circuit ruled in favor of Georgia on
various issues, but Florida filed a new suit against Georgia in
the U.S. Supreme Court in October 2013. The Court has yet
to decide whether it will hear the case.
Some people are concerned that Georgia might turn to
the Savannah River, along the border with South Carolina,
to meet Atlanta’s water needs. Georgia officials assert that
they remain focused on Lake Lanier, but South Carolina has
threatened legal action over Georgia’s withdrawals from the
Savannah. Last February, legislators from the two states
formed the Savannah River Basin Caucus to try to settle
their differences outside the courts. So far, no one has sued.

Who Owns the Water?
The rules governing water are a jumble of common law, state
legislation, federal environmental regulations, interstate
compacts, and private deals. But underlying that complicated
mix are two basic principles: riparian rights, common in
the East, and prior appropriation, common in the West.
In the East, where water is plentiful, “riparian” rights are
accorded to whomever owns the land through which the
water flows. That person or entity does not own the water
itself, but has a right to use it as long as they do not infringe
on usage by other riparian owners, such as other homeowners
along a lakefront or a city farther downstream. Under the
riparian system, water rights can only be transferred with
the sale of the land.
Riparian rights were borrowed from English common
law, and U.S. courts initially maintained the English tradition
that a riparian owner could not disturb the “natural flow” of
the water. But by the mid-1800s, more and more industrial
users needed water to power their mills, and conflict
abounded between mill owners who wanted to build dams
and other users up- and downstream, who might see their
fields flooded or their own power source diminished. In
their efforts to settle these disputes, the courts began to
allow riparian owners to divert water for any “reasonable
use,” generally defined as economically productive use. “In
pre-industrial times, the focus was on the rights of a riparian
user to the quiet enjoyment of their property,” says Richard
Whisnant, a professor in the School of Government at the
University of North Carolina at Chapel Hill and the former
general counsel for the North Carolina Department of
Environment, Health and Natural Resources. But as industry
grew, “the courts were trying to figure out ways that they
could turn these disputes into something that promoted
development. They wanted to give priority to water users
who were generating economic activity.”
Economic activity also was at the center of the Western
system of prior appropriation, or “first in time, first in right.”
Under this system, the first person to divert a water source
for “beneficial use,” such as farming or industry, becomes the
senior rights holder, regardless of who owns the land
adjacent to the water. Each year the user with the most senior
appropriation gets their allotment first, and users with
later appropriation dates get the leftovers. In a dry year,
that might mean that more junior rights holders don’t get as
much water as they need. Unlike a riparian right, a prior
appropriation right can be bought, sold, and mortgaged like
other property.
The system was the invention of miners in California,
whose camps were often in remote areas far from any water
source. To get the water they needed for panning gold and
later for operating hydraulic mines, they built elaborate
ditch systems throughout the countryside. To the miners,
the riparian system of tying water rights to land ownership
didn’t make any sense: “If two men, or companies, came in
and diverted a whole stream, so be it. If just one took the
whole stream, so be it. They needed it; they depended on it;
they had rights to it,” wrote Charles Wilkinson in his 1993
book Crossing the Next Meridian: Land, Water, and the Future of
the West. Prior appropriation also made sense to the region’s
new farmers and ranchers, who, like the miners, needed
water from wherever they could find it. Prior appropriation
quickly became the de facto law of the land. States across
the West officially adopted the doctrine after 1882, when
Colorado’s Supreme Court ruled that the Left Hand Ditch
Company could divert the South Saint Vrain creek to another watershed, depriving a farmer downstream of water for
his land.

Let the Market Decide
Perhaps the most fundamental tenet of economics is that
the allocation of a resource is best achieved through the
price mechanism, by allowing buyers and sellers to negotiate
a price that reflects the good’s relative scarcity. “If you don’t
price water, or any scarce resource for that matter,” says Jody
Lipford, an economist at Presbyterian College in South
Carolina, “you don’t force the consumers of that resource to
prioritize use.”
But in both the eastern and western United States, the
allocation of water is largely a political process, fought over
in statehouses and debated in courtrooms. Legislators and
judges are unlikely to have all the necessary information to
determine the most productive use of the water, however,
and legislators in particular might be subject to interest-group influence. That argues for letting price, rather than
politics, decide who gets the water.
In the mid-2000s, Lipford studied the Apalachicola-

Chattahoochee-Flint conflict and proposed several market-based solutions to resolve it, including charging a higher
price to people in Atlanta; giving the Army Corps of
Engineers the authority to charge users higher fees during
times of drought or increased demand; or issuing marketable
permits to water users, allowing them to buy and sell their
allocations. Many people are resistant to the idea of
buying and selling water rights, however. “There’s this idea
that we’re talking about water. Water belongs to all of us;
you can’t make people pay for it. And in the East, where
water has been abundant, people don’t want to pay for it,”
Lipford says.
Water markets may also involve significant transactions
costs. In many cases, the markets might be thin, composed
of only a few buyers and sellers who bargain infrequently.
Water trades also can be highly idiosyncratic, depending on
a multitude of factors that vary with each transaction. Both
these conditions make it difficult for people to know what
prices to charge or to offer. In addition, a water trade could
have significant externalities that complicate the negotiations, both positive (for example, if a new lake is created for
people to enjoy boating or fishing) and negative (if farmland
is fallowed).
In the West, where people are more used to thinking of
water as a scarce resource and where water rights can be
sold, some markets have been established. In 2003, for
example, San Diego County in California began buying
water from Imperial Valley, a primarily agricultural area; the
county is paying $258 per acre-foot (a measure of water equal
to approximately 325,000 gallons) for water that cost the
farmers about $16 per acre-foot. Overall, however, markets
remain rare.
The question of how best to allocate water is unlikely to
go away. Many scientists predict that during this century,
climate change will alter the water supply in the United
States, and the U.S. Forest Service projects that water
yields across the United States could decline more than
30 percent by 2080. At the same time, the U.S. population is
expected to grow more than 30 percent by 2060. As water
becomes more scarce and people become more abundant,
states and other interest groups will be forced to figure out
who gets how much — whether they decide via bullets,
lawsuits, or dollars.
EF

READINGS
Libecap, Gary D. “Transaction Costs: Valuation Disputes,
Bi-Lateral Monopoly Bargaining and Third-Party Effects in Water
Rights Exchanges. The Owens Valley Transfer to Los Angeles.”
National Bureau of Economic Research Working Paper No. 10801,
September 2004.
Lipford, Jody W. “Averting Water Disputes: A Southeastern Case
Study.” Property and Environment Research Center Policy Series
No. PS-30, February 2004.

Schmidt, Paul. “Un-Neighborly Conduct: Why Can’t Virginia
Beach and North Carolina Be Friends?” William & Mary
Environmental Law and Policy Review, 1999, vol. 23, no. 3,
pp. 893-920.
Wilkinson, Charles F. Crossing the Next Meridian: Land, Water, and
the Future of the West. Washington, D.C.: Island Press, 1993.


AROUND THE FED

The Power of Words
BY CHARLES GERENA

“A Short History of FOMC Communication.” Mark A.
Wynne, Federal Reserve Bank of Dallas Economic Letter,
vol. 8, no. 8, September 2013.
“Forward Guidance 101A: A Roadmap of the U.S.
Experience.” Silvio Contessi and Li Li, Federal Reserve Bank
of St. Louis Economic Synopses No. 25, September 2013.

Decades ago, business reporters and financial market
participants had to play detective to discern changes in
monetary policy. They monitored the activities of the open
market desk at the New York Fed, which buys or sells securities to reach the goals of the Federal Open Market
Committee (FOMC). They even scrutinized the size of the
briefcase that former Fed chair Alan Greenspan carried.
Today, Fed watchers can view the chair’s quarterly press
conferences and pore over increasingly detailed statements
released after every meeting. Much has changed about how
the Fed communicates the decisions that affect the nation’s
economic well-being. Two recent reports chronicle these
changes, especially the issuance of “forward guidance,” an
indication of when the FOMC might change the direction
of monetary policy.
“Best practices in central banking call for transparency in
policy deliberations and communicating the outcome in a
timely manner,” notes Mark Wynne, associate director of
research at the Dallas Fed and author of a September 2013
Economic Letter. “Over the past two decades, the FOMC has
gone from being quite secretive in its deliberations to very
transparent.”
The FOMC’s first major move towards greater transparency occurred on Feb. 4, 1994. To help explain why it was
acting to push up interest rates for the first time in five years,
the committee issued a 99-word statement after its meeting.
A year later, the FOMC started announcing its intended
range for the federal funds rate. It would take another four
years, until 1999, before the committee would declare the
target level for the funds rate. It also began releasing a statement after every meeting regardless of whether monetary
policy had changed.
The year 1999 was significant for another reason — the
FOMC started including forward guidance in its post-meeting statements. Since then, the committee had crafted this
guidance to lay out a near-term course for monetary policy
that was consistent with its past policy regime, but that
allowed for course corrections if there was a change in the
economic outlook.
Today, the FOMC uses forward guidance a bit differently, making a stronger commitment to a likely course of
action. Until its March 2014 post-meeting statement, the
committee had agreed to keep the federal funds rate low at
least as long as the unemployment rate remained above
6.5 percent, inflation was projected to be no more than a half
percentage point above the committee’s 2 percent longer-run goal, and long-term inflation expectations continued to
be well anchored.
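Expressed as a simple check, that kind of threshold guidance might look like the following sketch (illustrative Python; the function and numbers merely restate the thresholds described above, not the FOMC’s actual decision procedure, which promised only to hold rates low at least as long as the conditions held):

    # Stylized restatement of the pre-March 2014 threshold guidance.
    def guidance_thresholds_hold(unemployment_rate, projected_inflation, expectations_anchored):
        """Return True while the stated thresholds for keeping the funds rate low still hold."""
        return (unemployment_rate > 6.5
                and projected_inflation <= 2.5  # no more than 0.5 point above the 2 percent goal
                and expectations_anchored)

    print(guidance_thresholds_hold(7.0, 2.0, True))   # True: all thresholds still hold
    print(guidance_thresholds_hold(6.3, 2.0, True))   # False: unemployment has fallen below 6.5 percent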
According to Silvio Contessi and Li Li at the St. Louis
Fed, such forward guidance may have been a useful tool at a
time when interest rates are already close to zero. “A credible
promise to continue accommodative monetary policy until
a certain date or after the recovery strengthens (and the
policy rule calls for higher policy rates) amounts to influencing expectations and long-term yields and providing additional monetary stimulus today,” write Contessi and Li in the
September 2013 edition of the Economic Synopses essay series.
“Are Households Saving Enough for a Secure Retirement?”
LaVaughn Henry, Federal Reserve Bank of Cleveland Economic
Commentary 2013-12, October 2013.

Figuring out whether you have enough retirement savings
is a lot harder than checking under your mattress. Many
variables affect this critical decision and standard economic
models can account for only some of them, notes a recent
commentary published by the Federal Reserve Bank of
Cleveland.
According to the “life-cycle hypothesis” (LCH) model, all
of us make rational choices about how much to spend or save
based on what we expect to earn from our jobs and investments during different periods of our lives. The model
assumes that we smooth out consumption over time, saving
enough during our working years in order to maintain our
level of spending many years into the future.
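As a stylized illustration of that smoothing logic, with hypothetical numbers and a zero interest rate (this example is ours, not the Cleveland Fed’s):

    # Life-cycle smoothing with made-up numbers: 40 working years, 20 retired years.
    working_years = 40
    retired_years = 20
    annual_income = 50_000          # earnings in each working year

    lifetime_income = working_years * annual_income
    # Constant consumption that spreads lifetime income evenly over all 60 years
    annual_consumption = lifetime_income / (working_years + retired_years)
    annual_saving = annual_income - annual_consumption

    print(f"Consume about ${annual_consumption:,.0f} a year; save about ${annual_saving:,.0f} a year")
    # Roughly $33,333 of consumption and $16,667 of saving, a one-third saving rate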
That assumption isn’t true for every person, however.
“While the LCH model may apply for many households,
nearly half of households do not behave the way the model
says they will,” notes LaVaughn Henry, vice president and
senior regional officer at the Cleveland Fed, in his October
2013 report. “Those households end up with inadequate savings for a retirement that maintains their standard of living.”
A growing body of research in behavioral economics
offers fresh insights into this issue. “Most households do not
pay enough attention to financial planning,” says Henry. “It
may be because the decisions that need to be made are just
too complex for the typical household. Many are aware of
this and seek the advice of a financial planner [but] others
may not be able to afford such advice.” That is why automatic enrollment in savings plans or automatic escalation of
investments in such plans can help people have a more financially secure future.
EF


BOOK REVIEW

The Past of Forecasting
FORTUNE TELLERS: THE STORY OF
AMERICA’S FIRST ECONOMIC
FORECASTERS
BY WALTER A. FRIEDMAN
PRINCETON, N.J.: PRINCETON
UNIVERSITY PRESS, 2014, 215 PAGES
REVIEWED BY DAVID A. PRICE

As a world financial crisis unfolded in November
2008, a London School of Economics professor
spoke at a university event about the debacle’s
causes. Afterward, guest of honor Queen Elizabeth II
demanded of him, with understandable peevishness,
“If these things were so large, how come everybody missed
them?”
Economic forecasting continued to be troublesome
following the crisis. For instance, the economics department
of the Organisation for Economic Co-operation and
Development (OECD) released a report in February 2014
stating that its estimates of GDP growth during 2007-2012
were consistently too high across countries and time
periods. The main difference between the OECD economists and those elsewhere may have been their willingness
to admit their mistakes.
The high uncertainty surrounding economic forecasts
has been well known for a long time. Indeed, the enterprise
of prophecy has been associated with insanity at least since
the Oracle at Delphi. Still, forecasts about the economy are
central to business planning, investing, and, of course, economic policymaking. How the art and science of forecasting
emerged is the subject of Fortune Tellers, a new history by
Harvard Business School professor Walter Friedman.
If economic forecasting had an inventor, it was Roger
Babson, son of a Gloucester, Mass., storekeeper. Babson
became interested in business statistics at the dawn of the
20th century while working as a clerk at an investment firm.
In 1904, at the age of 29, he founded the Babson Statistical
Organization. Initially, he sold information on current stock
and bond offerings, but the sudden onset of a financial crisis
— the Panic of 1907 — led him to recognize a market for a
different kind of information: analysis of what the latest statistics portended for the future.
Babson was an early believer in the existence of a business
cycle that was distinct from the ups and downs of securities
markets and was not caused simply by the weather and
outside shocks. While others had proposed the existence of
business cycles before — among them France’s Clément
Juglar and Russia’s Nikolai Kondratiev — the concept had
not been widely shared before Babson’s work, Friedman
notes. His prediction methods, though, were crude.


The work of infusing economic theory and higher mathematics into forecasting was done by others. Foremost
among them was Yale economist Irving Fisher, who, in the
early 20th century, gained fame for his work on the roles of
changes in prices, credit, and interest rates as signs of
changes to come in the real economy. “As a sign of his
stature,” Friedman recounts, “in 1924 the Wall Street Journal
introduced John Maynard Keynes to its readership as
‘England’s Irving Fisher.’”
Another early contender in forecasting was the Harvard
Economic Service, an arm of the university created in 1918
by economist Charles Bullock, who ran it with statistician
Warren Persons. Their service, aimed at academics and
business executives, brought more sophisticated statistics to
bear on the subject and it gathered information on business
conditions overseas as well as in America. Like Babson,
however, Bullock and Persons were more interested in
uncovering empirical relationships than in building theories
to explain them.
Despite the greater sophistication of the academic forecasters compared with Babson, the Great Depression gave
them their comeuppance. On Oct. 15, 1929, Fisher famously
declared that stocks had reached “what looks like a permanently high plateau.” The stock market crash began nine
days later. Afterward, the Weekly Letter of the Harvard
Economic Service advised that “serious and prolonged
business depression, like that of 1920-21, is out of the
question.” Only Babson had warned that autumn of a crash,
as he had been doing for the previous two years, contrary to
the euphoria of the time. (On the other hand, although
Fisher was wrong about the 1929 stock market, he was right
in pointing out that the Depression was greatly worsened by
the Fed’s tightening of the money supply — a finding that
only his more theory-based approach could have yielded.)
Another victim of the Depression was Herbert Hoover.
Less well known than the turn in his political fortunes after
the stock market crash is his role, documented by Friedman,
in establishing government collection of business-cycle data.
As secretary of commerce in the 1920s, he had enlisted
Columbia University economist Wesley Mitchell to lead a
committee on business cycles and to improve forecasting,
believing that the private sector could use such information
to avert future crises. Mitchell’s work as the longtime
director of research at the National Bureau of Economic
Research would lay the foundations for modern business-cycle analysis.
Friedman provides a brisk, nontechnical view of crucial
figures in American economic history — most of whom
went through dramatic and wrenching swings of success and
failure. In this, their lives resembled the business cycle to
which they had given so much of their energies.
EF

DISTRICT DIGEST

Economic Trends Across the Region

Urban Crime: Deterrence and Local Economic Conditions
BY SANTIAGO PINTO

Evidence shows that crime, both property and violent,
violent, has been declining in the United States since
the beginning of 1990. The data also suggest that
despite a general downward trend, the variation in crime
rates across regions is considerable. A growing academic
literature has been studying the causal factors explaining
changes in crime rates. Most of this work attempts to determine whether the decline in crime can be attributed to
more effective deterrence policies or to better economic
conditions that facilitate access to legitimate labor market
opportunities. The conclusions of this research may provide
guidance concerning the kinds of policies that are most
effective in controlling crime.


Economic Determinants of Crime
The economic theory of crime, proposed by University of
Chicago economist Gary Becker in 1968, assumes that crime
is a rational act. Economic agents engage in criminal activities if the expected psychic and monetary rewards of crime
(taking into account, among other things, the return to legal
labor market activities) are greater than the costs (determined by factors such as apprehension, conviction, and
severity of punishment). Two hypotheses flow from this
theory. The deterrence hypothesis claims that as more law-enforcement resources are targeted to fight crime, the
probability of arrest increases, and the crime rate should
therefore decrease. The economic-conditions hypothesis
states that weak legitimate labor market opportunities
should lead to lower opportunity costs of a crime (represented by foregone wages, employment, etc.), and a higher
supply of criminal activities. Conversely, under this view,
improving economic conditions should result in less crime.
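A minimal sketch of that cost-benefit comparison (illustrative Python; the variable names and numbers are ours, not Becker’s or the author’s):

    # Stylized Becker-style participation decision: choose crime only if the
    # reward net of the expected punishment beats the return from legal work.
    def chooses_crime(reward, prob_caught, sanction, legal_wage):
        expected_net_reward = reward - prob_caught * sanction
        return expected_net_reward > legal_wage

    # Deterrence hypothesis: raising the probability of arrest flips the decision.
    print(chooses_crime(reward=10_000, prob_caught=0.2, sanction=30_000, legal_wage=3_000))  # True
    print(chooses_crime(reward=10_000, prob_caught=0.4, sanction=30_000, legal_wage=3_000))  # False
    # Economic-conditions hypothesis: a higher legal wage works the same way.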
The empirical literature on crime is far from conclusive
about the importance of these effects. A few studies find evidence that higher criminal sanctions, which include police
arrests, incarceration, and other sanctions imposed through
the justice system, reduce criminal activity. Others claim
that the relationship between the two is either weak or nonexistent. Some papers even find a positive association
between sanctions and crime. Research shows that the relationship between crime and a number of variables that
capture the opportunity costs of crime (such as unemployment and real minimum wage) is not particularly strong
either. Furthermore, it has been claimed that police hiring is
related to local economic conditions, suggesting that the
two factors cannot really be disentangled.
Conflicting results are generally explained by a number of
empirical problems inherent in the crime research. The two
most important issues cited in the literature are measurement errors in crime statistics and simultaneity between
crime and sanctions. Measuring crime and sanctions accurately is a complicated task. Empirical models of crime are
commonly estimated using official reported crime statistics.
The FBI’s Uniform Crime Reports (UCR) are the most
widely used source of crime data. Measurement errors may
arise from the fact that offenses are self-reported and the
number of arrests is provided by local agencies. Indeed, the
accuracy of the data depends on both the victims’ willingness to report crimes and on police recording practices and
procedures, which may differ across agencies. Additionally,
measurement errors may arise simply because hiring more
police leads to more crimes being reported.
Only a limited number of papers have directly addressed
the problem of measurement errors. Recent work by Aaron
Chalfin and Justin McCrary of the University of California,
Berkeley re-examines this issue. Their work not only confirms that the UCR dataset suffers from a high degree of
measurement errors, but it also quantifies this effect. They
claim that estimates of the impact of arrests on crime rates
obtained using the UCR dataset tend to be too small by a
factor of five when they are not corrected for measurement
error bias.
The problem of simultaneity between sanctions and
crime is also central in the crime deterrence academic
debate. According to the deterrence hypothesis, higher
expected sanctions should decrease crime rates. But the
causation operates in both directions: Increases in sanctions
may also be observed in response to higher crime rates.
Bruce Benson of Florida State University and his co-authors
claim that it is plausible that police resources are reallocated
to deal with higher levels of crime. When crime rates rise,
citizens tend to demand more police, a view known as the
“reallocation hypothesis.” If it is true, then more crime
would lead to a larger number of arrests. Thomas Garrett, an
economics professor at the University of Mississippi,
and Lesli Ott, a statistician at Yale CORE’s Quality
Measurement Group, seek to test this hypothesis. They use
monthly data for 20 large U.S. cities during 1983-2004 and
find strong support for the reallocation hypothesis and
weak support for the view that arrests reduce crime.
They also find that the crime-arrest relationship is very
heterogeneous across the cities in their sample and across
types of crimes.
In addition, the use of the minimum wage in these
studies is indeed problematic. Changes in the minimum
wage may have other unintended effects on crime. For
instance, if a higher minimum wage increases unemployment, then some people (especially those more likely to be

affected by changes in the minimum wage and with weak
labor attachment) may decide to rely on criminal activities
for income. A recent work by Andrew Beauchamp and
Stacey Chan, from Boston College, focuses on this precise
issue. In their study, they find evidence that an increase in
the minimum wage tends to displace youth from legal to illegal activities. Thus, according to their results, the effect of a
higher minimum wage on employment and, consequently,
on crime, dominates the wage effect.
A positive association between public law enforcement and crime may also be
observed in a more general setup that considers both private
and public crime prevention and explicitly allows for potential criminals to be mobile across geographical areas.
Kangoh Lee of San Diego State University and the author of
this article have developed a theoretical spatial model of
crime that incorporates some of these features.
In the model, criminals allocate their illegal activities
across geographical areas depending on the relative expected benefits of crime. At the local level, the probability of
being apprehended is determined by the interplay between
public law enforcement and private precautionary measures.
Our research determined that in this context, and when the
provision of local public law enforcement is decided strategically by a local agency, it is possible to obtain a positive
relationship between local public law enforcement and
crime. The conditions under which this result holds depend
on how residents respond to the relative levels of local
public law enforcement. For instance, if residents respond to
an increase in local public law enforcement by decreasing
private precautions significantly, then the overall level of
local protection would be perceived as being too low relative
to other regions, attracting more criminals into the area.
It is also possible to infer from this analysis that when
relevant factors are overlooked (in other words, when the
spatial dependence between variables such as private
security measures and local law enforcement is neglected),
one is likely to obtain results that seem counterintuitive at
first glance.
In order to identify the effects of sanctions on crime,
some research work uses quasi-experimental methods. A few
recent studies use terrorism-related events to test the deterrent effect of police. One example is the work by Rafael Di
Tella, an economist at the Harvard Business School, and
Ernesto Schargrodsky of the University Torcuato Di Tella. A
terrorist attack on the main Jewish center in Buenos Aires,
Argentina, in July 1994 led to an increased police presence in
geographical areas with Jewish and Muslim institutions. The
decision to protect these areas is assumed to be independent
of the previous levels of crime in the respective areas. In this
context, the authors examine the police-crime relationship
before and after the terrorist attacks and find that more
police decreases auto theft by about 75 percent. They also
show, however, that such an effect takes place only in the blocks
where those institutions are located, and, in fact,
little or no change is observed one or two blocks away.
Most research work does not fully isolate the impact of

labor market variables and deterrence on criminal activities.
Even though disaggregated micro-level data generally
contain information on individuals’ criminal behavior,
wages, and unemployment spells, they do not include information on deterrence measures. When researchers employ
aggregate data, they generally do not use extensive deterrence and economic variables. Therefore, the conclusions
concerning the relative impact of economic conditions and
sanctions on crime remain tentative, mostly because
the underlying studies rely on different data sets and empirical methods.
Using state-of-the-art statistical techniques and better
data, more recent research has found a significant effect of
sanctions on criminal activity and a stronger effect of labor
market conditions on crime rates than previous work. Hope
Corman of Rider University and Naci Mocan of the
University of Colorado Denver examine the impact of
several measures of deterrence (past arrests, police force
size, and prison population) and local economic conditions
(unemployment and real minimum wage) on different categories of crime. They use monthly data for New York City
spanning the period 1977-1999. Their approach consists
precisely of using this high-frequency data to distinguish
between the short-run and long-run effects of police on
crime rates. The authors conclude that both deterrence and
economic variables help explain part of the decline in crime
rates, but the contributions of deterrence measures seem to
be larger. Also, according to their findings certain categories
of crime are more responsive to changes in economic conditions than others. For instance, the unemployment rate
affects burglary and motor vehicle theft, and the minimum
wage has an impact on murder, robbery, and larceny. So even
though it is not always the same economic factor, it seems
that economic conditions affect most categories of crime
except for rape and assault.
The work by Chalfin and McCrary also calculates the
percentage change in crime rates due to a 1 percent increase
in the number of police, or the elasticity of crime rates with
respect to police, for similar categories of crime. They do
not explicitly examine the impact of economic variables on
crime rates, though. They use a panel data set of 242 cities
and year-over-year changes in crime rates and police during
the period 1960-2010. Their approach proposes various
statistical procedures to control for both measurement and
simultaneity error biases. They find that additional
resources devoted toward law enforcement tend to reduce
violent crime more than property crime. More precisely, the
police elasticity of crime is -0.34 for violent crime and -0.17
for property crime.
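As a back-of-the-envelope illustration of what those elasticities imply, here is our own arithmetic rather than any calculation reported by Chalfin and McCrary:

    # Apply the reported police elasticities of crime to a hypothetical
    # 10 percent increase in a city's police force.
    ELASTICITY = {"violent": -0.34, "property": -0.17}
    police_increase = 0.10

    for crime_type, elasticity in ELASTICITY.items():
        change = elasticity * police_increase
        print(f"{crime_type} crime changes by about {change:.1%}")
    # violent crime changes by about -3.4%; property crime changes by about -1.7%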

Crime Statistics in the Fifth District
A few interesting observations result when we apply some of
the above techniques to examine the impact of deterrence
policies and economic conditions on crime rates in the Fifth
District. We begin by describing the behavior of crime and
arrests aggregated at the state level. Next, we focus on
the relationship between crime, arrests, and local economic
conditions in five of the largest cities within the district:
Baltimore, Md.; Charleston, S.C.; Charleston, W.Va.;
Charlotte, N.C.; and Richmond, Va. We use state- and city-level
crime data from the UCR. We obtain the number of
offenses and arrests for seven categories of crime
and combined them into two broader categories: violent
crime (murder, rape, assault, robbery) and property crime
(burglary, larceny, and motor vehicle theft).
In general, crime rates in the Fifth District follow the
same declining pattern since the beginning of the 1990s as
the one observed in the entire country. Yet their behavior
shows a few differences across states. In Virginia and West
Virginia, property and violent crime rates are below the U.S.
rates, but in Maryland and South Carolina, the rates are
above the country’s rates. Crime rates in North Carolina are
very much in line with those observed in the United States.
In recent years, South Carolina has been showing the highest
property and violent crime rates within the group.
As with crime rates, arrest rates for both property and
violent crimes also show a declining trend during the 1990s.
Arrest rate trends have started to flatten out since the beginning
of the 2000s, however. Compared with the country’s
average arrest rate, rates are generally lower in Virginia and
West Virginia and higher in Maryland, North Carolina, and
South Carolina. North Carolina exhibits the highest
arrest rates for both property and violent crime.
Overall, crime and arrest rates significantly
decline from the early 1990s until 2000, but since
the year 2000 the downward crime trend is less pronounced
and arrest rates become fairly constant.
The five cities in the study generally have higher
property and violent crime rates than their respective
states’ averages. (See charts.) The exception is
Charleston, S.C., which since 2005 exhibits a property
crime rate lower than the state average.
Property crime rates decline sharply since the beginning
of the 1990s in all cities. Violent crimes also
decline but less markedly, and in Charleston, S.C.,
Richmond, Va., and Charleston, W.Va., the trends
are relatively flat. Even though arrest rates in the
cities are also generally higher than their respective
states’ averages (with the exception of Charlotte,
N.C., where the property crime arrest rate is below
the state’s average), the differences tend to be smaller
than the ones observed for crime rates. Also,
arrest rate trends in all these cities become flat (in
Baltimore, Md., Charlotte, N.C., and Charleston,
S.C.) or show a positive slope (in Richmond, Va., and
Charleston, W.Va.) since the beginning of the 2000s.

[Chart: Property Crime Rates in Large Fifth District Cities, 1985-2010, rate per 100,000 population, for Charleston, S.C.; Baltimore, Md.; Richmond, Va.; Charlotte, N.C.; and Charleston, W.Va. SOURCE: FBI Uniform Crime Reports]

[Chart: Arrest Rates for Property Crime in Large Fifth District Cities, 1994-2010, rate per 100,000 population, for the same five cities. SOURCE: FBI Uniform Crime Reports]

Crime, Deterrence, and Economic Conditions in the Fifth District
We use monthly data during the period 1998-2010 to
examine the relationship between criminal offenses
and crime deterrent policies (measured by police
arrests), and between criminal offenses and local
economic conditions (measured by the local unemployment
rate and the real minimum wage). We adopt a similar approach
to that of Corman and Mocan. One difference, however, is
that while they look at the impact of deterrence and economic factors on seven different categories of crime, we
aggregate offenses into property and violent crimes.
Specifically, we use different lag structures to estimate the
impact of monthly changes in the number of arrests, unemployment rates, and real minimum wages on the changes in
the number of property and violent offenses for each one of
the cities.
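A minimal sketch of that kind of estimation, assuming a hypothetical monthly series for a single city (illustrative Python using the statsmodels package; this is not the author’s code, data, or exact lag specification):

    # Regress monthly changes in offenses on current and lagged changes in
    # arrests, the unemployment rate, and the real minimum wage.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 156  # monthly observations, 1998-2010
    df = pd.DataFrame({
        "offenses": rng.poisson(400, n).astype(float),
        "arrests": rng.poisson(60, n).astype(float),
        "unemployment": 5 + 0.05 * rng.normal(0, 1, n).cumsum(),
        "real_min_wage": 6 + 0.02 * rng.normal(0, 1, n).cumsum(),
    })

    # First differences, plus a few lags of each explanatory variable
    d = df.diff().rename(columns=lambda c: "d_" + c)
    for var in ["d_arrests", "d_unemployment", "d_real_min_wage"]:
        for lag in (1, 2, 3):
            d[f"{var}_lag{lag}"] = d[var].shift(lag)
    d = d.dropna()

    y = d["d_offenses"]
    X = sm.add_constant(d.drop(columns="d_offenses"))
    print(sm.OLS(y, X).fit().summary())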
The table presents the results of a preliminary analysis.
The table only reports the signs of the coefficients that are
statistically different from zero. The deterrence hypothesis
would predict a negative sign for arrests. To the extent that
unemployment and real minimum wages capture legitimate
labor market opportunities in the cities examined here, a
positive sign is expected in the unemployment column and a
negative sign in the real minimum wage column.
The results reveal that the relationship between crime
and arrests and between legal labor market opportunities
and crime are far from consistent across cities and types of
crime. For instance, arrests appear to have a negative impact
on property crime in Baltimore, Md., and a negative impact
on violent crime in Charleston, W.Va., and Charlotte,
N.C. Higher unemployment increases property crime
in Charlotte, N.C., and violent crime in Richmond, Va.
Finally, when the real minimum wage increases, property
and violent crime decrease in Charlotte, N.C., and
property crime decreases in Charleston, S.C.

Effects of Deterrence and Legitimate Labor Market Opportunities

City             Type of Crime   Arrests   Unemployment   Real Minimum Wages
Baltimore, MD    Property        (-)
Baltimore, MD    Violent
Charleston, SC   Property                                  (-)
Charleston, SC   Violent
Charleston, WV   Property
Charleston, WV   Violent         (-)
Charlotte, NC    Property                  (+)             (-)
Charlotte, NC    Violent         (-)                       (-)
Richmond, VA     Property
Richmond, VA     Violent                   (+)
SOURCE: Author’s estimates

[Chart: Violent Crime Rates in Large Fifth District Cities, 1985-2010, rate per 100,000 population, for Charleston, S.C.; Baltimore, Md.; Richmond, Va.; Charlotte, N.C.; and Charleston, W.Va. SOURCE: FBI Uniform Crime Reports]

[Chart: Arrest Rates for Violent Crime in Large Fifth District Cities, 1994-2010, rate per 100,000 population, for the same five cities. SOURCE: FBI Uniform Crime Reports]

The fact that the table shows a few empty cells
reveals the lack of a robust connection between the
variables included in the analysis. This kind of outcome,
however, is consistent with the conclusions of
the research cited earlier. It has been argued that the
weak connection between arrests and crime is to some
extent expected because the use of arrests to test the
deterrence hypothesis is already built on strong
assumptions. Not only does it assume the number of
arrests for a specific crime accurately reflects the
likelihood of apprehension for committing that crime,
but it also requires that potential criminals have
timely access to this information and are capable of
assessing the likelihood of being arrested based on
this data.
The literature also justifies the weak effect of
unemployment and wages on crime rates in various
ways. Work by Richard Freeman of Harvard University
describes some of these explanations. First, when
deciding to become criminals, individuals consider the
labor opportunities available specifically to them.
Aggregate information about unemployment and
wages may not necessarily reflect these opportunities.
The weak connection between these aggregate measures
and crime does not invalidate the rational theory
of crime; it simply reflects the fact that more disaggregated
data would be required. Second, legal work and crime
are not necessarily exclusive activities. There is some
evidence suggesting that individuals, especially young men,
participate at any point in time in both the legal and illegal
labor market depending on the opportunities available to
them. This type of behavior suggests that the elasticity of
the supply of crime is relatively high. As a result, significant
changes in the level of criminal activities will only
be observed when wages and unemployment rates change
in very large amounts. In other words, small fluctuations
in these variables will not necessarily affect crime rates.
In summary, after many years of research, there is
still no consensus on the effect of arrests and legitimate
labor market opportunities on crime rates.
The research on crime faces numerous challenges.
Recent work has attempted to overcome some of
the limitations using micro-level data and applying
novel statistical techniques. Following a similar
approach as the one developed by Corman and
Mocan, we conduct a preliminary study on the
determinants of crime in five of the largest cities in
the Fifth District. From the analysis, we conclude
that the relationship between crime and arrests and
between crime and legitimate labor market opportunities
are very heterogeneous across cities and types of crimes.
Even though arrests seem to lower crime, they only have an
effective deterrent impact in some cities. Lower unemployment
and higher real minimum wages contribute to
decreased crime rates, but their impact is not significant for
all types of crime and for all cities. Needless to say, further
research is required to identify the factors underlying criminal
activities. Developing such understanding is critical for
the design of appropriate crime-reduction policies.
EF

STATE DATA, Q2:13
                                            DC        MD        NC        SC        VA        WV
Nonfarm Employment (000s)                733.2   2,610.4   4,045.3   1,881.6   3,765.0     767.8
  Q/Q Percent Change                      -0.1       0.2      -0.1       0.2       0.3      -0.1
  Y/Y Percent Change                       0.4       1.5       1.5       1.3       1.2       0.4
Manufacturing Employment (000s)            0.9     106.9     441.8     221.2     232.4      48.9
  Q/Q Percent Change                       0.0       1.1      -0.9       0.2      -0.6       0.2
  Y/Y Percent Change                     -10.0      -2.5       0.4       0.4       0.4      -0.9
Professional/Business Services
  Employment (000s)                      156.2     423.4     548.5     235.8     681.1      64.8
  Q/Q Percent Change                       0.6       0.8       0.9       3.7       0.3       0.1
  Y/Y Percent Change                       2.5       3.5       2.6      -1.0       0.5      -0.3
Government Employment (000s)             237.3     509.0     714.9     350.4     716.1     152.8
  Q/Q Percent Change                      -1.5       0.7      -0.2       0.2      -0.2      -0.9
  Y/Y Percent Change                      -2.4       0.8       0.2       1.3       0.5      -0.3
Civilian Labor Force (000s)              372.1   3,142.0   4,716.6   2,168.3   4,229.6     803.1
  Q/Q Percent Change                      -0.2       0.0      -0.9      -0.4       0.0      -0.7
  Y/Y Percent Change                       3.6       0.8       0.1       0.0       0.6      -0.1
Unemployment Rate (%)                      8.5       6.7       8.8       8.0       5.3       6.3
  Q1:13                                    8.6       6.6       9.4       8.6       5.5       7.2
  Q2:12                                    9.1       6.8       9.5       9.3       5.9       7.3
Real Personal Income ($Bil)               45.1     301.0     352.9     157.3     376.5      61.8
  Q/Q Percent Change                       1.1       0.9       0.8       0.9       0.8       1.0
  Y/Y Percent Change                       1.4       1.1       1.1       1.0       1.2       0.4
Building Permits                           875     5,091    13,705     6,260     8,261       786
  Q/Q Percent Change                     161.2      33.3      21.6      20.2      19.1      52.9
  Y/Y Percent Change                     -12.1      53.3      13.2      14.1      20.5      34.6
House Price Index (1980=100)             629.1     411.5     302.5     305.6     400.3     216.3
  Q/Q Percent Change                       3.3       1.1       0.6       0.4       1.1       0.4
  Y/Y Percent Change                       8.8       2.8       2.1       1.2       2.7       1.1

[Charts omitted, all covering second quarter 2002 through second quarter 2013: Nonfarm Employment, Unemployment Rate, Real Personal Income, Building Permits, and House Prices for the Fifth District and the United States; Nonfarm Employment and Unemployment Rate for the Charlotte, Baltimore, and Washington metropolitan areas; and the FRB—Richmond Services Revenues and Manufacturing Composite Indexes.]

NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms
reporting increase minus the percentage reporting decrease.
The manufacturing composite index is a weighted average of the shipments, new orders, and employment
indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.
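As a concrete illustration of the diffusion-index arithmetic in note 1, the sketch below computes an index from hypothetical survey counts and then a weighted composite; the response counts and component weights are made up for illustration and are not the actual survey figures or published weights.

```python
# Diffusion index: percentage of firms reporting an increase minus the
# percentage reporting a decrease. All numbers below are hypothetical.

def diffusion_index(increase: int, decrease: int, no_change: int) -> float:
    total = increase + decrease + no_change
    return 100.0 * (increase - decrease) / total

shipments = diffusion_index(increase=45, decrease=30, no_change=25)
new_orders = diffusion_index(increase=50, decrease=28, no_change=22)
employment = diffusion_index(increase=38, decrease=35, no_change=27)

# The composite is a weighted average of the three component indexes;
# these weights are assumed, not the published ones.
weights = {"shipments": 0.33, "new_orders": 0.40, "employment": 0.27}
composite = (weights["shipments"] * shipments
             + weights["new_orders"] * new_orders
             + weights["employment"] * employment)

print(f"shipments={shipments:.1f} new_orders={new_orders:.1f} "
      f"employment={employment:.1f} composite={composite:.1f}")
```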

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor,
http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building permits: U.S. Census Bureau, http://www.census.gov.
House prices: Federal Housing Finance Agency, http://www.fhfa.gov.


METROPOLITAN AREA DATA, Q2:13
                                  Washington, DC    Baltimore, MD    Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)                2,507.0          1,349.4            104.0
  Q/Q Percent Change                         1.7              2.1              2.0
  Y/Y Percent Change                         1.2              2.2              0.3
Unemployment Rate (%)                        5.4              7.2              7.2
  Q1:13                                      5.4              7.0              7.5
  Q2:12                                      5.6              7.2              7.9
Building Permits                           6,832            2,083              248
  Q/Q Percent Change                        60.1             15.1             43.4
  Y/Y Percent Change                        18.0             33.4             62.1

                                  Asheville, NC     Charlotte, NC    Durham, NC
Nonfarm Employment (000s)                  176.3            872.1            288.1
  Q/Q Percent Change                         2.1              1.6              0.6
  Y/Y Percent Change                         3.2              2.4              1.8
Unemployment Rate (%)                        6.9              8.9              6.8
  Q1:13                                      7.5              9.4              7.2
  Q2:12                                      7.7              9.5              7.4
Building Permits                             427            3,598            1,011
  Q/Q Percent Change                        54.7              0.8             95.2
  Y/Y Percent Change                        15.7             15.2            115.1

                                  Greensboro-High Point, NC    Raleigh, NC    Wilmington, NC
Nonfarm Employment (000s)                  345.9            528.7            141.2
  Q/Q Percent Change                         1.9              1.1              3.5
  Y/Y Percent Change                         0.5              1.2              2.0
Unemployment Rate (%)                        9.4              7.1              9.4
  Q1:13                                      9.9              7.6              9.7
  Q2:12                                      9.9              7.8              9.8
Building Permits                             555            3,476              922
  Q/Q Percent Change                        48.4             32.5             33.0
  Y/Y Percent Change                        16.4             14.8             37.4

                                  Winston-Salem, NC    Charleston, SC    Columbia, SC
Nonfarm Employment (000s)                  206.6            310.6            360.1
  Q/Q Percent Change                         0.6              2.6              1.6
  Y/Y Percent Change                         0.4              1.0              1.9
Unemployment Rate (%)                        8.5              6.6              7.1
  Q1:13                                      8.8              6.9              7.6
  Q2:12                                      9.0              7.7              8.2
Building Permits                             337            1,284            1,140
  Q/Q Percent Change                        24.4            -13.9             21.9
  Y/Y Percent Change                       -33.3            -32.0             -1.6

                                  Greenville, SC    Richmond, VA    Roanoke, VA
Nonfarm Employment (000s)                  311.1            633.7            160.2
  Q/Q Percent Change                         1.4              1.8              1.7
  Y/Y Percent Change                         1.0              1.3              1.1
Unemployment Rate (%)                        6.6              5.9              5.6
  Q1:13                                      6.9              6.1              5.8
  Q2:12                                      7.7              6.5              6.2
Building Permits                             771            1,485              277
  Q/Q Percent Change                        10.1             55.8            154.1
  Y/Y Percent Change                        29.8             59.3            128.9

                                  Virginia Beach-Norfolk, VA    Charleston, WV    Huntington, WV
Nonfarm Employment (000s)                  760.4            148.6            112.8
  Q/Q Percent Change                         2.4              2.2              0.5
  Y/Y Percent Change                         2.0              0.2              0.0
Unemployment Rate (%)                        5.9              6.1              7.1
  Q1:13                                      6.1              7.1              6.8
  Q2:12                                      6.6              6.8              7.4
Building Permits                           1,525               47               13
  Q/Q Percent Change                       -23.0              4.4             30.0
  Y/Y Percent Change                        22.2             -2.1             44.4

For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org

OPINION

Down But Not Out
BY JOHN A. WEINBERG

Historically, the United States has rebounded strongly from deep recessions. During the two years following the 1981-1982 recession, for example, GDP growth averaged nearly 7 percent. Many people predicted a similar trajectory for the U.S. economy following the most recent recession, but that has not been the case: Annual average GDP growth since 2010 has been just 2.3 percent. Not only is growth slower than might be expected following a severe recession, it's also a departure from our postwar experience. Between 1946 and 2006, annual GDP growth averaged 3.2 percent. During no other non-recessionary period has GDP growth been as slow as at present — leading some observers to conclude that U.S. economic growth is "over."

Why might this be? One argument is that the remarkable improvement in living standards that began around 1750 was an anomaly in American history, not to be repeated. During this period we witnessed extraordinary innovations that greatly increased our economy's productivity, such as the steam engine, electricity, and indoor plumbing, to name just a few. But the innovations of today — touchscreens, streaming video, and new networking platforms — are unlikely to produce the same kinds of gains.

Before assessing these claims, it will help to talk about the factors influencing economic growth in a bit more detail. Basically, growth is a function of employment and labor productivity, that is, how many people are working and how much they can produce. Labor productivity depends on the amount of capital inputs combined with labor, but it also depends on technology — the state of our knowledge about how to produce goods and services from the inputs we have. But it's very difficult to forecast advances in technology and knowledge, which means it's also difficult to forecast changes in productivity.

In the late 1930s, for example, Alvin Hansen, an economist at Harvard University and consultant to the Federal Reserve Board and the Treasury Department, predicted that declining population growth and slowing innovation would cause "secular stagnation" in the United States. But he was quickly proven wrong by the postwar economic boom, and productivity growth averaged 2.6 percent per year between 1947 and 1971.

Productivity changes are hard to quantify even when innovation would seem to be all around us. In the late 1980s and early 1990s, economists identified a "productivity paradox": Despite tangible advances in computing and the adoption of new information technology by many businesses, productivity growth actually declined. As Nobel laureate Robert Solow wrote in 1987, "You can see the computer age everywhere but in the productivity statistics." Just a few years later, however, the computer age did show up in the statistics: Productivity growth averaged 2.7 percent between 1996 and 2001. The fact that we do not currently see the innovations of the past few years in productivity statistics might simply indicate that businesses need time to learn about the new technologies and fully incorporate them into their operations — not that the innovations are without value.

What this history suggests to me is that while qualitative observations on technology trends are interesting, it's hard to infer much from them about the future of average, economy-wide productivity growth. That's why I'm not yet ready to agree with those who believe that the current productivity slowdown finally heralds the secular stagnation predicted by Hansen eight decades ago.

That doesn't mean the United States doesn't face some significant headwinds at present. First, population growth is slowing, which means the size of the working-age population is growing more slowly as well. It's also the case that the fraction of the population that is working or looking for work is near its lowest rate in decades, due to a combination of demographic factors, structural changes in the labor market, and lingering effects of the Great Recession. In addition, although government spending has declined recently, fiscal policy as described in current law is unsustainable, and uncertainty about how we will address our debt and deficit might be inhibiting consumer and business investment.

These factors could be contributing to the current slow rate of GDP growth, and they might restrain growth for some time. But even if growth is likely to be slower over the medium term, history suggests that we should be skeptical of our ability to predict with any confidence what's likely to happen over the long term. Persistence is not the same as permanence.

Moreover, there are a number of reasons to be optimistic about the country's future: America's colleges and universities are second to none and attract students from all over the world. Our public policy problems may be challenging, but they do have solutions. And our markets are flexible and have demonstrated their resiliency time and time again, as when we emerged from the Great Depression or from the stagflation of the 1970s. Economic growth might be slower for the foreseeable future, but in my opinion it is far from over.
EF

John A. Weinberg is senior vice president and director of research at the Federal Reserve Bank of Richmond.

NEXT ISSUE
Securing Payments
Recent high-profile data breaches at major retailers have led
many to question whether our payments system is as secure as
it should be. But allocating resources to stop such crimes is
more complex than it first appears.

Income Inequality
Some argue that economic inequality in the United States is high
and increasing. But not everyone agrees about what the data
reveal or whether inequality is inherently harmful.

Farmland Preservation
Farmland preservation programs are a popular way to protect
working farmland and open space from development. But are
they a good deal for taxpayers — or for farmers?

Federal Reserve
The chair of the Fed is often referred to as
the second most powerful person in the
country. While monetary policy is set by
committee, the views of the chair are
believed to carry outsized weight in policymaking. But does the success of the Fed
hinge upon who is at the helm?

Interview
Richard Timberlake, emeritus professor at
the University of Georgia, discusses the
still-influential ideas of Walter Bagehot,
the role of asset bubble policies in the Great
Depression, and alternatives to fiat money.

The Profession
One study after another confirms it: The
United States has almost all of the world’s
top-tier economics departments. They
attract students and scholars from around
the globe. In a field that was more or less
created by foreigners — from Adam Smith to
Vilfredo Pareto to John Maynard Keynes,
among others — how did universities in the
United States come to be this dominant?

Visit us online:
www.richmondfed.org
• To view each issue’s articles
and Web-exclusive content
• To view related Web links of
additional readings and
references
• To subscribe to our magazine
• To request an email alert of
our online issue postings

Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261

Change Service Requested

To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.

Economists at the Federal Reserve Bank of Richmond
conduct research on a wide variety of economic issues.
Before that research makes its way into academic journals
or our own publications, it is often posted on the Bank’s
website as part of our Working Papers series. Listed here
are recent offerings.

TITLES

AUTHORS

Heterogeneity in Labor Supply Elasticity
and Optimal Taxation

Marios Karabarbounis

The Time-Varying Beveridge Curve

Luca Benati
Thomas A. Lubik

Productivity Insurance: The Role of
Unemployment Benefits in a
Multi-Sector Model

David L. Fuller
Marianna Kudlyak
Damba Lkhagvasuren

Internet Banking: An Exploration in
Technology Diffusion and Impact
(Revised September 2013)

Richard Sullivan
Zhu Wang

Are Young Borrowers Bad Borrowers?
Evidence from the Credit CARD Act of
2009 (Revised February 2014)

Peter Debbaut
Andra C. Ghent
Marianna Kudlyak

mREITs and Their Risks
(Revised December 2013)

Sabrina R. Pellerin
Steven Sabol
John R. Walter

Sudden Stops, Time Inconsistency,
and the Duration of Sovereign Debt

Juan Carlos Hatchondo
Leonardo Martinez

The Credibility of Exchange Rate Pegs and
Bank Distress in Historical Perspective:
Lessons from the National Banking Era

Scott Fulford
Felipe Schwartzman

ECB Monetary Policy in the Recession:
A New Keynesian (Old Monetarist) Critique
(Revised September 2013)

Robert L. Hetzel

State Dependent Monetary Policy

Nicholas Trachter
Francesco Lippi
Stefania Ragni

Demand Externalities and Price Cap
Regulation: Learning from a Two-Sided Market
(Revised October 2013)

Zhu Wang

The Shifting and Twisting Beveridge
Curve: An Aggregate Perspective

Thomas A. Lubik

Competitors, Complementors, Parents and
Places: Explaining Regional Agglomeration
in the U.S. Auto Industry

Luis Cabral
Zhu Wang
Daniel Yi Xu

Learning about Fiscal Policy and the
Effects of Policy Uncertainty

Christian Matthes
Josef Hollmayr

Banker Compensation and Bank Risk Taking:
The Organizational Economics View

Arantxa Jarque
Edward S. Prescott

The Impact of Regional and Sectoral
Productivity Changes on the U.S. Economy

Pierre-Daniel G. Sarte
Esteban Rossi-Hansberg
Fernando Parro
Lorenzo Caliendo

The Supply of College-Educated Workers:
The Roles of College Premia, College Costs,
and Risk

Kartik B. Athreya
Janice Eberly

International Reserves and Rollover Risk

Javier Bianchi
Juan Carlos Hatchondo
Leonardo Martinez

To access the Working Papers and other Fed resources visit: www.richmondfed.org