VOLUME 19
NUMBER 1
FIRST QUARTER 2015

COVER STORY

The Secession Question
What are the economic costs and benefits of nations breaking apart?

FEATURES

The Crop Insurance Boom
A long-standing U.S. farm support program now covers almost every crop — but it attracts more and more critics as well

Marriage on the Outs?
The institution of marriage is solid — but only for certain groups. Economics helps explain why

DEPARTMENTS

President’s Message/Creating the Richmond Fed’s Bailout Barometer
Upfront/Regional News at a Glance
Federal Reserve/Jekyll Island: Where the Fed Began
Policy Update/Risk Retention Contention
Jargon Alert/Statistical Significance
Research Spotlight/Superstars of Tax Flight
The Profession/Economists and the Real World
Around the Fed/The Competitiveness of Inner Cities
Interview/Campbell Harvey
Economic History/The Last Big Housing Finance Reform
Book Review/Eating People is Wrong, and Other Essays on Famine, Its Past, and Its Future
District Digest/State Labor Markets: What Can Data Tell (or Not Tell) Us?
Opinion/Keeping Monetary Policy Constrained

COVER PHOTOGRAPHY: ©ISTOCK.COM/ONIONASTUDIO

Econ Focus is the economics magazine of the Federal Reserve Bank of Richmond. It covers economic issues affecting the Fifth Federal Reserve District and the nation and is published on a quarterly basis by the Bank’s Research Department. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia.

DIRECTOR OF RESEARCH
Kartik Athreya

EDITORIAL ADVISER
Aaron Steelman

EDITOR
Renee Haltom

SENIOR EDITOR
David A. Price

MANAGING EDITOR/DESIGN LEAD
Kathy Constant

STAFF WRITERS
Helen Fessenden
Jessie Romero
Tim Sablik

EDITORIAL ASSOCIATE
Lisa Kenney

CONTRIBUTORS
Jamie Feik
Eamon O’Keefe
Karl Rhodes
Sonya Ravindranath Waddell
John A. Weinberg

DESIGN
Janin/Cliff Design, Inc.

Published quarterly by the Federal Reserve Bank of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org
www.twitter.com/RichFedResearch

Subscriptions and additional copies: Available free of charge through our website at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565.

Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Econ Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Econ Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 2327-0241 (Print)
ISSN 2327-025x (Online)

PRESIDENT’S MESSAGE

Creating the Richmond Fed’s Bailout Barometer

The Richmond Fed recently released new estimates
of the size of the financial sector’s government-provided safety net — a measure that we call the
“bailout barometer.” According to these estimates, 60 percent
of the financial sector’s liabilities — $25 trillion — are either
explicitly or implicitly insured by taxpayers. Explicit guarantees include programs like deposit insurance for banks, while
implicit guarantees cover liabilities for which market participants believe the government will provide support in times
of distress. In some cases, these expectations have developed
over time following earlier government bailouts of firms or
markets deemed “too big to fail” (TBTF).
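
As a rough Python sketch of how such a measure is put together (the split between explicit and implicit guarantees below is hypothetical; only the 60 percent share and the $25 trillion covered total appear in this message):

# Minimal sketch of the bailout barometer calculation described above.
# Only the $25 trillion covered total and the 60 percent share come from the
# text; the explicit/implicit split and the implied ~$41.7 trillion in total
# liabilities are illustrative assumptions.
explicitly_guaranteed = 13.0      # $ trillions, hypothetical split
implicitly_guaranteed = 12.0      # $ trillions, hypothetical split
total_liabilities = 25.0 / 0.60   # ~$41.7 trillion implied by the figures above

barometer = (explicitly_guaranteed + implicitly_guaranteed) / total_liabilities
print(f"Bailout barometer: {barometer:.0%} of ${total_liabilities:.1f} trillion in liabilities")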
The size of the financial safety net is critically important.
While guarantees against losses can help prevent panics by
reassuring creditors, they also erode incentives for firms to
minimize risk. Protected creditors have little incentive to be
concerned over the riskiness of financial institutions’ activities and will thus overfund risky activities. As financial firms
grow in size and riskiness, policymakers may be motivated to
protect them during times of distress to prevent damage to
the rest of the economy. Such actions can increase the size
of explicit and implicit safety net guarantees alike, however,
creating a vicious cycle that perpetuates TBTF.
Despite legislation such as the Dodd-Frank Act aimed
at eliminating the TBTF problem, the size of the safety
net has remained roughly unchanged since 2009, and — as
the cycle described above would predict — it has grown
considerably since Richmond Fed researchers published
our first bailout barometer estimates in 2002. I asked them
to create the measure after I became director of research
at the Richmond Fed in 1999. There was growing concern
among policymakers and economists about TBTF at the
time but no good estimate of just how large the financial
safety net was.
Our researchers estimated that nearly 45 percent of
financial sector liabilities in 1999 were either explicitly or
implicitly protected by government guarantees. I was surprised by how high that number was. Industry experts and
banking regulators in the 1990s had been saying that the
banking industry was declining as a share of financial intermediation, as more nonbanks, like money market mutual
funds, provided services traditionally handled by banks.
Because a large portion of the safety net was composed of
protected assets in what I had assumed was the shrinking
banking sector, I had expected it to be much smaller than
what our researchers actually found.
In hindsight, the size of the safety net should have
alerted me to another problem: Financial firms outside of
the banking sector had an incentive to mimic the dependence of banks on the type of short-term funding that is
likely to receive government assistance during a crisis. Such

funding would be less costly if
it was perceived as benefiting
from an implicit government
guarantee. But relying more
heavily on cheap short-term
funding that can suddenly
dry up would also make those
firms, and the financial sector
as a whole, more fragile. In
fact, this is exactly what we
saw leading up to the financial
crisis of 2007-2008.
Before the crisis, I had
been optimistic that policymakers would take steps to prevent the growth of the safety net. In a paper I wrote in 1999
with Marvin Goodfriend, then a senior vice president and
policy adviser at the Richmond Fed (now on the faculty at
Carnegie Mellon University), we speculated that policymakers might gradually see that liberal lending during crises was
counterproductive, since it exacerbated the TBTF problem
in the long run. Thus, it seemed reasonable to think they
would commit not to rescue failing institutions.
While I was optimistic that we were heading in this
direction, Marvin was less sanguine. He believed that policymakers were likely to continue to favor short-term relief of
financial distress over the long-term goal of shrinking the
financial sector’s federal safety net. In the end, Marvin’s fears proved
well-founded: During the financial crisis of 2007-2008, the government rescued
financial firms that our researchers had previously assumed to be outside the
safety net.
The long-term solution to this problem is to restore
market discipline so that financial firms and their creditors
have an incentive to monitor and reduce risk-taking. The
government can facilitate this by credibly committing not to
fund bailouts in future crises. The Dodd-Frank Act includes
a number of provisions aimed at helping policymakers establish such a commitment, including its requirement that the
largest and most complex financial firms create resolution
plans known as “living wills.” These are detailed road maps
for how regulators can unwind failed firms without threatening the rest of the financial system or requiring government
assistance. Our researchers will continue to update the
bailout barometer to gauge the progress that is being made
toward shrinking the problem of “too big to fail.”
EF

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND


UPFRONT

Regional News at a Glance

BY LISA KENNEY

MARYLAND — In May, the U.S. Supreme Court struck down a Maryland
income tax law because it double-taxed residents’ out-of-state income. Normally,
income is taxed both where it is made and where taxpayers live, and states give
a full credit for the income taxes paid on out-of-state earnings. Maryland levies
so-called “state” and “county” taxes, but it allowed credits to be claimed only for
the “state” taxes; the court said that both types of taxes in Maryland are actually
state taxes.

NORTH CAROLINA — North Carolina’s life sciences industry grew 31 percent
between 2001 and 2012, more than four times the industry’s national growth rate,
according to a study released in March by the research firm Battelle. The study
was prepared for the private nonprofit North Carolina Biotechnology Center,
which is supported by the state’s General Assembly. The report also found life
science companies in the state were responsible for $73 billion in economic output
in 2014 and accounted for 48 percent of all net new jobs in North Carolina from
2001 to 2012.
SOUTH CAROLINA — Gov. Nikki Haley announced in June that the state
had paid off a nearly $1 billion loan from the federal government five months
early. The loan was granted over five years ago to help with unemployment costs
during the recession. The early repayment saved the state more than $12 million
in interest payments. South Carolina was one of 36 states that borrowed from the
federal government for their unemployment insurance funds in the last six years.

VIRGINIA — The state has launched a new business plan competition for
entrepreneurs in the bioscience and energy sectors. Virginia Velocity offers $850,000
in prizes that will be shared among at least four winners. It is open to all companies
in these two sectors, including those based outside of Virginia if they are willing
to relocate to the state for two years. Winners will be announced after the final
presentations in Richmond in early September.

WASHINGTON, D.C. — Low-income D.C. residents are receiving assistance
from a new program that outfits single-family homes with solar panels at no cost
to the households. The Solar Advantage Plus Program is funded jointly by the
D.C. Department of the Environment and the DC Sustainable Energy Utility, a
partnership created by the 2008 Clean and Affordable Energy Act to administer
sustainable energy programs in D.C. Households must meet certain income
requirements to be eligible, as well as have their systems installed by Sept. 30.

WEST VIRGINIA — The state Supreme Court ruled in May that doctors and
pharmacies that negligently prescribe and dispense pain medications can be sued
for enabling addictions. The defendants argued that illegal actions by the plaintiffs
in obtaining the drugs meant they could not seek damages. The court’s decision
stated that juries must weigh the plaintiffs’ criminal conduct against any alleged
negligence of doctors or pharmacists. In response, legislation effective May 25
prevents plaintiffs from receiving damages that arise as part of their own felony
criminal acts.

FEDERAL RESERVE

Jekyll Island: Where the Fed Began

A secret meeting at a secluded resort led to a new central banking system

BY JESSIE ROMERO

On Nov. 24, 1910, a select group of men enjoyed a Thanksgiving dinner of wild turkey with oyster stuffing at the luxurious Jekyll Island Club, off the coast of Georgia. The resort offered a host of leisurely pursuits, but the men weren’t there to golf or ride horses. Instead, the group was there to devise a plan to remake the nation’s banking system. The meeting was a closely guarded secret and would not become widely known until the 1930s. But the plan developed on Jekyll Island laid the foundation for what would eventually be the Federal Reserve System.

The main clubhouse on Jekyll Island was a social hub for the island’s wealthy visitors.
PHOTOGRAPHY: COURTESY OF THE JEKYLL ISLAND MUSEUM ARCHIVES

“Defects and Needs of Our Banking System”
Between 1863 and 1910, there had been three major banking panics and eight more localized panics in the United States. (Some modern scholars count as many as six major panics.) These panics stemmed in part from the country’s “inelastic” currency: The supply of bank notes didn’t expand and contract with the needs of the economy. This was an unintended consequence of the National Banking Acts of 1863 and 1864, which required all currency to be backed by holdings of U.S. government bonds. Because the aggregate supply of bonds was fixed for long periods, the aggregate supply of notes was also limited. In addition, for a bank to issue new notes, it had to purchase bonds, deposit those bonds with the U.S. Treasury, wait for Treasury to authorize printing the notes, and then wait for the notes to be printed and shipped. The entire process could take as long as three weeks. As a result, it was difficult for banks to provide enough currency during seasonal increases in demand, such as the fall harvest and the holiday shopping season. Banks also struggled to provide enough currency during the banking panics that accompanied many economic downturns, when many people would rush to withdraw their deposits at the same time.

The banking system at the turn of the century was also highly fragmented. The laws in most states barred banks from opening branches, so essentially every small town had its own bank, to the tune of more than 27,000 banks in the country in the early 1900s. These many small banks were connected to larger banks in the cities through a complex system of interbank deposits and clearinghouses that allowed strains to spread quickly throughout the entire financial system.

In many European countries, the currency was backed by commercial paper, the volume of which naturally expanded and contracted along with the economy. These countries also had central banks that rediscounted the commercial paper; by setting the discount rate, the central bank could help regulate the flow of currency. The central bank could also, in certain circumstances, act as a “lender of last resort” and provide loans to banks during times of crisis.

Bankers, businessmen, and policymakers were aware of the problems, and a number of groups were working on different proposals for currency reform. On Wall Street, however, a few young financiers were becoming interested in establishing a central bank.

One of these bankers was Henry Davison, a partner at
J.P. Morgan and Co. Davison started his career as an office
boy at a small bank in Connecticut and rose quickly through
the banking world, becoming vice president of the First
National Bank of New York by age 35. In 1903, while at First
National, Davison founded the Bankers Trust Company,
which became the second-largest trust company in the
country. Five years later, J. Pierpont Morgan asked Davison
to join his firm.
Frank Vanderlip had followed a circuitous path to Wall
Street. He grew up on a farm outside Aurora, Ill., and as a
teenager took a job in a machine shop to support his family after his father died. He later worked as an editor at a
small-town newspaper and then made his way to Chicago,
where he joined the Tribune and eventually became the
financial editor. When the Chicago banker Lyman Gage
was appointed Treasury secretary, he asked Vanderlip to
accompany him to Washington as his private secretary.
Within months, Vanderlip had been promoted to assistant
secretary, and his successful handling of the sale of $1.4 billion in Spanish-American War bonds drew the attention of
Wall Street. He left Treasury for National City Bank, the
forerunner of Citibank, in 1901 and became president of the
bank eight years later.
Paul Warburg, a partner at the investment bank Kuhn,
Loeb and Co., was one of the most vocal critics of the U.S.
banking system. (Kuhn, Loeb merged with Lehman Brothers
in 1977.) Warburg was born in Germany to a wealthy banking family, and he worked in Hamburg, London, and Paris
before moving to the United States in 1902. He gave numerous speeches and wrote articles about the virtues of a central
bank, including “The Defects and Needs of Our Banking
System,” which ran in the New York Times on Jan. 6, 1907.
In it, he noted that the United States’ banking system was
at “about the same point as was reached by Europe at the
time of the Medicis and by Asia, in all likelihood, at the time
of Hammurabi.” He advocated a system like that used by
European countries, in which a central bank issued currency
backed by short-term commercial loans. “We have reached
a point in our financial development,” he wrote, “where it is
absolutely necessary that something be done to remedy the
evils from which we are suffering.”

The Panic of 1907
Those evils surfaced once again during the Panic of 1907,
when a run on the Knickerbocker Trust Company spread
to other New York City trusts and banks. J.P. Morgan
returned to New York from a trip to Richmond, Va., to
figure out how to stop the panic. The first step was to
determine which trust companies were worth saving, a task
he assigned to Davison, then still at First National, and to
Benjamin Strong, whom Davison had hired as secretary of
Bankers Trust. Davison and Strong could not assure Morgan
that the Knickerbocker was sound, and Morgan did not
intervene. The Knickerbocker failed on Oct. 22. But they
judged the Trust Company of America (TCA) worthy of

support, and over the next several days Morgan assembled a
group of bankers to make a $10 million loan to TCA and two
loans of $25 million and $10 million to the New York Stock
Exchange, quelling the panic. (John D. Rockefeller provided
an additional $10 million to the trust companies.)
The Panic of 1907 wasn’t the worst financial crisis of
the National Banking era, but it got the attention of the
older generation of New York bankers, who began to come
around to their young colleagues’ point of view. That’s
because it was fundamentally different from previous panics, according to research by Jon Moen of the University
of Mississippi and Ellis Tallman, now at the Cleveland Fed.
“The Panic of 1907 happened in trusts, in a group of intermediaries outside the New York Clearinghouse and outside the
purview of the national banks,” says Moen. “The New York
bankers realized that if the next panic were any bigger, their
banks wouldn’t collectively have enough assets to stop it. A
lot of the older bankers hadn’t thought a central bank was
necessary, but they changed their tune very quickly.”
The Panic of 1907 also got the attention of Republican
Sen. Nelson Aldrich, the chair of the Senate Finance
Committee. Aldrich was one of the most powerful politicians of his time: President Theodore Roosevelt dubbed
him the “kingpin” of the Republicans, and journalists called
him (not fondly) the “boss of the United States.” Aldrich
was a key political ally of Morgan, and many of his fellow
legislators were suspicious of his wealth and his ties to business and finance, including his daughter’s marriage to John
D. Rockefeller Jr.
In response to the panic, Aldrich pushed through a bill in
1908 that, among other things, created the National Monetary
Commission to study reforms to the financial system. (The
bill was co-sponsored by Republican Rep. Edward Vreeland.)
The Commission included eight senators and eight representatives, with Aldrich as chair. But in Aldrich’s opinion, “The
drafting of a bill was a matter for experts, not members of
Congress inexperienced in banking and financial matters,” as
economic historian Elmus Wicker wrote in The Great Debate
on Banking Reform. So Aldrich hired several advisers, including Davison and A. Piatt Andrew, an economics professor at
Harvard University, and set off to meet with bankers and central bankers in Europe. “He had been very shrewd in making
up the commission,” wrote Nathaniel Wright Stephenson in
a 1930 biography of Aldrich. “It had three parts: those whose
names were valuable but who would not want to go to Europe
and so would not hamper the work; those who would like to
go to Europe but would be willing enough to be excused from
real work; those who meant business.”
When Aldrich left for Europe, he supported the existing
bond-backed currency and was skeptical about the necessity
of a central bank. But his meetings persuaded him that the
European system was worth emulating, and after returning
home he asked Paul Warburg to give a presentation at the
Metropolitan Club of New York. Warburg had written
Aldrich several letters about his views on financial reform
and was surprised by the senator’s change of heart. But

Warburg was also doubtful the American public would
accept a central bank, no matter the benefits. Aldrich was
more optimistic. “I like your ideas — I have only one fault to
find with them,” he told Warburg. “You say that we cannot
have a central bank, and I say we can.”

The Duck Hunt
By the fall of 1910, Aldrich had learned a great deal, but he
didn’t actually have a plan for a central bank. Nor did he have
a bill to present to Congress, which would begin meeting
in just a few weeks. So Aldrich — most likely at Davison’s
suggestion — decided to convene a small group to hash out
the details. The group included Aldrich, his private secretary
Arthur Shelton, Davison, Andrew (who by 1910 had been
appointed assistant Treasury secretary), Vanderlip, and
Warburg.
A member of the exclusive Jekyll Island Club, probably J.P. Morgan, arranged for the group to use the club’s
facilities. Founded in 1886, the club’s membership boasted
elites such as Morgan, Marshall Field, and William Kissam
Vanderbilt I, whose mansion-sized “cottages” dotted the
island. Munsey’s Magazine described it in 1904 as “the richest,
the most exclusive, the most inaccessible” club in the world.
Aldrich and Davison chose the attendees for their banking expertise, but Aldrich knew their ties to Wall Street
would arouse suspicion about their motives. “Knowledge
of who wrote the plan could have influenced people’s
perception of the value of the ideas and the likelihood of
its political passage,” says Gary Richardson, the Federal
Reserve System historian and an economics professor at the
University of California, Irvine. So Aldrich went to great
lengths to keep the meeting secret, adopting the ruse of a
duck hunting trip. He instructed the men to come one at
a time to a train terminal in New Jersey, where they could
board his private train car. Warburg went so far as to bring
all the trappings of a duck hunter, when in fact he had never
shot a duck in his life. Andrew didn’t even tell his boss, the
Treasury secretary, where he was going.
So secretive was the meeting that even the exact list
of participants is lost to history. In his autobiography,
Vanderlip says Benjamin Strong attended and recalls him
horseback riding before breakfast. But Strong is absent from
other historical accounts, including Warburg’s first-person
recollections. Strong was named the first president (then
called governor) of the New York Fed, and “during the 1920s
he was heralded as being the only person who really knew
what a central bank was supposed to do,” says Moen. “So
it was assumed later he was there, but there really isn’t any
evidence he took part in the meeting.”
Once aboard the train, the men used only their first
names with each other. Vanderlip and Davison went even
further, as Vanderlip wrote in his autobiography: “Davison
and I adopted even deeper disguises, abandoning our own
first names. On the theory that we were always right,
he became Wilbur and I became Orville.” Vanderlip and
Davison would continue to call each other Wilbur and

Orville for years, and the men referred to themselves as the
“First Name Club” for decades.

The Plan Takes Shape
Aldrich and his colleagues quickly realized that while they
agreed on broad principles — establishing an elastic currency
supplied by a bank that held the reserves of all banks — they
disagreed on the details. Figuring out those details was a
“desperately trying undertaking,” Warburg told Davison’s
biographer, Thomas Lamont; completely secluded, the men
woke up early and worked late into the night for more than a
week. “We had disappeared from the world onto a deserted
island,” Vanderlip recalled in his autobiography. “We put in
the most intense period of work that I have ever had.” But it
was also, Vanderlip wrote, “entirely thrilling.”
By the end of their time on Jekyll Island, Aldrich and his
colleagues had developed a plan for a Reserve Association
of America, a single central bank with 15 branches across
the country. Each branch would be governed by boards of
directors elected by the member banks in each district, with
larger banks getting more votes. The branches would be
responsible for holding the reserves of their member banks,
issuing currency, discounting commercial paper, transferring balances between branches, and check clearing and
collection. The national body would set discount rates for
the system as a whole and buy and sell securities.
Shortly after returning home, Aldrich became ill and was
unable to write the group’s final report. So Vanderlip and
Strong — who was a member of the “First Name Club” even
if he hadn’t been on Jekyll Island — traveled to Washington
to get the plan ready for Congress. Aldrich presented it to
the National Monetary Commission in January 1911, without telling the commission members how the plan had been
developed. A final report, along with a bill, went to Congress
a year later with a few minor changes, including naming the
new institution the National Reserve Association.
In a letter accompanying the report, the Commission
(that is, the Jekyll Island attendees) said they had created an
institution “scientific in its methods, and democratic in its
control.” But many people, especially Democrats, “hated the
version of democracy it presented,” says Richardson. “The
Aldrich plan presented a reform of the financial system that
was the kind of plan many Americans feared. It looked like
the biggest banks would have an outsized influence on the
leadership, like bankers in New York would through their
control of finance and credit be able to control the country
and rig the system.”
With a presidential election coming up, the Democrats
made it part of their platform to repudiate the Aldrich
plan and the idea of a central bank more generally. When
Woodrow Wilson won the presidency and the Democrats
took control of both houses, Aldrich’s National Reserve
Association was officially shelved.
But some Democrats also were interested in financial
reform, in particular Carter Glass, a congressman from
Virginia. Glass had developed a plan for a system of separate

regional reserve banks, as opposed to a central bank with
regional branches, as in the Aldrich bill. At President
Wilson’s insistence, Glass also included a Federal Reserve
Board composed of presidential appointees to provide federal oversight. But in its technical details, the Democrats’
final bill closely resembled the Aldrich bill. “What people were really upset about was the political structure of
Aldrich’s plan. So the Democratic reply was a proposal that
used the same technical infrastructure and policy tools. A
lot of it is word for word. They just put a different political
structure in place,” says Richardson. The combination of
regional independence and federal oversight was more to the
public’s liking, and the Federal Reserve Act, a combination
of Glass’s bill and a bill introduced by Sen. Robert Owen,
became law in 1913.

Postscript
In 1917, the journalist B.C. Forbes, the founder of Forbes
magazine, somehow learned about the Jekyll Island trip and
wrote about it in Men Who Are Making America, a collection
of short biographies of prominent financiers, including
Davison, Vanderlip, and Warburg. But not many people
noticed the revelation, and those who did dismissed it as “a
mere yarn,” according to Aldrich’s biographer.
The participants themselves denied the meeting had
occurred for 20 years, until Andrew, Vanderlip, and Warburg
shared the story with Aldrich’s biographer in 1930. (Aldrich
died in 1915 and Davison in 1922.) The impetus for coming

clean was probably the publication in 1927 of Carter Glass’
memoir, An Adventure in Constructive Finance. In it, Glass,
by now a senator, had claimed all the credit for the ideas in
the Federal Reserve Act. After that, Richardson says, “The
other people who contributed, particularly the Jekyll Island
guys, came out with books and articles to talk about their
role in creating the Aldrich plan.”
Warburg was especially critical of Glass’ description of
events. In 1930, he published a two-volume book describing
the origins of the Fed, including a line-by-line comparison
of the Aldrich bill and the Glass-Owen bill to prove their
similarity. In the introduction, he wrote, “I had gone to
California for a three months’ rest when the appearance of
a series of articles written by Senator Glass … impelled me
to lay down in black and white my recollections of certain
events in the history of banking reform.” (Warburg’s book
does not mention Jekyll Island specifically, although he
alludes to a secret meeting with Aldrich.)
The Jekyll Island Club never bounced back from the
Great Depression, when many of its members resigned, and
it closed in 1942. Today, its former clubhouse and cottages
are National Historic Landmarks, and the secret meeting
that launched the Federal Reserve is a historical curiosity for
the many tourists who visit the island. But the issues Aldrich
and his colleagues wrestled with over Thanksgiving more
than 100 years ago remain relevant today, as policymakers
and the public continue to debate the structure and powers
of the Fed.
EF

Readings
Forbes, B.C. Men Who Are Making America. New York: B.C. Forbes Publishing Co., Inc., 1917.

Moen, Jon R., and Ellis W. Tallman. “Why Didn’t the United States Establish a Central Bank until after the Panic of 1907?” Federal Reserve Bank of Atlanta Working Paper No. 99-16, November 1999.

Warburg, Paul M. The Federal Reserve System: Its Origin and Growth. New York: The Macmillan Company, 1930.

Wicker, Elmus. The Great Debate on Banking Reform. Columbus, Ohio: Ohio State University Press, 2005.

BAILOUT BAROMETER:

How Large is the Financial Safety Net?
The Richmond Fed estimates that 60 percent of the liabilities of
the financial system are subject to explicit or implicit protection
from loss by the federal government. This protection may encourage
risk-taking, making financial crises and bailouts more likely.
Learn more at: www.richmondfed.org/publications/research/special_reports/safety_net


POLICY UPDATE

Risk Retention Contention
BY DAVID A. PRICE

In recent decades, financial assets such as home mortgages, auto loans, and credit card receivables have commonly been securitized — that is, investment firms combine them into pools and sell interests in those pools to investors as securities. The process of securitizing creates new options for investors while also creating new sources of funding for borrowers, lowering their cost of borrowing. In the period leading up to the 2007–2008 financial crisis, however, many mortgage-backed securities (MBS) lost value from borrower defaults, fueling the collapse of major institutions.

In response, when Congress passed the Dodd-Frank Act in 2010, it included a requirement that issuers of some securitized investments retain a portion of those securities in their own portfolios — the Act’s “risk retention” requirement. The law requires issuers to retain 5 percent of the securities, with certain exceptions, and they are largely forbidden to hedge the risk that they retain. In October 2014, the Fed and five other regulatory agencies jointly announced the final version of the regulations for risk retention, which will take effect for securitizers of some MBS on Dec. 24, 2015. The regulations will take effect for securitizers of other assets a year later.

The idea behind the risk retention requirement is that during the period before the financial crisis, sellers of MBS deceived investors about the riskiness of the mortgages. The sellers were able to carry out the deception, in this view, as the result of asymmetrical information: The investors lacked information about the mortgages and their underwriting standards, and the pools were structured in a complex way that was difficult for investors to make sense of. Risk retention forces securitizers to keep some skin in the game, so to speak, so that they are subject to the same credit risk as the investors.

The statute and regulations provide for a number of exemptions to the requirement. Perhaps the most significant exemption is that, under the Dodd-Frank Act, a securitizer does not need to retain risk if all of the securitized assets in a pool are mortgages that meet a standard of safety; such mortgages are known as qualified residential mortgages (QRM). Congress largely left it up to the agencies to define which mortgages are QRM and which are not.

In the final regulations, the agencies defined QRM in a way that created a broad exception; they did so by defining QRM to mean the same as a “qualified mortgage” under the Truth in Lending Act. As a result, mortgages can be exempt from the risk retention requirement without having any minimum down payment. According to a New York Times report, higher standards for the exemption were opposed by a coalition of mortgage lenders and consumer groups concerned about mortgages becoming more difficult to obtain. A commissioner of the Securities and Exchange Commission, Daniel Gallagher, dissented from the decision, stating that the agencies’ standard was “meaningless at best, deleterious at worst.”

The importance of risk retention to avoiding a future crisis is an open question, however. Economist Paul Willen of the Boston Fed noted in a 2014 article that institutions selling MBS prior to the financial crisis held significant amounts of it in their portfolios. “Indeed,” he wrote, “the financial crisis resulted precisely from the fact that the losses associated with the collapse in the housing market were so concentrated in the portfolios of the intermediaries.”

A 2008 analysis by economists Kristopher Gerardi of the Atlanta Fed, Andreas Lehnert and Shane Sherlund of the Fed’s Board of Governors, and Willen of the Boston Fed suggests that the underlying issue was not a lack of risk retention, but unwarranted optimism about the housing market. In an article in Brookings Papers on Economic Activity, they examined reports from investment bank analysts, credit rating agencies, and the news media on subprime MBS from 2005 and 2006. They found that the likely effects of a housing downturn on MBS values were understood; where the analysts erred was in assigning a low probability to even a modest downturn, let alone a major one.

Another question is whether MBS buyers will demand risk retention or some other protective arrangement in the absence of a risk retention rule. Richmond Fed economist John Walter suggests that in the absence of the expectation of a government bailout, institutions will seek to do so. “The lender has some information advantages, but asymmetric information problems occur in the economy all the time,” Walter says. “For instance, cars are highly complex and it’s hard for purchasers to know their quality. The way manufacturers and dealers respond is to retain some of the risk with warranties. Regulators don’t require warranties, but this solution has emerged from market incentives.”

While many low-quality mortgages were made before the crisis, Walter says, that is in part because the parties to the MBS deals were perceived as “too big to fail” or were doing business with “too big to fail” firms. “Coming up with prescribed solutions to this asymmetric information issue is dealing with the symptom, not the underlying problem.” EF

JARGON ALERT

Statistical Significance
BY EAMON O’KEEFE

A drug company has developed a new treatment for
high cholesterol. It finds that patients who take the
new drug experience fewer heart attacks and other
negative effects from the condition. But how confident are
we in those results? This is the question of statistical significance, and it can be applied to the social sciences to help
economists better determine the effects of a certain policy
change or business decision.
To determine statistical significance, a researcher begins
by creating a null and an alternative hypothesis to test if a
relationship exists between two events or characteristics.
The null hypothesis typically states that no relationship
exists, and the alternative hypothesis asserts that a relationship does exist. For example, an economist might suspect
a rise in the minimum wage will affect the employment of
less-skilled workers. The null hypothesis would be that,
on average, there is no change in the
unemployment rate for less-skilled workers after a state raises its minimum wage.
The alternative hypothesis would be that
there is a change in unemployment after
an increase in the minimum wage.
Suppose the economist runs a regression analysis and the coefficient on the
minimum wage variable is positive — suggesting a possible correlation between
unemployment for less-skilled workers
and a state’s minimum wage. The next
step is to determine our level of confidence in that result.
Researchers use what is called a p-value to communicate
the probability of finding a relationship when no such relationship exists. If the p-value is below a certain threshold —
5 percent is commonly used — the relationship is deemed statistically significant and the null hypothesis can be rejected.
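
To make the mechanics concrete, here is a minimal Python sketch of the test described above; the synthetic data, the variable names, and the built-in "true" effect of 0.2 are invented for illustration, and the regression uses the open-source statsmodels library rather than anything specific to this article.

# Minimal sketch of a null-hypothesis test on a regression coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
min_wage_change = rng.normal(0.0, 1.0, n)                        # hypothetical state wage changes
unemployment_change = 0.2 * min_wage_change + rng.normal(0.0, 1.0, n)

X = sm.add_constant(min_wage_change)                             # add an intercept term
results = sm.OLS(unemployment_change, X).fit()

coef = results.params[1]                                         # estimated effect of the wage change
p_value = results.pvalues[1]                                     # chance of an effect this large under the null

# Reject the null of "no relationship" only if p falls below the chosen threshold.
print(f"coefficient = {coef:.3f}, p-value = {p_value:.4f}, significant at 5%: {p_value < 0.05}")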
Of course, correlation is not the same as causation. Just
because a change in one variable coincides with a change in
the other does not necessarily mean they cause one another.
For example, playing tennis might be correlated with wealth,
but unless one is a professional tennis player, it won’t lead
to greater wealth. Without a controlled experiment, it’s
very difficult to prove causality. Controlled experiments are
relatively rare in economics; for example, it’s unlikely that
legislators would allow an economist to tinker with their
state’s minimum wage in the name of scientific inquiry. But
economists can take advantage of “natural experiments,”
such as one state raising its minimum wage while a neighboring state leaves its wage unchanged. Or they can use statistical techniques to control for other factors that might affect
employment. A considerable amount of research has used
such methods to study the minimum wage. Most studies

have found disemployment effects, although the magnitude
varies considerably. (See “Raise the Wage?” Econ Focus, Third
Quarter 2014.)
Just as it’s important to distinguish between correlation
and causation, it’s also important to distinguish between
statistical significance and economic significance. Statistical
significance is about your confidence in the result, but just
because a result is statistically significant doesn’t mean
the result is large or meaningful. For example, say a large
increase in the state minimum wage caused a few people
in that state to lose their jobs. The statistical relationship
might be strong, but the magnitude of job loss could be small
enough to be inconsequential to policymakers.
The problem of error is implicit in any discussion of
statistical significance. There exists, in a statistical test, the
possibility for two types of error: type 1 and type 2. A type 1
error indicates a “false positive” or rejecting the null hypothesis when it is true. A
type 2 error is when one accepts the null
when it is false. Both can be problematic,
but the extent to which the researcher is
concerned about the error depends on the
question being explored.
It’s important to take type 1 and type 2
errors into account when considering the
threshold for statistical significance. The
smaller the p-value threshold, the higher the bar for
significance. So a researcher who is especially concerned about making a type 1 error might look for
significance well below 0.05. In a 2012 column, Carl Bialik,
the Wall Street Journal’s “The Numbers Guy,” detailed how
this concept was used to validate the existence of the elusive
Higgs boson particle — sometimes referred to as the “God
particle.” Researchers used a statistical significance of “five
sigmas” to reject a result with a p-value greater than one in
3.5 million. They wanted to set an extremely high burden of
proof for discovering a new particle in the universe.
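
As a quick check of that conversion, here is a small Python sketch (it assumes the threshold is quoted as a one-sided normal tail, a common convention):

# "Five sigma" translated into a p-value using the normal distribution.
from scipy.stats import norm

p_five_sigma = norm.sf(5.0)   # upper-tail probability beyond 5 standard deviations
print(p_five_sigma)           # ~2.9e-07
print(1 / p_five_sigma)       # roughly 3.5 million, i.e., odds of about 1 in 3.5 million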
This discussion of error can be applied to other questions
society faces. For example, many might argue that determining guilt in a death penalty case should require a higher
burden of proof than in a normal trial. Implicitly, one is
determining a p-value in this situation because it is desirable
to have a very low probability of type 1 error (convicting
someone and sentencing them to death for a crime they
didn’t commit).
In a sense, then, statistical significance reflects value
judgments. Setting a high or low p-value indicates a
researcher’s belief about what constitutes significance —
an additional nuance to be mindful of when interpreting
research findings.
EF

ILLUSTRATION: TIMOTHY COOK


RESEARCH SPOTLIGHT

Superstars of Tax Flight

BY KARL RHODES

“Taxation and the International Mobility of Inventors.” Ufuk Akcigit, Salomé Baslandze, and Stefanie Stantcheva. National Bureau of Economic Research Working Paper No. 21024, March 2015.

On Feb. 16, 2013, the New York Times published an opinion column, “The Myth of the Rich Who Flee From Taxes.” On the following day, Forbes magazine countered with an online commentary, “Sorry New York Times, Tax Flight of the Rich Is Not a Myth.”

The articles cited anecdotes of wealthy celebrities moving from high-tax states and nations, but data tracking the international mobility of large random samples of wealthy people over long periods of time is difficult, if not impossible, to find. So some economists have addressed this question by looking at observable subsets of wealthy populations. In 2013, for example, researchers from the London School of Economics (Henrik Kleven and Camille Landais) and the University of California, Berkeley (Emmanuel Saez) studied the mobility of professional soccer players among 14 Western European nations from 1985 through 2008. They found that the players — especially foreign “superstars” — do tend to migrate to countries with lower tax rates. (The authors defined “foreign” players as those who are not competing in their home countries.)

A more recent example comes from a 2015 working paper by Ufuk Akcigit and Salomé Baslandze of the University of Pennsylvania and Stefanie Stantcheva of Harvard University. They study the impact of effective top tax rates on inventors’ mobility; in particular, they look at inventors’ movement among the United States, Canada, France, Germany, Great Britain, Italy, Japan, and Switzerland from 1977 through 2003. Inventors from these eight countries account for most of the patents issued by the U.S. Patent and Trademark Office and the European Patent Office.

For inventors who obtained patents in the United States, the authors employ panel data that was disambiguated recently by researchers at Harvard, Berkeley, and other institutions. (Disambiguation untangles name variations that could make one inventor appear to be multiple people and name duplications and similarities that could make multiple inventors appear to be one person.) For inventors who obtained patents in Europe, the authors use disambiguated panel data from the CRIOS-PatStat database developed by researchers at Bocconi University in Italy. By combining information from both sources, Akcigit, Baslandze, and Stantcheva are able to track most of the inventors who obtained patents during their study’s timeframe.

The authors sort these data into “quality distributions” that rank each of the 1,868,967 inventors in their sample based on several factors related to the quantity and quality of his or her patents. The key indicator of an inventor’s quality is his or her number of citation-weighted patents. A citation occurs whenever an inventor’s patent is referenced by a later patent. The resulting accumulation of citations varies widely among inventors. The average inventor in the sample has 42 citations, for example, while the average inventor in the top 1 percent of the sample has more than 1,000 citations. The authors refer to the top 1 percent as “superstars … key drivers of economic growth.”

Akcigit, Baslandze, and Stantcheva combine this patent data with international tax data to estimate each inventor’s potential earnings in each country based on factors such as numbers of patents and citations and technological field. Other key considerations include whether or not an inventor works for a multinational corporation and how active that company is in each potential destination country. (Inventors who work for multinationals tend to be more mobile.)

The authors then develop a model to estimate elasticities with respect to effective top tax rates for domestic and foreign inventors. They find that top tax rates significantly influence location decisions among superstar inventors — especially foreign superstars. The elasticity for foreign superstars is 1.3, more than 30 times higher than for domestic superstars.

The elasticity of the domestic superstar inventors is somewhat lower than the elasticity of the domestic soccer players in the study by Kleven, Landais, and Saez. The authors of the soccer study speculate that the elasticity for soccer superstars may be greater than for other highly paid professionals because soccer superstars earn most of their income during just a few prime years and because professional soccer involves little country-specific capital. In addition, Akcigit, Baslandze, and Stantcheva point out that the soccer study considers migration only among Western European countries, while their inventor study also includes the United States, Canada, and Japan. “Expanding the [soccer] study to other continents might, one would expect, reduce the tax elasticities of migration,” they suggest.

Both studies conclude that some wealthy individuals are substantially influenced by taxes when deciding where to live. The soccer research goes one step further by suggesting that tax-induced migration has translated into better-performing teams in lower-tax countries. The inventor study makes no parallel suggestion regarding higher levels of innovation from the migration of inventors, but it certainly raises the stakes from breakaway goals that win soccer games to breakthrough technologies that drive economic growth.
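
As a rough illustration of what an elasticity of 1.3 implies, consider the following Python sketch. The tax rates are hypothetical, and the calculation assumes the elasticity is defined with respect to the net-of-tax retention rate (1 minus the top rate), the usual convention in this literature, which the article does not spell out.

# Hypothetical illustration of a migration elasticity of 1.3 for foreign
# superstar inventors; the tax rates below are invented, not from the paper.
elasticity = 1.3
old_top_rate, new_top_rate = 0.50, 0.45

pct_change_retention = ((1 - new_top_rate) - (1 - old_top_rate)) / (1 - old_top_rate)
pct_change_foreign_superstars = elasticity * pct_change_retention

print(f"Retention rate rises {pct_change_retention:.0%}; "
      f"foreign superstar inventors rise about {pct_change_foreign_superstars:.0%}")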
EF

THE PROFESSION

Economists and the Real World
BY TIM SABLIK

How important is it for economists to gain real
experience with the markets they study in
theory? Many of the founders of the discipline
held other jobs before becoming professors. Nineteenth
century French economist Léon Walras, who developed
general equilibrium theory, worked as a journalist, novelist,
railroad clerk, and bank director before becoming a professor at the age of 36. William Stanley Jevons, the 19th century
British economist who helped develop the theory of marginal utility, initially studied the physical sciences and spent
five years as a metallurgical assayer in Australia.
For much of the modern era, however, the careers of
economists have seemed to stay close to the ivory tower of
academia. Although hard data is limited, the typical path
for a research economist appears to go from college straight
into doctoral study with little or no experience outside the
profession along the way. And according to a 2013 Inomics
survey, employers of economists in the United States and
Canada said that of nine factors in the selection of a job
candidate, “experience in the private sector” was by far the
least important.
Some critics have argued that such isolation from the
real world is a cause for concern. For example, many claimed
that economists failed to predict the 2007-2008 financial
crisis because their models had become too detached from
the way real financial markets operate.
But that image of academic isolation may not be wholly
accurate today. Many academic economists have begun
collaborating more actively with private firms and public
institutions. This practice has become common in the
discipline of market design, for example. Robert Wilson
of Stanford University helped design auctions for the oil,
communications, and power industries. Along with his
former student Paul Milgrom of Stanford University and
with Preston McAfee, who is now the chief economist
at Microsoft, Wilson received the 2014 Golden Goose
Award for designing the first spectrum auctions used by the
Federal Communications Commission in 1994. Alvin Roth
of Stanford University, co-winner of the 2012 Nobel
Prize in economics, collaborated with public schools in
New York City and Boston to design algorithms to improve
student placement in preferred schools and with doctors
to arrange kidney transplant exchanges between pairs of
donors and recipients.
“Market design is a team sport,” Roth said in his Nobel
acceptance speech. “And it is a team sport in which it is hard
to tell who are theorists or practitioners because it blurs
those lines.”
Susan Athey of Stanford University says that it is “not
an accident” that economists studying market design and

industrial organization have collaborated heavily with real-world firms and institutions. “If you’re trying to solve a real
problem, you need to understand the full set of constraints
to propose the best solution,” she says. Her role as a consultant for Microsoft has influenced her research on Internet
markets, such as online advertising.
Still, the choice to work in or collaborate with the private sector is not without downsides. Stints in the private
sector, while providing valuable experience, can delay an
economist’s publication of papers, hampering his or her
progress as a researcher. And economists who do publish
research based on their private-sector work or collaboration
can find themselves criticized for bias. Luigi Zingales of the
University of Chicago has argued that economists who work
with private firms face the same pressures and risk of “capture” as regulators. Since obtaining proprietary data often
means establishing a good rapport with the firms that hold
those data, economists may be more inclined to evaluate
such firms favorably in their research.
This criticism was also levied against some financial
economists following the 2007-2008 crisis. Athey says that
for this reason, it has generally been easier for microeconomists studying questions unrelated to public policy to collaborate with industry. But even those economists can face
stigma from their academic peers. “I got the impression
that many of my peers thought I was selling out,” she says.
“They couldn’t really understand why I was so confident my
work with Microsoft was going to come back and improve
my research.”
Today, many of the leading empirical studies rely on large
datasets collected by firms and government agencies. As a
result, more economists seem willing to risk some criticism
to obtain access to these data. In a 2014 article in Science
magazine, Liran Einav and Jonathan Levin of Stanford
University reported that 46 percent of papers published
in the American Economic Review in 2014 relied on private
or non-public administrative datasets, compared with just
8 percent in 2006.
“I think the profession is starting to normalize the idea
of working with a firm to get access to data,” says Athey.
“Increasingly, people are recognizing that without this
private sector data, we’re just not going to be able to get a
complete picture of trends which could end up being very
important to the economy.”
Such collaboration has helped enable research on consumer behavior, economic mobility, and high-frequency
trading, among other topics. While most academic economists may never hold jobs in other fields, as Jevons and
Walras did, collaboration with firms is increasingly bringing
the real world into economic research.
EF

AROUND THE FED

The Competitiveness of Inner Cities

BY LISA KENNEY

“Are America’s Inner Cities Competitive? Evidence
from the 2000s,” Daniel A. Hartley, Nikhil Kaza, and
T. William Lester, Federal Reserve Bank of Cleveland
Working Paper No. 15-03, March 2015.

Are the majority of inner cities experiencing a renaissance thanks to rapid gentrification, or is growth limited
to a small number of high-technology regions, resulting in
inequality among metropolitan areas? These two narratives
are at the center of new research from the Cleveland Fed,
which looks at whether inner cities have become more
competitive — that is, whether they have had net positive
employment growth and an increase in the share of jobs
located there.
The authors conclude that while there has been nationwide job growth in inner cities, it has not been enough to
declare a renaissance in inner city America.
In their research, the authors look at three measures of
employment. First, census tract level data from the Local
Origin-Destination Employment Statistics program showed
that inner city tracts added 1.8 million jobs between 2002 and
2011. This job growth was found in nearly all census divisions,
and the inner city rate of growth nearly matched suburban
tracts’ rate of growth, 6.1 percent to 6.9 percent, respectively.
Inner cities also increased their share of metropolitan employment in 120 of the 281 metropolitan statistical
areas studied — in addition to having positive employment
growth — showing that competitive inner cities may not be
uncommon, but they are not yet universal.
Finally, the authors look at the pattern of job growth
within the inner cities. Job growth tended to occur faster in
census tracts closer to downtown, with nearby population
increases and recent residential construction. And even within
competitive inner cities, the tracts with higher poverty levels
had lower job growth than the tracts with lower poverty levels.
“Competing for Jobs: Local Taxes and Incentives,”
Daniel J. Wilson, Federal Reserve Bank of San Francisco
Economic Letter 2015-06, Feb. 23, 2015.

There has been a debate about whether or not localities
should use tax incentives to persuade businesses to relocate to certain areas. State and local governments across the
country, including in the Fifth District — where, for example, South Carolina lured Boeing with a multimillion-dollar
incentive package — have used these tools to increase the
economic development in their regions.
These incentives can be broken into two categories:
Discretionary incentives are created specifically for individual companies, while nondiscretionary incentives are

available to all qualifying businesses.
In a recent Economic Letter, a San Francisco Fed researcher
asks whether these incentive situations are a zero-sum game.
That is, has economic activity simply been moving from one
area to another? According to past research that the author
reviews, the answer is mostly yes.
Past research has found that when tax incentives bring
a company to a new locality, the move has an adverse effect
on the old location. This means there is no net gain for the
national economy.
The Economic Letter finds that local tax policy does influence the location decisions of companies, but that there is
no consistent way to measure whether the benefits of these
incentive policies outweigh the costs of lost tax revenue.
One large policy question is whether these tax incentives
should be banned, as they are in most of the European Union.
Standard economic theory suggests it may not be optimal
for local governments to set tax policies because they do not
factor in the negative effects their decisions will have on other
areas; the central government may be better suited for this
role. But the Tiebout model, posited by economist Charles
Tiebout in 1956, says that competition for individuals and
businesses forces local governments to be as efficient as possible in order to charge the lowest possible tax rate.
The author concludes that policy must “weigh the benefits
of local choice … against the cost of how changes in one area
might negatively affect competing jurisdictions.”
“How Cyclical Is Bank Capital?” Joseph G. Haubrich,
Federal Reserve Bank of Cleveland Working Paper
No. 15-04, March 2015.

The idea that bank capital is cyclical has been cited by
some as one reason for the 2008-2009 financial crisis. But economist Joseph Haubrich of the Cleveland Fed
wondered whether bank capital was really cyclical at all. He finds that the answer depends on several factors, including the time period, the definition of the capital ratio, and bank size.
Haubrich used both quarterly and annual data. The first
quarterly dataset shows the ratio of total equity capital
to total assets from fourth quarter 1959 to fourth quarter
2013; the second set shows the ratio of Tier 1 capital to
risk-weighted assets from first quarter 1996 to fourth quarter
2013. There are also two sets of annual data, one from 1834 to
1980 and the other from 1875 to 1946. In the quarterly data,
Tier 1 capital to risk-weighted assets is found to be moderately procyclical, while the quarterly equity to assets ratio does
not show any cyclicality.
Small banks were the most procyclical, while the largest
categories of banks showed more counter-cyclicality.
EF

The Secession Question
What are the economic
costs and benefits of
nations breaking apart?
BY T I M S A B L I K

It was called a “once in a generation opportunity.” Last
September, Scottish voters took to the polls to decide
the fate of their country’s more than 300-year union
with England. One side, clad in the blue and white of the
Scottish flag, invoked Scotland’s unique history and heritage
and argued that they would be more prosperous on their
own. But many in Scotland and the United Kingdom as
a whole implored voters to reject independence, arguing,
among other things, that it would be economically disastrous
for everyone involved.
The referendum drew a record turnout: 3.6 million people, or nearly 85 percent of eligible voters. In the end, status
quo won the day by a margin of 55 to 45. The debate didn’t
end there, however. This May, the Scottish National Party
(SNP), which is the leading proponent of independence,
secured 56 of Scotland’s 59 seats in Parliament, prompting
speculation about another referendum in the not too distant future. And the debate reinvigorated existing secession
movements elsewhere. Catalonia, a region in northeastern Spain,
is seeking its own vote on independence, and the Flemish
nationalist party surged to power in Belgium following the
Scottish referendum.
What prompts some regions to seek separation from
their country? Having a distinct regional identity is a crucial
component, as most secession movements appeal to cultural
and historical differences between the region and the rest
of the country. There are a number of catalysts that might
inflame those differences. In the past, secessions have been
sparked by disputes over religion, politics, or civil rights. But
in a 2008 paper, Andrés Rodríguez-Pose and Richard Sandall
of the London School of Economics traced the evolution of
the arguments made in secession movements and found that
they have shifted. “Identity has progressively been relegated
in favour of the economy and the promise of an economic
dividend as the other main motivating factor,” they wrote.
This is certainly true of Scotland, Catalonia, and Flanders,
which have focused heavily on economic issues. But can
regions become economically better off going it alone?

In 2012, on September 11, Catalonia’s national day, hundreds of
thousands of people gathered in Catalonia’s capital, Barcelona,
to demand independence from Spain.

A Perfect Union?
From a pure economic efficiency standpoint, countries are rarely better off splitting into smaller pieces. As Alberto Alesina of Harvard University and Enrico Spolaore of Tufts
University noted in their 2003 book The Size of Nations, there
are several major advantages to being a large country. First,
the per capita expenses of public goods with large fixed costs
are lower in large nations. Taxes to support infrastructure like
roads, schools, and national defense are spread across a bigger
population. In the case of national defense, this means larger
countries can also more easily support a larger military, arguably allowing them to better defend their territory.
Large nations also typically have bigger, more diverse
internal markets. Smaller countries can seek this advantage
to some extent by trading with the larger world market.
Indeed, Alesina and Spolaore found a correlation between
trade liberalization and the fragmentation and downsizing of
nations. The early 20th century, which was marked by high
protective tariffs and other trade barriers, was also a period
in which countries maintained large empires. In a restrictive
trade regime, it is advantageous to be a large nation or have
multiple colonies with which to trade freely. Coincidentally
or not, as countries have relaxed trade barriers, the number
of nations has grown. In 1948, there were 74 countries; today,
the United Nations recognizes 193. “As trade becomes more
liberalized, small regions are able to seek independence at
lower cost,” wrote Alesina and Spolaore.
Still, small nations face costs to trade that larger countries
can avoid. Even relatively open international borders impose
some frictions. For example, researchers have found that
even in the case of the very open trade relationship between
the United States and Canada, internal trade remains preferred by market participants in both countries. Without
internal trade barriers, a large country has efficient access to
large domestic markets, avoiding trade frictions.
Furthermore, larger nations can support more diverse
markets. To compete in international markets, small nations
often specialize in a small number of goods or services. This
lack of diversification can leave their economies more vulnerable to macroeconomic shocks, as witnessed during the
financial crisis of 2007-2008 by the troubles in small economies like Iceland and Ireland.
With more diverse economies, larger countries are also
better equipped to share risk among their territories. If
certain regions of the country suffer greater losses than the
nation as a whole during an economic crisis, the government
can transfer tax revenues from more prosperous areas to
provide aid. Even in non-crisis times, large countries are better equipped than small ones to smooth income across the
country by transferring tax revenue from wealthy regions to
help boost development in poorer regions.
But size has downsides as well. According to research on
the political economy of secession, larger nations are more
likely to have regions that strongly disagree about public
policy. As a result, decisions intended to improve the welfare
of the country as a whole, such as economic transfers, can
benefit some regions at the expense of others.
“That creates the beginning of political resentment,” says
Ángel Ubide, a senior fellow at the Peterson Institute for
International Economics.


Independence Votes Since World War II
1957: Guinea (led to independence)
1961: Samoa, Jamaica (led to independence)
1962: Algeria (led to independence)
1964: Rhodesia, Malta (led to independence)
1974: Comoros, three islands (led to independence); Comoros, one island (did not lead to independence)
1977: Djibouti (led to independence); Aruba (did not lead to independence)
1979: Saint Vincent and the Grenadines (led to independence)
1980: Quebec (did not lead to independence)
1987: New Caledonia (did not lead to independence)
1990: Slovenia (led to independence)
1991: Armenia, Azerbaijan, Croatia, Estonia, Georgia, Latvia, Lithuania, Macedonia, Turkmenistan, Ukraine, Uzbekistan (led to independence)
1992: Bosnia and Herzegovina (led to independence); Montenegro (did not lead to independence)
1993: Eritrea (led to independence)
1995: Quebec, Bermuda (did not lead to independence)
1998: Nevis (did not lead to independence)
1999: East Timor (led to independence)
2006: Montenegro (led to independence)
2011: South Sudan (led to independence)
2014: Scotland (did not lead to independence)
NOTES:
Jamaica: Voted to withdraw from West Indies Federation; became fully independent on its own in 1962.
Rhodesia: Unilaterally declared independence in 1965 but was not fully recognized internationally until 1980, when it became Zimbabwe.
Comoros: Although 95 percent of all voters supported independence, a majority on the island of Mayotte voted against it. In July 1975, the parliament declared the independence of the three remaining islands; Mayotte remains an overseas department of France.
Aruba: Although 95 percent of valid votes favored independence, in 1990 the transition process was postponed indefinitely at Aruba’s request.
Nevis: 62 percent voted to secede from St. Kitts and Nevis, short of the necessary two-thirds.
Djibouti: 99.8 percent of voters chose independence over remaining a French territory; fraud accusations marred two prior referendums, in 1958 and 1967, which came out in favor of the territory remaining French.
SOURCE: Pew Research Center

Taxing Their Patience
When a region has a strong independent identity and a higher average income relative to the rest of the country, resentment over wealth transfers can prompt residents to question whether they might do better on their own. In a 1987 American Economic Review article, the late economists James Buchanan and Roger Faith reasoned that just as individuals might “vote with their feet” and exit a country to escape unfavorable tax treatment, so might entire regions or political groups threaten secession if they believe they can achieve a more equitable tax treatment through a government that is closer to home.
This is a key argument in the debate between Catalonia and Spain. Catalonia’s per capita gross domestic product is higher than Spain’s as a whole and the region accounts for more than a quarter of all Spanish exports. In the aftermath of the financial crisis of 2007-2008, Catalonia’s government argued that it was contributing more in tax revenue to the national government than it received in benefits, with the difference going to support poorer regions of the country.
“That led to the slogan, ‘Spain steals from us,’ and from there, ‘we would be better off alone,’ ” says Ubide. He notes that in most cases, the political platforms of regional parties are built around achieving gains for their regions from the center. Eventually, the parties reach the end of the road in terms of
what the center will allow. “Then, either the center makes
the road longer or the region decides to leave,” he says. On
Catalonia’s national day in September 2012, hundreds of thousands of people demonstrated in favor of leaving.
The financial crisis also exacerbated regional income
differences in Belgium between the wealthy region of
Flanders and the less-prosperous Wallonia. The New
Flemish Alliance made large electoral gains in the Belgian
government last year and has pledged to take steps toward
dissolving the current union.
Such disagreements don’t always result in secession,
though. Buchanan and Faith noted that regions can use
the threat of secession to exert pressure on the rest of the
country and obtain concessions on tax treatment. This may
place a cap on the tax level countries can impose on wealthy
regions in particular, since they would not want to risk damaging their own economy by letting those regions go.
On the other hand, such concessions can generate secession pressures from other regions. In a 1997 Quarterly Journal
of Economics article, Patrick Bolton of Columbia University
and Gérard Roland of the University of California, Berkeley
pointed to Belgium as an example of this dynamic: “Less
redistributive policies may prevent the more right-wing
Flanders from separation, but these may induce a revival of
separatism in the more left-wing Wallonia.”

Resource Control

Besides gaining control over their taxation, regions can gain
economically from secession by assuming control of valuable
natural resources.
Proponents of Scottish independence argue that their
case for economic self-sufficiency is bolstered by the estimated 15-24 billion barrels of oil and gas in the North Sea off
the Scottish coast. In fact, Paul Collier and Anke Hoeffler
of Oxford University linked the rise of the modern Scottish
secession movement to the discovery of that oil in the
1960s. When oil prices rose sharply in the 1970s, the United
Kingdom government imposed a tax on most of the increase
in oil revenues. The Scottish National Party enjoyed its
greatest success up to that point in the 1974 election under
the rallying cry “It’s Scotland’s Oil.” Oil also figured prominently in the 2014 referendum, with Scottish nationalists
again arguing that revenue from that resource belonged to
Scotland and would help ensure its economic success as an
independent nation.
But while control over such resources can make the case
for independence more enticing, it also raises a number of
uncertainties. One problem is that such resources don’t
last forever. Oil production in the North Sea seems to have
peaked in 1999, and it is currently estimated that the oil
will last another 30 to 40 years. Scotland’s government has
argued that it would invest revenue from the oil in a sovereign wealth fund, similar to Norway’s oil fund, to provide a
revenue stream after the resource is exhausted. Still, it’s not
clear how soon they would be able to do that. In the 2013
book Scottish Independence: Weighing Up the Economics, former
Scottish government economist Gavin McCrone noted that
current oil revenue would not fully cover the Scottish government’s deficit, meaning spending cuts or tax increases
would be needed to set aside any revenue in a fund. All of
these calculations also depend on oil prices, which are highly
volatile. In the run-up to the 2014 referendum, oil prices
were more than $100 a barrel; today, they are a little less
than half that.
Additionally, while wealthy or resource-rich regions may
calculate that they would be better off on their own, there’s
no guarantee that the parent state will just let them go. And
conflict can dramatically increase the costs of separation.

Rebellion and Resistance
Becoming a newly independent nation is rarely a straightforward process. “Most countries will fight tooth and nail to
keep hold of their territory,” says James Ker-Lindsay, a senior
research fellow at the London School of Economics who studies secession. Orderly referendums like the ones in Quebec
and Scotland are more the exception than the rule, he says.
Resistance can usually be expected if the parent country would be made economically worse off by a region
leaving, but economics isn’t always the motivating factor.
Ker-Lindsay notes that when Kosovo unilaterally declared
independence from Serbia in 2008, Serbia would have been
economically better off letting the territory go. “But even if
there are good, rational, economic reasons to divest yourself
of a territory, it doesn’t always play out that states will sit
down and make that rational calculation,” he says. States may
resist because the seceding region has cultural or historical
importance, or because they don’t want to set a precedent for
allowing further disintegration of their borders.
In either case, when resistance comes in the form of
armed conflict, the costs can be devastating. In a 2014
working paper, Rodríguez-Pose and Marko Stermšek of
the London School of Economics studied the breakup of
Yugoslavia in the 1990s. Unsurprisingly, regions that were
able to break away quickly with minimal conflict, such as
Slovenia and Macedonia, suffered smaller dips in economic
performance than regions that were embroiled in protracted

armed conflict, such as Kosovo and Bosnia. And the costs
accrue to both sides during a war of secession. For example,
in a 1975 paper, Claudia Goldin of Harvard University and
Frank Lewis of Queen’s University evaluated the costs of the
U.S. Civil War by examining, among other things, changes
in per capita consumption. According to their estimates,
it took the North until 1874 to catch up to its level of per
capita consumption in 1860, the year before the war started
— and the South did not return to its 1860 level until 1904,
nearly four decades after the war’s end.
Seceding regions may face opposition from the international community as well. In the 1999 book The Dynamics
of Secession, Viva Bartkus of the University of Notre Dame
noted that the international response to secession can
be mixed, as international organizations like the United
Nations (U.N.) recognize both the right to self-determination (which favors the seceding entity) and the right to
territorial integrity (which favors the parent). On the whole,
Bartkus found that international support for territorial
integrity is stronger, particularly in cases where the secession
is contested. Kosovo, for example, is not recognized by the
U.N. as an independent country, despite having the support
of key U.N. members like the United States.
In some cases, seceding countries can find themselves
cut off from the rest of the world. The Turkish Republic of Northern Cyprus, for example, is a self-declared state recognized only by Turkey. This has greatly limited its ability to trade with other countries, and it relies heavily on Turkey for economic support.

Divided States of America
The United States faced its biggest secession threat during
the American Civil War. But there have been cases where
states broke away from existing ones while still remaining
part of the country. This has only happened successfully four
times in America’s history, with the creation of Kentucky in
1792, Tennessee in 1796, Maine in 1820, and West Virginia
in 1863. There have, however, been hundreds of unsuccessful
attempts over the years. Under the Constitution, the division of any state must have the approval of both the state
legislature and Congress.
In late 1941, a handful of counties in southern Oregon and
Northern California briefly declared themselves the independent state of Jefferson. The movement died out following the
attack on Pearl Harbor little more than a week later, but it has
enjoyed periodic revivals since then. California, the most populous and third-largest state, has been the subject of hundreds
of proposals to break it into multiple states since it first joined
the union in 1850. Most recently, venture capitalist Timothy
Draper launched a campaign in 2014 to divide it into six states.
And similar movements have occurred at the city level
too. In 1969, Norman Mailer campaigned for mayor of New
York City on a platform of making the city the 51st state.
Residents of San Fernando Valley in the city of Los Angeles
failed to secure the votes in a 2002 referendum to secede and
form their own city.

The driving forces behind these movements are often
similar to the ones that motivate secession at the country
level. Disaffected residents argue that their tax dollars are
misspent or that local or state governments are not responsive to their needs. Differences in culture also play a major
role. But these movements face many of the same challenges
as country-level secessions. For example, the recent proposal
to split California into six states raised questions about how
public debt and services would be apportioned. Water is
currently distributed across the state; splitting the state into
six pieces would create the challenge of somehow dividing
that infrastructure across new state lines. Economic disparities between different regions could be exacerbated as well.
Critics of Draper’s California proposal contended that it
would have created both some of the wealthiest and some of
the poorest states in America.
Proponents of splitting states or cities do avoid some of
the headaches involved in splitting countries, though. The
new entities would retain the same currency, language, and
national laws, which would likely make trade between newly
split states somewhat easier than between newly separated
countries. But given that partitioning states requires both
local and congressional support to succeed, it is likely to
occur as infrequently as national secessions.
— Tim Sablik


For secession to have the best chance of success, it takes
consent on both sides. “And that very rarely happens,” says
Ker-Lindsay. Although the United Kingdom agreed to allow
a vote on Scottish secession, Spain has thus far ruled any similar referendum in Catalonia unconstitutional. The separation
of Czechoslovakia in 1993 is often held up as the best example
of consent. Called the “Velvet Divorce,” the secession was
handled quickly and peacefully. But it’s unclear what lessons
from that event apply to today’s movements. It was decided
by leading politicians on both sides rather than popular referendum, which made it easier to reach agreement.

Separation Anxiety

Even when countries agree to part ways, there are still a
number of difficult questions to resolve. How will the debt
be split between the seceding entity and the parent? How
will public assets like roads, communications infrastructure,
or military facilities be divided? What monetary system will
the seceding country follow? Will the parent allow it to keep
the same currency or will it have to establish its own?
Negotiating the answers to these questions takes time,
and that adds to the costs of secession in the form of uncertainty. In a 2013 paper, Robert Young of the University of
Western Ontario noted that uncertainty is both the most
important transition cost in secession and the hardest to predict. Without knowing how debt will be apportioned or what
the monetary regime of the new state will be, businesses and
individuals can’t make contracts for the future. If the seceding state’s participation in international organizations like
the European Union is in doubt, then businesses and foreign
investors might choose to pull out of the country.
“The size of these transition costs is a political question,”
says Young. Many of these issues could be resolved ahead of
time to reduce uncertainty, Young explains, but opponents
of separation have an incentive to maintain uncertainty in
order to bolster their cause. In Quebec, he says, opponents
argued that secession meant “taking a great big leap into the
unknown.” Similar arguments were made by opponents of
Scottish secession.
And secession votes can raise uncertainty costs even
when they are not successful. Quebec’s 1995 referendum to
secede from Canada failed by a margin of less than 55,000 votes. That close decision left the specter of future votes in its wake, imposing costs on capital in the region. In a 2005 paper in the Journal of International Financial Management and Accounting, Roger Graham of Oregon State University and Cameron and Janet Morrill of the University of Manitoba found that Quebec firms were undervalued relative to other firms in Canada, in part due to the uncertainty of future independence votes. Others have also attributed the loss of several business headquarters in Quebec over the last two decades to this uncertainty.
“The point I always make to those advocating independence is: You are gambling your savings on a lottery,” says Ubide.

Hitting the Jackpot?
Given the potential transition costs, regions need to be
relatively sure they will see a return to independence, says
Young. “If the transition costs are high, it can take you an
awfully long time to make up the losses from the transition
period,” he says. “If you take a loss of say 5 to 10 percent of
GDP for a few years, you had better get a very serious accelerated growth path to make it up.”
Do seceding countries enjoy faster economic growth
once untethered from the weight of their parents? There
is limited evidence, in large part due to the rarity of these
events. But according to the 2014 study by Rodríguez-Pose
and Stermšek, there doesn’t appear to be an “independence
dividend.” Even when regions in the former Yugoslavia were
able to transition to independence fairly quickly and amicably, the authors found that those countries largely continued
along the same growth path they had before becoming independent. Moreover, they still suffered significant economic
losses immediately following their independence.
Likewise, it is unclear that downsizing necessarily boosts
growth chances. In a 2006 National Bureau of Economic
Research working paper, Andrew Rose of the University
of California, Berkeley studied a panel of more than 200
countries over 40 years. He found no strong evidence of size
affecting economic well-being. And while there are plenty of
examples of successful small countries, such as Luxembourg,
Norway, and Singapore, many economists argue that institutions matter more than size.
“It all depends,” says Ubide, “on what you do with your
economy once you are out.”
EF

Readings
Alesina, Alberto, and Enrico Spolaore. The Size of Nations.
Cambridge, Mass.: The MIT Press, 2003.

Ker-Lindsay, James. “Understanding state responses to secession.” Peacebuilding, May 2013, vol. 2, no. 1, pp. 28-44.

Bolton, Patrick, and Gérard Roland. “The Breakup of Nations:
A Political Economy Analysis.” Quarterly Journal of Economics,
November 1997, vol. 112, no. 4, pp. 1057-1090.

Rodríguez-Pose, Andrés, and Marko Stermšek. “The Economics
of Secession. Analysing the Economic Impact of the Collapse of
the Former Yugoslavia.” Governance and Economics Research
Network Working Paper No. A 2014-8.

Buchanan, James M., and Roger L. Faith. “Secession and the
Limits of Taxation: Toward a Theory of Internal Exit.”
American Economic Review, December 1987, vol. 77, no. 5,
pp. 1023-1031.


Young, Robert. “Transition Costs in Secessions.” Presentation
to the International Conference on Economics of Constitutional
Change, Sept. 19-20, 2013.

The Crop Insurance Boom
A long-standing U.S. farm support program now covers almost
every crop — but it attracts more and more critics as well
BY HELEN FESSENDEN

Since he began planting cotton in the 1970s, South
Carolina farmer John Hane has invested heavily in irrigation to manage risk. Despite the cost, he considers it
the best possible protection against drought as well as a way
of ensuring that fertilizer and pesticides are evenly distributed through the soil.
In addition, Hane buys crop insurance. This federal
program, which today covers more than 100 crops, lets
farmers purchase policies from insurance companies at a
subsidized rate. Cotton is among the many crops it covers,
protecting against drops in yield or price, and cotton farmers now have more policies to choose from than before. For
Hane, however, some of the new policies are more confusing
than the traditional system of direct payments from the
federal government, which were phased out for all crops in
the 2014 farm bill.
“Irrigation helps a lot, but it’s not a total solution,” says
Hane. “It doesn’t protect you from hail or hurricanes. So we
need something in addition.”

A Success Story?
Under the multiyear farm bill enacted in early 2014, crop
insurance is expected to cost taxpayers $41 billion over five
years — a jump of almost 20 percent over the previous farm
bill, enacted in 2008. Crop-insurance advocates argue it is a
far more efficient program to manage an array of risks than
ex post disaster relief. It has evolved from an underused
program that was plagued by adverse selection in the 1980s
to one that covers almost every crop today, with high participation. By 2013, the program covered 89 percent of all U.S. farmland, more than 290 million acres. In
2012, lawmakers didn’t even pass stand-alone disaster aid legislation after a devastating drought because insurers’ payouts
were comprehensive enough for the crops affected. In the
view of its supporters, crop insurance has succeeded as a risk
management tool because it covers most farmers, pre-empts
the need for ad hoc disaster relief, and effectively substitutes
for other, less efficient forms of support.
Critics of crop insurance subsidies, however, point to
the fact that the program is still a transfer from taxpayers
to farmers and private insurance companies, and as
constructed, it is more income support than classic insurance. The government covers about
60 percent of the cost of farmers’ insurance premiums as well as 100 percent of
administrative and operating costs for

insurers, which means farmers can sign up for policies that
provide payouts far more generous than reflected by their
out-of-pocket cost.
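A rough, hypothetical calculation helps show the gap between a farmer's out-of-pocket cost and the coverage it buys. The sketch below, in Python, is purely illustrative: the roughly 60 percent premium subsidy is the figure cited above, while the $1,000 premium and the simplifying assumption that premiums are set close to expected payouts are invented for the example.

# Back-of-the-envelope illustration; the $1,000 premium is hypothetical, and the
# 60 percent subsidy share is the approximate figure cited in the text above.
full_premium = 1000.00   # premium assumed to be set roughly in line with expected payouts
subsidy_rate = 0.60      # share of the premium paid by the government
farmer_cost = (1 - subsidy_rate) * full_premium

print(f"Farmer pays about ${farmer_cost:.0f} out of pocket")            # about $400
print(f"Expected payouts priced into the policy: about ${full_premium:.0f}")

On those assumptions, each dollar the farmer spends buys roughly $2.50 of expected indemnities, before counting the administrative and operating costs the government also picks up.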
This camp, which includes economists, deficit hawks
on and off Capitol Hill, and the nonpartisan Government
Accountability Office (GAO), argues there are less expensive
ways for the government to help farmers protect themselves
against extreme or unanticipated losses, and that private
insurers do not need taxpayer assistance regardless. And
some economists say that these subsidies have a distortionary effect. For example, they may reduce farmers’ incentive
to manage risk through other means, such as crop storage
or prudent fertilizer and pesticide use; subsidies also may
encourage planting in high-risk regions and on marginal land.
“The paradox is that crop insurance may be intended
as risk management for farmers, but it actually encourages
more risk-taking,” said Vincent Smith, agricultural economist at Montana State University. “It’s a transfer of risk
away from the insurance firms and the farmers.”
One of the program’s most controversial aspects is the
policy design. For most crops, farmers have an array of
plans to choose from, but the most dominant is an option
called revenue protection. Under one of the most popular
revenue-protection plans, a farmer can purchase a policy to
insure yield losses or revenue losses on certain crops, but
he bases that coverage on the highest price of the season.
If a low yield drives up the price of a crop from spring to
harvest, the farmer is indemnified for lower yields
at the higher harvest-time price; if the price
falls over the course of the season due
to overproduction, the farmer
may use the higher springtime baseline when
calculating compensation. Either way,
this option


maximizes the payout from the insurer. Revenue protection
contrasts with yield protection, in which a farmer is protected against harvest-time losses in yield, say, in the case of
drought; those payouts are pegged to the price projected at
springtime. In 2014, about 75 percent of policies were revenue protection, compared with only 13 percent that were
yield protection.
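To see how the two policy types translate a price move into dollars, here is a minimal sketch, in Python, of the two indemnity calculations. It is an illustration rather than the Risk Management Agency's actual rating code, and it uses the numbers from the 2012 Iowa corn example in the "Two Policies, Two Payouts" box below: 80 percent coverage, a projected yield of 172 bushels per acre, an actual yield of 130, a spring price of $5.68 per bushel, and a harvest price of $7.50.

# Illustrative sketch only, not the USDA Risk Management Agency's rating code.
# Indemnity per acre under yield protection versus revenue protection, using the
# numbers from the 2012 Iowa corn example in the "Two Policies, Two Payouts" box.

def yield_protection_indemnity(projected_yield, actual_yield, projected_price, coverage=0.80):
    """Yield shortfall below the insured yield, valued at the springtime (projected) price."""
    insured_yield = coverage * projected_yield
    shortfall = max(insured_yield - actual_yield, 0.0)
    return shortfall * projected_price

def revenue_protection_indemnity(projected_yield, actual_yield, projected_price, harvest_price, coverage=0.80):
    """Revenue shortfall below a guarantee set at the higher of the projected and harvest prices."""
    guarantee = coverage * projected_yield * max(projected_price, harvest_price)
    actual_revenue = actual_yield * harvest_price
    return max(guarantee - actual_revenue, 0.0)

print(round(yield_protection_indemnity(172, 130, 5.68), 2))           # about $43.17 per acre
print(round(revenue_protection_indemnity(172, 130, 5.68, 7.50), 2))   # about $57.00 per acre

With the same 7.6-bushel shortfall, the revenue policy values the loss at the higher harvest price, which is why it pays roughly $13.83 more per acre in this scenario.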
The effect of crop insurance on farmers’ behavior and
the agricultural economy is hard to quantify, because until
recently, crop insurance has always co-existed with other
farm programs with potentially distortionary effects of their
own. Even in the most recent farm bill, which eliminated or
overhauled other traditional forms of support, lawmakers
still channeled $24 billion in aid over five years to commodity programs. Some economists, however, say evidence suggests that subsidies reduce farmers’ willingness to manage
risk more efficiently. And more broadly, the program’s growing cost has prompted calls to cut the price tag through such
measures as trimming payments to high-income farmers or
scrapping the revenue-protection option.

Swapping Safety Nets
Farmers of most major crops have received government aid
since the Great Depression. These programs have often
consisted of price supports, production controls, and ad
hoc disaster relief. Insurance has also been available for
many crops for years, but a long-standing challenge was
finding ways to encourage farmers to sign up for policies.
Even after the government began subsidizing premiums in
1980, covering 30 percent of the cost, participation in the
program rose only modestly, from 16 percent to 25 percent
of eligible acreage. Accordingly, the crop insurance industry
was challenged by adverse selection, as most policies were
bought by at-risk farmers rather than a broad pool. With
too few farmers paying in, the premiums that were paid to
insurers often failed to cover the payouts to farmers, even
with government subsidies.
Corn withers during a drought in Texas in 2013.

All the while, Congress kept passing disaster relief legislation on an as-needed basis, which became frequent. For example, between 1987 and 1994, more than 60 percent of
example, between 1987 and 1994, more than 60 percent of
all farms got disaster aid at least once, with some getting it
every year. These trends, taken together, bolstered the argument that farmers needed more incentives to buy crop insurance: Ad hoc disaster relief was expensive and unpredictable,
but farmers viewed insurance premiums as too pricey.
“The challenge was whether we offer ex ante protection
through insurance or ex post protection through disaster
relief,” said Keith Coble, agricultural economist at Mississippi
State University. “Over the years, a consensus grew that ex
ante is more efficient, because that way, farmers go into the
growing season knowing what coverage they’ll have.”
New legislation in 1994 offered farmers subsidized catastrophic risk protection as well as the option to “buy up”
coverage beyond that. But it was not until 2000, after more
rounds of disaster relief, that the government ramped up the
premium subsidy and equalized its support for both yield- and revenue-protection policies. Participation took off:
Enrollment jumped from 182 million acres in 1998 to 265 million in 2011. The higher participation rate, in turn, has largely
eliminated the problem of adverse selection. Still, Congress
passed a series of disaster relief bills, totaling around $10
billion from fiscal 2001-2009, to cover losses, especially for
under-insured, high-risk regions.
Outside of crop insurance, another change was underway: In 1996, many traditional support programs, which
were based on historical price averages, were abolished
and replaced with direct payments. These were not based
on annual income, prices, or output; rather, they were
automatic transfer payments intended to temporarily help
transition farmers to a more market-based system. Still,
Congress kept on reauthorizing direct payments, effectively
converting them into long-term support. By 2011, direct
payments averaged $5 billion annually. These transfers came
under increasing fire because the program allowed much of
the money to go to wealthy farmers, as well as to farmers
who did not plant the covered crop in that crop year.
The most recent farm bill, passed in early 2014 as a five-year authorization, eliminated direct payments and some
other forms of support while increasing the budget for crop
insurance subsidies and bringing more specialty crops (like
fruits and nuts) under its purview. It also added a program
called STAX specifically for cotton, offering a subsidy based
on county prices to help cover a farmer’s deductible on
top of existing subsidies for premiums. This measure was
intended to entice more U.S. cotton growers to ramp up
insurance coverage, in conjunction with a WTO settlement
that ordered the United States to dismantle long-running
cotton price supports and export subsidies after a successful
lawsuit by Brazil.

Risk Management or Income Support?

A farmer has to make two basic decisions when signing up for a policy: how much of the crop to cover, and which type of plan to select. Crop coverage is offered in 5 percent increments; farmers usually choose to cover 65 to 80 percent of their crop. Many crops also have supplemental coverage options that help cover the deductible, which can bring effective coverage to as high as 90 percent.
Premiums are determined by the U.S. Department of Agriculture (USDA) Risk Management Agency and vary considerably, depending on the crop price and an array of risk factors. But the subsidy percentage rates are determined by legislation, and those have risen from an average of 37 percent in 2000 to 62 percent by 2013. Accordingly, the higher the premium is, the higher the dollar amount of the subsidy. And if commodity prices rise — as they have done for the most part in the past decade — the premium goes up as well, because the crop’s insured value has grown.
The design and popularity of revenue protection explain much of the increase in crop insurance costs to the government: It offers the most generous payouts but does not require a commensurate hike in premiums compared to other policies. To critics, the revenue protection guarantee makes it easier for farmers to break even or make a profit on high-risk or marginal land that otherwise would not be worth the investment.

Two Policies, Two Payouts
In 2012, when a severe drought struck the Midwest, corn yields fell while the price rose. In Iowa, the price rose from $5.68/bushel in the spring to $7.50/bushel by harvest time. Here is a comparison between what a farmer would have received under a yield protection policy that covered 80 percent of his crop and what he would have received with an 80 percent revenue protection policy. In this example, his yield fell from 172 to 130 bushels per acre.
The farmer’s premium is the same for both policies, but he gets a payout that is $13.83 more per acre under revenue protection.
Yield Protection: Policy covers 80 percent of 172 bushels (projected yield) = 137.6 bushels. Payout equals insured yield minus harvest-time yield — 7.6 bushels — times the springtime price of $5.68: $43.17 per acre.
Revenue Protection: Policy covers 80 percent of 172 bushels (projected yield) = 137.6 bushels. Payout equals insured yield minus harvest-time yield — 7.6 bushels — times the harvest price of $7.50: $57.00 per acre.
SOURCE: U.S. Department of Agriculture, Risk Management Agency

Blueberry Pricing and Other Puzzles
Some economists believe revenue protection crowds out other ways to hedge against price risk, such as futures contracts.
“The question is this: Why should we have revenue insurance when we already have futures markets that try to reduce price risk?” asked Mississippi State’s Coble. “A lot of commodities already have mechanisms out there to protect against price risk, so revenue protection may be redundant. But with smaller crops, you can’t really hedge against price risk. There is no consensus on what blueberries in New England will bring at harvest time.”
More broadly, the linkage between crop insurance and planting patterns was examined in a recent report by the GAO, which looked at premium rates by county for the top five crops — corn, soybeans, wheat, cotton, and grain sorghum — from 1994 to 2013. Together, these crops accounted for 86 percent of all premiums in 2013.
The GAO found that the premiums set by the government ranged a great deal depending on the fragility of the land; some regions, such as the Texas high plains and the Dakotas, stood out in this respect. Furthermore, over 20 years, farmers in high-risk regions got back far more in payouts than those in less risky counties: $1.97 in net gains for each premium dollar they paid in, versus 87 cents per dollar for the rest. The GAO report concluded that the government spends far more insuring high-risk regions than it does elsewhere, by up to a factor of three, and it called on the USDA to use its authority to adjust premiums to account for this differential.
Another question among economists is whether crop insurance subsidies affect farmers’ behavior in different ways than older support programs did. The challenge is that crop insurance subsidies have always co-existed with other farm support, making it difficult to isolate their particular impact. And all of these programs, taken together, have changed over the decades.
Some economists also point out that agricultural data — in contrast to, say, data for auto insurance policies — is highly lagged, taking years to gather because of the need to capture different weather events. For those reasons, the crop insurance program may be buying time to gather information that can eventually be used to design more accurate policies.
Other economists, however, say the evidence is more clear-cut. In a widely cited 1996 article, Montana State’s Smith and North Carolina State University economist Barry Goodwin published research suggesting that crop insurance was linked to decisions on risk-mitigating inputs such as fertilizer and pesticides. Using a sample of Kansas wheat farmers, they concluded that farmers made decisions on insurance and inputs jointly, and that insurance coverage was inversely related to input use. Everything else being equal, insured farmers spent $4.23 less per acre on fertilizer than their uninsured counterparts.
Their broader conclusion still holds up today, says Smith. “Farmers will do less to manage their production losses if the crop is insured, and will adopt more risky management techniques, like using less pesticide and fertilizer,” Smith adds. “They’re also likely to shift production to marginal land. This isn’t a massive movement, but it’s still movement.”
This is, in short, a question of moral hazard: whether crop insurance makes farmers more inclined to adopt risky practices because they will not have to pay for the

downside. Economist Bruce Babcock, who led a study by
the Environmental Working Group critical of the program,
argues it can affect where a farmer chooses to plant, especially on fragile and marginal land.
“If a farmer has to decide whether it’s risky to plant on
particular ground, crop insurance makes planting slightly
more likely,” says Babcock.
To him, the more clear-cut argument is that the current
crop insurance regime “crowds out” other forms of risk management that would be cheaper to the taxpayer, including
futures contracts as well as more traditional techniques.
“If they were really looking to manage risk, farmers could
use off-farm income, diversification of crops, storage, and
other macro risk-management tools that are more efficient,”
he said. “But we have to remember they don’t buy insurance
for risk management benefits alone. They buy it because the
subsidies make it worthwhile.”
In Smith’s view, if subsidies were cut, farmers would
invest more in traditional risk-management techniques
rather than pay the market price for more costly, unsubsidized insurance premiums.
“If farmers had to pay commercial rates for insurance,
most would be priced out because the insurers would pass
along the considerable administrative and operating costs
to the customers,” he says. “It’s more likely they would go
back to older, cheaper ways of risk management, like crop
diversification, better input use, storage, and so on. This is
what we saw in the 1970s and 1980s.”
The crop insurance program’s growing cost has spurred
new reform proposals since the farm bill. President Barack
Obama’s most recent budget called for cutting $16 billion
over 10 years by trimming subsidies for revenue protection,
among other measures. A recent bipartisan Senate proposal
would also trim payout costs, while another would cap subsidies at $50,000 per recipient, saving more than $2
billion over 10 years. (Crop insurance currently has no caps
on payments.) The challenge, however, is that farm bills are
typically written only once every five years or so. The process
has become more difficult in recent rounds, and what was
once a bipartisan exercise has become a heavy lift. The last
farm bill, in fact, took two years to complete.

The Cotton Case
Cotton is an unusual case for a U.S. commodity in that it has
been affected by international trade litigation. The changes
that South Carolina farmers like John Hane are adjusting
to stem from a long-running dispute between the United

States and Brazil. In 2004, Brazil charged that U.S. cotton
price supports and export credit guarantees contravened
World Trade Organization rules by keeping U.S. cotton
acreage artificially high. The WTO ruled in Brazil’s favor,
forcing farm bill negotiators to find a way to make cotton
compliant, and the case was finally settled in October 2014.
Expanding crop insurance plans to growers was viewed as the
easiest workaround once it was clear that cotton would lose
its commodity-program support, and STAX was introduced
in its stead.
Cotton, which is primarily an export commodity in the
United States, is far less dominant in South Carolina than
it used to be, but it remains among the top five crops. In
2012, the state’s cotton sales totaled around $214 million,
or around 7 percent of the agricultural economy. That share
may well decline, however, as farmers face a global cotton
glut, amid rising production and stock-piling abroad, and
a resulting decline in prices. Cotton now fetches around
$0.63 per pound, down from $0.94 per pound in 2012. Lower
prices have coincided with the end of direct payments —
seen as more generous than crop insurance — to make for a
bumpy transition.
Moreover, cotton is more labor intensive than other
crops, so it is seen as more expensive to insure. For years,
growers were less inclined to buy insurance as long as they
had other forms of assistance. Now that the older programs
are gone, revenue protection policies are gaining popularity
with the state’s cotton growers, while STAX has had fewer
sign-ups because most farmers see it as too confusing,
according to Charles Davis, an agricultural adviser affiliated with South Carolina’s Clemson University Extension
Service.
Davis says he tells farmers that crop insurance is only one
risk-management tool to consider, especially when compared to irrigation.
“In my county, Calhoun, we’re highly irrigated, and we’ve
taken the money made during good years and put it into
long-term investments like irrigation to give us a high degree
of security,” he says. “Crop insurance can’t do that. It helps
cover your production costs and lets you survive another day,
but it doesn’t do much beyond that.”
Davis adds that he has a standard response to farmers
who tell him they are unhappy with the switch to crop insurance away from direct payments.
“This is still a benefit you paid for with your taxes,” he
says. “So quit complaining. You could have had no direct
payments and no crop insurance subsidy.”
EF

Readings


Coble, Keith H., and Barry J. Barnett. “Why Do We Subsidize
Crop Insurance?” American Journal of Agricultural Economics,
January 2013, vol. 95, no. 2, pp. 498-504.

Glauber, Joseph W. “The Growth of the Federal Crop Insurance
Program, 1990-2011.” American Journal of Agricultural Economics,
January 2013, vol. 95, no. 2, pp. 482-488.

“Crop Insurance: In Areas with Higher Crop Production Risks,
Costs Are Greater, and Premiums May Not Cover Expected
Losses.” Government Accountability Office, February 2015.

Smith, Vincent H., and Barry K. Goodwin. “Crop Insurance,
Moral Hazard, and Agricultural Chemical Use.” American Journal
of Agricultural Economics, May 1996, vol. 78, no. 2, pp. 428-438.


Marriage on
the Outs?
The institution of marriage is solid — but only
for certain groups. Economics helps explain why
BY JESSIE ROMERO

You wouldn’t know it from watching the wedding-dress
shows on TV or browsing the celebrity-wedding
headlines in the checkout aisle, but for years,
marriage has been on the wane in the United States. Only
53 percent of adults were married in 2012, according to the
Census Bureau, compared with 68 percent of adults in 1960.
In part, that’s because people are waiting longer to get
married than they did a generation ago. The median age of
first marriage has increased by more than six years since 1960
for both men and women, to 29 and 27, respectively. But it’s
also because fewer people are choosing to get (or stay) married — a reflection of tremendous cultural and technological
changes during the past five decades.
The decline in marriage is far from uniform, however;
marriage and divorce rates vary significantly across socioeconomic lines. Given the large body of research that purports
to show married people and their children are happier,
healthier, and wealthier, many policymakers and researchers
are concerned about the long-term consequences of changes
in marriage. But the evidence in favor of marriage is far from
conclusive, so it remains a hotly debated question: Does
marriage matter?

Love, Economist-Style
To an economist (at least from a professional point of view),
marriage isn’t just about love. Instead, it’s a decision that
can be analyzed like any other economic decision: People get married when the net benefits of marriage outweigh the net benefits
of being single. In his influential 1981 book, Treatise on the
Family, Nobel laureate Gary Becker, an economist at the
University of Chicago until his death in 2014, described
the household as a small firm in which workers specialized
in different tasks. In particular, because of their natural
and historical comparative advantages in childbearing and
rearing, women specialized in the domestic sphere and
men specialized in the market sphere. In this framework,
men and women formed households because they could
produce more together than they could apart. Marriage was

a contract that assured men their children and home would
be cared for and that protected women who had forgone
the opportunity to gain the skills needed to succeed in the
market sphere when they opted for home life.
Beginning in the 1950s and 1960s, new technologies and
changing cultural norms dramatically altered the calculation
for people considering marriage. New devices such as dishwashers and washing machines and the increasing availability of goods and services for purchase — both domestically
and from abroad — dramatically lowered the time and
skill required to manage a household. “This reduces the
importance of having domestic household specialists,” says
Justin Wolfers, a senior fellow at the Peterson Institute for
International Economics and a professor at the University
of Michigan. “My grandmother used to make clothes for my
mother. My family also has a seamstress — it’s someone who
lives in China.”
Other changes included the advent of reliable,
female-controlled birth control in the form of the pill
and the legalization of abortion, which lowered the cost
— choosing between abstinence or the risk of having a
child out of wedlock — of remaining single. With greater
control over childbearing, women began increasing their
educational investments and delaying marriage, contributing to a dramatic rise in women working. Between 1950 and
1990, women’s labor force participation rate increased from
37 percent to 74 percent.
Shifting cultural norms also altered the calculation. “Think
about the words we used to use,” says Isabel Sawhill, a senior
fellow in economic studies at the Brookings Institution.
“Cohabitation was called ‘living in sin.’ Children born outside
of marriage were called ‘illegitimate.’ All of that’s changed.
There’s much less social pressure to marry.” In addition,
U.S. Supreme Court rulings in the late 1960s and early 1970s
granted constitutional protections to children born out of
wedlock, including overturning state laws that denied “illegitimate” children the right to paternal support, thus reducing
the social and economic costs of single parenting.

The Retreat from Marriage
These changes have resulted in what many social scientists
have deemed a “retreat from marriage.” The marriage rate
peaked just after World War II, when there were 16.4 marriages per 1,000 people, plateaued at around 10 marriages per
1,000 people during the 1970s and 1980s, and has declined
steadily since then. In 2012, there were 6.8 marriages per
1,000 people, according to the most recent data available
from the National Center for Health Statistics (NCHS).
The NCHS data do not yet include same-sex marriages,
but the Census Bureau started publishing data on same-sex marriages in 2005 (and in 2013 began counting them in
its overall marriage statistics rather than grouping them
with cohabiting couples). Same-sex marriages currently are
a small share of all marriages, and it’s unclear how the
nationwide legalization of same-sex marriage in June 2015
will affect long-term trends. In 2013, there were between
170,000 and 252,000 same-sex married couples, compared
with about 56 million opposite-sex married couples. In the
short-term, some estimate the number of same-sex marriages could increase to 500,000.
While opposite-sex marriage has decreased, cohabitation — living with an unmarried romantic partner — has
increased. In 1995, cohabiting rather than marriage was
the first union for 34 percent of women, according to the
National Survey of Family Growth. During the 2006-2010
wave of the survey, cohabiting was the first union for
48 percent of women. For many of the couples in the 2006-2010 survey, living together was a precursor to marriage;
40 percent of cohabiting couples had gotten married within
three years. But another 32 percent were still living together
without getting married. (The remainder had broken up.)
Fewer and later marriages have coincided with a growing
share of children born to single mothers. In 1970, about
15 percent of first births were to unmarried women. By 2011,
nearly 50 percent of first births were to unmarried women,
according to a report by the National Campaign to Prevent
Teen and Unplanned Pregnancy, the Relate Institute, and
the National Marriage Project at the University of Virginia.
The cultural and economic changes of the 1960s and
1970s also contributed to a large spike in divorce rates.
Between 1960 and 1980, the number of divorces per 1,000
married couples more than doubled, from fewer than 10
to more than 20, according to research by Wolfers and
Betsey Stevenson, also an economist at the University of
Michigan. (Stevenson and Wolfers have been partners for
nearly two decades and have children together, but they are
not legally married.) Nearly 50 percent of all new marriages
between 1970 and 1979 ended in divorce within 25 years.
“As women’s earnings went up, they were able to set the
bar higher because they weren’t dependent on marriage for
their economic well-being,” says Sawhill. “They opted out of
marriages that might have been contracted in an era when
women had fewer rights and opportunities.”
But the divorce rate has steadily declined since the early
1980s, to about 17 new divorces per 1,000 married couples.

That’s still higher than during the 1950s, but it is now in
line with historical trends, according to Stevenson and
Wolfers. That could be because the same changes that initially contributed to more divorces and fewer marriages also
prevented a number of “bad” marriage matches that would
likely have ended in divorce.

The Marriage Gap
The overall trends in marriage, divorce, and childbearing mask significant differences among socioeconomic
groups, particularly between the more and less educated.
Historically, people without a college degree have been more
likely than the college educated to marry, but since World
War II the gap has closed, although the patterns differ by
gender.
For much of the 20th century, college-educated white
women were much less likely to get married than white
women with less education. But between 1950 and 1980, the
marriage rate for college-educated women increased significantly, according to research by Stevenson and Adam Isen
of the U.S. Treasury. Marriage rates for both groups began to
decline after 1980, but the decline was larger for less-educated
women, shrinking the gap between these two groups.
In contrast, a gap has emerged among white men.
Historically, men’s marriage rates have not differed by
education, but starting in 1990 the marriage rate for less-educated men declined much more than the rate for men
with a college degree, such that college-educated men are
now more likely to marry than those without a degree.
Gaps also have opened up between whites and blacks and
between whites and Hispanics. In 1986, 4.8 percent of white,
non-Hispanic women aged 55 and older had never been married, according to Census data. Black women were slightly
more likely to get married; 3.5 percent had never been
married. In 2009, the rate for white women was virtually
unchanged, at 4.7 percent. But the number of black women
who had never been married by age 55 had increased to
13 percent. Hispanic women also are less likely to marry than
white women, although the gap is much smaller (see chart).
Isen and Stevenson’s research also suggests that the
decline in the divorce rate is concentrated among the college
educated. About 37 percent of the marriages of white female
college graduates that occurred during the 1970s had ended
in divorce 20 years later, compared with 46 percent of marriages for those with some college and 39 percent with a high
school education or less. For marriages that occurred during
the 1980s, the percent ending in divorce 20 years later had
fallen to 31 percent for college graduates but was virtually
unchanged for women with less education.
The trend was even more pronounced among white men;
the percent of marriages ending in divorce after 20 years fell
from 34 percent to 25 percent for the college educated but
rose from 39 percent to 44 percent for those with a high
school degree or less. Black men and women are more likely
than whites to get divorced, but the trends by education are
similar. While it’s difficult to predict what will happen for the marriages that occurred during the 1990s, the divorce rates after 10 years suggest the divergence by education will continue.
[Chart: Percent of Women Never Married, 1986 vs. 2009 — two panels, Ages 25-29 and Ages 55+, comparing white (non-Hispanic), black, and Hispanic women. SOURCE: Kreider, Rose M., and Renee Ellis, “Number, Timing, and Duration of Marriages and Divorces: 2009,” U.S. Census Bureau Household Economic Studies, May 2011.]
Differences in cohabitation and non-marital childbearing
follow similar lines. Women with a high school diploma or
less are significantly more likely to cohabit as a first union
than women with a college degree, and it’s less likely that
their cohabitations transition to marriage. They also have
children outside of marriage at much higher rates.
In economic terms, marriage trends reflect an increase in
“assortative mating,” or people marrying people who are similar to them. That increase could have an effect on the level
of income inequality. In a 2014 paper, Jeremy Greenwood
of the University of Pennsylvania, Nezih Guner of MOVE
(a research institute in Barcelona), Georgi Kocharkov of
the University of Konstanz (Germany), and Cezar Santos
of the School of Post-Graduate Studies in Economics at the
Getulio Vargas Foundation (Brazil) found that if married
couples in 2005 were matched following the same patterns
observed in 1960, the level of income dispersion would drop
by more than one-fifth.

Opposites Don’t Attract
What’s behind the socioeconomic differences in marriage?
Some researchers have pointed to declining economic prospects for less-educated men; between 1980 and 2010, the real
wages of men with a high school education or less declined
by 24 percent, according to research by Sawhill and Joanna
Venator, also with the Brookings Institution. Numerous
studies have found a positive correlation between men’s
economic prospects and marriage rates, so as men’s wages
decline, women might not consider them valuable partners
in a marriage. In recent work, Sawhill and Venator found
that declining wages can explain about one-quarter of the
decline in marriage rates among less-skilled men. “They
don’t have good jobs, they don’t earn enough, and so the
women in their networks are taking a pass,” says Sawhill.
In some communities, the issue might not be whether
the men are marriage material, but whether there are enough

men at all. High incarceration rates for black men have
significantly skewed the gender ratio, according to research
by Wolfers and David Leonhardt and Kevin Quealy of the
New York Times. In some areas of the country, there are only 60
black men for every 100 black women not in jail. Nationwide,
there are 83 black men for every 100 black women. Among
whites, there are 99 men for every 100 women.
Another explanation might be that the changing nature of
marriage has made it an institution more valuable to people
higher up the socioeconomic ladder. Stevenson and Wolfers
have proposed a theory of marriage based on “consumption
complementarities” rather than Becker’s production complementarities. Over time, families on average have seen an
increase in leisure and consumption, and many hobbies and
activities are more enjoyable with another person. “In our
view, the person you want to marry is the person with whom
you share interests and passions. At its most simple, this is
a theory of love,” says Wolfers. But such “hedonic marriage”
might offer more to people at the top end of the distribution. “It’s a forum for shared passions, and that works when
you have the time, money, and energy for sharing.”
When marriage was based on production complementarities, it was more likely that opposites would attract. “Back
in our grandparents’ day, women with graduate degrees
had very low marriage rates. If you were looking for a good
homemaker, a wife with a master’s wasn’t that helpful,” says
Wolfers. “Today, if you’re looking to share income and passions with a soul mate, a highly educated woman is incredibly
valuable.”
The shift to hedonic marriage might also have made the
institution more attractive to same-sex couples. In a 2012
article for Bloomberg View, Stevenson and Wolfers noted
that same-sex relationships are less likely to involve traditional gender roles and separate spheres, limiting the economic gains from the Becker model of marriage.
A model of marriage based on consumption complementarities doesn’t necessarily explain why a couple would
choose to get married rather than cohabit, since many of
the same benefits could be derived from living together.
Is Marriage Good for You?


In a chapter for the 2014 book Human Capital in History:
The American Record, Shelly Lundberg of the University of
California, Santa Barbara and Robert Pollak of Washington
University in St. Louis try to explain how marriage today differs economically from cohabiting. They propose that highly
educated men and women use marriage as a commitment
device to make large investments in children.
In Becker’s model, marriage was a long-term commitment that enabled both parties to make specialized
household investments — the wife in caring for the home
and children and the husband in providing for the family.
Although changes in divorce law have made it less costly
to exit marriage, divorce still entails more social, legal, and
financial costs than ending a cohabiting relationship. As a
result, marriage can still function as a long-term commitment device. But today, according to Lundberg and Pollak,
the focus of the investment for both husband and wife is
children.
For a variety of reasons, including time and resource constraints, cultural norms, or expectations about what the future
is likely to hold for a child, parents with different income and
education levels might differ in their willingness and ability
to make large investments in their children’s human capital.
That could affect their willingness to enter into a long-term
contract — marriage — to facilitate those investments. For
couples with low levels of education, Lundberg and Pollak
suggested, “a child’s limited prospects for upward mobility
combined with falling real resources … precludes an intensive
investment strategy for parents and limits the value of marriage and the commitment it implies.” In contrast, couples
with more education might view marriage as having a higher
payoff for their current or future children.
“There’s a strong consensus in economics that much of
income inequality is generated by things that happen before
people reach the labor market — family, school, neighborhood,” says Pollak. “Parents’ educational attainment
is an important component. So these changes in marriage
and childbearing patterns are likely to exacerbate income
inequality in the next generation.”
Beyond the possible implications for income inequality, a
large body of research also contends that married people
are both happier and healthier than their never-married or
divorced peers. For example, research has shown that married men have lower rates of cardiovascular disease, hypertension, depression, and they have better outcomes after a
cancer diagnosis. And many studies have found a positive
correlation between marriage and life satisfaction, at least in
industrialized countries.
Research on marriage is complicated by “selection
effects,” however — the possibility that people who get married vary in a systematic way from people who don’t. Poverty
can be a cause of family disruption, not just an effect, and
it’s possible that healthy, happy people are more likely to get
married in the first place, rather than marriage making them
that way. Economists and other researchers can employ a
variety of techniques to control for such selection bias, and
some have concluded there is in fact a causal relationship
between marriage and positive outcomes. But others have
found that selection plays a large role. In a 2006 article,
Alois Stutzer of the University of Basel and Bruno Frey of
Zeppelin University Friedrichshafen (Germany) found that
happier people are more likely to get married, and that people who get divorced were already less happy when they were
newly married or single.
The mere fact that fewer people are getting married
could cast doubt on claims about the benefits. “If marriage is
really so great for men and women,” Pollak asks, “why aren’t
more people getting married and staying married? Is it just
that they don’t realize how great it would be? Economists
tend to rely on the notion of ‘revealed preference’: You learn
things about people’s preferences from watching what they
do, and most economists believe that people are reasonably
good judges of what’s in their best interest.”
Adults might be able to make decisions in their own
best interest, but what about their children’s? Numerous
studies suggest that children who grow up in single-parent
families have worse economic and social outcomes — such
as growing up in poverty, becoming a teen parent, or getting arrested — than children in two-parent families. Such
studies also might suffer from selection bias, however. As
Sara McLanahan of Princeton University, Laura Tach of
Cornell University, and Daniel Schneider of the University
of California, Berkeley noted in a recent article, “Family
disruption is not a random event and … the characteristics
that cause father absence are likely to affect child well-being through other pathways.”
In the article, the researchers reviewed 47 papers that
used innovative research designs to control for selection
bias. The evidence was mixed regarding the effect of father
absence on some outcomes, such as adult income or marital
status. But they did find evidence that a father’s absence has
a causal effect on risky behavior such as smoking or becoming a teen parent, the likelihood of graduating from high
school, and adult mental health. Still, the magnitude of the

effect was much smaller than in traditional studies that did
not control for selection effects.
But even if there were conclusive evidence that growing
up in a two-parent family causes better outcomes than growing up in a single-parent family, it doesn’t necessarily follow
that those parents have to be married. “It’s stability that
matters,” says Sawhill. “It just happens to be that for historic
and cultural and religious reasons, marriage has been the way
we have created that stability.”
Still, in recent decades, policymakers have made several attempts to encourage marriage. The Personal
Responsibility and Work Opportunity Reconciliation Act
of 1996, which reformed the welfare system, offered states
cash bonuses for increasing the number of two-parent families, and the Deficit Reduction Act of 2005 authorized $150
million per year for five years for states to conduct marriage
education programs and advertising campaigns, among other
initiatives. Some marriage advocates have proposed altering
the tax code or changing the qualifications for means-tested
benefits to eliminate the “marriage penalty” many poor
people face. But marriage promotion initiatives appear to

have little effect on marriage or divorce rates, and research
suggests the effects of tax and benefit changes are likely to
be small.
Whether or not it’s possible to encourage people to
marry, the larger question is whether policy should encourage
people to marry. Studies that link marriage to positive health
and economic outcomes compare the average unmarried
person to the average married person. But the more relevant
comparison, as Stevenson and Wolfers have noted in their
research, would be to someone in the marginal marriage created by policy. “If you ask the question, would someone be
happier if they were married, you also have to ask, married
to whom?” says Pollak.
The divergence in marriage and divorce rates among
socioeconomic groups raises important questions about the
long-term consequences for children and the perpetuation
of advantage from one generation to the next. But economic,
cultural, and technological changes make it seem unlikely, at
the moment, that the overall retreat from marriage is going
to reverse. What is certain is that the institution will continue to evolve.
EF

Readings
Becker, Gary. A Treatise on the Family. Cambridge, Mass.:
Harvard University Press, 1981.
Isen, Adam, and Betsey Stevenson. “Women’s Education and
Family Behavior: Trends in Marriage, Divorce and Fertility.”
National Bureau of Economic Research Working Paper
No. 15725, February 2010.
Kreider, Rose M., and Renee Ellis. “Number, Timing, and
Duration of Marriages and Divorces: 2009.” U.S. Census Bureau
Household Economic Studies, May 2011.

Lundberg, Shelly, and Robert A. Pollak. “Cohabitation and the
Uneven Retreat from Marriage in the United States, 1950-2010.”
In Leah Platt Boustan, Carola Frydman, and Robert A. Margo
(ed.), Human Capital in History: The American Record. Chicago:
University of Chicago Press, 2014.
McLanahan, Sara, Laura Tach, and Daniel Schneider. “The
Causal Effects of Father Absence.” Annual Review of Sociology,
July 2013, vol. 39, pp. 399-427.
Stevenson, Betsey, and Justin Wolfers. “Marriage and Divorce:
Changes and Their Driving Forces.” Journal of Economic
Perspectives, Spring 2007, vol. 21, no. 2, pp. 27-52.

Economic Brief publishes an online essay each
month about a current economic issue or
trend. The essays are based on research by
economists at the Richmond Fed.
August 2015

Living Wills for Systemically Important Financial
Institutions: Some Expected Benefits and Challenges

September 2015

Inflation Targeting: Could Bad Luck Explain
Persistent One-Sided Misses?
To access the Economic Brief and other research publications,
visit www.richmondfed.org/publications/research/

Economic Brief

August 2015, EB15-08

Living Wills for Systemically Important Financial
Institutions: Some Expected Benefits and Challenges
By Arantxa Jarque and David A. Price

The Dodd-Frank Act requires systemically important financial institutions
to create resolution plans, or “living wills,” that bankruptcy courts can follow
if these institutions fall into severe financial distress. The plans must set out
a path for resolution without public bailouts and with minimal disruption
to the financial system. While living wills can, in this way, help to curb the
“too big to fail” problem, regulators face a number of challenges in achieving
this goal. The authority granted to regulators by the Act, including the power
to make systemically important institutions change their structures, offers
promising means of addressing these challenges.
When Congress passed the Dodd-Frank Wall
Street Reform and Consumer Protection Act in
2010, the elimination of bailouts for distressed
financial institutions was among its goals. One
of the Act’s measures in this regard was the creation of a new tool—known as resolution plans,
or “living wills”—aimed at giving regulators
an enhanced understanding of, and increased
authority over, the largest and most complex
financial institutions. In particular, living wills
and their associated regulatory provisions are
intended to make these institutions, known as
systemically important financial institutions
(SIFIs), resolvable without public support if they
become financially distressed.
The need to make SIFIs resolvable without public
support has its conceptual basis in the idea of
commitment. Research has indicated that policymakers can reduce instability in the financial
system by making a credible commitment not
to rescue failing institutions, thereby inducing

the creditors of these institutions to monitor and
influence the institution’s risk-taking to a greater
degree.1 But given the uncertainty about the
costs to the financial system of letting a SIFI fail
outright, it is more difficult for policymakers to
make such a commitment without a roadmap
for winding down a SIFI in an orderly manner if
it becomes distressed—that is, a living will.
In practical terms, the provisions of Dodd-Frank
on living wills require these firms to produce
resolution plans to be followed in the event of
severe financial distress. On an annual basis, all
SIFIs must submit detailed plans to the Federal
Reserve and the Federal Deposit Insurance
Corporation (FDIC). With some work back and
forth between a SIFI and the agencies, a plan
ideally becomes a source of information about
the potential consequences of the firm’s failure
and how to minimize them—although this information will necessarily be subject, in practice,
to considerable uncertainty.

INTERVIEW

Campbell Harvey

Editor’s Note: For more from this interview, go to our website:
www.richmondfed.org/publications

Finance has surely existed in one form or another since the earliest days of civilization. But finance as we know it, as a mathematical subfield of economics, is relatively young; many date its genesis to 1952, when economist Harry Markowitz published an article on the use of modern statistical methods to analyze investment portfolios. Today, the discipline seems as ubiquitous as the financial services sector itself, which grew from 2.8 percent of U.S. GDP in 1950 to a pre-recession peak of 8.3 percent of GDP in 2006.
Among the leading thinkers in academic finance is Duke University’s Campbell Harvey. As a doctoral student at the University of Chicago in the 1980s, he turned out to be at the right place at the right time: Half of his dissertation committee was made up of future Nobel laureates — Eugene Fama, Lars Hansen, and Merton Miller. In the years since, his research interests have spanned such topics as the modeling of risk, the yield curve as a source of information about expectations of economic growth, equity and bond returns in emerging economies, and changes in the risk premium in financial markets. He has long been interested in bitcoin, a type of digital currency; this interest led him to offer a class at Duke this spring on creating new ventures based on bitcoin technology.
In addition to his appointment at Duke’s Fuqua School of Business, Harvey is a research associate of the National Bureau of Economic Research. He has been a visiting professor at the Stockholm School of Economics and the Helsinki School of Economics and a visiting scholar at the Fed’s Board of Governors. He is president-elect of the American Finance Association and a former editor of the Journal of Finance. He is also the investment strategy advisor to Man Group, one of the world’s largest hedge funds. David A. Price interviewed Harvey at his office at Duke in April 2015.

PHOTOGRAPHY: COURTESY OF DUKE UNIVERSITY

EF: How did you become interested in economics in
general and finance in particular?
Harvey: In a summer internship during business school, I
was working on a fascinating problem for a company that,
at the time, was the largest copper mining company in the
world. The price of copper correlates closely with the economy. They wanted me to figure out if there was a better way
to forecast what was going to happen in the economy than
what was commercially available. I had an idea that turned
out to be, I guess, a pretty good idea; I was using information
in financial instruments — in particular, the term structure
of interest rates — to extract information about what people
in the market at least think is going to happen in the economy. That really got me interested. After that point, I had
the research bug, so to speak, and I never looked back.
EF: Financial economists, of course, have a large practitioner community in addition to those in academia.
How do the incentives of finance economists in the
private sector differ from those of academics, and how
does this affect how they approach their work?
Harvey: There’s a lot of similarity in that you are interested
in discovering something. To be published in academic
finance or economics, the idea must be unique; it’s the same
in the practice of finance — you’re looking to do something
that your competitors haven’t thought of.
There are differences, though. The actual problems that
are worked on by practitioners are more applied than the
general problems we work on in financial economics.
The second difference is that in academic financial
economics, you have the luxury of presenting your paper to
colleagues from all over the world. You get feedback, which
is really useful. And then you send it in for review and you get
even more feedback. In business, it’s different; you cannot
share trade secrets. You really have to lean on your company
colleagues for feedback.
The third thing that’s different is access to data for
empirical finance. When I was a doctoral student, academia
had the best data. For years after that, the pioneering
academic research in empirical finance relied on having

this leading-edge data. That is no longer the case. The best data available today is unaffordable for any academic institution. It is incredibly expensive and that’s a serious limitation in terms of what we can do in our research. Sometimes you see collaborations with companies that allow the academic researchers to access data that they can’t afford to buy. Of course, this induces other issues such as conflicts of interest.
The fourth difference is the assistance that’s available. Somebody in academia might work on a paper for months with a research assistant who might be able to offer five to 10 hours per week. In the practice of management, you give the task to a junior researcher and he or she will work around the clock until the task is completed. What takes months in academic research could be just a few days.
The fifth difference is computing power. Academics once had the best computing power. We have access to supercomputing arrays, but those resources are difficult to access. In the practice of management, companies have massive computer power available at their fingertips. For certain types of studies, those using higher frequency data, companies have a considerable advantage.

EF: You’ve argued that more than half of the papers published in empirical finance are probably false because they have a mistake in common. Can you explain what that mistake is?

Harvey: It’s a mistake that is made in the application of statistics. Think of testing for an effect. You try to see if there is a significant correlation between what you’re trying to explain, let’s call it Y, and the candidate variable, X. If you do that, we have well-established procedures and statistics; we look for a correlation that is a couple of standard deviations from zero — the so-called two-sigma rule, or 95 percent confidence level.
But suppose we tried 20 different versions of X, 20 different things, to try to explain Y. Then suppose that one of them, just one, satisfied this rule where the correlation is two standard deviations from zero. It’s possible that this one “worked” purely by chance. That two-sigma rule is valid only for a single test, meaning there is one Y and one X. As soon as you go to one Y and 20 X’s, then you need to change the rules because something is going to appear significant by luck. And when you go to even more than 20, there is almost a 100 percent probability something is going to show up by a fluke.
It turns out — and this is not just in empirical finance, it’s also in economics more generally — that if you just open almost any scientific journal, and if there is an empirical paper, you will see a table with different variables tried. That is what we call multiple testing. With multiple testing, the standard sorts of cutoffs are not appropriate. So this has a wide-ranging application in terms of how we do research in economics.
In medicine, there is a famous study that concluded that over half of the medical research that was published was likely false. The conclusion in economics is no different than in medicine; it’s the same idea, that people do not properly account for multiple testing.
My paper tries to go beyond previous findings in other fields like medicine. I develop a framework to check for tests that we do not observe. A person says that X explains Y and is significant — well, what happens if that person tried 20 things but just didn’t report it? My research also incorporates something that is important in finance — the tests can be correlated. In my example, with the 20 different X variables, it makes a big difference if the Xs are correlated or uncorrelated.
You might think that people might be upset at me for doing something like this, but that is not the reaction I have experienced. I think it helps that I made the mistake, too. I’m on the list of people who failed to properly account for multiple tests, so some of the things that I thought I had discovered in the past are below the bar. I’m pointing a finger at myself, also.

EF: You’ve written extensively about bitcoin and other so-called crypto currencies. How do you think their role will evolve and does the rest of the payments industry have anything to be worried about?

Harvey: This is a significant innovation that is poorly understood by the general public and poorly understood by the companies that are about to be disrupted. It is a method that makes transactions more efficient. When you swipe your credit card at the gas pump, most people don’t realize that the credit card fee is 7 percent. It’s very inefficient when you are faced with transaction fees like that. The lowest-hanging fruit that is going to be disrupted is money transfer done by companies like Western Union, where it’s routine to charge 10 percent or more on a transfer. A similar transfer fee in bitcoin is about 0.05 percent. The worldwide money transfer market is $500 billion per year and potentially $50 billion a year can be saved. In a broader sense, bitcoin is not just about money transfers — it establishes a new way to exchange property.
The foundation of bitcoin the currency is the technology behind it, called the block chain. This is a ledger containing the transaction record of every bitcoin over the history of bitcoin. It is a ledger that is available to anybody who is on the network. It is fully transparent. The advantage is that if I go pay for something with a bitcoin, then the vendor checks this historical transaction record to see if I actually have the coin to pay. So there is no counterfeiting, there is no double-spending, there is no bouncing of a check. On top of that, this ledger is protected by cryptographic barriers generated by historically unprecedented computing power.
This technology is offside to the run-of-the-mill hacker because the entrance fee is about $500 million of hardware — and even with that, you cannot change the historical transaction records. So it is not going to be hacked. It’s secure. There’s plenty of talk about bitcoin being stolen and things like that, but that is all the result of incompetent third parties. It has nothing to do with the actual technology behind bitcoin.
With bitcoin, you also don’t need to worry about your private information being hacked. In usual transactions, we routinely give up private information such as bank account numbers, debit cards, or even Social Security numbers. Of course, vendors accepting bitcoin might actually require some private information to verify your identity, which is fine. But bitcoin’s much more secure.
And that’s just the tip of the iceberg. What’s really interesting is that there is other stuff that can be done with this technology that is almost completely under the radar screen. For example, in this ledger, you could also put what we call conditional contracts: very simple contracts like stocks, bonds, options, forwards, futures, or swaps. So it provides a different way to exchange at very low transaction fees and it is very fast.

Campbell Harvey
➤ Present Positions
J. Paul Sticht Professor of International Business, Fuqua School of Business, Duke University
Research Associate, National Bureau of Economic Research
➤ Education
Ph.D. (1986), University of Chicago
M.B.A. (1983), York University
B.A. (1981), Trinity College, University of Toronto
➤ Selected Publications
“... and the Cross-Section of Expected Returns,” NBER Working Paper, October 2014 (with Yan Liu and Heqing Zhu); “Managerial Attitudes and Corporate Actions,” Journal of Financial Economics, 2013 (with John Graham and Manju Puri); “The Theory and Practice of Corporate Finance: Evidence From the Field,” Journal of Financial Economics, 2001 (with John Graham); numerous other articles in such journals as the Journal of Finance, Journal of Political Economy, and Quarterly Journal of Economics.

EF: Apart from the potential just for disruption, these currencies seem to generate strong reactions, pro and con. Why is that?

Harvey: It’s hard to think about the value of bitcoin because it isn’t backed by anything, so it is only valuable if people believe it’s valuable. Now that is also true with fiat currency. But the U.S. dollar, a fiat currency, is legal tender in the United States, which means you are obligated by law to accept dollars for payment. The government enforces taxation and can incarcerate you if you fail to pay your taxes. So there is more to the dollar than “it has value because people believe it has value.”
On top of that, if you look at the currency of the United States and another country, the so-called foreign exchange rate, people are generally comfortable thinking about movements in those currencies in terms of the monetary policies and economic growth of the two countries. You put those together and you get an idea of what is driving variation in the exchange rate. With bitcoin, it’s not so simple because there are no economic fundamentals. It is a world currency, so it is not tied to any particular country. The only monetary policy is an algorithm that says that bitcoins will be created at a decreasing rate and cap out at 21 million in about 2140. So it is much more difficult to think of the fundamentals behind bitcoin.
I think all this leads to considerable volatility in terms of the value of bitcoin. But there is a way to do transactions that bypasses the volatility issue. Given that there is a fully regulated exchange in the United States for bitcoin, I can have a wallet with U.S. dollars in it, and when I need to transact in bitcoin, I can move some dollars into bitcoin, I buy what I need to buy, the vendor accepts the bitcoin and immediately translates it back into U.S. dollars. All you see is a U.S. dollar price on both sides. Indeed, in the software, you see the pricing in U.S. dollars, you hit send, and the vendor gets what was promised in U.S. dollars.
So the volatility doesn’t bite you for transacting, but it does bite you in terms of the store of value. Right now, bitcoin is not a reliable store of value because it is too volatile. Volatility will likely decrease when the market is more liquid and when bitcoin is better understood by the general public, but right now the people who hold bitcoin are mostly speculators. Given that it is eight times more volatile than the S&P 500, it is hard to recommend as a store of value at this point. Nevertheless, I believe this technology has considerable promise.
What will happen in the future will be something digital. Whether it’s the bitcoin model, I’m not sure, but it will be something like bitcoin. And indeed, I have a lot of confidence that the block chain technology is definitely here to stay.

EF: Speaking of the S&P 500 volatility, you showed in the late 1980s that the risk premium in the United States is countercyclical, but there isn’t a consensus on why that is the case, is there? What do you think is the best explanation?

Harvey: What we’re talking about when we’re talking about the risk premium is that when you invest in the stock market, you expect to get a higher return on average than if you invest in Treasury bills. It is the same for investing in a corporate bond: You expect to get the higher rate of return for a risky corporate bond than for the equivalent maturity of a U.S. Treasury bond.
The risk premium changes over time. There are many different explanations as to why it would change, but the most intuitive one for me is that when you go into a recession,
there is much more uncertainty than when you’re not in one.
People are worried about what is going to happen in terms of
their job stability or even their bonuses. It makes it less likely
that you are going to take a chunk of money and invest it in
the stock market. During these periods, stock prices fall. So
think of the risk premium as the extra expected return you
need to offer to get somebody into the stock market. It will
be quite high in the depths of a recession.
On the other hand, when we are in good economic times,
people are very calm, they do not want to miss out on financial opportunities, and the stock prices are driven up. When
the stock price is high, almost by construction, the expected
returns are lower. So you get a countercyclical pattern in risk
premium. This pattern is found in many different types of
markets.
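Harvey’s “almost by construction” point can be seen in a stylized one-period calculation. The Python sketch below uses hypothetical numbers chosen only to illustrate the mechanics; it is not drawn from the interview or from Harvey’s research.

# Stylized illustration (hypothetical numbers): holding the expected payoff of a
# stock fixed, a lower price today mechanically implies a higher expected return.
def expected_return(price_today, expected_payoff_next_year):
    # Simple one-period expected return implied by today's price.
    return expected_payoff_next_year / price_today - 1

expected_payoff = 110.0  # assumed expected value of the stock in one year
for price in (100.0, 90.0, 80.0):  # calm times -> high price; recession -> depressed price
    r = expected_return(price, expected_payoff)
    print(f"price today = {price:6.1f}  ->  implied expected return = {r:5.1%}")
# Prints 10.0%, 22.2%, 37.5%: as prices fall in a recession, the expected return
# needed to draw investors into the market -- the risk premium -- rises.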
EF: You co-founded the Duke University/CFO
Magazine Global Business Outlook Survey, the poll of
chief financial officers. Analyzing your survey results,
you found, among other things, that CEOs and CFOs
were overconfident and this affected their businesses.
Harvey: One of the questions we have been running for
almost the entire survey, almost two decades, is that we ask
the CEOs and CFOs to forecast the stock market return —
the S&P 500 over the next year and the next 10 years. Why do
we do that? We want them to provide a forecast of something
that is common. We ask them about their firm also, but we
are interested in the market as a whole. And on top of that,
they are very knowledgeable of the S&P in general because
they are often asked to explain why their company’s stock
price has changed — and you need to understand what is happening in the overall stock market to answer that question.
The unique thing in our question is we also ask for a confidence interval. We ask them for their assessment of a 1 in
10 chance the S&P 500 will be above X and a 1 in 10 chance it
will fall below Y. So what we get is an 80 percent confidence
interval for the forecast. We do not care that much about
the accuracy of their forecast because it is very hard to forecast the S&P 500. We are more interested in the strength of
their confidence in their forecast, and it turns out that the
confidence bounds that they provide us are unreasonable
by almost any metric. They are far too narrow. That was a
surprising result.
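One way to make “far too narrow” concrete is a simple calibration count: if the stated 10 percent tails are honest, realized returns should land inside the 80 percent interval about eight times out of ten. The sketch below is a minimal illustration with made-up triples of lower bound, upper bound, and realized return; it is not the survey’s data or the authors’ methodology.

# Calibration check on stated 80 percent confidence intervals (hypothetical inputs).
forecasts = [
    # (10th percentile, 90th percentile, realized S&P 500 return), all in percent
    (2.0, 8.0, 12.5),
    (0.0, 6.0, -4.3),
    (3.0, 9.0, 5.1),
    (1.0, 7.0, 15.8),
]
hits = sum(low <= realized <= high for low, high, realized in forecasts)
hit_rate = hits / len(forecasts)
# A well-calibrated 80 percent interval should contain the realized return about
# 80 percent of the time; a much lower hit rate means the stated bounds are too narrow.
print(f"realized returns inside the stated 80% interval: {hit_rate:.0%}")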
We found that there was a correlation between this overconfidence and some of the investment policies within the
individual firms. You see that this overconfidence affects
the way that they choose investment projects and organize
their capital structure.
EF: Financial economists have been saying for some
time that index funds consistently outperform active
management over time when fees are taken into account.
Do you agree that this comports with what we see in the
world, and if so, how do we account for the persistence
of active management?

Harvey: I have thought a lot about this. The research on
mutual funds basically concludes that the performance, once
you have benchmarked to a passive investment, is negative.
I have a new paper that uses the multiple testing techniques
we talked about earlier and applies them to fund evaluation.
Of course, if you have 7,000 mutual funds, some of them
are going to look very good year after year after year, just by
chance. In my research, we looked at mutual funds from 1976
onward and could not find one, not one, that significantly
outperformed a passive benchmark.
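The “very good year after year just by chance” problem is easy to see in a small simulation. The sketch below assumes, purely for illustration, 7,000 funds with no true skill and an independent 50-50 chance of beating the benchmark each year; it is not Harvey’s procedure, which applies multiple-testing adjustments to actual fund returns.

# How many zero-skill funds would look brilliant for a decade purely by luck?
import random

random.seed(0)
n_funds, n_years = 7_000, 10
lucky = sum(
    all(random.random() < 0.5 for _ in range(n_years))  # beat the benchmark every year
    for _ in range(n_funds)
)
# Expected count is 7,000 * 0.5**10, i.e. roughly 7 funds with a perfect ten-year
# record despite having no skill at all -- which is why naive track-record screens mislead.
print(f"zero-skill funds beating the benchmark {n_years} years running: {lucky}")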
For hedge funds, my colleague David Hsieh has concluded that the outperformance on average for hedge funds
is essentially zero. This is better than mutual funds, but on
average, it’s zero.
So then the question is, well, why are there mutual
funds? And why are there hedge funds? The key for hedge
funds is that the excess performance is zero on average —
that is different than every single hedge fund having a zero
excess return. It means that there are many hedge funds
out there that significantly underperform and many hedge
funds that outperform. If you have a scientific process to
try to separate the skillful from the lucky, then hedge funds
become more attractive. So I believe that active management is something that, if it is done properly, can lead to
positive excess returns.
The next category is maybe the most complex. There
might be, let’s say, a hedge fund that I know has zero excess
return to benchmark, but I still might want to invest in
it. How does that make sense? The key is that when we
talk about the excess return, we think about adjusting for
risk. If we take all of the risks that the hedge fund takes
into account, and strip out the expected returns that are
due to that risk, then whatever is left over is the so-called
outperformance. Even with zero outperformance, I still
might invest in that hedge fund because, as an investor, I
do not have access to the types of risks that the hedge fund
is actually taking. It is not as simple as buying an S&P 500
index fund. There are many other types of risks out there,
and some of them are exotic. Maybe I would like to take
some of those risks, and it is hard for me to actually do that
on my own. For example, I might not be able to easily invest
in an emerging market currency carry trade, where I buy the
currencies of countries with high interest rates and sell the
currencies of countries with low interest rates. I am just not
equipped to do that, but this hedge fund is an expert at it.
The other thing to consider is behavioral biases on the
side of the people selecting the investment managers. Even
though the average return might be zero, people believe they
are better than the average at selecting a mutual fund or a
hedge fund. It is a classic behavioral bias: 85 percent of the
people believe they are better than average.
EF: Researchers looking at the data on cash reserves of
U.S. corporations have found that those reserves have
been increasing, with 50 firms holding over $1 trillion in
total. Why have corporations been holding so much cash?
Harvey: There are two main reasons and perhaps a third.
The first one that people talk about is that a lot of this
cash is offshore and it is not repatriated because of the
punitive tax rates in the United States. The second has
to do with thinking of this cash holding as a so-called
real option. And what I mean by that is that you always
want to be able to move quickly if you see a really good
opportunity. You cannot wait around four months to
get bank financing or float a secondary equity offering or
something like that. You need to move with speed, and
cash gives you the flexibility to move with speed. It might
be that a firm is available for sale and you can use that cash
to do the deal instantly.
EF: Do you think the importance of this has increased
over time?
Harvey: We saw this flexibility in action during the financial crisis. Think of people like Warren Buffett, who had a
lot of cash, just cleaning up, buying incredibly high-quality
assets like Goldman Sachs at rock-bottom prices. So you
could deploy that cash at a time when the expected returns
were the highest. That is part of the flexibility. It is not just,
“This firm is for sale, but we have to close it within a week
or we’re not going to get it”; it is also that through time, you
can be strategic and pick and choose when you do the investment. During the financial crisis, it was not easy to borrow
for an investment.
The third aspect is also related to time. This is just my
opinion — I don’t have any research paper on this — but I
believe that with the exponential growth in technology, the
rate of disruption has increased through time. It used to be
product life was much longer than it is today. I think that
some firms are thinking of the cash as insurance regarding
this disruption. If a new technology arises, it gives them a
cushion with which to try at least to attempt a counterattack. To adapt to the situation, to maybe disrupt the
disruptor. I think if you put those three things together, it
pretty well explains why cash holdings have increased. There
are other technical reasons that are less important.
EF: You were editor of the Journal of Finance for six
years, from 2006 to 2012. What are the main lessons for
authors that you took away from that experience?
Harvey: For authors, the advice is probably no different for
the Journal of Finance than for any top economics journal,
namely, that the editors are looking for disruption. Indeed,
it is not much different than the world of business; there is
a status quo and we are looking for somebody to challenge
that and to come up with a fresh approach. We are not as
interested in ideas that tweak the status quo.
The ideal is that when people look at the abstract, they’ll
say, “Well, that can’t be right.” And then they read the paper
and they are convinced. Within finance, it is also useful that
your idea not only changes the way academics think, but it
also changes the practice of management. Just like if you’re
doing macroeconomics research, you hope not just to publish in, say, the Journal of Political Economy, but you hope that
the policymakers will read it and that it changes the way they
think about policy.
EF: What do you think are the most important open
questions in finance?
Harvey: One is how you measure the cost of capital. We
had the capital asset pricing model in 1964, but the research
showed very weak support for it. We have many new models, but we are still not sure. That’s on the investment side.
On the corporate finance side, it would certainly be nice to
know what the optimal leverage for a firm should be. We
still do not know that. In banking, is it appropriate that
banks have vastly more leverage than regular corporations?
Again, we need a model for that. Hopefully these research
advances are forthcoming. Some people have made progress,
but we just don’t know.
EF: Who were your main influences in your development as an economist?
Harvey: I was very lucky to be at the University of Chicago
in the early to mid-1980s because there were all of these people who we knew were going to win Nobels. As students, we
talked about it all the time. I remember seeing Gary Becker
out jogging — he was always exercising — and we joked that
he was jogging so that he’d be in good shape to stay alive to
win the Nobel Prize. You were sitting in his class, in Robert
Lucas’ class, in Lars Hansen’s class, and you knew they were
going to win. And in the business school you had Merton
Miller and Eugene Fama, an incredible environment for
thinking and research.
The seminars were electric. Unlike the experience that
often you see today where somebody goes through some
PowerPoint slides, it was totally different. The audience
members had thought about the research paper, and they
were ready to go at it. And there were no hard feelings.
One thing that was pretty important for me in my development was an office visit with Eugene Fama, my dissertation adviser, where I had a couple of ideas to pitch for a
dissertation. I pitched the first idea, and he barely looked
up from whatever paper he was reading and shook his head,
saying, “That’s a small idea. I wouldn’t pursue it.” Then I hit
him with the second idea, which I thought was way better
than the first one. And he kind of looked up and said, “Ehh,
it’s OK. It’s an OK idea.” He added, “Maybe you can get a
publication out of it, but not in a top journal.” He indicated
I should come back when I had another.
Even though he had shot down both of my ideas, I left
feeling energized. The message from him was that I had a
chance of hitting a big idea. That interaction, which I am sure
he doesn’t remember, was very influential — it pushed me to
search for big ideas and not settle on the small ones.
EF

ECONOMIC HISTORY

The Last Big Housing Finance Reform
BY HELEN FESSENDEN

Policymakers concerned over the future of Fannie Mae and Freddie Mac may find a cautionary tale in the last time policymakers sought to reform the enterprises more than two decades ago

In 2008, the Treasury Department took over near-broke Fannie Mae and Freddie Mac with a mandate to stabilize their finances. Seven years later, these two housing finance giants remain in government hands with no immediate prospects of escaping conservatorship. Many economists as well as policymakers in both parties agree the status quo is not a long-term solution and that these two government-sponsored enterprises (GSEs) should be downsized and at least partially privatized, but there is no consensus on how to achieve this. In fact, the share of single-family mortgages owned or backed by the GSEs rose to a high of 47 percent in 2013, up from 40 percent in 2007 and far higher than their 7 percent share in 1981. The enterprises also continue to hold a dominant position in the issuance of mortgage-backed securities (MBS), accounting for 70 percent of all issuances in the first quarter of 2015.
The challenge of defining the basic mission and identity of the enterprises — public, private, or something in between — is not a new one, however. It was at the center of the debate the last time Washington tried to reform the GSEs, back in 1992.
On the surface, that year promised real momentum for housing finance reform. Congress had agreed to a costly rescue of the thrift industry three years earlier, and amid the bailout’s widespread unpopularity, the George H.W. Bush administration and lawmakers in both parties were eager to prevent future rescues requiring taxpayer dollars. In 1991, Congress followed through with legislation that strengthened regulators’ authorities over banks with federal deposit insurance. Then, with a strong push from the Treasury Department, Congress turned to reforming Fannie and Freddie, which were taking on an increasingly important role in providing liquidity to the housing market by buying mortgages from lenders and then issuing MBS backed by those loans.
But this time around, Congress reached a deal that left much of the status quo intact. Most importantly, it left in place the implicit government guarantee that, in the view of investors, backed the enterprises. When such a guarantee is present, investors are likely to underprice the risks the institutions take. And while the 1992 reform was an attempt at addressing this problem by ramping up regulation, many observers argue that it fell short. Among its outcomes were capital requirements far lower than those imposed on banks and thrifts, and a regulator that some say lacked the supervisory and regulatory tools commensurate with the GSEs’ size and exposure to risk.
“The fundamental problem in 1992 was that it formalized the hybrid public-private model, which is destined to fail,” says economist Scott Frame of the Atlanta Fed, who worked with the Treasury Department on the 2008 GSE rescue. “If you privatize the gains and socialize the losses, you will create excessive risk-taking incentives.”

[Image caption: The Congressional Budget Office issued a report in April 1991 that outlined suggestions for improved oversight of Fannie Mae and Freddie Mac. COURTESY OF CONGRESSIONAL BUDGET OFFICE]

Modest Beginnings
The GSEs’ original mission was to buy particular categories of government-insured mortgages, freeing up liquidity for lenders to issue more loans. The Federal National Mortgage Association, or Fannie Mae, was chartered in 1938 and initially bought mortgages that were backed by either the Federal Housing Administration or the Veterans Administration. In 1954, Congress converted it from a government agency to a mixed-ownership entity and granted it certain tax advantages. Another step occurred in 1968, when Congress took Fannie off the federal budget and turned it into a publicly traded company.
Meanwhile, the Federal Home Loan Mortgage Corporation, or Freddie Mac, was chartered in 1970 and targeted its business toward buying mortgages from
thrifts. In 1971, Freddie issued its first mortgage-backed
securities, and it proceeded to grow its MBS business while
Fannie tended to keep its mortgage purchases on its books.
As a result, Freddie was better able to handle the interest
rate volatility in the late 1970s and early 1980s, because it had
transferred interest-rate risk to MBS investors. In contrast,
Fannie struggled to stay afloat as many of the mortgages it
bought and held in its portfolio lost value to inflation.
Once interest rates stabilized, both GSEs dramatically
expanded their business, including issuance of MBS. In 1983,
the two issued a combined $35 billion in MBS; by 1992, it
was almost $675 billion. The number of mortgages held on
their books also expanded, from a combined $49 billion
purchased in 1983 to $443 billion in 1992. This rapid rate of
growth far outpaced the rise in the value of the single-family
mortgage market over the same period, from $202 billion to
$894 billion.
These numbers would rise even more dramatically in the
years that followed. But it was that rise in exposure in the
1980s and early 1990s, combined with the woes in the banking
and thrift sectors, that compelled the Bush administration to
turn to reforming Fannie and Freddie. Some in the administration became concerned that the GSEs could pose a long-term risk to taxpayers as long as their status as public-private
hybrids remained unresolved. Multiple government agencies,
including the Treasury Department and the Congressional
Budget Office, addressed these worries in reports in the
spring of 1991, and they concluded that Fannie and Freddie
needed formal capital requirements and stronger government
oversight, even though they were not in imminent danger of
failing and had high credit ratings. The CBO report, for example, argued that the GSEs had developed more comprehensive
ways to manage credit- and interest-rate risks, but the feature
of the implicit government guarantee meant that formal capital requirements would be needed to serve as a buffer against
taxpayer liability.
The House acted first, passing a bill in the fall of 1991
to establish a new GSE overseer within the Department of
Housing and Urban Development: the Office of Federal
Housing Enterprise Oversight, or OFHEO. Notably,
this office would be funded from dedicated fees, like the
Securities and Exchange Commission, rather than annual
appropriations, which tend to be less predictable and more
politicized. The bill also set a 2.5 percent capital requirement for the GSEs’ balance-sheet assets (the loans they held on their books) and 0.45 percent for off-balance-sheet assets
(the MBS). (By comparison, banks had a requirement of
4 percent for home loans they held and 1.6 percent for GSE
MBS.) Finally, the bill authorized OFHEO to impose stress
tests to see if higher capital requirements were necessary; if
the GSEs failed those tests, they could face cease-and-desist
orders and fines. As soon as the bill headed to the Senate,
however, Fannie’s senior management warned it would
drop its support due to those two provisions, according to
media accounts at the time. This move could have spelled
trouble for the bill’s prospects in the Democratic-controlled
32

E CO N F O C U S | F I R ST Q U A RT E R | 2 0 1 5

Congress, which had at the time liberal constituencies that
also were close to the GSEs.
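To get a feel for the gap those percentages implied, a back-of-the-envelope comparison helps. The balance-sheet sizes below are hypothetical; only the capital percentages come from the bill and the bank rules cited above, and treating guaranteed MBS the way a bank holding the same securities would be treated is an imperfect but illustrative comparison.

# Illustrative capital arithmetic under the 1992 bill's percentages (hypothetical sizes).
on_balance_sheet = 100e9   # assumed: $100 billion of whole loans held in portfolio
off_balance_sheet = 500e9  # assumed: $500 billion of MBS guaranteed (off balance sheet)

gse_capital = 0.025 * on_balance_sheet + 0.0045 * off_balance_sheet
bank_style_capital = 0.04 * on_balance_sheet + 0.016 * off_balance_sheet

print(f"GSE minimum capital:        ${gse_capital / 1e9:.2f} billion")
print(f"Bank-style minimum capital: ${bank_style_capital / 1e9:.2f} billion")
# Roughly $4.75 billion versus $12 billion for the same hypothetical exposures --
# the kind of gap critics of the 1992 framework pointed to.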
Negotiators released a new draft the following spring,
this time making it easier for the GSEs to challenge regulatory findings and making it harder for OFHEO to set fines.
Another provision established an affordable housing mandate, under which a certain percentage of loans and MBS
on the GSEs’ books had to come from home purchases in
underserved communities. Backed by fair housing groups,
the provision was intended to make borrowing easier and
cheaper for low-income and minority homebuyers.
In the fall of 1992, the bill was finally close to passage
when Fannie sent another unexpected warning: It still
opposed the bill because, in its view, OFHEO had too much
say over risk-based capital standards given that it lacked the
necessary expertise to understand them, and it ultimately
could cause a nationwide credit crunch if it compelled the
GSEs to raise capital. According to press reports, a deal was
struck in which OFHEO’s funding was moved over to the
appropriations process, and the capital-standards provision
stayed. Congress finally sent the legislation, formally titled
the Federal Housing Enterprises Safety and Soundness Act,
to President Bush to sign in October.

The Legacy of Reform
One of the most important legacies of the 1992 reform is
what it did not do: namely, resolve the question of whether
the government would come to the GSEs’ aid if they became
distressed. On one hand, they were chartered by Congress,
had access to a $2.25 billion line of credit with the Treasury
Department, and were granted special tax and regulatory
exemptions on account of their unique status. They were
also entrusted with a public mission to enhance liquidity
in the housing market and, after 1992, to meet affordable
housing goals.
This was the “government” part of their acronym, and
collectively, these provisions cemented the belief among
investors that the GSEs enjoyed implicit support from
Treasury. For this reason, the securities issued by Fannie and
Freddie carried a lower interest rate than those issued by the
private sector, reflecting the assumption that their debt was
ultrasafe. At the same time, the GSEs had a private shareholder structure and New York Stock Exchange (NYSE)
listings. This model worked well for them in the 1990s; the
GSEs’ combined net income in 1992, more than $2.2 billion,
rose to $14 billion in 2002.
The 1992 reform did include language stating that the
government would not come to the GSEs’ aid if they were
distressed. But the law left all of their quasi-governmental
advantages untouched, thereby preserving the implicit government guarantee that was so central to their growth. As
Thomas Stanton, a Washington lawyer who was involved with
the legislation, points out, the proof of the durability of the
implicit guarantee was in how markets treated GSE securities.
“Banks, pension funds, foreign governments — everyone
— kept treating Fannie and Freddie MBS as if they were

almost as safe as Treasuries,” he says. “So whether you have
some language in the bill purporting to prohibit a bailout
is really beside the point. And when these enterprises ultimately grow so large and become too big to fail, a government guarantee is inevitable.”
Several economists have tried to estimate the size of the
subsidy that the guarantee provided to the GSEs. By issuing
securities at exceptionally low interest rates reflecting their
perceived safety, and then using the money raised to buy
higher-yielding mortgage securities from the private sector,
the GSEs could count on making money off of this spread.
In 2001, the Congressional Budget Office estimated that
this differential created a subsidy worth $3.7 billion in 2000
for the GSEs’ MBS business, in addition to $1 billion derived
from their tax and regulatory exemptions. (That same year,
in comparison, Fannie and Freddie reported a combined
net income of $8.1 billion.) In another widely cited paper, published four years later, Federal Reserve economists Wayne Passmore and Shane Sherlund, along with Gillian Burgess of New York University, estimated that the GSEs had a
debt-funding advantage that ranged from 24 to 40 basis
points over long-term, highly rated corporate debt.
The GSEs also earned money in other ways — notably,
by securing fees from MBS buyers to guarantee timely payment of interest and principal. These “g-fees” — basically,
an insurance premium taken out of the interest payments
on the underlying loans — grew steadily until the financial crisis, from a combined $1.8 billion in 1992 to almost
$11 billion in 2008. Because these guarantees also put the
GSEs on the hook to pay back investors if these securities
soured, however, they drove the enterprises’ rapid financial
deterioration in 2008.
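Both income streams described above — the debt-funding spread and the guarantee fee — scale with the size of the GSEs' books, so back-of-the-envelope arithmetic conveys the orders of magnitude involved. In the minimal sketch below, only the 24-to-40-basis-point funding advantage comes from the research cited above; the portfolio sizes and the g-fee rate are hypothetical.

    # Back-of-the-envelope sketch of the two GSE income streams described above.
    # All balances and the g-fee rate are hypothetical; only the 24-40 basis point
    # funding-advantage range comes from the research cited in the text.

    def annual_income(balance: float, rate_bps: float) -> float:
        """Annual dollars earned by applying a rate quoted in basis points to a balance."""
        return balance * rate_bps / 10_000

    retained_portfolio = 700e9   # hypothetical debt-funded retained portfolio, in dollars
    guaranteed_pool = 2_000e9    # hypothetical pool of guaranteed MBS, in dollars
    g_fee_bps = 20               # hypothetical guarantee fee of 20 basis points

    for advantage_bps in (24, 40):   # funding-advantage range from Passmore, Sherlund, and Burgess
        income = annual_income(retained_portfolio, advantage_bps)
        print(f"{advantage_bps} bps on a ${retained_portfolio / 1e9:.0f} billion portfolio "
              f"is about ${income / 1e9:.1f} billion a year")

    g_fee_income = annual_income(guaranteed_pool, g_fee_bps)
    print(f"{g_fee_bps} bps in g-fees on ${guaranteed_pool / 1e9:,.0f} billion of guaranteed MBS "
          f"is about ${g_fee_income / 1e9:.1f} billion a year")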

Benefits and Costs
For their part, Fannie and Freddie executives consistently
argued that this advantage ultimately benefited homeowners because it led to savings in the form of lower interest
rates and the availability of fixed-rate, 30-year mortgages.
Research has found those benefits to have been modest,
however. In their 2005 paper, Passmore, Sherlund, and
Burgess analyzed the difference in rates between conforming mortgages backed by GSEs and those for “jumbo” mortgages (that is, loans too large to qualify under the GSEs’
conforming limit). They found not just a narrow spread,
but a minimal pass-through effect of the GSE debt-funding
advantage to homebuyers. Looking at more than 1 million
loans from 1997 to 2003, they calculated a spread of only 15
to 18 basis points between conforming and jumbo loans. As
for the savings passed along to homebuyers that resulted
from the GSEs’ cheaper yields, the researchers calculated
that this amounted to about 7 basis points.
Economists argue that another effect of government
guarantees that protect investors is to weaken market discipline, resulting in too much risk. In principle, regulation
can help offset increased risk-taking, but critics argue that
OFHEO had insufficient tools to do that.

“The excessive risk-taking incentives created by this
hybrid model need to be countered with a strong regulatory
regime,” says Frame of the Atlanta Fed. “With the GSEs,
you had the facade of regulation but no teeth.”
For example, the office lacked the independent authority
to bring lawsuits against the GSEs or to replace their executives. In the event the GSEs faced insolvency, OFHEO
could opt to keep the enterprises operational (known as conservatorship) but could not close them down (that is, place
them in receivership) as the FDIC can with struggling banks.
One of the more important but overlooked aspects of
the reform, however, may have been the more esoteric issue
of OFHEO’s authority to conduct stress tests — one of the
key sticking points during the 1991-1992 negotiations. Under
the final deal, OFHEO could not set the GSEs’ minimum
leverage capital requirements by itself, but it was authorized
to devise and conduct stress tests to assess the existing
risk-based capital requirements for the GSEs. It took a full
10 years, however, to write and implement the rule; moreover, when the stress tests were conducted, up through the
crisis, they underestimated the GSEs’ losses.
A recent Atlanta Fed working paper that Frame
co-authored with Kristopher Gerardi and Paul Willen dissected OFHEO’s stress test methodology to find out why it
failed. They noted that OFHEO used mortgage data from
1979-1997 to create the statistical model that guided the
tests after 2002 — which meant the model did not reflect
the many changes in mortgage underwriting practices after
1997. As a sign of how poorly this model worked, the
researchers found that actual defaults during the housing
bust were four to five times greater than what the OFHEO
model predicted. To be sure, many market participants and
regulators alike underestimated the extent of losses related
to the housing bust. But in the case of OFHEO’s stress
tests, the researchers found that if the agency had used
an alternative model with real-time mortgage loan data, it
would have dramatically increased the quality of its forecast
of defaults starting in 2005; similarly, if OFHEO had used
real-time home price data, its model would have determined
as early as late 2006 that the GSEs lacked enough capital to
handle the risk of a national housing slump. But instead, the
researchers found that OFHEO used an adverse home-price
scenario that predicted property values would actually rise
for the first 10 quarters of the stress test — contrary to the
idea of a stress test, namely, to see how an institution would
perform during a period of market turmoil.
The Atlanta Fed researchers traced these shortcomings
back to the 1992 law, which required that OFHEO publish every detail of the model’s construction through the
federal rule-making process. This is a process that can take
years, with multiple rounds of notice-and-comment and
interagency clearance, which would make any updating an
onerous task. For this reason, it took a decade to finish the
first stress test; after that, the researchers argued, OFHEO
lacked the time, resources, and political capital to update
the model.
A Sudden Collapse
In the 15 years following the reform, the growth of the U.S.
housing market bolstered the GSEs’ performance as well. The
GSEs kept the bulk of their mortgage purchases in relatively
high-quality loans, and they kept their capital cushions, on
average, higher than the minimum requirement. After 2003,
however, they began buying more MBS issued by both bank
and nonbank lenders with looser standards, including those
backed by “Alt-A” and subprime loans. The GSEs’ combined
purchases of “private label” MBS rose from about $68 billion
in 2002 to almost $300 billion in 2006. Then, when private
investors began shedding these securities in 2007 as foreclosures began climbing and devaluing the underlying loans,
Fannie and Freddie ramped up their purchases. As a result,
their market share in the mortgage securitization business,
which had fallen from 50 percent in 2003 to 27 percent in
2006, climbed to 44 percent in late 2007.
At the same time, the quality of the loans underlying the
GSEs’ MBS fell as their market share expanded. From 2003
to 2007, the percentage of these loans with a loan-to-value
ratio over 80 percent (that is, for homes with little or no
equity) rose from 12 percent to 23 percent. Some have contended that this shift was driven by the affordable housing
mandate, which allowed the GSEs to apply private-label MBS
toward their housing goals. Recent research on this topic
suggests the impact is less clear-cut, however. For example,
three economists at the St. Louis Fed — Ruben Hernandez-Murillo, Andra Ghent, and Michael Owyang — have found no
evidence that lenders ramped up subprime loan originations
or changed the pricing of their mortgages so that they would
conform to the various cutoffs (for example, ensuring that a
certain percentage of loans were made to homeowners under
an income threshold) that the affordable housing provisions
had mandated. As economist Ronel Elul argued in a recent
article in the Philadelphia Fed’s Business Review, profit and
desire for market share, rather than the affordable housing
provisions, prompted this late drive by the GSEs to buy
private-label securities. “They did not significantly contribute
to the development of risky lending practices in this sector,”
he concluded.
In the second half of 2007, the losses began to rise as the
GSEs began paying out credit guarantees on bad loans. By
summer 2008, the two had lost $14.2 billion over the year, and
their combined capital dropped to about 1 percent. In July
2008, Congress established a new overseer for the GSEs, this

time with the power of receivership, and Treasury Secretary
Henry Paulson ordered a classified review of their finances,
concluding that the GSEs no longer had enough capital to
cover their obligations. Because the size of their exposure was
so vast — $5.2 trillion in held or guaranteed mortgage debt,
almost half of the roughly $11 trillion in household mortgage
debt outstanding at the time — Paulson decided that only
a government takeover could prevent systemic contagion.
Treasury then executed the takeover in a surprise operation
over the first weekend of September.
Paulson’s concern that the GSEs no longer had enough
capital to cover their losses was borne out by the numbers.
Over the course of the bailout, the two suffered a capital
erosion of $232 billion, $181 billion of which was in losses from
credit guarantees. The bailout itself cost $187.5 billion.

Still Seeking a Solution
Under the conservatorship, the Treasury Department offered assistance in the form of stock warrants so that the GSEs could continue to meet their obligations. The enterprises were
placed under new leadership and de-listed from the NYSE, and
they had to give up their dividends and any future profits to the
government. Paulson and other senior administration officials
assumed this arrangement would be only a temporary solution,
and that Congress would legislate a permanent fix, whether it
was a full-scale privatization, a shrunken but well-defined government backstop role, or something in between.
That has yet to happen. To date, the GSEs have returned about $225.5 billion to the government,
more than the dollar cost of their bailouts. The Treasury
Department released a white paper in 2011 that laid out
options for winding down or reforming the enterprises, but
it did not kick off any sustained legislative action. To date,
no proposal for reform has been able to advance in Congress
beyond the committee level or gain support in both chambers. At the same time, as noted above, the GSEs back a
greater percentage of mortgages than ever before.
Frame, of the Atlanta Fed, says the current impasse over
resolving the GSEs’ fate still leaves in place significant risks.
“As it stands, the status quo offers benefits in terms of
significant control over mortgage credit standards, risk
pricing, and generally lower mortgage rates than would otherwise be the case,” he says. “But what it does do is generate
an enormous contingent liability. That is still the case with
Fannie and Freddie.”
EF

Readings
“Federal Subsidies and the Housing GSEs.” Congressional
Budget Office, May 2001.
Frame, W. Scott, Andreas Fuster, Joseph Tracy, and James Vickery. "The Rescue of Fannie Mae and Freddie Mac." Journal of Economic Perspectives, Spring 2015, vol. 29, no. 2, pp. 25-52.
Frame, W. Scott, Kristopher Gerardi, and Paul S. Willen. “The
Failure of Supervisory Stress Testing: Fannie Mae, Freddie Mac,
and OFHEO.” Federal Reserve Bank of Atlanta Working Paper
No. 2015-3, March 2015.
Hernandez-Murillo, Ruben, Andra C. Ghent, and Michael T.
Owyang. “Did Affordable Housing Legislation Contribute to the
Subprime Securities Boom?” Federal Reserve Bank of St. Louis
Working Paper No. 2012-005D, March 2012.
Passmore, Wayne, Shane M. Sherlund, and Gillian Burgess.
“The Effect of Housing Government-Sponsored Enterprises on
Mortgage Rates.” Real Estate Economics, September 2005, vol. 33,
no. 3, pp. 427-463.

BOOKREVIEW

Food Fights

EATING PEOPLE IS WRONG, AND
OTHER ESSAYS ON FAMINE, ITS
PAST, AND ITS FUTURE
BY CORMAC Ó GRÁDA
PRINCETON, N.J.: PRINCETON
UNIVERSITY PRESS, 2015, 248 PAGES
REVIEWED BY JESSIE ROMERO

The Third Horseman of the Apocalypse has visited
every corner of the world. Europe, Asia, Africa,
the Americas — all have suffered the ravages of
famine. Sometimes weather is to blame; 600,000 people
starved to death in France in 1709-1710 following a winter so
cold it became known as the “Great Frost.” Sometimes the
culprit is war, as when the Germans blockaded Leningrad
and residents resorted to eating leather and wallpaper paste
(and, according to some accounts, each other). In Eating
People is Wrong, and Other Essays on Famine, Its Past, and Its
Future, Cormac Ó Gráda, professor emeritus of economics
at University College Dublin, explores the political, cultural,
and economic forces that combine to create such crises.
In the title essay, Ó Gráda illustrates the “repulsiveness”
of famines by broaching a gruesome topic: cannibalism.
Ó Gráda finds that cannibalism, while rare, is more common
in famines than previously believed. But in some cases, he
says, reports of cannibalism are an attempt to demonize the
“other” or achieve political ends. He describes some stories
of cannibalism during famines in Ireland as colonialist discourse aimed at “indigenous savages.”
In an essay on the Great Bengal Famine of 1943-1944,
which killed more than 2 million people, Ó Gráda re-examines
the conclusions of Nobel laureate Amartya Sen on the famine’s causes. In an influential 1981 book, Sen blamed the famine not on an actual lack of food, but rather on large-scale
speculation and hoarding that drove up prices. In contrast,
Ó Gráda offers evidence of a substantial food shortage, precipitated by a poor rice harvest in 1942. For example, Indian
officials conducted multiple food drives, which involved
searching homes and businesses, but failed to turn up any
hoarded food. In addition, the 1943 harvest was good, which
should have prompted the release of hoarded supplies. But
prices remained high in early 1944, arguing against the existence of large-scale hoarding.
Ó Gráda attributes the famine to British and Indian officials who repeatedly denied the problem and were unwilling
to divert ships or food from the war effort. The problem in
Bengal in 1943, he writes, was “the failure of the Imperial
power to make good a harvest shortfall that would have been
manageable in peacetime. … The famine was the product of
the wartime priorities of the ruling colonial elite.”

The role of markets in famines is a contentious issue. On
the one hand is the classical view that markets both prevent
and remedy famines; in The Wealth of Nations, Adam Smith
wrote that all famines in Europe had been the result of “the
violence of government attempting, by improper means, to
remedy the inconveniences of a dearth.” On the other hand,
a more populist tradition argues that markets exacerbate
famines by diverting food away from the poor to the rich.
At the outset of the third essay in the book, it seems that
Ó Gráda hopes to help resolve this debate by studying how
markets functioned during four famines: France in 1693-1694 and 1709-1710, Finland in 1868, and Ireland in 1846-1852
(the Irish potato famine). In a technical section that most
non-experts will likely find difficult to follow, he analyzes
price data and concludes that, rather than markets helping
or hurting, these four famines were the result of disastrous
crop shortfalls and inadequate government assistance for the
poor. But Ó Gráda does not go on to explain how these findings relate to a broader understanding of the role of markets
in famines, leaving the promise of the essay unfulfilled.
The responsibility of government is a central theme in
Ó Gráda’s chapter on the Great Leap Famine, which killed
tens of millions of people in China between 1959 and 1961.
Mao Zedong’s “Great Leap Forward,” an attempt to forcibly
industrialize the country, hobbled agricultural production
and left millions of people in the countryside without
enough food.
Ó Gráda’s essay on the Great Leap Famine is mostly an
analysis of three recent books on the famine, and as such
lacks a clear conclusion. Still, he raises a number of interesting questions, including Mao’s culpability, the role of local
officials, and — one of the biggest questions surrounding the
famine — how many people actually died during it.
Determining an accurate excess mortality rate (the number
of people who died beyond the natural death rate) has been
complicated by poor record keeping and limited access to
what records there are. As a result, Ó Gráda explains, estimates vary widely and often reflect political ideology. Modern
supporters of Mao claim only 2 million to 3 million people died;
critics contend as many as 60 million died. The truth is probably somewhere in the middle; demographers have estimated
excess mortality of between 18 million and 32.5 million — still
making the Great Leap famine the most deadly in history.
In recent decades, famines have become relatively rare
and small by historical standards, the result of productivity increases in agriculture, improved communication and
transportation networks, and numerous international aid
agencies. Still, while extraordinary famines are on the wane,
steady-state malnutrition remains a serious problem. As
Ó Gráda warns, “ ‘making famine history’ is not the same
thing as ‘making hunger history.’ ”
EF
DISTRICTDIGEST

Economic Trends Across the Region

State Labor Markets: What Can Data Tell (or Not Tell) Us?
BY SONYA RAVINDRANATH WADDELL

From December 2007 through January 2010, the Fifth
Federal Reserve District lost 766,000 net jobs —
more than 5 percent of total employment in the
District. Over the same period, the regional unemployment
rate jumped from 4.4 percent to 9.4 percent. During the
recovery, employment grew, and by March 2015, payroll
employment in the region exceeded its pre-recession level by
179,100 jobs while the unemployment rate had fallen back to
5.5 percent (see chart).
As the national and regional economies have undergone the fluctuations of the last eight years, understanding
state and local labor market conditions has been extremely
important to academic economists, to practitioners in state
and local government, and to other organizations involved in
local economic development. Employment statistics at the
national level mask significant movement at the state level,
and even state numbers mask activity at the metropolitan
area or county level. In addition, information availability at
the state and local level is limited — for example, we don’t
have timely data on gross domestic product or consumer
spending — so employment numbers are used even more
broadly to understand local economic conditions and the
breakdown of industry in a region.
There are many sources of labor market information,
most of which are maintained by the Department of Labor’s
Bureau of Labor Statistics (BLS). For example, the information in the opening paragraph above on the number of net
jobs lost in the Fifth District during the recession came from
the BLS’s Current Employment Statistics (CES) program.
While the national unemployment rate is developed through the BLS's Current Population Survey (CPS) program, state and
local unemployment rates come from the BLS’s Local Area
Unemployment Statistics (LAUS) program.

[Chart: Fifth District Labor Market Indicators — Unemployment Rate (percent, left axis), Payroll Employment (thousands, right axis), and recessions. SOURCE: Bureau of Labor Statistics/Haver Analytics]

This article will focus on understanding the origination
and use of three critical sources of labor market information
at the state and local level: the CES data, the LAUS data, and
the Quarterly Census of Employment and Wages (QCEW)
data. It will shed light on questions such as what economists
mean when they refer to “establishment” or “household”
data and what role revisions play in developing the data.
Finally, how does one interpret the data and what can it tell
us about the Fifth District economy?

The Quarterly Census of Employment
and Wages (QCEW)
The QCEW is the most comprehensive source of information on employment, hours, and wages by industry in
the United States. The QCEW program cooperates with
the various state labor market information agencies to
collect data from the roughly 9 million U.S. business establishments covered by unemployment insurance (UI) on a
quarterly basis (as well as federal agencies subject to the
Unemployment Compensation for Federal Employees program). The data represent about 97 percent of all wage and
salary civilian employment in the country and comprise a
complete set of monthly employment and quarterly wage
information at the national, state, metropolitan area, and
county levels (see map).
In the decades since the national UI system was instituted
in 1938, coverage has become quite broad. Today, all workers
are covered except members of the armed forces, the self-employed, proprietors, domestic workers, unpaid family
workers, and railroad workers covered by the railroad insurance system. UI coverage is largely consistent across states,
although there are some differences; for example, in a number
of states, certain types of nonprofit employers, such as religious
organizations, are given a choice of coverage or exclusion.
State agencies collect data on employment and wages from
businesses and submit the data to the BLS each quarter.
Given that the QCEW program provides such a comprehensive view into employment, why do we need any other
source of employment information? The first reason is timeliness. The QCEW data is usually released about six months
after the end of the period in question, and those who follow
the trajectory of the national and state economies depend
upon getting data as soon as possible. Second, as noted
above, states can differ in their requirements for UI coverage, which can create some inconsistency when comparing
QCEW data across states. Third, QCEW provides no
information on the number of unemployed in an economy.
To address these limitations of the QCEW data, economists
rely on survey data such as that collected through the CES
and the CPS programs.

The Establishment Survey and the CES
The modern surveys and methods for collecting information
on employment and unemployment in the United States
developed throughout the 20th century. Before 1915, only a
few states produced employment statistics. In 1915, the BLS
entered into a cooperative agreement with New York and
Wisconsin whereby sample data was collected from employers by a state agency and used jointly with the BLS to prepare
state and national series. The Great Depression prompted
increased interest in employment data and by 1933, the federal
government published employment, average hourly earnings,
and average weekly hours for total manufacturing, 90 manufacturing industries, and 14 nonmanufacturing categories. By
1940, estimates of total nonfarm employment for all 48 states
and the District of Columbia were available. Since 1949, the
CES program has been a joint federal-state program that
provides employment, hours, and earnings information by
industry on a national, state, and metropolitan area basis.
The CES sample covers employment, hours, and earnings from about 143,000 businesses and government agencies, which, in turn, cover approximately 588,000 individual
worksites — about one-third of all nonfarm payroll employees
in the 50 states and Washington, D.C. The sample is drawn
from the QCEW database (that is, from the 9 million establishments that are covered by unemployment insurance).
The CES data is frequently cited. On the first Friday of
every month (unless the Friday is a holiday), the BLS releases
information on the number of jobs added to the U.S. economy the previous month; for example, on July 2, 2015, the
BLS announced that the United States added 223,000 jobs
in June. Later in the month, the state data is released. For
example, on July 21 the BLS announced that Virginia added
13,400 jobs in June.
Although modeling techniques are used to develop the
CES results for some locality and industry combinations
that do not have a large enough sample, at the state level
estimates are based on the same establishment reports as the
national estimates, using direct sample-based estimation.
The size of the samples for Fifth District states can be seen in the table; it is important to remember that because state and area samples are smaller, the error component associated with these estimates is bigger than that for the nation.

[Map: Average Weekly Wage Across the Fifth District, by county. Wage (Dollars): 531-637, 638-698, 699-759, 760-869, 870-1,696. SOURCE: QCEW (December 2014)]

Number of Unemployment Insurance Accounts and Establishments in the CES Sample

                 UI Accounts (% of U.S.)   Establishments (% of U.S.)
U.S.             143,179 (100)             587,531 (100)
DC               944 (0.66)                1,441 (0.25)
MD               2,069 (1.45)              7,931 (1.35)
NC               3,778 (2.63)              22,309 (3.80)
SC               2,195 (1.53)              9,530 (1.62)
VA               2,669 (1.86)              13,398 (2.28)
WV               1,635 (1.14)              5,501 (0.94)
Fifth District   13,290 (9.28)             60,110 (10.23)

SOURCE: Bureau of Labor Statistics, as of March 17, 2015

Major Sources of State and Local Labor Market Information

Quarterly Census of Employment and Wages (QCEW)
Major Data Pieces: Employment, Hours, Wages
Geography Available: County, MSA, state, U.S.
Sample: Data collected from UI records: 97% of all wage and salary civilian employment

Current Employment Statistics (CES)
Major Data Pieces: Payroll employment
Geography Available: MSA, state, U.S.
Sample: Approximately 143,000 businesses and 588,000 individual worksites (about ⅓ of all nonfarm payroll employees)

Current Population Survey (CPS)
Major Data Pieces: Unemployment rate, Labor Force Participation rate
Geography Available: U.S.
Sample: Approximately 60,000 U.S. households

Local Area Unemployment Statistics (LAUS)
Major Data Pieces: Unemployment rate, Labor Force Participation rate
Geography Available: County, MSA, state
Sample: CPS sample plus model/other data (signal-plus-noise for state data; handbook method for substate areas)

SOURCE: Bureau of Labor Statistics/Author's Analysis

The Household Survey and the LAUS
In the beginning of July, the public found out not only that the United States had added 223,000 jobs in June, but also that the unemployment rate fell slightly to 5.3 percent — in other words, 5.3 percent of the total labor force was defined as unemployed by the household survey. The figure of 223,000 jobs came out of the CES program, discussed above. The unemployment rate, however, is developed through an entirely different survey; while the payroll numbers come from a survey of establishments, the labor force and unemployment rate come out of a survey of about 60,000 households across the United States.
Precise definitions, or at least more specific concepts, of labor force, employment, and unemployment were developed during the Depression and throughout the 1930s. The mass unemployment in the early 1930s created the need to directly measure the number of jobless people, and widely conflicting estimates based on a variety of indirect techniques began to appear. In 1940, the Works Progress Administration used the concepts developed in the late 1930s for a national sample survey of households, called the Monthly Report of Unemployment. The household survey was transferred to the Census Bureau in 1942 and in 1948, the name was changed to the Current Population Survey (CPS). Although the Census Bureau continues to collect the data, responsibility for analyzing and publishing the CPS labor force data was transferred to the BLS in 1959.
The Department of Labor began developing unemployment estimates at the subnational level during World War II in order to identify areas with inadequate labor supply, material shortages, or transportation difficulties. After the
war, the emphasis was to identify areas of labor surplus, and
a program was established to classify areas in accordance
with severity of unemployment. In 1950, the Department
of Labor published a handbook entitled Techniques for
Estimating Unemployment in order to produce comparable
estimates of the unemployment rate for all states. This led to
the formulation of the “handbook method” in the late 1950s:
a series of computational steps designed to produce local
employment and unemployment estimates using available
data at a much lower cost than a direct survey.
In 1972, the BLS assumed technical responsibility for
the state and local program, and in 1973 a new system for
developing labor force estimates was introduced, combining
the handbook method with the concepts, definitions, and
estimation controls from the CPS. Beginning in 1989, the
handbook estimation was discontinued for states in favor
of time series statistical modeling, although estimates for
most substate areas continue to be based on the handbook
method. Until 1996, for a handful of the largest states,
labor force estimates were calculated directly from the CPS
data. Starting in 1996, however, labor force data have been
estimated for all states using the time series approach mentioned in this section.
At the state and local level, the 60,000-household sample is not large or representative enough to use the straight
sample data. In January 2015, for example, the CPS sample
contained 3,347 individuals (in 1,567 households) from North
Carolina — only about 0.04 percent of the total state population. In West Virginia, the CPS sample in January 2015
contained only 2,839 individuals from 1,156 households.
Variance estimates of employment and unemployment in
the household survey are low enough to be acceptable for the
nation, but at the state level, the small sample size results in
the data being much more variable.
To address the high variability, the BLS develops estimates of unemployment using signal extraction techniques
developed in the time series literature. The model takes
advantage of the time series of sample estimates in order to
reduce the variance by pooling data over time for a given area.
In other words, the model uses past data to help reduce the variance associated with current estimates. In addition, the model incorporates additional data series. To estimate employment, the model uses CES data; to estimate unemployment, the model includes unemployment claims as an input. (The labor force is then the sum of employment and unemployment.) The model is referred to as a signal-plus-noise model because it postulates that the observed CPS estimate consists of a true, but unobserved, labor force value (the signal) plus noise that reflects the error arising from taking a sample of the adult population rather than the full population.
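To make the signal-plus-noise idea concrete, the minimal sketch below simulates a noisy survey series and smooths it with a basic local-level Kalman filter. This is an illustration only, not the BLS's actual LAUS model (which also brings in CES employment and UI claims); all of the values and variances are assumed for the example.

    # Toy signal-plus-noise illustration: observed survey = true signal + sampling
    # noise, where the signal follows a slow random walk. A basic Kalman filter
    # pools past observations to reduce the variance of the current estimate.
    # This is not the BLS's actual model; all numbers here are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    T = 60
    true_rate = np.empty(T)              # unobserved "signal": the true unemployment rate
    true_rate[0] = 6.0
    for t in range(1, T):
        true_rate[t] = true_rate[t - 1] + rng.normal(0, 0.05)   # slow-moving signal

    survey = true_rate + rng.normal(0, 0.40, size=T)            # noisy monthly survey estimate

    # Kalman filter for a local-level model: state x_t = x_{t-1} + w_t, observation y_t = x_t + v_t
    q, r = 0.05 ** 2, 0.40 ** 2          # signal and sampling-noise variances (assumed known here)
    x_hat, p = survey[0], r              # initialize with the first observation
    smoothed = np.empty(T)
    for t in range(T):
        p = p + q                                  # predict: uncertainty grows by the signal variance
        k = p / (p + r)                            # Kalman gain: weight placed on the new observation
        x_hat = x_hat + k * (survey[t] - x_hat)    # update: blend prediction with the new survey value
        p = (1 - k) * p
        smoothed[t] = x_hat

    print("std. dev. of raw survey error:      ", round(np.std(survey - true_rate), 3))
    print("std. dev. of filtered estimate error:", round(np.std(smoothed - true_rate), 3))

Run on these simulated data, the filtered series tracks the unobserved signal more closely than the raw survey does, which is the sense in which pooling data over time reduces variance.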
To calculate labor force indicators at the local labor market level, the handbook method is used. As mentioned earlier, this approach is a building block approach that utilizes
data from the CPS, the CES program, state UI systems, and
the American Community Survey (recently changed from
the decennial census) to create estimates that are adjusted
to statewide measures of employment and unemployment.
Below the labor market area level (i.e., for counties and cities/towns), estimates are prepared using disaggregation techniques based on inputs from the decennial census, annual
population estimates, and current UI data.

Data Revisions and Interpreting Differences
There is another important part of CES data development:
the annual benchmarking process. Because the CES data is
a sample, it does not account for the opening and closing of
firms during the year. When the economy is growing and
new businesses are opening, the CES data is likely to underestimate employment growth. On the other hand, in periods
of decline, when firms are closing their doors, the CES data
is likely to overestimate employment. The CES program
uses non-sampling methods to account for this bias, but the
BLS also uses the QCEW data to adjust the CES data (called
“benchmarking”) — a revision that can have a substantial
effect on the employment numbers. For example, with the
benchmark that came out on March 17, 2015, the average
employment in the Fifth District in 2014 was revised up by
9,300 jobs.
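A rough sketch of the benchmarking arithmetic appears below, with made-up numbers (chosen so the revision happens to echo the 9,300-job Fifth District example above); the BLS's actual procedure works at a detailed industry and area level and also reworks the months that follow the benchmark.

    # Illustrative benchmarking: replace the sample-based CES level for the
    # benchmark month with the near-universe QCEW level, then carry the
    # resulting ratio forward to later sample-based months. Numbers are
    # hypothetical, chosen only to echo the 9,300-job example in the text.

    ces_march = 3_750_000       # hypothetical CES (sample-based) employment, benchmark month
    qcew_march = 3_759_300      # hypothetical QCEW (UI-records) employment, same month

    benchmark_factor = qcew_march / ces_march
    print(f"Benchmark revision for March: {qcew_march - ces_march:+,} jobs "
          f"(factor {benchmark_factor:.4f})")

    # Later months estimated from the sample are scaled by the same factor.
    ces_later_months = [3_760_000, 3_772_000, 3_780_500]
    revised = [round(x * benchmark_factor) for x in ces_later_months]
    print("Revised levels:", [f"{x:,}" for x in revised])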
The LAUS estimates are also revised. Monthly, the BLS
imposes a process to ensure that substate employment/
unemployment estimates add up to the state estimates and
state totals add up to the national total. Annual revisions
are also made at the beginning of each calendar year using
statistical techniques that are built into the model process
and incorporating changes to the inputs (revision to the CES

data, revision to unemployment insurance claims counts, new population controls, etc.).
It is not uncommon at the state level to see contradictions between the CES data and the LAUS data, particularly before the annual revisions occur. That's because the data come from different surveys. Furthermore, even when the data trend together (see table), they show slightly different numbers for employment. Most of the time, what analysts are interested in are the trends, however, and particularly the QCEW and the CES data tend to trend together.

[Chart: Fifth District Employment Growth, year-over-year percentage change, 1991-2014, for the CES, QCEW, and LAUS series. SOURCE: Bureau of Labor Statistics/Haver Analytics]

Where Else Do Labor Market Data Come From?
Different agencies repackage the QCEW data and combine them with other data to offer the user alternative ways of analyzing local area labor markets. For example, the Census Bureau produces Longitudinal Employer-Household Dynamics (LEHD) data using UI earnings data, QCEW data, and censuses and surveys. Firm and worker information are used to create data on job-level quarterly earnings, on where workers live and work, and on firm characteristics. Some of these data are available only to qualified researchers on approved projects, but the LEHD program also creates public use data sets and online tools.
States themselves often provide further information about state and local labor markets. The Virginia Employment Commission (VEC), for example, provides data on Virginia labor markets through its Labor Market Information (LMI) website. Some of the data, such as unemployment rates, are just repackaged BLS data. Some other data, such as characteristics of the unemployed who have access to UI benefits, include pieces that are available for states through the BLS as well as pieces or geographies unavailable anywhere else. Finally, some of the data, such as characteristics of new job applicants or top employers by county, are available only through the VEC's LMI tools. Different states have different amounts of data available to the public. For the most part, outside of the QCEW, CES, and LAUS data, the information the public can get and at what level of geography varies considerably by state.

Interpreting the Data for the Fifth District
Although there can be short-run discrepancies between the CES and the LAUS data, over the long term they usually tell the same stories. In the Fifth District, the main story they tell is that the states in the southern part of the District experienced a more severe economic downturn but have recovered more quickly and more completely than the northern part of the District. Combined, North Carolina and South Carolina lost almost 60 percent of the 828,700 jobs lost in the Fifth District from December 2007 through February 2010, but both of them regained the losses of the recession by the fall of 2014. As of June, North Carolina was 70,200 jobs above the pre-recession level and South Carolina was 53,700 jobs above. On the other hand, although employment declined less steeply (as a percentage of total employment) in Virginia, Maryland, and the District of Columbia, they have also not grown as strongly during the recovery, particularly in the last few years.
A major factor in the slower recovery in the latter states
has been employment in professional and business services.
Using the CES data, we can drill down by industry; in
Virginia, for example, the professional and business services
industry was a driver of growth coming out of past recessions, but the industry expanded more slowly coming out of
this recession. More recently, that sector of the state has suffered losses. One of the contributors to the sluggish growth
in the northern part of the region, and particularly in professional and business services, is the decline in and uncertainty
surrounding federal government contract spending, which
plays a large role in the economies of Maryland and Virginia.
West Virginia also survived the recession better than
North Carolina and South Carolina, but that state’s labor
market has also been struggling in the last few years. The
decline in energy prices and, more particularly, the contraction in the coal industry has hit the state hard. Of the 8,800
net jobs lost from June 2014 through June 2015 in West
Virginia, 2,900 were in mining and logging.
This story plays out in the LAUS data, as well. For example,
in December 2009, South Carolina, with an unemployment rate of 11.7 percent, had one of the highest unemployment rates
in the country. By June 2015, the rate had fallen to 6.6 percent.

Conclusion
The Fifth District economy has expanded slowly but steadily
in recent years — a trend best evidenced by the employment
data provided through the Bureau of Labor Statistics. In
addition, much of the job expansion and the sharpest decline
in unemployment has occurred in the southern part of the
District, namely in North Carolina and South Carolina. We
know this primarily because of the data the BLS provides to
the public. Economists, analysts, and various researchers use
this information not only to judge current economic activity,
but also to better understand the structure of local economies and to analyze which other parts of the economy affect
and are affected by labor markets.
EF
State Data, Q3:14

                                                       DC        MD        NC        SC        VA        WV
Nonfarm Employment (000s)                           752.0   2,621.5   4,150.7   1,949.9   3,777.0     761.9
  Q/Q Percent Change                                  0.0       0.1       0.4       0.3       0.1      -0.5
  Y/Y Percent Change                                  0.3       1.0       2.2       2.3       0.5      -0.5
Manufacturing Employment (000s)                       1.0     103.3     449.0     230.3     231.8      47.7
  Q/Q Percent Change                                  0.0      -0.3       0.4       0.1       0.0      -0.6
  Y/Y Percent Change                                  0.0      -2.1       1.4       2.6       0.5      -1.8
Professional/Business Services Employment (000s)    157.9     424.8     576.1     255.0     679.8      67.2
  Q/Q Percent Change                                  0.5       0.2       1.2       0.1       0.0       1.4
  Y/Y Percent Change                                  0.9       1.9       4.9       4.8       0.3       3.4
Government Employment (000s)                        233.9     503.8     716.5     357.2     707.3     152.7
  Q/Q Percent Change                                 -0.5       0.0       0.0       0.4       0.3      -2.2
  Y/Y Percent Change                                 -1.9       0.1       0.2       1.0       0.0      -1.3
Civilian Labor Force (000s)                         379.2   3,103.7   4,627.5   2,197.0   4,236.4     784.7
  Q/Q Percent Change                                  1.4       0.1      -0.1       0.8      -0.1      -0.8
  Y/Y Percent Change                                  1.8      -0.3      -0.6       1.1       0.1      -1.4
Unemployment Rate (%)                                 7.8       5.7       6.0       6.5       5.0       6.4
  Q2:14                                               7.8       5.8       6.3       6.2       5.2       6.7
  Q3:13                                               8.5       6.4       7.5       7.3       5.5       6.5
Real Personal Income ($Bil)                          46.3     303.2     363.7     164.6     380.4      62.5
  Q/Q Percent Change                                  0.2       0.4       1.0       0.5       0.3       0.4
  Y/Y Percent Change                                  1.8       2.0       2.4       2.7       1.1       1.7
Building Permits                                    2,095     5,209    14,278     7,027     7,346       651
  Q/Q Percent Change                                996.9      32.7      15.2       5.0      -5.0      26.7
  Y/Y Percent Change                                 93.6       4.4      18.8      12.4      -8.9       5.5
House Price Index (1980=100)                        703.1     427.0     314.6     317.5     413.3     225.2
  Q/Q Percent Change                                  0.8       0.8       1.2       0.9       0.5       1.2
  Y/Y Percent Change                                  8.0       3.5       3.5       3.9       3.3       2.2

[Charts, Third Quarter 2003 - Third Quarter 2014: Nonfarm Employment, Unemployment Rate, Real Personal Income, Building Permits, and House Prices (Fifth District and United States, change from prior year); Nonfarm Employment and Unemployment Rate for major metro areas (Charlotte, Baltimore, Washington); FRB—Richmond Manufacturing Composite Index; FRB—Richmond Services Revenues Index.]

NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment Rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building Permits: U.S. Census Bureau, http://www.census.gov.
House Prices: Federal Housing Finance Agency, http://www.fhfa.gov.

For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org
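As a small illustration of the index arithmetic described in the notes above, the sketch below computes diffusion indexes from made-up survey responses; the equal weighting of the composite is an assumption for the example, not the Bank's published weights.

    # Toy diffusion-index arithmetic, as described in the notes above:
    # index = percent of firms reporting an increase minus percent reporting
    # a decrease. Responses and the composite weights below are made up.

    def diffusion_index(increase: int, no_change: int, decrease: int) -> float:
        total = increase + no_change + decrease
        return 100 * (increase - decrease) / total

    shipments  = diffusion_index(increase=45, no_change=40, decrease=15)   # 30.0
    new_orders = diffusion_index(increase=38, no_change=44, decrease=18)   # 20.0
    employment = diffusion_index(increase=30, no_change=55, decrease=15)   # 15.0

    # Hypothetical equal weighting; the published composite uses the Bank's own weights.
    composite = (shipments + new_orders + employment) / 3
    print(round(shipments, 1), round(new_orders, 1), round(employment, 1), round(composite, 1))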

Metropolitan Area Data, Q3:14

                              Washington, DC    Baltimore, MD    Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)            2,536.3          1,347.2          103.1
  Q/Q Percent Change                    -0.2             -0.3            0.1
  Y/Y Percent Change                     0.5              1.2            0.3
Unemployment Rate (%)                    5.1              6.1            6.0
  Q2:14                                  5.1              6.2            6.2
  Q3:13                                  5.4              6.8            6.6
Building Permits                       7,223            2,189            282
  Q/Q Percent Change                    35.3             26.8           36.2
  Y/Y Percent Change                    25.6             -8.9            8.9

                              Asheville, NC    Charlotte, NC    Durham, NC
Nonfarm Employment (000s)              176.9          1,060.8         290.8
  Q/Q Percent Change                    -0.4             -0.5          -0.2
  Y/Y Percent Change                     1.5              3.8           2.9
Unemployment Rate (%)                    4.9              6.0           5.0
  Q2:14                                  4.9              6.2           5.0
  Q3:13                                  6.0              7.6           5.9
Building Permits                       373.0            5,176           884
  Q/Q Percent Change                    -6.0             44.1           8.3
  Y/Y Percent Change                   -10.8             77.8         -33.5

                              Greensboro-High Point, NC    Raleigh, NC    Wilmington, NC
Nonfarm Employment (000s)                          346.2          562.5          116.6
  Q/Q Percent Change                                -0.8            0.9            0.5
  Y/Y Percent Change                                 0.9            3.6            3.4
Unemployment Rate (%)                                6.5            4.9            6.0
  Q2:14                                              6.7            5.0            6.2
  Q3:13                                              8.1            6.0            7.5
Building Permits                                     586          3,199            640
  Q/Q Percent Change                                 5.4            8.6            1.7
  Y/Y Percent Change                               -12.0           37.2          -30.3

                              Winston-Salem, NC    Charleston, SC    Columbia, SC
Nonfarm Employment (000s)                  251.9             322.9           372.1
  Q/Q Percent Change                        -0.6              -0.2            -0.4
  Y/Y Percent Change                         1.6               3.0             2.6
Unemployment Rate (%)                        5.9               5.7             6.1
  Q2:14                                      6.1               5.3             5.6
  Q3:13                                      7.4               6.2             6.6
Building Permits                             688             1,264           1,330
  Q/Q Percent Change                        25.5              -1.4            29.4
  Y/Y Percent Change                        14.5              -1.8            46.2

                              Greenville, SC    Richmond, VA    Roanoke, VA
Nonfarm Employment (000s)              387.3           631.5          160.0
  Q/Q Percent Change                    -0.6            -0.2           -0.5
  Y/Y Percent Change                     1.9             1.5            0.7
Unemployment Rate (%)                    6.0             5.4            5.2
  Q2:14                                  5.5             5.6            5.3
  Q3:13                                  6.6             5.9            5.7
Building Permits                       1,044           1,252            116
  Q/Q Percent Change                   -28.3           -12.4          -16.5
  Y/Y Percent Change                    30.2           -22.7            6.4

                              Virginia Beach-Norfolk, VA    Charleston, WV    Huntington, WV
Nonfarm Employment (000s)                           759.0             123.9             140.0
  Q/Q Percent Change                                  0.2              -0.5              -0.8
  Y/Y Percent Change                                  0.1              -0.2               0.5
Unemployment Rate (%)                                 5.6               6.3               6.4
  Q2:14                                               5.8               6.5               6.7
  Q3:13                                               6.1               6.2               7.3
Building Permits                                    1,242                 6                34
  Q/Q Percent Change                                -22.2              20.0              -8.1
  Y/Y Percent Change                                -50.3             -88.2              78.9
For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org
OPINION

Keeping Monetary Policy Constrained
BY JOHN A. WEINBERG

Some have proposed that the Fed follow a binding,
explicit rule — a mathematical formula — to determine monetary policy. Such a prescription has even
found its way into legislation introduced in Congress last
year, with proponents arguing that it would enhance monetary policy transparency and accountability. Is this a good
idea?
An early example of such a rule is one advanced by Milton
Friedman in 1960, his “k-percent” rule, under which the Fed
would choose a measure of the money supply and increase
the money supply by a constant percentage every year.
Several decades later, in 1993, Stanford University economist
John Taylor proposed a somewhat more complex type of
monetary policy rule, known as the Taylor rule. This type of
rule more closely reflects the operations of modern central
banks, which tend to conduct policy by setting a target for a
short-term interest rate.
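For concreteness, the canonical 1993 version of the rule sets the federal funds rate equal to a 2 percent equilibrium real rate plus inflation, plus half of the inflation gap and half of the output gap. A minimal sketch of that standard textbook form appears below; the sample inputs are hypothetical.

    # The 1993 Taylor rule in its standard textbook form:
    #   i = r* + inflation + 0.5*(inflation - pi*) + 0.5*(output gap)
    # where r* is the equilibrium real rate and pi* the inflation target
    # (both 2 percent in Taylor's original formulation). Inputs are hypothetical.

    def taylor_rule(inflation: float, output_gap: float,
                    r_star: float = 2.0, pi_star: float = 2.0) -> float:
        """Prescribed nominal federal funds rate, in percent."""
        return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

    print(taylor_rule(inflation=2.0, output_gap=0.0))    # on target: prescribes 4.0 percent
    print(taylor_rule(inflation=1.0, output_gap=-4.0))   # weak economy: prescribes 0.5 percent
    print(taylor_rule(inflation=0.5, output_gap=-8.0))   # deep slump: prescribes -2.25 percent

The last case illustrates the situation described later in this piece, when most Taylor-type rules began calling for negative interest rates after the 2007-2009 recession.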
One purported benefit of adhering strictly to a rule is that
it would make the Fed’s actions more predictable, eliminating an unnecessary source of uncertainty in the economy and
financial markets. Research has shown, for instance, that the
uncertainty created by highly variable inflation can hurt the
performance of the economy.
Committing to a fixed rule is also sometimes seen as a
response to the so-called time consistency problem discussed by Edward Prescott and Finn Kydland, among others. A central bank might always perceive that a short-run
gain in real economic activity can be had by producing a bit
more inflation than the public expects. But acting on this
temptation ultimately only leads to ever-higher inflation.
Yet when assessing the concept of a monetary policy rule,
it is important to ask what we are comparing it to. During
the 1960s and 1970s, Fed policy was indeed highly activist
and discretionary. This period was marked by policymakers acting on a perceived trade-off between inflation and
unemployment. The resulting economic performance was
far from desirable, with volatile inflation that ratcheted up
in each cycle.
For several decades now, as many observers have noted,
the FOMC has instead operated as if it were pursuing an
explicit inflation target. In this sense, the behavior of the
Fed has already been broadly rule-like for some time, albeit
with some exceptions. In fact, the Taylor rule began as an
effort — a successful effort — to show that Fed monetary
policy had been following a path described by that rule. Fed
policy arguably continued to follow such a path until the
2007-2009 recession, when most Taylor-type rules began
calling for negative interest rates.
In January 2012, the Fed’s policy of constrained discretion
again took a step in the direction of being rule-like. At that
time, the Fed announced an explicit long-run inflation target
of 2 percent. Since then, the Fed has continued to commit
publicly to achieving this target and to addressing substantial
departures from it with monetary action, if need be.
Why not take that final step, then, and adopt a formal
rule such as Friedman’s or Taylor’s — and follow it strictly
all the time?
I think the main answer is that while a monetary policy
rule could be useful during normal times, we don’t always live
in normal times. In fact, if we think of “normal” as “average,”
then times are almost never normal. This might not matter if
the economy’s abnormal times always looked like its abnormal times of the past; in that case, we could write the rule
to deal with them, too. As we know well from the financial
crisis and its aftermath, however, this is not the case. As Leo
Tolstoy wrote of unhappy families, each unhappy economic
period is unhappy in its own way.
Thus, it is unlikely anyone could have constructed an
autopilot prior to 2007 to steer the Fed through the ensuing
recession and weak recovery. Nor is it plausible to think
that monetary policy rules in existence today are necessarily
sufficient to get us through the next crisis, whatever it may
turn out to be. FOMC members will then need to draw upon
lessons of history and theory and upon their own judgment.
Further, a formula like the Taylor rule embodies assumptions about underlying characteristics of the economy.
Concepts like the potential rate of output growth or the natural rate of unemployment could affect one’s view of what
the exact rule is that the central bank should follow. These
are theoretical concepts — they can have a precise meaning
in an economic model but are not directly observable in the
data. The process of discussing policy within the FOMC
can in part revolve around the sorting out of different views
about these “latent variables.”
Monetary policy rules can serve a useful function within
a regime of constrained discretion by helping the Fed communicate what it is doing and intends to do. But for the Fed
to prescribe a policy rule for itself and to commit always to
follow it, or for Congress to impose such a rule, could actually reduce rather than increase the Fed’s credibility with
markets — because market participants understand that a
commitment never to vary from a monetary policy rule is a
commitment that neither Congress nor the Fed could realistically keep.
EF
John A. Weinberg is senior vice president and special
advisor to the president at the Federal Reserve Bank
of Richmond.

NEXTISSUE
Private Debt

In the run-up to the Great Recession, private debt as a share of
GDP reached historic levels. Debt serves many useful economic
functions, but research suggests that excessive debt can be
harmful for the broader economy — for example, the most
severe financial crises and slowest recoveries of the 20th and
21st centuries were preceded by large credit booms. Why do
households and firms choose debt financing, and why might it be
damaging during economic downturns?

Evaluating Medical Treatments

It’s hard to put a price on an extra day or year of life. But many
health care experts believe considering cost-effectiveness is
crucial to lowering health care costs and improving patient care.

Smart Grid

Technology to enhance monitoring and communication across
the electrical grid, called the “smart grid,” enables utilities to
charge prices that change as the cost of producing electricity
fluctuates. Some economists believe such dynamic pricing could
help reduce demand during peak usage times, leading to economic
and environmental gains. But can the technology live up to
expectations?

Jargon Alert
The “real” interest rate is the inflation-adjusted cost of borrowing and return
on investment. It has taken on a broader
significance in monetary policy, as some
economists have argued that central banks
should push real interest rates into negative
territory when nominal interest rates are
close to zero yet the economy remains weak.

Economic History
Development of Hilton Head Island
transformed one of the poorest and most
isolated corners of South Carolina into a
haven for wealthy people from all over
the world, creating a popular model for
resort and residential development. How
did it happen, and who benefited from this
economic miracle?

Interview
James Poterba of the Massachusetts
Institute of Technology on the shifting
financial sands of retirement, the debate
over the home mortgage interest deduction,
and how MIT’s economics department
reached the top ranks after World War II.

Visit us online:
www.richmondfed.org
•	To view each issue’s articles
and Web-exclusive content
• To view related Web links of
additional readings and
references
• To subscribe to our magazine
•	To request an email alert of
our online issue postings

Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261

Change Service Requested

To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.

The Richmond Fed’s 2014 Annual Report features the essay

Living Wills: A Tool for Curbing “Too Big to Fail”
With contingency planning, regulators can make the financial system more stable — and avoid future bailouts

In the essay, Richmond Fed economist Arantxa Jarque and senior editor David A. Price explore an innovation of the Dodd-Frank Act of 2010, which requires the largest and most complex financial institutions to create resolution plans to follow if the institutions fall into severe financial distress. In these plans, or “living wills,” the institutions must give regulators a road map for resolving them via the bankruptcy process — without disrupting the financial system or resorting to public bailouts. Jarque and Price argue that living wills are a tool that regulators can use to curb the “too big to fail” problem by decreasing the odds that policymakers will feel compelled to rescue large, complex firms for fear that their failure would damage the economy.

In addition to the essay and the Bank’s financial statements, the Annual Report includes a summary of the region’s economic performance in 2014 and an update on activities by the Fed and the payments industry to improve the U.S. payments system.

The Annual Report is available on the Bank’s website at www.richmondfed.org/publications/research/annual_report/

Fifth Federal Reserve District Offices
RICHMOND: 701 East Byrd Street, Richmond, Virginia 23219, (804) 697-8000
BALTIMORE: 502 South Sharp Street, Baltimore, Maryland 21201, (410) 576-3300
CHARLOTTE: 530 East Trade Street, Charlotte, North Carolina 28202, (704) 358-2100

www.richmondfed.org