
THE FEDERAL RESERVE BANK OF RICHMOND
VOLUME 14, NUMBER 2
SECOND QUARTER 2010

COVER STORY

Do Deficits Matter? And If So, How?
As fiscal imbalances increase, economists debate their effect
on the macroeconomy
The effects of budget deficits are gaining renewed attention
because of current shortfalls and large projected expenditures on
entitlement programs. Economists seem to agree that deficits are
not inherently inflationary. But their effects on interest rates and
other economic variables are less certain.

FEATURES

Markets for Safety: Product recalls yield mixed effects on firms
A number of recent high-profile cases of product recalls suggest that the marketplace generally works as economists would predict. Firms that produce defective goods usually take a hit to their reputation and their bottom line, though there are exceptions.

Advancing Immunity: What is the role for policy in the private decision to vaccinate children?
Despite direct benefits, some parents choose not to vaccinate their children and can effectively free-ride off the immunization of others. Policymakers must weigh potential limitations on private freedoms with public health to achieve the socially optimal level of vaccination.

The Generosity Cycle: Charitable giving during downturns
Philanthropy professionals have been investigating patterns of giving during the downturn to see what they can learn. They find that people cut back and reallocate gifts, but things could be worse.

High-Speed Chase: Taking broadband to the limit
Many remote areas do not have broadband access due to the high cost of extending service. That broadband gap ultimately may be closed, or at least narrowed, using wireless configurations, satellite, and existing power lines.

Of Mines and Markets
An explosion at a West Virginia coal mine raises legitimate questions about the role of market discipline in workplace safety as well as the effectiveness of regulation.

DEPARTMENTS

President's Message/Placing Limits on Fed 'Credit Policy'
Upfront/Regional News at a Glance
Federal Reserve/How the Gold Standard Works in Theory and Practice
Jargon Alert/Leading Indicators
Research Spotlight/What Immigration Means for the Economy
Policy Update/Currency Swaps with Foreign Central Banks
Around the Fed/Righting What Went Wrong
Interview/Justin Wolfers
Economic History/Intranational Trade
District Digest/Economic Trends Across the Region
Opinion/Too Big to Fail and the Distortion of Compensation Incentives

Our mission is to provide authoritative information and analysis about the Fifth Federal Reserve District economy and the Federal Reserve System. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia. The material appearing in Region Focus is collected and developed by the Research Department of the Federal Reserve Bank of Richmond.

DIRECTOR OF RESEARCH: John A. Weinberg
EDITOR: Aaron Steelman
SENIOR EDITOR: Stephen Slivinski
MANAGING EDITOR: Kathy Constant
STAFF WRITERS: Renee Courtois, Betty Joyce Nash
EDITORIAL SUPPORT/CIRCULATION: Jessie Sackett
CONTRIBUTORS: Ross Lawrence, Sonya Ravindranath Waddell, Christina Zajicek
DESIGN: BIG (Beatley Gravitt, Inc.)

Published quarterly by the Federal Reserve Bank of Richmond, P.O. Box 27622, Richmond, VA 23261, www.richmondfed.org

Subscriptions and additional copies: Available free of charge through our Web site at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565.

Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Region Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Region Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 1093-1767

PRESIDENT’S MESSAGE
Placing Limits on Fed ‘Credit Policy’
On July 21, President Obama signed the Dodd-Frank Wall Street Reform and Consumer Protection Act. At more than 2,300 pages, this is a large and wide-ranging law with implications for virtually every aspect of banking and finance in the United States. It creates new government agencies, new obligations and powers for existing financial regulators, and new limits on the permissible activities of banking firms. The process of fully implementing the Act will stretch over many years and will include more than 240 rule-makings and 60 studies by various agencies.
As the legislation was being crafted, I expressed concerns about the portion of the bill that created a new government-run resolution mechanism for large failing financial institutions. The discretion to shield creditors, especially short-term creditors, if one of these firms were to be closed could produce ambiguity for investors. Lingering belief in the possibility of such protection could dampen the market discipline the Dodd-Frank Act seeks to enhance.
But the new law also does some very good things. For
instance, it tightens constraints on risk-taking by large
complex financial institutions — and it provides for more
consistent consolidated oversight of those entities when
different affiliates have different functional regulators. It
also creates a stronger and broader mechanism for cooperation and coordination among federal agencies with financial
regulatory and supervisory responsibilities.
There’s another accomplishment of the Dodd-Frank Act
that I think is very important but has gone largely unnoted.
The legislation takes a significant step toward diminishing
the role of the central bank in the allocation of private
credit, and instead placing that responsibility in the hands
of the U.S. Treasury and the Congress.
At the Richmond Fed, we have a history of arguing for
just such a delineation of those responsibilities. My former
colleague Marvin Goodfriend proposed a “credit accord”
between the Treasury and the Federal Reserve, analogous to
the Treasury-Fed Accord of 1951 that allowed the Fed to
conduct interest rate policy independent of government
financing needs. The case for a credit accord rests on the
fact that the provision of central bank credit to private
borrowers, like other public-sector credit provisions, is an
act of fiscal policy and should be subject to the normal
checks and balances the Constitution provides for the
distribution of public funds. In addition, interventions in
private credit markets could compromise the central bank’s
ability to conduct monetary policy independently of the
legislative and executive branches. Such independence has
been crucial to the Fed’s pursuit of price stability since the
1970s, and thus beneficial to the larger economy.


The Dodd-Frank Act reduces the Fed's emergency lending powers by amending the portion of the Federal Reserve Act — Section 13(3) — that allowed the Fed to lend to "individuals, partnerships, and corporations" under "unusual and exigent circumstances." Most of the vast expansion of Fed credit beyond depository institutions was made under this authority — the lending connected with Bear Stearns and AIG, for example, as well as the special credit programs for the commercial paper and asset-backed securities markets. The Dodd-Frank Act only permits lending programs with "broadly based" eligibility that provide liquidity to the financial system, and only with the written consent of the Secretary of the Treasury. Fed lending to aid individual nonbank institutions under Section 13(3) is prohibited.
These provisions, along with a number of new reporting
requirements, reduce the scope of Fed emergency lending
powers and improve accountability, though they stop short
of restricting the Fed from allocating credit entirely.
Nonetheless, the Dodd-Frank Act takes an important step
toward a credit accord, and any journey begins with but a
single step.
RF

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND


UPFRONT

Regional News at a Glance

Crossing the Border

As Taxes Rise, Locals May Buy Cigarettes Elsewhere


During a time when revenue is difficult to come by for many local and state governments, a
number of city councils and state legislatures are looking for ways to raise money. For many of
these governing bodies, raising excise taxes, such as those for cigarettes, could seem to offer the
least politically contested route to increasing revenue.


Washington, D.C., imposed a 50 cent increase in its
cigarette tax, effective October 2009. The tax is now $2.50
per pack.
The economic impact of excise taxes like those for
cigarettes has garnered attention from politicians and
academics alike. A recent contribution came from economist David Merriman of the University of Illinois at
Chicago. Merriman arranged teams to collect a representative random sample of littered cigarette packs in parts of
Chicago and neighboring jurisdictions for his paper, “The
Micro-Geography of Tax Avoidance: Evidence from
Littered Cigarette Packs in Chicago,” published recently
in the American Economic Journal: Economic Policy.
Merriman’s results suggest that tax avoidance may be a
significant concern. In Chicago, 75 percent of the littered
packs displayed no city tax stamp, indicating that they
were purchased outside the city. Given the tax differential
between Chicago and neighboring locales, it’s no surprise.
In July 2007, Chicago proper had a combined state and
local cigarette tax rate of $3.66 per pack, while nearby
Indiana had only a 55.5 cent state levy and no local taxes.
Merriman also looked at a sampling of properly disposed
packs in Chicago, and those results indicate that the
littered boxes were representative of all packs.
A key subtlety that Merriman noticed is that when it comes
to tax avoidance, distance matters. In Chicago, “the degree
of avoidance diminishes rapidly with distance from the
[Indiana] border,” he said in a phone interview. That observation holds true in cities where he has conducted similar
studies. In New York City, for example, about half of littered packs did not include a NYC tax stamp. But in
Warsaw, Poland, where consumers must travel much
farther for lower-cost cigarettes, only 11 percent of smokers were thought to have participated in the illicit market.
In the case of Washington, D.C., comparisons to New
York City and Chicago may be more apt. At $2.50 per
pack, the excise tax in D.C. is currently the ninth highest

Region Focus | Second Quarter | 2010

among state taxes in the country, more than $1 higher than
the national average for state cigarette taxes. More important, D.C. residents must pay a tax that is more than $2 a
pack higher than in neighboring Virginia, which levies a
fee of 30 cents per pack. “The proximity of D.C. to
Virginia and the ease of transportation between the two
lead me to think you could find a ton of Virginia packs
there,” Merriman says.
From a revenue perspective, the latest numbers from
D.C. certainly are not encouraging. For the six-month
period following the October 2009 tax increase, cigarette
tax revenues in the District of Columbia actually have
fallen 23.6 percent, or $4.9 million, compared to the same
period a year earlier. Of course, high taxes may not be the
only culprit — a slumping economy can diminish consumption and hurt tax revenues as well. But recessionary
effects on excise tax income elsewhere seem more modest.
In Virginia, for instance, tax revenue from cigarettes fell
only 0.32 percent between fiscal years 2008 and 2009.
According to the Washington Business Journal, D.C. Chief
Financial Officer Natwar Gandhi speculated in a February
2010 revenue estimate that D.C.’s tax increase sent local
smokers to Virginia and Maryland to buy cigarettes.
While tobacco-industry lobbyists point to decreases in
tax revenue as a reason to keep cigarette taxes low, certain
advocacy groups, such as the Campaign for Tobacco-Free
Kids, argue that cigarette tax avoidance is overhyped and
not widespread enough in many places to result in a
decline in government revenue.
Merriman suggests that policymakers should avoid
generalizations and instead pay close attention to the
different circumstances and conditions each locality faces,
especially distance to alternative markets. “In D.C., [proximity to Virginia and Maryland] makes avoidance a prime
issue, but say for a large city in the middle of a state, it
shouldn’t feel like it can’t raise taxes without encountering
a significant avoidance effect.”
— ROSS LAWRENCE

End of an Era

South Carolina Hikes Tax on Smokes
In South Carolina, lawmakers in May voted to raise the cigarette excise tax from 7 cents a pack to 57 cents a pack. For 33 years, South Carolina had the lowest cigarette tax in the country — a reign that ended when the tax hike took effect July 1.
South Carolina has long been a significant tobacco-producing state, which may partially explain its historical
commitment to keeping cigarette taxes low. Although the
state’s economy has diversified considerably from its mostly
agrarian origins, tobacco remains an important crop.
According to the U.S. Department of Agriculture, South
Carolina dedicates about 20,084 acres to tobacco cultivation,
the fifth highest of any state. Grown mostly in the northeastern part of the state, known as the Pee Dee region, tobacco is South Carolina's
most profitable crop by acre and the fourth highest by cash
receipts.
The rate increase moves South Carolina closer to the
Fifth District average of $1.06 for cigarette taxes. Virginia
now sits with the lowest tax in the District at 30 cents per
pack, while North Carolina and West Virginia levy 45 cents
and 55 cents per pack, respectively. Washington, D.C., on the
other hand, charges $2.50 per pack in taxes, while Maryland
levies $2 a pack. Both the Fifth District average and South
Carolina’s tax rate remain considerably lower than the
average for all states of $1.45.
Legislators hope the tax increase will provide additional
financing for Medicaid programs for the poor and disabled.
Of the $135 million the hike is expected to raise in revenue
for the state, $125 million will be allotted to Medicaid. That
money should largely replace federal bailout dollars that have
kept the program in the black for two years.
Although the impetus for the new law may have been
financial in nature, antismoking groups have stepped up
pressure on states in recent years to use excise taxes —
among other policy options — to reduce demand for
tobacco products. In April, the Centers for Disease Control
and Prevention issued a report about state excise taxes, highlighting that a 10 percent increase in the effective price of
cigarettes can curb consumption by 4 percent. Of the states
that increased cigarette taxes in 2009, or thus far in 2010,
South Carolina is the first to allocate some of the projected
revenue to tobacco prevention and control. The state will
set aside $5 million for cancer research and smoking
cessation programs.
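The CDC figure implies a price elasticity of demand for cigarettes of about -0.4. This back-of-the-envelope step is standard elasticity arithmetic, not a number stated in the report:

$$ \frac{\Delta Q}{Q} \approx \varepsilon \cdot \frac{\Delta P}{P} = (-0.4)\times(+10\%) = -4\% $$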
— ROSS LAWRENCE

State Cigarette Excise Taxes

Highest Rates
New York               $4.35
Rhode Island           $3.46
Washington             $3.03
Connecticut            $3.00
Hawaii                 $3.00
New Jersey             $2.70
Wisconsin              $2.52
Massachusetts          $2.51
District of Columbia   $2.50
Vermont                $2.24

Lowest Rates
Missouri               $0.17
Virginia               $0.30
Louisiana              $0.36
Georgia                $0.37
Alabama                $0.43
North Dakota           $0.44
North Carolina         $0.45
West Virginia          $0.55
Idaho                  $0.57
South Carolina         $0.57

NOTES: Average state tax: $1.45 per pack. Chart lists state tax rates noninclusive of federal excise tax or any local taxes.
SOURCE: Campaign for Tobacco-Free Kids


Taxing e-Commerce

Amazon Fights N.C. Access to Records
People owe sales taxes on goods purchased online, even if remote sellers don't collect. Some catalog and Internet retailers don't charge the tax in states where they have no stores (or other physical presence). So several states have intensified efforts to collect. North Carolina, for instance, asked Amazon late last year for information on transactions to North Carolina addresses. Amazon subsequently sued.
Sales and use tax collection on Internet purchases is mired in
the confusing concept of “nexus,” or physical presence, and
the issue will likely go unresolved until the U.S. Congress
weighs in. Until then, states will keep trying to persuade
retailers to collect.
For example, North Carolina unveiled a compromise for
Internet retailers who have operated affiliate programs in the
state. Those who agree to collect future sales/use taxes
and sign onto the program by Aug. 31, 2010, won’t pay penalties, back taxes, or interest. Earlier efforts to extract taxes
included a 2009 law requiring online retailers to collect when
affiliate Web sites operated by state residents refer customers
to those retailers.
Other states have passed these “Amazon” laws, named for
the major online-only seller. In response to the North
Carolina law, Amazon ended its agreements with bloggers and
business Web sites that referred business to the seller. The
firm did likewise last spring in Colorado when the state passed
a similar law. Amazon lost its court challenge to New York
state’s Amazon law, but is appealing.
Amazon’s federal lawsuit seeks to block the request of the
North Carolina Department of Revenue (DOR) for seven
years’ worth of customer order information. The legal action
reads, in part: “The DOR’s actions threaten to chill the exercise of customers’ expressive choices and to cause Amazon
customers not to purchase certain books, music, movies, or
other expressive material from Amazon that they might otherwise purchase if they did not fear disclosure of those
choices to the government.” Amazon wants the court to
agree, so that other states won’t do likewise. In late June,
the American Civil Liberties Union filed a complaint on
behalf of one named and six anonymous North Carolinians, in
support of Amazon’s complaint. The ACLU intervened,
according to its press release, because of free speech and
privacy issues.
The secretary of the DOR, Kenneth R. Lay, wrote the American Booksellers Association in June, in response to a request, that the department isn't interested in customers' specific book titles but needs product codes to calculate the taxes.
The stakes are rising, along with the value of goods and
services sold online. In 2008, the value reached $3.7 trillion,
according to the U.S. Census Bureau’s latest adjusted figures.
About $142 billion of that was business-to-consumer retail sales.
As consumer spending picks up, it’s likely that online and
catalog sales will too.
Donald Bruce, William Fox, and LeAnn Luna of the
University of Tennessee estimate state and local revenue
losses nationwide may grow to $11.4 billion by 2012. Estimates
of losses in North Carolina, with a 5.75 percent sales tax, could
reach $213.8 million.
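Those two figures imply the approximate size of the untaxed remote-sales base in the state; the division below is simple arithmetic, not a number the researchers report:

$$ \frac{\$213.8 \text{ million}}{0.0575} \approx \$3.7 \text{ billion in untaxed remote sales} $$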
Warehouses are apparently excluded from the definition of
“physical presence.” Amazon operates a warehouse in
Virginia, from which merchandise is shipped, but pays no
sales and use tax in Virginia. An Amazon bill introduced
during the Virginia General Assembly in 2010 failed to pass.
The courts last weighed in on remote sellers and tax
collection in 1992. The U.S. Supreme Court ruled in Quill
Corp. v. North Dakota that a business wasn’t required to charge
sales tax in states where it had no physical presence. The
opinion suggested Congress had the authority to resolve the
issue. So far it has not, though some states have simplified tax
rates and administration to make collection easier. Remote
sellers have objected to the complexity and variation among
state and local tax regimes. Through the Streamlined Sales
and Use Tax Agreement, 23 states of the 45 that collect sales
taxes have legislated changes conforming to the agreement.
North Carolina and West Virginia are two Fifth District
states that have done so.
There’s a possible advantage for remote sellers who don’t
collect taxes. “Theory would suggest you have out-of-state
firms competing on something other than a level playing
field,” says Don Bruce, one of the University of Tennessee
economists who has studied sales tax revenue losses from
electronic commerce. They operate at an advantage over local
firms that do remit this tax. “So there’s an inflow of activity
from those sellers, presumably at the expense of a local business.” The lack of clarity on the sales tax issue also can distort
remote sellers’ organization and location decisions, he notes.
States are likely to get more aggressive in trying to collect
from catalog and Internet retailers, but many customers are
unlikely to voluntarily pay the tax when it’s not collected at
the time of purchase.
— BETTY JOYCE NASH

FEDERAL RESERVE

An Anchor of Gold
How the gold standard works in theory and practice
BY STEPHEN SLIVINSKI

Some modern critics of the Federal Reserve suggest that it could be eliminated and replaced with a gold standard. They claim that monetary policymakers are apt to bend under pressure to inflate the currency. A gold standard, on the other hand, can serve as an anchor for the currency that puts a limit on the growth rate of the money supply.

There are benefits to a gold standard, but there are costs too. The history of the gold standard provides important context for the suggestion that the United States should return to a commodity-backed monetary system — gold historically being the most commonly used commodity. Additionally, policymakers and the public could benefit from a greater understanding of how the gold standard works, even if reforms of the monetary system do not include its restoration.

Mechanics of a Gold Standard
In the United States, the gold standard operated for most of the 19th century and the early 20th century before the creation of the Fed. (See sidebar.)
In the absence of a central bank, nations that committed to the gold standard agreed to redeem their currency at a fixed price of gold. The gold standard effectively fixed exchange rates between participating nations since those currencies were themselves fixed to gold. When the stock of gold is relatively fixed, this arrangement can provide a predictability that currencies not anchored by a commodity standard may fail to produce. The supply of money is constrained by the amount of gold in the vaults of each nation. By contrast, fiat money created by central banks and not backed by a commodity in relatively fixed supply could be devalued simply by printing more of it.

That doesn't mean that prices wouldn't change under a gold standard. In practice, the price level of nations would tend to move in tandem under this arrangement. The mechanism that drives the movement in the price level is the balance of payments that results from trade between nations. For example, assume that a technological innovation increases economic growth in the United States. Since the supply of gold, and therefore the money stock, is fixed, prices in the United States will fall since it is cheaper to produce goods domestically as a result of the innovation. Prices of U.S. exports to other countries would fall too. That leads to lower demand for U.S. imports — which are now relatively more expensive — and increased demand for U.S. products abroad.

Under a gold standard, the currency and the commodity by which it is backed travel together. In the example above, the trade surplus would also result in a balance-of-payments surplus in which gold from overseas would find its way into the coffers of U.S. banks as foreign traders use dollars to purchase U.S. goods. The stabilizing effect of the gold standard manifests itself here in how prices would react to this surplus. The new gold in the United States will reverse the initial price decline. Meanwhile, the exodus of gold from abroad will lower the price level in the countries that traded with the United States since smaller amounts of gold equal a shrinking of the money supply. Equilibrium is reached when the relative prices between nations converge.
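This price-specie-flow adjustment can be made concrete with a small simulation. The sketch below is illustrative only: it assumes quantity-theory price levels (P = M/Y) and a simple rule that gold flows toward the country with the cheaper goods; the starting stocks, the output boost, and the flow_speed parameter are invented for demonstration, not drawn from historical data.

```
# Stylized price-specie-flow mechanism: two countries on a gold standard.
# Money supply equals the national gold stock; price level P = M / Y.

def simulate(periods=15, flow_speed=0.25):
    gold = {"US": 100.0, "UK": 100.0}    # gold stocks = money supplies
    output = {"US": 110.0, "UK": 100.0}  # U.S. output boosted by an innovation
    for t in range(periods):
        p_us = gold["US"] / output["US"]
        p_uk = gold["UK"] / output["UK"]
        # Cheaper U.S. goods -> U.S. trade surplus -> gold flows into the U.S.
        flow = flow_speed * (p_uk - p_us) * 100
        gold["US"] += flow
        gold["UK"] -= flow
        print(f"t={t:2d}  P_US={p_us:.3f}  P_UK={p_uk:.3f}  gold inflow to US={flow:+.2f}")

simulate()
```

Run it and the gold inflow shrinks toward zero as the two price levels converge, which is the equilibrium the article describes.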

[Photo: Historically, many countries have linked their currencies to gold. Pictured, in 1963 a member of the vault staff at the Federal Reserve Bank of New York checks the melt number and fineness inscribed on each gold bar.]

Weighing the Costs and Benefits
While anchoring the money supply to
gold may have obvious benefits, there
are risks to consider. One potential


downside is the effect that a discovery of large amounts of
gold would have on the price level. This was a problem in the
late 1840s when the California gold rush introduced large
amounts of gold into circulation, causing a “monetary
shock” and a rise in the price level of goods. In addition,
mining and minting gold is costly. Economist Milton
Friedman once estimated that the resource price of producing gold and maintaining a full gold coin standard for the
United States would be more than 2.5 percent of GDP.
However, that cost could fall over time as new technologies
are developed.
Some believe that gold flows between nations serve as a
check on inflation. Tame inflation over the long term was a
strong characteristic of the gold standard. Yet gold flows
could transmit detrimental shocks, both monetary and nonmonetary, between economies. In the past, vulnerability to
economic shocks caused prices to be highly unstable in the
short run. Economist Michael Bordo of Rutgers estimated
the “coefficient of variation” in the price level under the
historical gold standard. A higher coefficient indicates more
short-term instability. For the United States between 1879
and 1913, the coefficient was 17, which Bordo notes is quite
high. Between 1946 and 1990, when central banks were able
to deviate from the automatic responses required by the
gold standard, it was only 0.88. By association, real output is
also highly variable under a gold standard. The coefficient of variation for real output was 3.5 between 1879 and 1913, but between 1946 and 2003 it was only 0.4.
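For reference, the coefficient of variation is conventionally defined as the standard deviation of a series scaled by its mean:

$$ \mathrm{CV} = \frac{\sigma}{\mu} $$

A mean rate of price change near zero combined with large year-to-year swings, as under the classical gold standard, yields a very large CV even though the long-run price level is stable; a modest but steady rate of change yields a small one.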
Central banks would later mitigate the costs of
economic shocks by pursuing countercyclical policies. Yet a gold standard, by definition, makes the money supply
procyclical — when the economy contracts, so does the
money supply. For supporters, this is a benefit: It can limit
the potentially expansionary impulses of central bankers.
Supporters also point out that the system can work without
a central bank officiating the movement of gold. Instead,
each government must make a credible commitment to
allow currency holders to redeem their bills for a predetermined amount of gold. One way to do this is to pass a law
that fixes the exchange rate between gold and the currency.
In the United States, the Gold Standard Act of 1900 set the
price of one ounce of gold at $20.67. However, keeping such
credible commitments may prove difficult in the wake of
unexpected shocks and geopolitical upheaval.
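The fixed parity translates directly into a gold content for the dollar; this is standard unit arithmetic (one troy ounce is 480 grains), not a figure from the article:

$$ \frac{480 \text{ grains per ounce}}{\$20.67 \text{ per ounce}} \approx 23.22 \text{ grains of pure gold per dollar} $$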

Central Banks and the Gold Standard
Much of the 20th century featured a mixed system in which
central banks and the gold standard existed simultaneously.
The ideal role of central banks when an international gold
standard is in force is to sustain the fixed exchange rates
and allow prices and output to vary as required by the
movement of gold across borders. When gold is flowing into
the country, for instance, the central bank should lower the interest rate at which it lends to banks — the discount rate — reinforcing the monetary expansion the inflow implies. Conversely, the central bank should raise the discount rate to speed the adjustment to a gold outflow when a balance-of-payments deficit materializes.
However, there can be temptations for central banks to
stop playing by the rules. Monetary policymakers could
“sterilize” the gold flow: They could buy or sell domestic

securities — in other words, either expand or contract the money supply relative to gold — to shield the domestic money supply from the external disequilibrium. This would weaken the ability of the gold standard to anchor the value of money in the economy.

The U.S. Gold Standard Before the Fed

Between the nation's founding and 1971, the United States had been on one form or another of a gold standard. The authors of the Constitution were of the opinion that any money minted by the federal government should be backed by some "specie" standard (i.e., gold or silver).

On the recommendation of Treasury Secretary Alexander Hamilton, the U.S. Congress passed the Coinage Act of 1792. That officially put the United States on a bimetallic standard in which the dollar was defined as equaling a specified weight in gold or silver. However, the ratio between gold and silver that the act established — 15 grains of silver to 1 grain of gold — served to undervalue gold relative to silver after the act was passed. This was particularly true over the next three decades as mines in Mexico yielded more silver. As a result, gold began to flow out of the United States and silver began to flow in. While gold and silver coins were still accepted as legal tender, gold coins became quite scarce.

The Coinage Act of 1834 put the United States on a de facto gold standard. It moved the ratio of silver to gold to 16-to-1. That helped remedy the imbalance, and gold coins became more common in the United States.

Before the Civil War, state-chartered banks could issue notes and certificates that were redeemable in specie. During the war, a partly decentralized national banking system existed in which federally chartered banks would deal in "greenbacks" issued by the U.S. government backed by little specie. The return to an operational gold standard occurred in 1879 when the U.S. government resumed payments of gold to dollar holders who demanded them. By that point, however, a series of Supreme Court decisions had made the greenbacks legal tender, which over time crowded out state-issued currency.

The United States tied itself to a de jure monometallic standard with the Gold Standard Act of 1900. It set the dollar price of gold at $20.67 per ounce, effectively relegating silver to a subsidiary role in the monetary system. This meant that dollars would circulate alongside silver coins, and the U.S. Treasury would aim to sustain the dollar price of gold.

The creation of the Federal Reserve in 1913 took away from the executive branch the explicit power of money stock maintenance. The history of the 20th century would show, however, that the relationship between a gold standard and the central bank was an uneasy one.
— STEPHEN SLIVINSKI
Economic downturns, political pressures, and wartime
threatened the gold standard in the 20th century. Just as it
was at the peak of its effectiveness in 1914, World War I
broke out. Britain, the banking center of Europe, experienced a run on sterling and enacted trade and exchange
controls, including a postponement of domestic and international payments. This basically made the international
gold standard nonoperational. Other countries instituted
similar capital controls. In addition, the issuance of short-term debt to finance the war effort in the United States led
the federal government to pressure the Fed to abandon the
gold standard rules on exchange rate targets and instead
focus on keeping the interest rates on war bonds low.
After the war, the developed nations tried to reconstruct
the gold standard. The 1917 U.S. embargo on gold exports
was lifted in 1919, and the convertibility of the dollar at the
prewar gold price was restored in 1922. The gold value of the
dollar rather than the pound sterling soon became the reference point for other currencies. The post-war gold standard
was faced with new challenges, though. High tariff barriers
during the 1920s hindered the price adjustment process.
Also, the United States, France, and England began routine
sterilization of gold flows.
The economic pressures of the Great Depression weakened support for the gold standard. Britain left the standard
in 1931 after a massive gold outflow. The United States
followed in 1933 when emergency measures allowed the
federal government to abrogate all gold-related clauses in all
public and private contracts. In 1934 it devalued the dollar
by raising the fixed price for gold to $35 per ounce.
Emergency measures also allowed the issuance of Federal
Reserve notes that did not have to be backed by gold. World
War II drove central banks even further away from the gold
standard as they again sought to keep government borrowing costs low at the expense of the fixed exchange rate. Trade
and capital restrictions also hindered whatever cross-border
price adjustment might have occurred.
After the war, the finance ministers and treasury secretaries of the Allied nations met in Bretton Woods, N.H., to
reconstruct some form of a gold standard. The agreement
essentially linked the dollar to gold and, in turn, all other
major currencies were linked to the dollar. Yet it also allowed
some flexibility for central banks to pursue changes in the

exchange rate. Foreign governments were also allowed to
trade in their dollars to the U.S. government in return for
gold. The expectation was that the United States could credibly commit to maintaining the standard over the long term.
In the early 1950s, the United States held close to 60 percent of the world’s gold reserves. By the 1960s, however,
dollars began to rapidly flow out of the United States as a
result of the Fed monetizing the debt issued to pay for
spending on the Great Society social programs and the
Vietnam War. The inflationary policies of the United States
put pressure on countries whose currencies were linked to the dollar to revalue them to satisfy the balance of payments —
pressure that reached its peak in 1970. Additionally, U.S. gold
reserves were beginning to dwindle because foreign governments were rapidly trading in their dollars for gold. Many
foreign policymakers were not convinced that the U.S.
government would regain a commitment to exchange rates
per the Bretton Woods rules in the near term. To put an end
to the international pressure, President Richard Nixon
finally took the dollar off gold in 1971, effectively killing the
international gold standard.

Gold and Monetary Policy Today
Since the episode of runaway inflation in the 1970s,
monetary economists have learned a number of lessons.
Foremost among them is an understanding of how central
bank credibility is vital to monetary policy. In some sense,
that is also a lesson of the gold standard years. Regardless of
the signals central bankers use to navigate policy, public
trust that they will stay the course is essential to making the
policy work. Even under a gold standard, the stability provided by the commodity anchor dissolves if the central bank
can’t or won’t credibly commit to the rules of the standard.
Today, the price of gold is just one of a number of
signals that Fed policymakers may use to make decisions
about the direction of monetary policy. Since the 1980s,
the Fed’s independence and need to maintain its credibility
have largely been helpful in keeping inflation under
control even when it has to occasionally embark upon
countercyclical policy. Many of the traits that supporters
of the gold standard value, such as long-term price stability,
have materialized over the past 20 years under a fiat money
system not directly tethered to the price of gold.
It’s unlikely that the nations of the world will adopt the
gold standard again. But the lessons of central bank credibility are a product of the gold standard years. Strong public
expectations about how the Fed conducts policy may produce
the same benefits today that a gold standard once did.
RF

READINGS
Bordo, Michael D. “Gold Standard.” In David R. Henderson (ed.),
The Concise Encyclopedia of Economics. Indianapolis, Ind.: Liberty
Fund Inc., 2007.
Hetzel, Robert L. The Monetary Policy of the Federal Reserve: A
History. New York: Cambridge University Press, 2008.

Meltzer, Allan H. A History of the Federal Reserve — Volume 1: 1913-1951. Chicago: University of Chicago Press, 2003.
Timberlake, Richard H. Monetary Policy in the United States: An
Intellectual and Institutional History. Chicago: University of Chicago
Press, 1993.


JARGON ALERT

Leading Indicators
BY RENEE COURTOIS

Forecasting economic activity is critical to policymaking, though at times it is so fraught with uncertainty
that many consider it an art rather than a science.
Fortunately, forecasts can be aided by certain economic
data that tend to react before the economy as a whole
starts to move in a new direction. Such data are called
leading economic indicators because they reflect economic
agents acting in response to expectations about the future
direction of economic activity.
Consider the stock market, for example. Financial market participants are generally quite good at gathering
information about the likely future course of the economy.
A rise in stock prices, therefore, may signal that investors
anticipate a coming surge in demand. Similarly, a stock
market decline could signal that many firms’ prospects are
diminished due to a coming contraction
or continued sluggishness.
Other financial market variables also
hold predictive value. The difference
between short-term and long-term
interest rates for bonds, called the
“yield curve” slope, has proven to be an
insightful economic indicator. When
the slope is negative, long-term bond
rates are lower than those for short-term debt instruments, which implies
that investors expect interest rates to
fall in the future as they would during a recession. The slope
of the yield curve has turned negative about a year before
each of the last seven recessions. Of course, not all financial
market moves are clearly and unambiguously related to fundamentals, so the signals sent by asset prices and interest
rates sometimes can be “noisy.”
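As a minimal illustration of the slope calculation described above, the sketch below flags an inverted curve; the two yields are hypothetical, not actual Treasury data:

```
# Yield curve slope as a recession signal (illustrative rates only).
short_rate = 4.8  # e.g., a 3-month Treasury yield, in percent
long_rate = 4.2   # e.g., a 10-year Treasury yield, in percent

slope = long_rate - short_rate
if slope < 0:
    print(f"Inverted curve (slope {slope:+.2f} pct. pts.): "
          "has preceded each of the last seven recessions by about a year.")
else:
    print(f"Upward-sloping curve (slope {slope:+.2f} pct. pts.).")
```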
Economists also can gain perspective on the economy’s
prospects by tapping into businesses and individuals on the
ground. Home builders, for example, must obtain a permit
before building — and they are unlikely to do so unless they
think consumers are confident enough in their jobs and
other economic prospects to make the large purchase of
a home. Therefore, the number of new building permits
authorized, as measured and released by the Census Bureau,
is a strong indicator of coming construction activity.
Home construction, in turn, typically precedes other types
of economic activity, including consumer spending on
housing-related goods such as furniture and other home
furnishings.
To get an overall sense of what message these and other
leading economic indicators are providing, a research
organization called the Conference Board compiles them
into an index of Leading Economic Indicators (LEI). The Conference Board took over this duty from the Bureau of
Economic Analysis in 1995, though the index of leading
indicators can be traced back to the late 1930s when Wesley
Mitchell and Arthur Burns (who would later become Fed
chairman) compiled these data for the National Bureau of
Economic Research.
Each of the above indicators is included in the LEI, along
with several other forward-looking series such as new manufacturers’ orders, initial claims for unemployment insurance,
a broad measure of the money supply, hours worked by
manufacturing workers, and the speed with which industrial
companies receive deliveries from suppliers. Also included in
the LEI is the Index of Consumer Expectations, a
monthly survey conducted by the University of Michigan.
Consumers who feel confident about the economy’s prospects
may be more willing and likely to spend,
which helps turn that optimism into
economic reality.
Each data series included in the LEI
is chosen for its consistent relationship
with the business cycle, demonstrated
over many years. The data also must be
timely, relatively void of erratic movements from period to period, and
economically significant. When push
comes to shove, no data series matches
each of those criteria exactly, but the 10
of them included in the LEI arguably come closest.
Since the LEI compiles data series that have already been
released, it doesn’t provide much new information to
markets. But since any single data series may have uncharacteristic blips from period to period, the LEI provides a more
reliable picture of the overall trend. If one or two components
of the LEI rise sharply, it could be due to unique or even
temporary factors taking place in those markets. But if the
LEI as a whole rises persistently, investors and policymakers
may take notice. Taken together, the LEI composite can help
reveal and identify turning points in the business cycle better
than any one series can do alone. The LEI has historically led
downturns by eight to 20 months, and recoveries by one to 10
months, according to the Conference Board.
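A composite of this kind can be sketched in a few lines: scale each component's monthly change by its historical volatility so no single noisy series dominates, then sum the contributions. The component names and all numbers below are invented for illustration and do not reproduce the Conference Board's actual series list or weighting details:

```
# Illustrative composite leading index: volatility-standardized sum.
monthly_change = {          # this month's percent change per component
    "building_permits": 1.2,
    "stock_prices": 0.8,
    "yield_curve_slope": -0.3,
    "new_factory_orders": 0.5,
    "consumer_expectations": 0.2,
}
volatility = {              # historical standard deviation per component
    "building_permits": 2.0,
    "stock_prices": 3.5,
    "yield_curve_slope": 0.5,
    "new_factory_orders": 1.5,
    "consumer_expectations": 1.0,
}

index_change = sum(chg / volatility[name] for name, chg in monthly_change.items())
print(f"Composite index change this month: {index_change:+.2f} standardized points")
```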
Nonetheless, it’s important to remember that “the economy” is simply a collection of the actions of millions of
individuals and businesses interacting with each other, so
there are a great many indicators to watch to know how
the economy is performing. No one indicator or index will
hold the same importance in every business cycle, and no
single economic indicator will ever tell the whole story
about economic activity, including the state of the current
recovery.
RF


RESEARCH SPOTLIGHT
What Immigration Means for the Economy
BY ROSS LAWRENCE

As concerns about a difficult labor market weigh heavily on the minds of many Americans, an enduring anxiety about the effects of immigration on the economy underlies many policy debates. As a result, a number of policymakers and pundits have declared that liberal immigration policies are a source of economic instability for the country.

Jennifer Hunt and Marjolaine Gaulthier-Loiselle put some of these concerns into context with their recent paper. Much of the conventional wisdom holds that immigrants exhaust more than their share of public resources, in addition to providing competition to native-born Americans in the domestic job market. But economic research about these newcomers suggests that they may provide more of a long-run boon to the U.S. economy than previously thought. This article, for example, studies the contribution of skilled immigrants to innovation in the United States.

"How Much Does Immigration Boost Innovation?" Jennifer Hunt and Marjolaine Gaulthier-Loiselle. American Economic Journal: Macroeconomics, April 2010, vol. 2, no. 2, pp. 31-56.

The authors point out that the United States had about a 12 percent foreign-born population in 2000, but 26 percent of U.S. Nobel Prize winners from 1990-2000 were immigrants, as were 25 percent of the founders of venture-backed publicly owned American companies between 1990 and 2005. To explore the link between immigration and innovation, Hunt and Gaulthier-Loiselle use data about U.S. patents per capita. "The purpose of studying patents is to gain insight into technological progress, a driver of productivity growth, and ultimately economic growth. If immigrants increase patents per capita, they may increase output per capita and make natives better off." As the authors note, such information undoubtedly should influence policy debates about skilled immigration, such as determining the appropriate number of employer-sponsored H-1B visas to allow for skilled workers.

What if immigrants are just crowding out natives from the science and engineering fields? The authors control for that possibility in a way that is designed to estimate the impact of immigrants on innovation given positive or negative spillover effects.

Based upon individual-level data gathered from the National Survey of College Graduates, the authors show that a 1 percent increase in the proportion of college-graduate immigrants in the population increases patents per capita by 6 percent.

"In addition to the direct contributions of immigrants to research, immigration could boost innovation indirectly through positive spillovers on fellow researchers, the achievement of critical mass in specialized research areas, and the provision of complementary skills such as management and entrepreneurship," the authors write. They also note "that the immigrant patenting advantage over natives is entirely accounted for by immigrants' disproportionately holding degrees in science and engineering fields."

Of course, unskilled immigrants rather than skilled ones often receive the majority of public scrutiny. Other economists, including David Card of the University of California at Berkeley, have looked at this issue. In particular, Card has addressed the question of whether immigrants hurt the job opportunities of less skilled native workers. In a 2005 paper titled "Is the New Immigration Really So Bad?" he concludes that, on the whole, "evidence that immigrants have harmed the opportunities of less educated natives is scant." He also responds to the research of economist George Borjas of Harvard University and others, who argue that recent years have witnessed an increase in cultural and language differences between immigrants and natives that may make assimilation more difficult. According to Card's research, immigrants may be adapting to the American lifestyle better than some think — on average, second-generation children of post-1965 immigrants have higher education levels and wages than their native counterparts.

Card considered a more specific example of the relationship between immigration and unemployment in a 1989 paper, in which he examines the impact of the Mariel Boatlift on the Miami labor market. During about a five-month period in 1980, some 125,000 Cubans fled a declining economy and internal tensions in their native country. The data suggest about half of these immigrants, most of whom were relatively unskilled, settled permanently in Miami, Card writes. This drove up the city's population by about 7 percent. It had no discernible effect on the wage rates for less skilled non-Cuban workers, Card found, nor did Miami's unemployment rate rise disproportionately to state and national averages.

The growing body of research ought to contribute to a more informed debate about U.S. immigration policy. Although other political considerations play a role in this conversation, the bulk of evidence seems to suggest that immigrants — of varying skill levels — have a net positive effect on the American economy.
RF

POLICY UPDATE

Currency Swaps with Foreign Central Banks
BY RENEE COURTOIS

The Fed acts as the lender of last resort when financial market distress makes it difficult for banks to obtain the short-term loans that help finance their operations. That is, the Fed lends U.S. dollars to U.S. banks. But what happens when the banks in need of dollar-denominated funds are located abroad?

The Fed typically has no direct means of lending to foreign financial institutions, yet many foreign banks hold U.S. dollar-denominated assets and liabilities, and thus have occasional need to borrow from and lend to other banks in U.S. dollars. When financial markets recently grew nervous about the fiscal positions of Greece and other European countries and the exposure of financial institutions to troubled sovereign debt, investors charged a higher premium to extend funding to those institutions, including in dollars, risking a disturbance to financial and economic activity.

That's why in May the Fed reopened a "currency swap" program with five central banks to help them act as lender of last resort in their respective countries — in dollars. The swap lines work like this: The Fed sells a quantity of dollars to a foreign central bank, and in payment receives an equal quantity in foreign currency at the prevailing market exchange rate. Simultaneously, the Fed and the foreign central bank agree to trade the funds back at a date agreed upon in advance, between one day and three months later. The second transaction reverses the first. But over the duration of the swap, the foreign central bank is free to use the funds to make dollar-denominated short-term loans to banks in its jurisdiction.

The currency swap lines were previously launched in December 2007 to address the financial crisis, but had been allowed to expire in February 2010 after interbank dollar funding markets improved. Initially, the swap lines were used because investors feared counterparties' exposures to securities related to subprime mortgages in the United States. Those assets were often denominated in U.S. dollars, and for foreign banks a large portion was financed through interbank dollar funding markets. When interbank lending became strained, these institutions had to either find alternative sources of dollar funds or sell the assets under chaotic market conditions, which potentially could have contributed further to their already plunging prices.

European Union, United Kingdom, and Swiss banks' dollar exposures on their balance sheets exceeded $8 trillion in 2008, report New York Fed economists Linda Goldberg, Craig Kennedy, and Jason Miu. Foreign banks were hit especially hard when activity in private U.S. dollar interbank lending markets slowed because they were more dependent on those markets than American banks. U.S. financial institutions are relatively flush with dollars — the denomination of a majority of their assets as well as their deposit base — and could tap into dollar backstop financing, including from the Fed, when needed. So, while actions that eased global financial distress were surely beneficial for U.S. institutions, the swap lines were not really created to benefit U.S. banks, note Michael Fleming and Nicholas Klagge, also of the New York Fed, in an April 2010 summary of the swap program.

The swaps carry little direct risk to the Fed. There is no exchange rate risk since the loans are made and reversed using the same exchange rate. And though the funds are intended to be loaned to private institutions, the foreign central bank assumes any risk that loans will default, determining independently which institutions are able to borrow and what types of collateral they can borrow against.

Similar swap lines were launched following the terrorist attacks of 9/11, in a coordinated effort by several central banks to keep global financial markets operational. In fact, other forms of swap agreements were in place from 1962 to 1998, though those existed mainly to facilitate central banks' interventions in foreign exchange markets to affect exchange rates — which the Fed rarely does today.

The recent swap lines will play only a supporting role in easing the European financial market strains, noted Brian Sack of the New York Fed in a June speech. The policy actions of European governments toward debt will do the heavy lifting. Indeed, little of the dollar-denominated funds have actually been exchanged with the five central banks involved relative to the amount traded during the financial crisis. (At the program's peak in December 2008, swaps outstanding comprised more than a quarter of the Fed's total assets.) Still, the swap lines may be important in reassuring creditors that dollar funding is available, as central banks hope to head off further dollar liquidity shortages.

"The swaps were essentially put in place in a preemptive manner, under the view that their presence would provide a backstop for dollar funding markets and help to bolster market confidence," Sack said. To firmly establish confidence that dollar liquidity will be available, the swap lines will be kept open until January 2011.
RF
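To make the two-leg mechanics described above concrete, here is a minimal sketch. Everything in it is illustrative: the class, the dollar amount, and the exchange rate are invented for demonstration, and no interest payment on the swap is modeled:

```
# Minimal sketch of a central bank dollar swap line's two legs.
from dataclasses import dataclass

@dataclass
class SwapLine:
    dollars: float    # dollars the Fed sells to the foreign central bank
    fx_rate: float    # market exchange rate at initiation (foreign per USD)
    term_days: int    # agreed term, one day to three months

    def open_leg(self):
        # Leg 1: Fed delivers dollars, receives foreign currency at spot.
        return {"fed_receives_fx": self.dollars * self.fx_rate,
                "foreign_cb_receives_usd": self.dollars}

    def close_leg(self):
        # Leg 2: the trade is reversed at the SAME rate, so the Fed
        # bears no exchange rate risk over the term of the swap.
        return {"fed_returns_fx": self.dollars * self.fx_rate,
                "foreign_cb_returns_usd": self.dollars}

line = SwapLine(dollars=1_000_000_000, fx_rate=0.82, term_days=84)
print(line.open_leg())   # foreign central bank now has dollars to on-lend
print(line.close_leg())  # reversal at maturity unwinds the position
```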

AROUND THE FED

Righting What Went Wrong
BY CHARLES GERENA

“Prudential Discipline for Financial Firms: Micro, Macro,
and Market Structures.” Larry D. Wall, Federal Reserve Bank
of Atlanta Working Paper 2010-9, March 2010.

Federal Reserve economists have been busy dissecting
the 2007-08 financial crisis and evaluating various
reforms of market regulation. In this paper Larry Wall at
the Atlanta Fed discusses ways to strengthen market
discipline at financial firms as well as revise government
supervision at both the firm level (microprudential) and
the market level (macroprudential).
Wall argues that the owners and managers of a financial
firm won’t manage their risks prudently unless they bear the
costs of poor management practices. “If the government
bears most of the risk of loss, not only will the managers lack
adequate incentive to manage the risk,” he notes, “but the
government is likely to insist on playing a major role in the
firm’s risk management.” And regulators can’t observe or
second-guess every manager’s financial decisions.
Wall suggests that a microprudential supervisor should
regulate a broad spectrum of firms, which encourages information sharing among supervisors of different sectors. As
for macroprudential supervisors, Wall says they should be
bold in their efforts to understand major threats to the
financial system, but modest in their ambitions.
“Macroprudential supervisors cannot guarantee an end to
all financial instability, and trying to attain such a goal could
be worse than having no macroprudential supervisor,”
Wall notes. Aiming to prevent all instability will create “an
incentive to severely limit the financial system’s capability to
innovate and to take risk.”
Wall does offer several options for mitigating the chances
of large losses turning into a full-blown crisis. A special
resolution regime could help shut down insolvent firms that
are systemically important, thus avoiding the instability that
may result from a bankruptcy. Or, firms could be required to
develop their own resolution plan. Regulators could also
reduce the probability of failure by obtaining the commitment of private investors to recapitalize failing firms.


“Financial Statistics for the United States and the Crisis:
What Did They Get Right, What Did They Miss, and How
Should They Change?” Matthew J. Eichner, Donald L. Kohn,
and Michael G. Palumbo, Federal Reserve Board Finance
and Economics Discussion Series 2010-20, April 2010.

Could more and better data on risky mortgages and securitization have averted the financial crisis? Donald Kohn, vice chairman of the Federal Reserve Board of Governors, and two deputy associate directors of the
Board’s research and statistics division evaluate the true
benefits of improved data collection in this paper. Their
general conclusion is that, while gaps in data and analysis
prevented market participants and regulators from recognizing the vulnerabilities building up in the financial system,
filling those gaps is only one step in developing an early
warning system.
“The information delivered by expanded and improved,
but essentially static, aggregate data can (and should) be
relied on for signals akin to grainy images captured by
reconnaissance satellites,” the authors note. Such images
are suggestive, but aren’t conclusive by themselves.
“Improved data collection can provide the greatest value by
highlighting changes and inconsistencies that bear further
investigation using other, more-focused tools mobilized to
deal with a particular anomaly.”
“Nonlinear Effects of School Quality on House Prices.”
Abbigail J. Chiodo, Rubén Hernández-Murillo, and Michael
T. Owyang, Federal Reserve Bank of St. Louis Review,
May/June 2010, vol. 92, no. 3, pp. 185-204.

The quality of a neighborhood's schools is one of the
factors scrutinized by families during their house hunt.
So, it would be logical to expect that factor to be reflected
in home prices. Researchers at the St. Louis Fed argue that
these variables have a nonlinear relationship: The home
price premium grows as school quality increases.
For one thing, families who value education more than
others will compete with one another for homes in neighborhoods with the highest-quality schools. Alternatively,
families may choose homeschooling or private schools to
give their children a better education if they live in lower-quality school districts. Therefore, the quality of the
neighborhood public school is less important to them and
has less influence on home prices.
The authors further hypothesize that school quality can
be considered a luxury good, so people in richer neighborhoods will pay higher home prices for the same marginal
increase in school quality.
To test this effect, the paper’s authors used housing
prices, math test scores for the St. Louis metropolitan area,
and other data. “Unlike most studies in the literature, we
find that the price premium parents must pay to buy a house
in an area associated with a better school increases as school
quality increases,” the authors note. “We also find that the
racial composition of neighborhoods has a statistically
significant effect on house prices.”
RF


As fiscal imbalances increase, economists debate their effect on the macroeconomy

BY STEPHEN SLIVINSKI

In late 2008 the U.S. government enacted a number of spending programs that were intended to stimulate the economy and support struggling financial
institutions. In so doing, it continued a practice that has
been common for decades: spending more money than it
collects. The resulting deficit in the budget requires that
the federal government issue Treasury debt to pay for the
spending in the short term.
This seems like a relatively innocuous practice. As long
as capital markets have a demand for Treasury bills, what’s
the worry? But that question has divided economists for
decades. The recent upswing in the current federal deficit
and projected future deficits has pulled this debate back
into public view.
As of fiscal year 2009, the federal budget deficit reached almost
$1.4 trillion, or 9.9 percent of GDP. That’s the largest deficit since 1945 as a
percentage of the national economy. At that time, wartime spending
was accelerated and the budget deficit was an unusually high 22 percent.
It dropped to 7 percent in 1946. Since then, however, it hasn’t reached
beyond 6 percent of GDP.


The prospect of continued deficits remains high. Current spending
is projected to keep deficits persistently large for the foreseeable future. The levels of debt that will accumulate are unlike
anything we’ve seen before in peacetime. That will be compounded by the fact that even state and local governments
are issuing debt in historic amounts. The total debt load of
state and local governments has grown from $1.1 trillion in
1995 to $2.4 trillion in 2009. Most of that debt increase —
nearly $800 billion — has been issued in the last six years.
Economists have made some headway in research on the
topic of how deficits might influence macroeconomic variables — in particular they have generally rebutted the idea
that deficits alone have a substantial effect on inflation in
the United States — but there remains debate about
whether deficits have any real influence over other variables,
such as interest rates.
With deficits and debt levels projected to be bigger than
normal in the foreseeable future, the question of what
macroeconomic effects deficits can have is an important
one. The analysis done by economists over the past 30 years
has tried to find consistent relationships between debt levels
and certain macroeconomic variables. The results to date
have been mixed.

Deficits and Inflation
Many arguments have been put forward in defense of
balanced budgets. In the 1950s, some policymakers worried
that running budget deficits was inherently inflationary.
The concern was that government spending in excess of
revenue would artificially increase aggregate demand in the
economy. This was actually a feature, not a bug, in the
schools of Keynesian thought that saw government spending as a lever to revitalize economic production. But the
counter-Keynesian argument of that era sometimes hinged
on an assertion that counterproductive inflationary pressures might arise out of such deficit spending, while at the
same time arguing that government spending was limited in
its ability to boost real output.
It was hard to tell at that time whether either view was
correct as an empirical matter. After the military demobilization post-World War II, the federal government did not
run large deficits until the 1960s. Part of that had to do with
the ideology of President Dwight Eisenhower, who is
remembered as an advocate of balanced budgets because
of his belief that a balanced budget was a necessary component of a constitutionally limited government. As a practical matter,
policymakers on Capitol Hill and even within the Federal
Reserve then regarded deficits as dangerous because of the
inflationary pressures they might unleash.
For most of the decade, a post-war economic boom
helped sustain revenue and make deficits a less likely threat.
The budget imbalances that did eventually arise in the 1950s
were small (usually between 0.5 percent and 2 percent of
GDP) and transitory. Each of those annual deficits was
mainly the result of an economic slowdown that reduced
federal revenue.

Beginning in the 1960s, however, budget deficits became
the norm. At the same time, inflation began to take off.
While some worried about this, it wasn’t necessarily at odds
with the Keynesian view of deficits. In fact, Keynesians saw
inflation as an acceptable cost of the increased output and
employment that would come from deficit spending.
What’s missing from this simple story is that monetary
policy at the time was becoming progressively looser to support more government spending and that began to fuel the
subsequent inflation. “The extent to which monetary policy
is used to help balance the government’s budget is the key to
determining the effect of budget deficits on inflation,”
writes Keith Sill, an economist at the Philadelphia Fed.
Indeed, one of the things that economists generally agree
on in relation to budget deficits is that — at least in the U.S.
experience — they are not inherently inflationary. Analysis
of the history of fiscal and monetary policy from the 1960s
to the 1980s has led most economists to argue that the relevant factor during this period was that the Fed began to
warm to the idea of “monetizing” the deficit. In essence,
that meant the Fed would act to guarantee there was always
a market for Treasury debt.
The fear of inflationary deficits is most credible today in
small developing countries, many of which have central banks motivated more by political
pressures than by a regard for price stability. But it's difficult
to determine whether one central bank is more independent
than another or more prone to monetizing the debt. As an
empirical matter, capturing the independence of a central
bank quantitatively is difficult.
A study in the Journal of Economic Literature by Stanley
Fischer, the current governor of the Bank of Israel, Ratna
Sahay of the International Monetary Fund, and Carlos Vegh
of the University of Maryland offers some insight into this
question. The authors split a sample of 94 market economies
into high-inflation countries and low-inflation countries.
The high-inflation countries were those that had at least one
episode of 12-month inflation exceeding 100 percent during
the period from 1960 to 1995.
In both sets of countries they needed to find a variable
that would explain the incentive a government would have
to pressure a central bank to monetize the deficit. They
chose seigniorage as a fraction of GDP. When a central bank
“creates” money, it generates seigniorage revenue resulting
from the difference between the cost of producing the currency and the face value of the currency. (For example, if it
costs 5 cents to produce $1, the seigniorage amounts to 95
cents.) That revenue can be used to pay for spending in the
federal budget.
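The arithmetic is simple enough to sketch in a few lines of code. The dollar figures below are invented for illustration, not taken from the study; the sketch just shows how seigniorage and its share of GDP are computed:

```python
# Minimal sketch of seigniorage arithmetic. All dollar figures are
# hypothetical, chosen only to illustrate the computation.

def seigniorage(face_value: float, production_cost: float) -> float:
    """Revenue from money creation: face value minus production cost."""
    return face_value - production_cost

# Printing $1 at a cost of 5 cents yields 95 cents of seigniorage.
per_dollar = seigniorage(1.00, 0.05)

new_currency = 50e9  # hypothetical: $50 billion of new currency issued
gdp = 1.0e12         # hypothetical: a $1 trillion economy

revenue = per_dollar * new_currency
print(f"Seigniorage: ${revenue / 1e9:.1f} billion")
print(f"As a share of GDP: {100 * revenue / gdp:.2f} percent")
```

At these invented values the ratio lands near the 4 percent average the authors report for high-inflation countries.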
A country with a high seigniorage-to-GDP ratio might
be more tempted to generate that revenue when faced with
a budget deficit. That’s what Fischer and his co-authors
discovered. First, they found that high-inflation countries
tended to rely more on seigniorage to help finance government spending. The ratio averaged about 4 percent in
high-inflation countries and 1.5 percent in low-inflation
ones. Next, they found that a worsening fiscal balance
is more likely to be accompanied by an increase in
seigniorage in high-inflation countries than in the low-inflation
ones. A 10 percentage point increase in the budget
deficit as a share of GDP is associated with, on average, a
4.2 percentage point increase in seigniorage as a share of
GDP. In low-inflation countries, however, there was no
significant link.

[Chart: U.S. Federal Publicly Held Debt as a Percentage of All U.S. Nonfinancial Debt, 1959-2007, in percent. NOTE: For years 1959 to 1990, data is monthly; for all other years, data is annual.]
The experience of high and erratic inflation in the 1970s
in the United States taught Fed policymakers the importance of price stability. The 1980s proved that the Fed could
take the necessary steps to tame inflation. The credibility of
the Fed as an institution is essential to maintaining price
stability. The fact that seigniorage revenue is a very small
portion of the U.S. government’s revenue stream may
merely be secondary to the fact that policymakers have a
much better sense of what works and what doesn’t in terms
of monetary policy. But keeping the lessons of the past 40
years in mind will be vital to making sure that U.S. budget
deficits remain noninflationary.

Deficits and Interest Rates
A debate that has yet to be resolved is whether deficits can
influence interest rates. Like many debates among economists, the different conclusions rest on the assumptions
made and models used.
One type of model assumes that there is a “crowding out”
of investment capital. When a budget deficit is present,
more investment capital is swallowed up by Treasury bonds
relative to a scenario in which a deficit is lower or nonexistent. This diversion of private savings that would otherwise
go to investment makes the remaining available capital more
valuable. That drives up the rate of return necessary for competing investment options (including Treasury bills) to
remain attractive. Hence, a rise in interest rates.
This is the main story told in a few papers co-authored by
Peter Orszag, formerly of the Brookings Institution and
currently the director of the U.S. Office of Management


and Budget. For example, a widely cited 2004 study he
co-authored with Brookings colleague William Gale comes
to the general conclusion that deficits do raise interest rates.
The estimates they arrive at suggest that the strongest
effects pertain mainly to anticipated future deficits: Every
1 percent increase in the projected budget deficit raises long-term interest rates by 25 to 35 basis points.
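Taken at face value, the estimate is straightforward to apply. A minimal sketch, assuming a hypothetical three-percentage-point rise in projected deficits:

```python
# Back-of-the-envelope use of the Gale-Orszag estimate: each 1 percentage
# point rise in the projected deficit-to-GDP ratio adds roughly 25 to 35
# basis points to long-term interest rates. The scenario is hypothetical.

def rate_impact_bps(deficit_change_pp: float) -> tuple[float, float]:
    """Implied range of long-term rate increases, in basis points."""
    return 25 * deficit_change_pp, 35 * deficit_change_pp

# Suppose projected deficits rise from 3 percent to 6 percent of GDP.
low, high = rate_impact_bps(6.0 - 3.0)
print(f"Implied rise in long-term rates: {low:.0f} to {high:.0f} basis points")
```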
Another element that bears on whether deficits affect
the conversion of available savings into investment capital
also happens to be one of the most controversial. It comes
from the assumptions made about how people in the present
view deficits relative to their (or their children’s) expected
income in the future. The notion of “Ricardian equivalence”
— advanced by Robert Barro of Harvard University and
based on an insight from the early 19th century economist
David Ricardo — is the phenomenon that, when faced
with the knowledge that the federal deficit will grow, people
today will save more to account for the fact that they or
their children will face higher taxes in the future to pay
off the debt. As Michael Pakko, an economist at the St. Louis
Fed, explains, under the assumptions of “a closed economy
with rational, forward-looking consumers, Ricardian equivalence suggests that deficits have no effect at all.” The money
borrowed from the public by the government is exactly offset
by new savings.
The logical extension of this idea is that interest rates
wouldn’t have to move to equilibrate capital markets as they
would in a world where the crowding out occurred. Yet,
when economists have set out to identify episodes of
Ricardian equivalence, they have had trouble finding them.
Martin Feldstein of Harvard University has suggested that
the planned bequests that underlie the logic of the phenomenon aren’t all that common. That shouldn’t be surprising,
he argued in a 2004 speech, “in an economy in which
economic growth raises the incomes of future generations so
that even an altruistic parent sees no need to reduce his own
consumption in order to raise the consumption of his adult
children after he has died.”
Although the conditions under which Ricardian equivalence holds are quite restrictive, some economists maintain
that it is a useful baseline against which to measure the
effect of deficit finance on the economy. During the past 25
years, many studies have arrived at the conclusion that there
doesn’t seem to be much connection between interest rate
movements and debt over the long term. In an influential
study, Eric Engen of the Federal Reserve Board and R. Glenn
Hubbard of Columbia University argue that a better way of
viewing the matter isn’t to try to find correlations with year-to-year deficits. Instead, the level of government debt as a
whole is the factor that has the best chance of influencing
interest rates. Even then they find a much smaller effect,
an increase of two to three basis points for every 1 percent
increase in federal debt as a percentage of GDP.
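Put side by side, the two rules of thumb imply very different magnitudes. This sketch applies both to an invented scenario; the sizes of the changes are assumptions, not figures from either paper:

```python
# Compare the two estimates cited in the text, under invented changes.
# Gale-Orszag: 25-35 bps per 1 pp rise in the projected deficit/GDP ratio.
# Engen-Hubbard: 2-3 bps per 1 pp rise in the debt/GDP ratio.

deficit_rise_pp = 3.0  # hypothetical: projected deficit up 3 pp of GDP
debt_rise_pp = 20.0    # hypothetical: debt/GDP ratio up 20 pp

flow_low, flow_high = 25 * deficit_rise_pp, 35 * deficit_rise_pp
stock_low, stock_high = 2 * debt_rise_pp, 3 * debt_rise_pp

print(f"Deficit-based (flow) estimate: {flow_low:.0f}-{flow_high:.0f} bps")
print(f"Debt-based (stock) estimate:   {stock_low:.0f}-{stock_high:.0f} bps")
```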
There are a number of reasons this result might strike
someone as unsurprising even if Ricardian equivalence
isn’t assumed. A wide variety of factors can influence the

determination of interest rates and it is difficult to empirically tease out exactly which interest rate movements are
related to increasing debt levels and which are not.
Additionally, the debt incurred by the federal government
over the past 50 years has been consistently smaller than
the aggregate debt incurred by businesses, households, and
state and local governments.
Another factor that has renewed skepticism about the
effect of deficits on interest rates is the volume of capital
from foreign trading partners that has flowed into the country, particularly from those countries with which the United
States has a trade deficit, such as Japan and China. As Pakko
notes, “the demand for U.S. Treasury securities by foreigners
is likely to have mitigated upward pressure on interest rates
that might otherwise have been observed.”

Are All Deficits Created Equal?
None of the research so far is meant to suggest that debt and
deficits can be run up indefinitely without consequence.
As Feldstein argues, for instance, seeing little reaction by
interest rates to deficits shouldn’t imply that deficits don’t
reduce national savings. Instead, he argues that the capital
inflow from abroad is evidence that deficits can lower
savings rates in the United States. A country with “a low
saving rate imports capital,” he notes, and that’s what has
happened. He concludes that deficits “reduce national
saving and capital formation. That lowers the growth rate
for a long period of time and permanently lowers the level of
real income and the real standard of living.”
Part of this argument depends on what creates the deficit
in the first place. For example, small deficits that are the
result of business cycles are generally not damaging.
Revenues dry up while spending remains constant. The
stabilizing effect these sorts of deficits may have on the
economy may even be desirable.
What many textbook models seem to miss is how the
revenue stream that can pay off the debt is structured. Some
economists have pointed out that the current tax code is
heavily biased against capital formation. Raising taxes in
their current form to cover budget shortfalls may be quite
damaging if the deficits are large. The adverse effect that
deficits may have, argues Feldstein, “is reinforced by the
deadweight loss that results from the need to raise substantial
amounts of revenue to service the national debt.”
That deadweight loss — that is, the investments forgone
because of how the tax system is structured — can be
exacerbated further by the tax code’s penalization of capital
formation relative to consumption.
Of greater consequence than today’s deficits are the permanent structural deficits that may persist and grow over
time. The terms popularly used to discuss budget deficits are
simply cash-flow identities for the near term: Count the
money in and the money out and find the difference. This
operation doesn’t account for the assets on the federal books
nor does it account for the future liabilities of the benefits
promised to retirees through Social Security, Medicare, and
other entitlement programs. These systems are considered
pay-as-you-go programs in which benefits are financed by
current-year taxation. Over time, however, the demographic
reality is that the tax base will shrink relative to the number
of retirees.
The gap between estimated tax collections and the
benefits to be paid, in present value terms, is enormous —
much larger, in fact, than the current federal debt of about
$13 trillion today. Economist Laurence Kotlikoff of Boston
University estimates that the total unfunded liabilities of the
federal government are in excess of $70 trillion today. It is
these much larger dollar amounts that have many economists worried. These numbers may indeed be large enough
to spur future macroeconomic effects of the sort that some
have feared since the 1980s.
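The phrase “in present value terms” just means that projected future shortfalls are discounted back to today’s dollars. A minimal sketch, with an invented shortfall path and discount rate rather than Kotlikoff’s actual projections:

```python
# Discounting a stream of projected entitlement shortfalls to the present.
# The shortfall path, growth rate, discount rate, and horizon are all
# invented for illustration; they are not Kotlikoff's figures.

DISCOUNT_RATE = 0.03  # assumed real discount rate
YEARS = 75            # a horizon often used for entitlement projections

# Hypothetical annual gap between promised benefits and dedicated taxes:
# $1 trillion in year one, growing 2 percent a year in real terms.
shortfalls = [1.0e12 * 1.02**t for t in range(YEARS)]

present_value = sum(s / (1 + DISCOUNT_RATE)**t
                    for t, s in enumerate(shortfalls))
print(f"Present value of shortfalls: ${present_value / 1e12:.1f} trillion")
```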
These larger deficits in entitlement programs can be
viewed from this perspective as a byproduct of an institutional problem that requires a structural solution. But it
remains to be seen what form that change will take and
when. Most deficits to this point haven’t been large enough
to prompt policy action, except on the rare occasion when
the Social Security trust fund was on the verge of falling into
deficit in the early 1980s and both the payroll tax and the
retirement age were raised to remedy the problem.
How policymakers will deal with the threats posed by
these unfunded liabilities remains uncertain. Until that
time, economists have once again picked up a debate over
the theoretical models and empirical analysis that is likely to
provide a useful framework to weigh policy options when
the demand for structural change finally materializes. RF

READINGS
Engen, Eric M., and R. Glenn Hubbard. “Federal Government
Debt and Interest Rates.” National Bureau of Economic Research
Working Paper No. 10681, August 2004.
Feldstein, Martin. “Budget Deficits and National Debt.” L.K. Jha
Memorial Lecture to the Reserve Bank of India, Jan. 12, 2004.
Gale, William G., and Peter R. Orszag. “Budget Deficits, National
Savings, and Interest Rates.” Brookings Panel on Economic
Activity, September 2004.

Pakko, Michael. “Deficits, Debt, and Looming Disaster: Reform of
Entitlement Programs May Be the Only Hope.” Federal Reserve
Bank of St. Louis Regional Economist, January 2009, vol. 17, no. 1,
pp. 4-9.
Sill, Keith. “Do Budget Deficits Cause Inflation?” Federal Reserve
Bank of Philadelphia Business Review, Third Quarter 2005,
pp. 26-33.


Product recalls yield mixed effects on firms
BY BETTY JOYCE NASH

For customers, product defects can create inconvenience at best and cause injury or death at worst. Ensuing recalls also can wreak reputational and sales havoc on firms and sometimes even competitors as the market accounts for information about faulty products. Potential fallout has escalated as the supply chain has gone global and extended the product-recall reach.

A high-profile example involved the 2007 recall of 276 types of toys and other children’s products, mostly due to lead-based paint. Parts had been supplied by a multitude of Chinese manufacturers, and toys were sold under brand names in the United States. In another case, a 33-year-old family-run Virginia firm sought bankruptcy after salmonella, traced to peanuts used in foods worldwide, was linked to sickness and several deaths.

Firms can and do survive product recalls, but the direct costs of severe recalls can be high. Indirect costs may in some cases exceed direct costs. Less severe recalls may cost very little. Firms may suffer regulatory fines — as in the recent $16.4 million levied on Toyota by the National Highway Traffic Safety Administration (NHTSA) — but are most likely punished by the market in a severe recall. Firms may suffer market share and stock value declines after demand plunges. Margins can shrink if a manufacturer slashes prices to spur sales. For instance, Toyota drove April automobile sales by flooding the market with buyer incentives, a sign of fear about the extent of a recall’s damage to its bottom line and reputation, says automotive economist George Hoffer of Virginia Commonwealth University. Recalls can tarnish reputations.

Market response is important, and economists have tried to make sense of how direct and indirect costs add up after a recall. It’s complicated to unravel the array of factors at play, but market responses do generally provide incentives for firms to make safe products. These days, markets can respond more quickly than ever to product recalls, though long-term effects appear mixed in empirical studies.

Reputation on the Line
Research has confirmed the benefit of a good reputation in
the marketplace. Using the definition of reputation to mean
the “consumer’s subjective evaluation of the perceived
quality” of the producer, management professors Pamela
Haunschild of the University of Texas-Austin and Mooweon
Rhee of the University of Hawaii studied how the reputation of an automaker affects market share in response to
recalls.
High-reputation firms enjoy lower costs, can charge
higher prices, and can access capital more easily. They profit
from better sales and status, and that serves as some protection against competitors and new market entrants. These
assets also translate into greater survival rates and better
financial performance. “A positive reputation is also important to a firm’s competitive advantage because it is a positive
signal to potential buyers and suppliers, increasing their
willingness to contract with a firm,” the authors write.
A good reputation naturally creates expectations of
quality among consumers. The market differentiates
between high-quality, high-priced products and low-quality,
low-priced products, with buyers expecting less from
mediocre products. That means missteps in quality among
high-reputation firms violate consumer expectations to a
greater degree and could prompt some brand switching.
Haunschild and Rhee used official product recall information from NHTSA. (While nearly all recalls are
“voluntary,” the law requires that manufacturers conform to
standards. When they find defects, they’re obliged to inform
NHTSA within five days and notify customers.) To explore
how pre-recall reputation influences the impact of recalls, the
authors used auto industry data from 1975 to 1999, and the
results were published in 2006. “The results were pretty
clear,” Haunschild says. “High-reputation firms suffer more
than low-reputation firms.” The authors also investigated
substitution effects and found that among more unique
products, recall impacts were lessened because “consumers
can’t just go find another alternative.”
With the instantaneous information flow via the
Internet, reputation effects could be greater. “For the high-reputation automakers, my sense is, and we see it with
Toyota, there is more of a penalty,” she says. Studies indicate
consumers may refresh expectations after learning of
defects and that may prompt substitute purchases.
Haunschild and Rhee also investigated the possibility
that high-reputation firms suffer stiffer market penalty
because they get more media attention. The authors
counted news articles at the time of recalls of the highest-reputation firm and the lowest — at the time those were
Lexus and Hyundai, respectively. Again, results were unambiguous. “When Lexus had a recall, there were many more
articles about it than when Hyundai did,” Haunschild says.
Recalls get more publicity when firms are well-known for
quality and when the recall affects many people.

Effects on Demand for Cars, Toys, Food
Product recalls can slow sales, and sometimes consumers are
even reluctant to buy from rival firms producing products
within the same category. Automotive recalls date to 1966
and the birth of NHTSA in the wake of the success of
consumer advocate Ralph Nader’s 1965 book, Unsafe at Any
Speed. That first year, manufacturers issued 58 recalls, affecting 982,823 vehicles. Recalled vehicle numbers have varied
over the past decade, but the general trend indicates numbers are rising. In 2008, NHTSA announced 22.5 million
vehicles in 781 recalls, but in 2009 the numbers fell to 570
recalls, affecting 17.8 million vehicles.
In years past, unbiased information about product
quality was generally unavailable, certainly compared to the
plethora of independent sources available today. Back then,
consumers may have used recalls as a proxy for quality,
according to economists Hoffer and his co-authors, Steven
Crafton, formerly of George Mason University, and Robert
Reilly of Virginia Commonwealth University. In a 1981 paper,
they researched effects on demand for specific car models
recalled, on models of the same make, and on the demand
for similar models made by competitors (substitutes). The
authors categorized recalls by severity, using data from
NHTSA.
“What we found was that the market responded to a
severe recall in the month after the recall,” Hoffer says.

“It did not respond to more minor recalls.” While a severe
recall affected demand of the model recalled, it did not
affect other lines within the same car make. In particular,
the Ford Pinto recall was found to affect not only Pinto
but competitors’ similar models. Consumers apparently
inferred problems with similar-size models, regardless of the
company of manufacture, according to the authors.
Another way the market can penalize firms is through
equity response. Findings on shareholder wealth effects are
mixed, however. Early work by economists Gregg Jarrell,
currently of the University of Rochester, and Sam Peltzman
of the University of Chicago in 1985 found effects greater
than the direct costs of an automotive recall. Hoffer and his
co-authors found no significant effects on auto firms’ shareholders or on recalled firms’ competitors.
More recent studies find that the stock market responds
quickly to certain product defects, especially severe ones.
For example, a recall on defective heaters cost shareholders
less than an airbag recall. Economist Nicholas Rupp of East
Carolina University found certain types of recalls caused
significant shareholder loss, exceeding direct costs. “One of
the conclusions I draw is that effects are limited unless
they’re persistent and serious recalls, sometimes resulting in
injury or death or in cases where the media piles on,” he says.
Rupp measured the dollar value shareholders lost under
certain recall characteristics, in order to identify attributes
that cause significant losses. Particularly costly, he notes, are
recalls of new makes and models “where consumers don’t
have much information and then suddenly they get this
news.” Minor recalls of heaters, defrosters, or air-conditioning units were not costly whereas airbag recalls were. Airbag
recalls, in 1983 dollars, cost between $136 million and
$162 million in equity losses, he estimated. Highly rated
companies — those with AAA bond ratings — had the most to lose from a recall announcement.
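Studies like Rupp’s typically use an event-study method: compare the firm’s actual stock returns around the recall announcement with what a market model would have predicted, then scale the gap by the firm’s equity value. A minimal sketch with invented numbers, not Rupp’s data or code:

```python
# Event-study sketch of shareholder losses around a recall announcement.
# Returns, model parameters, and market cap are all hypothetical.

firm_returns =   [0.001, -0.002, -0.045, -0.012]  # recall window (daily)
market_returns = [0.002, -0.001, -0.003,  0.004]
alpha, beta = 0.0002, 1.1  # market-model parameters, estimated pre-event
market_cap = 8e9           # firm equity value before the event, dollars

# Abnormal return = actual return minus the market model's prediction.
abnormal = [r - (alpha + beta * m)
            for r, m in zip(firm_returns, market_returns)]
car = sum(abnormal)  # cumulative abnormal return over the window

print(f"Cumulative abnormal return: {car:.2%}")
print(f"Implied shareholder loss: ${-car * market_cap / 1e6:,.0f} million")
```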

The Private Component of Product Safety Testing
Before consumers became more sensitive to product safety,
the knowledge gap between the buying public and product
makers loomed large. That’s when Underwriters Laboratory
(UL) got started. UL today dominates the independent testing market, with 64 laboratory, testing, and certification facilities
that serve customers in 98 countries. Founded in 1894 by
an electrical engineer, UL first catered to insurance firms
wanting to gauge fire risks associated with new electric
appliances. UL developed testing for the hazards, and from
there, the product list grew.
Today, 20 billion UL-approved labels go on 72,000 manufacturers’ products annually. Getting UL certification is
voluntary, for the most part, and procedures and standards
remain unregulated. In some cases, government testing
standards may apply, and UL also has played a large role in
promulgating some of the standards.
In the 1970s, Underwriters Laboratory investigated
10,000 incidents of television fires, and developed federal

television standards adopted and still used by the
Consumer Product Safety Commission (CPSC). UL conducts quarterly product tests at factories to monitor quality,
and companies pay for the tests and the use of the UL label,
now a standard symbol of quality in the marketplace.
As recall numbers have grown, so has this private market
for raters and certifiers. Such groups range from published
“lists” to private labs like UL. Many are authorized to
inform and certify products for government agencies such
as the CPSC and the Occupational Safety and Health
Administration (OSHA).
In other fields, bond agencies rate issuers, health-care
raters grade hospitals, and Consumer Reports magazine and J.D.
Power and Associates rate products and services.
In 1988 OSHA established a list of recognized private
laboratories to certify and test the products that must
conform to the agency’s standards. Today, 15 private labs are
recognized on OSHA’s roster.
— BETTY JOYCE NASH

Economists Suresh Govindaraj and Bikki Jaggi of Rutgers
evaluated in 2004 the market reaction in a specific case, the
recall of the brand of tires linked to Ford Explorer rollovers.
Market losses again exceeded direct costs for this firm. The
authors also found that tire competitors gained market
value, “probably because their products were substitutes for
the products affected by recall.”
Another study documents how consumer perceptions
produce these spillover effects to other products. The 2007
toy recall that covered items containing lead paint
represented an 80 percent increase in the number of recalled
kids’ toys over a two-year period. Economists found industry-wide effects. Even infant/preschool toy manufacturers
without recalled products suffered a 25 percent decline in
sales. Overall holiday sales for similar products by manufacturers named in the recalls fell by about 30 percent,
compared to other products sold by the same makers.
Efforts to observe how people make decisions and
inferences can prove useful to policymakers, according to
one of the paper’s co-authors, economist Seth Freedman, a
doctoral candidate at the University of Maryland. After the
toy recalls, Consumer Product Safety Commission laws were
strengthened. “If consumers punish the manufacturer
enough, then the manufacturer will have incentive to
produce safe toys,” he says. “But if consumers can’t direct
the punishment to a specific target, then the manufacturer
may have incentive to produce at lower quality.” He was
referring to the multiple suppliers of toy parts to a wide
range of companies. Since people didn’t know exactly which
toys were made by suppliers using lead paint, purchases of
toys that were in the recalled category declined generally.
Uncertainties about market response remain. For example, toy sales among nonrecalled categories didn't suffer,
even for those firms that were hit by the recall. But Freedman
points out that it’s unknown whether consumer preference
or the increased advertising and promotion by the company
facing recalls were responsible. Freedman and his co-authors
also found capital market losses at the time of the recalls but
could not associate the losses with particular recalls.

Recent research has investigated spillover effects in the
pharmaceutical industry. John Cawley of Cornell University
and John Rizzo of Stony Brook University published a
National Bureau of Economic Research working paper in
2005 using the withdrawal of a drug combination (fen-phen)
from the market. The drug was withdrawn in 1997 for potentially fatal side effects. The paper found that competitor
drugs benefited from that withdrawal.
Food recalls may represent the greatest threat for firms
caught in the growing web of the supply chain when things
go wrong. Those can be especially dangerous and costly,
and may explain why food companies account for 75 percent
to 90 percent of product recall insurance coverage, introduced in the late 1980s after Tylenol tampering. Demand for
such insurance has been growing at a rate of about 30 percent
a year, according to insurers who offer these types of policies,
though most food companies go without it because it's expensive.
The insurance can cover direct and indirect losses.
While the costs of auto and drug recalls have been investigated, there's less research about product recalls of food
despite recent illness outbreaks involving hamburgers, fruit
juices, prepared meats, fruits, and vegetables. Agricultural
economists Victoria Salin of Texas A&M University and
Neal Hooker of Ohio State University investigated stock
market reaction to four food recall events of microbiological
contamination. Results varied by product, company size,
scope, and severity. Returns to shareholders in some cases
fell, but stock market reaction could not be detected in
other incidents.
The empirical evidence on how recalls affect firms is
hard to assemble and decipher, given the wide range of
products, severities, timing, and firm reputations.
While less-severe recalls may be nonevents for
firms, one certainty stands out: In the case of a major defect
that causes illness or death, even a reputable firm will be
penalized not only by regulators but also by the hand of the
market.
“The market is efficient at meting out justice,” Rupp says.
“The market will punish and reward accordingly.”
RF

READINGS
Crafton, Steven, George Hoffer, and Robert Reilly. “Testing the
Impact of Recalls on the Demand for Automobiles.” Economic
Inquiry, October 1981, vol. 19, no. 4, pp. 694-703.
Freedman, Seth, Melissa Kearney, and Mara Lederman. “Product
Recalls, Imperfect Information, and Spillover Effects: Lessons
from the Consumer Response to the 2007 Toy Recalls.” NBER
Working Paper No. 15183, July 2009.


Rhee, Mooweon, and Pamela Haunschild. “The Liability of Good
Reputation: A Study of Product Recalls in the U.S. Automobile
Industry.” Organization Science, January-February 2006, vol. 17, no. 1,
pp. 101-117.
Rupp, Nicholas. “The Attributes of a Costly Recall: Evidence
from the Automotive Industry.” Review of Industrial Organization,
August 2004, vol. 25, no. 1, pp. 21-44.

BY RENEE COURTOIS

Ten years ago the United States declared that widespread transmission of the measles — one of the world’s most infectious diseases — had been eliminated. No small feat considering that 50 years ago virtually everyone in the United States got the disease before the age of 20. As many as 4 million Americans contracted the disease each year; 400 or 500 died, while about 48,000 were hospitalized and 1,000 left with chronic disabilities like brain damage or deafness.

Vaccinations are at the root of this dramatic improvement. Nowadays, most years see about five dozen cases of the measles in the United States. In 2008, the year-end total of a mere 140 cases was the worst in years. As with all modern-day outbreaks, the disease was imported from foreign visitors to the United States or from U.S. residents who traveled abroad and acquired measles in other countries experiencing outbreaks. Once in the United States, 90 percent of infected people had not received the measles vaccination or their vaccination status was unknown, according to the Centers for Disease Control and Prevention.

Marked Improvement
New cases of many diseases have fallen dramatically since vaccines were introduced, though experts note that some diseases, like pertussis, are on the rise.

                          Measles*   Mumps*   Pertussis* (Whooping Cough)
1950                      211.01     N/A      79.82
1960                      245.42     N/A      8.23
1970                      23.23      55.55    2.08
1980                      5.96       3.86     0.76
1990                      11.17      2.17     1.84
2000                      0.03       0.13     2.88
2009                      0.02       0.65     4.40
Date Vaccine Introduced   1963       1967     1949

*Per 100,000 people in population
NOTE: A national measles outbreak spanning 1989-1991 boosted new case numbers for 1990.
SOURCE: Centers for Disease Control and Prevention. Data for 2009 calculated by author using CDC and Census data.

Though small in relative terms, recent outbreaks are a reminder that containment of vaccine-preventable diseases depends critically on the number of people in the population who choose to get vaccinated. If enough people are immunized, they collectively create “herd immunity” — with sufficiently few susceptible people in the population, the disease is unable to spread, protecting those who are not vaccinated by medical necessity, choice, or because they are too young. That rate is determined by a mathematical formula based on factors including the vaccine’s rate of failure and how easily the disease is transmitted. Professor Matthew Davis at the University of Michigan says the rule of thumb is that it takes about an 80 percent vaccination rate against a disease to provide herd immunity to the other 20 percent. But for a highly infectious disease like the measles — which will infect nine of 10 susceptible people who come into contact with it — as much as 95 percent of the population must be vaccinated to provide herd immunity.
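The formula Davis alludes to has a standard textbook form: with a basic reproduction number R0 (the number of people one infected person would infect in a fully susceptible population) and vaccine effectiveness E, coverage must reach (1 - 1/R0)/E. The parameter values below are illustrative assumptions chosen to echo the 80 percent and 95 percent rules of thumb, not CDC figures:

```python
# Textbook herd-immunity threshold; parameter values are illustrative.

def herd_immunity_threshold(r0: float, effectiveness: float) -> float:
    """Fraction of the population that must be vaccinated."""
    return (1 - 1 / r0) / effectiveness

# A moderately contagious disease vs. a highly contagious, measles-like one.
print(f"R0 = 4:  {herd_immunity_threshold(4, 0.95):.0%} coverage needed")
print(f"R0 = 10: {herd_immunity_threshold(10, 0.95):.0%} coverage needed")
```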
About 67 percent of children aged 19 to 35 months receive the broadest set of vaccinations recommended by the CDC, according to the latest data available. Though below the 80 percent mark, herd immunity is not necessarily threatened since vaccination rates are much higher for each individual disease. For example, Idaho, the state currently with the lowest total vaccination rate, still enjoys coverage above 80 percent for most vaccines. By and large, it is the case that most children receive most vaccines.

But that’s for the nation as a whole; there are pockets of the country — sometimes as narrow as the community or school level, for which data are scarce — with a relatively higher rate of unvaccinated individuals. “That suggests there are areas that are more at risk of getting these vaccine-preventable diseases than others,” says Davis. In some schools, as many as 15 percent to 20 percent of students are unvaccinated. Modern measles outbreaks tend to be concentrated in unvaccinated populations, such as members of the same religious congregation or young classmates in communities where a culture of natural medicine is prominent.

The reasons behind widely different vaccination rates across regions are not entirely understood by the health care community. One clear part of the explanation is that requirements differ dramatically across states (vaccine recommendations can be enforced only at the state level). According to the Centers for Disease Control, all states require vaccinations against diphtheria, tetanus, pertussis (whooping cough), polio, and measles prior to kindergarten entrance through 12th grade. States have mixed vaccination requirements for other diseases, such as mumps (47 states plus Washington, D.C.) and varicella, or chickenpox (44 states plus D.C.), among others.

But all states allow for exemptions that permit a child to attend public school unvaccinated. Medical exemptions, such as an allergy to a component of the vaccine, are allowed in all states, though well under 1 percent of children fall into that category. Religious exemptions are allowed by 48 states and Washington, D.C. — West Virginia and Mississippi are the exceptions — and 20 states allow philosophical exemptions.

The ease of being granted an exemption also is a factor. Some states
require only a signature on a form, whereas others require
notarized personal statements, annual reviews, and input
from local health officials. A 2006 study in the Journal of the
American Medical Association (JAMA) found that exemptions
doubled between 1991 and 2004 in states with a relatively
easy exemption process, with no obvious increase occurring
in states with a harder exemption process. The study found
that states with a stricter exemption process had lower rates
of exemptions and, consequently, lower incidence of the
diseases in question.

The Costs and Benefits of Vaccinations
Vaccines are heralded as one of the single greatest public
health triumphs the world has seen. Thanks to vaccines,
deadly and debilitating diseases have been kept at bay,
virtually wiping out the incidence of illnesses such as
mumps, polio, and measles. This has freed health
professionals to focus on chronic diseases like cancer. The
demonstrated effectiveness of vaccines in preventing
disease clearly provides an individual with an incentive to
get vaccinated.
Vaccines work by injecting the body with a mild or dead
form of a virus, providing the immune system the opportunity to figure out how to attack it. The immune system has a
memory: If ever again confronted with the disease, it will
recall the blueprint to the antibodies. Edward Jenner discovered the method in the 18th century when he observed that
milkmaids rarely contracted the deadly smallpox disease,
which he hypothesized was because they contracted the
less-virulent version that afflicted cows. Their bodies were
able to fend off cowpox and establish immunity to smallpox
in the process.
Despite proven benefits of vaccinations, some parents
choose not to vaccinate their children. One reason is that
vaccines are a victim of their own success: As diseases like
measles and polio decline in numbers or are eradicated, so
dies the memory and fear of them. And in many states the
exemption process is less burdensome than actually getting
the many required rounds of vaccinations, which some
parents view as excessive.
Financial costs are an impediment, sometimes leaving
areas with many low-income families vulnerable. Vaccines
are funded through a mixture of private and public sources.
For those with health insurance, differing state regulations
mean insurance coverage of vaccines varies. Few state regulations mandate national recommendations as a guide,
though, and the skyrocketing expense of the full recommended regimen of vaccines increasingly means that many
are not covered by insurance.
Public assistance is available for children not covered or
underinsured. The U.S. government under President
Clinton enacted the Vaccines for Children program that
subsidizes child vaccinations for the vast majority of
children whose private insurance doesn’t cover them.
A growing number of states also have “universal purchase”
programs in which the state purchases and distributes


vaccines to both public and private immunization providers
at lower prices.
Despite such steps, financial barriers persist. Families
often don’t know they’re covered by government programs,
according to Davis, and that has limited their success.
But parental fear of vaccine safety is by far the largest
stated reason for avoiding vaccinations. Nearly one in eight
parents refuse at least one recommended vaccine, according
to Davis and coauthors in a 2010 study, especially newer
vaccines for chicken pox and human papillomavirus (HPV).
One in five believes some vaccines can cause autism in
otherwise healthy children.
Interestingly, it’s not that such parents think vaccinations are ineffective; even vaccine refusers overwhelmingly
believe vaccines are able to prevent disease, according to
Davis and his coauthors. It’s that they think vaccinations
may be more harmful than the diseases they prevent, given
the low probability of catching them.
Experts say the risks from vaccines are small. Mild reactions are common — about one in four children experience
low-grade fever following the diphtheria/tetanus/pertussis
(DTaP) shot, for example — but severe reactions are very
rare. One in 1 million children will experience seizures or
brain damage after the DTaP shot. Severe effects are so rare
that it is hard to know if they’re caused by the vaccine,
according to the CDC.
Experts view the parental fears of such small risks as a
major threat to public health since they have led to
decreased vaccination rates and subsequent outbreaks in
other countries. After a study linking the MMR vaccine to
autism — a study that was discredited and retracted earlier
this year — was published in a British journal in 1998, MMR
vaccination rates in England dropped over 10 percentage
points in six years. England saw 56 measles cases in 1998, and
by 2008 there were 1,370. A similar story occurred in the
northern region of Nigeria after people shunned the polio
vaccine out of concerns about AIDS, among other fears. Following
a rapid resurgence of polio in that country, experts say
immunization against polio in Nigeria is in danger of failing.
The lesson is that as immunization rates fall, there can be
a tipping point at which even the vaccinated face increased
risk since no vaccine is perfectly effective, and diseases start
to dramatically resurge. But where that tipping point is,
experts aren’t sure.

Guiding Vaccination Policy
In the matter of vaccinations, there is a natural tension
between self-interest and public welfare. How should
policymakers weigh public health with private freedom
concerning health choices? Researcher Alison Galvani
of Yale University and various colleagues have developed
game theory models in which an individual’s choice
depends on the strategies chosen by others. They used
these models to analyze the vaccination rates that could
prevail under a purely voluntary vaccination policy regime
compared to vaccination rates that would maximize


the welfare of the population as a whole.
If the decision to vaccinate were left purely up to self-interest, individuals (and parents, in the case of a child)
would decide whether to vaccinate based on their perception of the costs and benefits of doing so. But if everyone
else is immune, a vaccine poses little individual benefit.
For individuals who view vaccines as especially risky or the
risk of disease as low, their best choice will be to go without.
Therefore, in the Nash equilibrium — a game theory outcome in which no individuals can improve their lot given the
strategies chosen by others — the total vaccination rate is
likely to be lower than socially optimal.
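A stylized version of that logic fits in a few lines. The sketch below is written in the spirit of these models, not Galvani’s actual model, and every parameter value is invented:

```python
# Stylized voluntary-vaccination game. Assumption: an unvaccinated
# person's infection risk falls linearly to zero as coverage p
# approaches the herd-immunity threshold. All parameters are invented.

P_CRIT = 0.95        # herd-immunity threshold for the disease
COST_VACCINE = 1.0   # perceived cost of vaccinating
COST_DISEASE = 20.0  # perceived cost of catching the disease

def infection_risk(p: float) -> float:
    """Risk faced by an unvaccinated individual at coverage p."""
    return max(0.0, 1.0 - p / P_CRIT)

# Nash equilibrium: coverage rises until the marginal person is
# indifferent, i.e., COST_VACCINE = infection_risk(p) * COST_DISEASE.
p_nash = P_CRIT * (1 - COST_VACCINE / COST_DISEASE)

print(f"Herd-immunity threshold:   {P_CRIT:.0%}")
print(f"Voluntary (Nash) coverage: {p_nash:.0%}")  # below the threshold
```

Because the vaccine carries any positive perceived cost, voluntary coverage stops short of the threshold — the free-rider gap described below.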
The outcome, in this case, would be greater illness since
a nonimmunized person is more likely to catch and spread
the disease. This meshes with empirical studies: Several have
found that communities with lower vaccination rates had
higher infection rates even among vaccinated children.
The utilitarian approach is arguably more characteristic
of the vaccination policy we have today: Vaccine mandates
are intended to maximize the welfare of the entire population, at least where disease control is concerned. School
mandates have been by far the most effective way to increase
vaccinations. However, some requirements test the limits
of public tolerance for sacrificing freedom for the greater
good, like the newer adolescent vaccines for sexually transmitted diseases that have proven unsavory to many parents.
Exemptions are a way to modify the utilitarian approach
to allow a greater scope for private preferences. But they
undermine the benefits provided by mandates since exemptions provide an opportunity to “free ride” off the immunity
of the herd, just like in the Nash equilibrium. Those exempted get the benefits of immunity through the herd without
the hassle, financial costs, or perceived risks of vaccination.
Both strategies seem to imply that policy should also
focus on directing private choice toward the optimum;
that is, to bring the Nash and utilitarian outcomes closer
together through strategies that increase voluntary vaccinations. This means understanding people’s decisions not to
vaccinate and improving accurate public information about
the costs, benefits, and administration of vaccinations.
This could be particularly helpful concerning the risks that a
vaccine poses for a given individual, since those fears are one
of the biggest current threats to herd immunity and have led
to reduced vaccine uptake and outbreaks in the past.

[Chart: Fifth District Coverage — Percent of children aged 19 to 35 months receiving all recommended dosages for select vaccines, by jurisdiction (MD, DC, VA, SC, WV, NC, U.S.): All Recommended Vaccines; Diphtheria, Tetanus, Pertussis (4+ doses); Polio (3+ doses); Measles, Mumps, Rubella (1+ doses). SOURCE: Centers for Disease Control and Prevention, 2008 National Immunization Survey]

Research indicates that the people most trusted to convey information about vaccine safety are doctors. So Davis
suggests that any efforts to address the public’s concerns
over vaccine safety have to involve individual physicians to
be effective. There’s risk with any procedure or medication,
he says, but it’s hard to know whether a given individual will
experience side effects as he or she receives something for
the first time. “For some people the vaccine safety concerns
are outweighing the possible benefits in their minds, and
that’s a very important conversation that doctors need to
have with patients and parents.”
If all else fails, Galvani and her colleagues suggest that
policymakers shouldn’t discount appealing to altruism as a
way to increase voluntary vaccinations. Parents aren’t always
conscious that the private vaccination decision has public
consequences, according to Davis. He says parents who are
inclined to refuse vaccines often ask why they should give
the polio vaccine, for example, to their children when
chances are imperceptibly small they’ll catch the disease.
“My answer to them is, ‘Why do you think your child is
not likely to get polio?’ They pretty quickly get to the fact
that their children are protected only because other parents
have vaccinated their children against polio.” No parent, he
says, enjoys realizing their children would be free-riding on
the immunity of other children.
RF
Christina Zajicek contributed to this article.

READINGS
Bauch, Chris T., Alison P. Galvani, and David J.D. Earn. “Group
Interest Versus Self-Interest in Smallpox Vaccination Policy.”
Proceedings of the National Academy of Sciences of the United States of
America. Sept. 2, 2003, vol. 100, no. 18, pp. 10,564-10,567.
Boulier, Bryan L., Tejwant S. Datta, and Robert S. Goldfarb.
“Vaccination Externalities.” B.E. Journal of Economic Analysis and
Policy, May 2007, vol. 7, issue 1, article 23.
Geoffard, Pierre-Yves, and Tomas Philipson. “Disease Eradication:
Private vs. Public Vaccination.” American Economic Review,

March 1997, vol. 87, no. 1, pp. 222-230.
Omer, Saad B., et al. “Vaccine Refusal, Mandatory Immunization,
and the Risks of Vaccine-Preventable Diseases.” New England
Journal of Medicine, May 7, 2009, vol. 360, no. 19, pp. 1981-1988.
Salmon, Daniel A., et al. “Factors Associated with Refusal of
Childhood Vaccines Among Parents of School-Aged Children:
A Case-Control Study.” Archives of Pediatrics and Adolescent
Medicine, May 2005, vol. 159, no. 5, pp. 470-476.


Charitable giving during downturns

BY BETTY JOYCE NASH

Money trouble finally forced the Charleston, S.C., Symphony Orchestra to suspend its 2009-2010 season. Over in Charleston, W.Va., a foundation announced it's running out of money. It will fold this fall after disbursing its remaining $9 million. The Clay Foundation had granted $100 million over its 23-year history.

Charitable contributions nationwide have declined, as in previous downturns. Foundations and other giving sources also are coping with a slide in asset values, affecting their own operations and those of the nonprofits they support.

In 2007, charitable donations had reached a record $314 billion, about 2.3 percent of GDP. The latest available report from the Giving USA Foundation estimates giving in 2009 declined 3.2 percent, after a 5.7 percent decline the previous year. (Giving numbers throughout the article have been adjusted for inflation.)

Yet Americans remain committed to philanthropy and often reallocate gifts, year to year, when money is tight. Giving in some categories increased, according to economist Una Osili, who directs research at the Center on Philanthropy at Indiana University. Some categories of giving that had declined the previous year actually rose. In 2008, for instance, giving for public-society benefit organizations, such as United Way, rose slightly, but declined in 2009 by 4.2 percent. However, giving for human services groups rose 2.7 percent after having declined the previous year.

Philanthropy professionals have been investigating patterns of giving during the downturn to see what they can learn. The general conclusion is that things could be worse. “Giving does recover after recessions,” Osili says. “But it does take some time.”

Recessionary Giving and “Crowding Out”
Giving USA estimates that, in addition to increases in human services
giving, sectors such as health and international aid benefited despite the recession. “This focus on vital needs is
consistent with what historians tell us happened during the
Great Depression,” said Patrick Rooney, executive director
of the Center on Philanthropy, in a press release. Giving
USA Foundation, also affiliated with the center, publishes a
report of the same name annually. It estimates contributions
using Internal Revenue Service tax data on itemized gifts,
government estimates for economic indicators, and data
from other research institutions.
[Photo: United Way volunteers sort shoes at a shelter. The organization and others like it are categorized as public-society benefit groups.]

Individual giving represents about three-fourths of all contributions, and it remained unchanged, in real terms,
after falling by 6.3 percent in 2008. Bequests, however,
plummeted by nearly 24 percent, after falling by about 6 percent the previous year (due to unexpectedly large sums
reported by the Internal Revenue Service for estate tax
returns filed late in 2008). Foundation giving comprises 13
percent of all charitable contributions, and that category
declined by 8.6 percent in 2009.
Religious giving represents the biggest share of all contributions. After increasing by 1.6 percent in 2008, the
category fell slightly, by 0.3 percent, in 2009. The demand
for charity services has expanded during hard times, and the
share of donations to human services in 2009 grew by 2.7
percent after a stunning decline of 16 percent in 2008.
Giving to foundations fell by 7.6 percent, after a whopping 22
percent decline in 2008; gifts to education groups fell again
in 2009 by 3.2 percent. Arts, culture, and humanities sectors
had another 2 percent decline in contributions.
But several categories received more in 2009 compared
to 2008: Donations to environmental and animal organizations grew by 2.7 percent; giving for international aid
increased by 6.6 percent, in real terms; giving for health
causes increased by 4.2 percent.
Because giving is tied to economic health, individual
donors and foundations watch market indices closely as they
plan gifts. Bequests aren’t necessarily timed with overall
market indicators. Corporate giving is tied more closely to
corporate profits than stock market performance.
The Giving USA Foundation has tracked the performance of charitable giving following the Depression. It
found that from 1928 to 1934, itemized charitable giving fell
35 percent in real terms. It reached its 1929 level in 1937, fell
slightly a year later, and exceeded its 1929 level in 1939.
During the Depression, however, foundations like
Rockefeller, Carnegie, and Russell Sage kept giving generously, with Carnegie providing an additional $2 million
in social welfare relief in the early 1930s. Lack of data,
however, makes it unclear whether total foundation giving
rose or fell in the 1920s and 1930s, and information about
how quickly foundation assets recovered in the aggregate is
also scarce. Historian David Hammack of Case Western
Reserve University found in his studies about philanthropy
in the Depression that wealthy donors switched to secular
and away from religious giving.

[Chart: Itemized Individual Charitable Giving Post War (1945-2008), in billions of dollars]

Coping in the Nonprofit Community

At the grass roots, community foundations are feeling the
pain. The Coastal Community Foundation of South
Carolina is one of about 800 local foundations in the United
States; the foundation has about $130 million in assets. This
community foundation manages a collection of funds for
business, individual, or family donors. For instance, the CCF
manages the family fund of low-country native and television talk-show host Stephen Colbert and his Ben & Jerry’s
“Americone Dream Fund.” The fund receives a percentage of
proceeds from the Colbert-named ice cream.
The CCF has worked with the Charleston Symphony for
more than a year to stave off its funding problems, and continues to manage its endowment fund. The foundation
manages some 550 other family or business foundations,
each with its own cause or story.
“The funds together create a large mass so we can
afford to hire investment managers,” says Christine Beddia,
director of marketing and communications. A mark of
this recession, she notes, is that fiscal year 2009-2010
has seen the creation of fewer than 30 new funds.
That compares to 55 established two years ago, in 2007-2008. Future funding may be precarious. Foundations employ formulas based on multiyear averages to disburse grants, and those formulas vary. Beddia expects grant-making may stabilize or decline as those averages incorporate asset-value declines in 2008 and 2009.
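The delay Beddia describes follows mechanically from that kind of formula. Here is a minimal sketch in Python of a trailing-average payout rule, with a hypothetical 5 percent disbursement rate and hypothetical, indexed asset values (none of these figures come from the foundations quoted in this story):

# A grant budget equal to a fixed rate times the average of the
# endowment's value over the prior three years keeps paying out
# near peak levels for a year or two after asset values fall,
# then bottoms out after the market does. All inputs hypothetical.
asset_values = {2006: 95, 2007: 100, 2008: 75, 2009: 65, 2010: 70, 2011: 74}
PAYOUT_RATE = 0.05

for year in range(2009, 2012):
    trailing_avg = sum(asset_values[y] for y in (year - 2, year - 1, year)) / 3
    print(year, round(PAYOUT_RATE * trailing_avg, 2))
# Prints 4.0, 3.5, then 3.48: grants keep sliding into 2011
# even though asset values turned up in 2010.

Under such a rule the spending trough arrives well after the market trough, which matches the delayed trough the Z. Smith Reynolds Foundation describes later in this story.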

Who Gives to What and Why?
The motive for giving falls into two categories: altruism and exchange. An altruist simply wants to help people, expecting nothing in return. Others give
because they want something, say, a tax break or public
recognition.
Americans are generous, and endowments from
organizations founded by wealthy industrialists — Andrew
Carnegie and John D. Rockefeller come to mind — have
continued on page 35

[Figure: Itemized Individual Charitable Giving Post War (1945-2008). Vertical axis: $billions, 0 to 200; horizontal axis: 1945 to 2007. Green bars indicate recession years. Data exclude estimates of amounts given by non-itemizing households, foundations, corporations, and estates. Source: Center on Philanthropy at Indiana University for Giving USA Spotlight; data from Internal Revenue Service/Statistics of Income office.]
Coping in the Nonprofit Community

Foundation grant-making fell in real terms in 2008, by
less than 1 percent nationwide, but 2009 may be worse,
according to the nonprofit Foundation Center. Two-thirds
of foundations surveyed anticipated cuts in the number and
size of grants in 2009, with overall foundation giving expected to slide. Some survey respondents expected to tap
endowment principal.
The Z. Smith Reynolds Foundation in 2009 granted
money based on 2007 fund values, a peak. “So there is a two-year delay between our actual trust value and our spending
capacity,” says executive director Leslie Winner. “In 2011 our
spending capacity will be at its lowest, so our trough is yet to
come.” The annual average value of the trusts has dropped
about one-third. Besides cutting administrative expenses,
the Winston-Salem, N.C.-based foundation has cut back on
multiyear grants. Separately, it has reallocated money into a
coalition of nonprofits working to prevent foreclosures. The
recession has also prompted soul searching. “If we thought
home ownership was a good asset-building strategy in
the past, do we think it will be in the future?” Winner asks.
“We are actively rethinking this.”
Among grantees, foundations have seen layoffs and
mergers to cope with declining revenues. “This is a time
when we’re seeing partnerships,” Osili says. “Nonprofits are
building synergies with the public, government officials, and
other nonprofits.”
The share of donations to human services in 2009 grew by 2.7 percent after a stunning decline of 16 percent in 2008. Giving to foundations fell by 7.6 percent, after a whopping 22 percent decline in 2008; gifts to education groups fell again in 2009, by 3.2 percent. Arts, culture, and humanities sectors saw another 2 percent decline in contributions. But several categories received more in 2009 compared to 2008: Donations to environmental and animal organizations grew by 2.7 percent; giving for international aid increased by 6.6 percent, in real terms; and giving for health causes increased by 4.2 percent.
Because giving is tied to economic health, individual donors and foundations watch market indices closely as they plan gifts. Bequests aren’t necessarily timed with overall market indicators. Corporate giving is tied more closely to corporate profits than to stock market performance.
The Giving USA Foundation has tracked the performance of charitable giving through the Depression era. It found that from 1928 to 1934, itemized charitable giving fell 35 percent in real terms. It reached its 1929 level in 1937, fell slightly a year later, and exceeded its 1929 level in 1939. During the Depression, however, foundations like Rockefeller, Carnegie, and Russell Sage kept giving generously, with Carnegie providing an additional $2 million in social welfare relief in the early 1930s. Lack of data, however, makes it unclear whether total foundation giving rose or fell in the 1920s and 1930s, and information about how quickly foundation assets recovered in the aggregate is also scarce. Historian David Hammack of Case Western Reserve University found in his studies of philanthropy in the Depression that wealthy donors switched to secular and away from religious giving.


There is also some evidence that government spending
can “crowd out” private charitable giving. Jonathan Gruber
of the Massachusetts Institute of Technology and Daniel
Hungerman of the University of Notre Dame found that
charitable church spending fell by 30 percent in response to
New Deal relief spending. That explains the one-third
decline in charitable church activity between 1933 and 1939.
Partial crowd-out was also observed in research by Tom
Garrett of the St. Louis Fed and co-author Russell Rhine of
St. Mary’s College in Maryland. Using data from 1965 to
2003, the authors found that increases in state and local government welfare and education spending did reduce
charitable giving to these categories.
While the Great Depression is fertile ground for the
study of philanthropy, the recession this time around isn’t as
severe. Later downturns provide clues about the future of
giving. After the 1973-1975 recession, individual itemized
giving exceeded its 1973 level in 1979, when giving rose to
$52.7 billion, according to Giving USA. After the 1980 and
1981-1982 recessions, itemized individual contributions rose
consistently, in real terms, even during the slide in the Dow
Jones Industrial Average. That indicates no lag in giving after
that recession.
Foundation giving, though, tells another story. From 1972
through 1975, foundation giving stalled out, and did not
reach 1972 levels again until 1985. After the 1980-1982
slumps, foundation giving also fell before finally growing to
$8.2 billion in 1985. That was 14 years after a previous peak of
$7.9 billion.



TAKING BROADBAND TO THE LIMIT
BY BETTY JOYCE NASH

Fast, reliable Internet access shrinks time and
distance like no predecessor technology. It’s hard to
exaggerate the significance of this “broadband”
service that packs data through lines, over airwaves, or via
satellite at a clip fast enough for a doctor to interpret an
X-ray or monitor a patient’s chronic disease from afar in
real time. A firefighter can download a building plan, in
the heat of the moment, via a mobile device. Broadband
can also bring big businesses to regions that otherwise
might get bypassed.
Most of the people who want broadband in the United
States have it already. But bringing everyone up to speed gets
iffy, especially in remote places, where low subscriber numbers might not justify the cost of deploying wire and fiber.
This “last-mile” problem led the government to wire
segments of the nation with electricity and telephone lines
in the previous century.
Government grants have been spurring investments
in “middle-mile” fiber installation, which will help, but taxpayers can’t fund every last mile. Could the broadband gap
ultimately be closed using wireless configurations, satellite,
and even existing power lines?


The Broadband Advantage
Worldwide, governments want citizens connected via
broadband — it enhances productivity and innovation, and may
cut costs. Economist Robert Litan of the Kauffman
Foundation and the Brookings Institution, for example,
has written about broadband’s potential to deliver health
care and information to the elderly and the disabled.
Remote medical monitoring and two-way communications
between patients and health care providers could delay or
even eliminate the need for institutionalized living.
Broadband would also make it easier for both populations to
work, if they chose.
When people can’t access broadband, it’s due not only to
geography, as in the case of rural residents, but also to sociology, especially as it relates to the elderly, the disabled, minorities, and the poor. Most people who can easily be connected are
connected. Many of those without broadband have decided
against it for a variety of reasons. Thirty-eight percent of
those rural households without broadband, when asked, say


they don’t need it, or they’re not interested. Affordability is
cited by 22 percent of rural nonusers (and, tellingly, 28
percent of urban nonusers). But only 11 percent of rural
households say they don’t use broadband because it’s not
available. About 65 percent of rural households, compared
to 69 percent of urban households, already have Internet use
“at least somewhere.” These numbers come from Digital
Nation, a report published by the U.S. Department of
Commerce based on data collected in October 2009.
So as the above numbers show, it’s not only a last-mile
problem, it’s a “last user” problem. The push for affordable
broadband access in every nook and cranny has been a
stated national goal since 2004. Rural schools, health clinics,
hospitals, and businesses may benefit most from these high-capacity circuits that can improve learning, medical care,
and economic development.
Money from the federal government’s stimulus package
aimed at expanding broadband access nationwide, $7.2
billion in all, is starting to roll into the Fifth District.
A North Carolina nonprofit, MCNC, which runs the North Carolina Research and Education Network, got $28.2 million in
broadband recovery money for middle-mile deployment in
eastern and western parts of the state. The idea is to expand
the optical footprint so it’s faster, more robust, and more
reliable, says Noah Garrett of MCNC. The nonprofit has a
bigger fiber ring project on the drawing board, worth $100
million, if money from other grants comes through. “What
you’ll see with the expansion, the middle-mile, you’re going
to start seeing more households having more affordable
access,” Garrett says. The hope is that commercial providers will install the last mile.
The Federal Communications Commission (FCC)
estimates that it could take another $23.5 billion to bring
every home in the nation up to speed, including about $13
billion to reach the most rural areas. But is it necessary? The
latest FCC report on wireless says 92 percent of the rural
population has at least one mobile broadband provider
already, enabling wireless Internet access via mobile phones
or laptops. Wireless isn’t a perfect wire-line substitute but
may serve rural areas more economically. Each generation of
wireless improves on the last, with fourth-generation (4G)
technology upon us. If speed and customer satisfaction compare favorably to fixed service, wireless technologies such as 4G can bring the cost of closing the broadband
gap to roughly $10 billion.
Mobile wireless has developed in scope and sophistication, and it’s also become more concentrated, with the two
biggest providers, AT&T and Verizon, accounting for 60 percent of subscribers and revenue, according to a May 2010
FCC report to Congress on wireless penetration. Both
firms continue to gain market share. As smart phones and
mobile computing devices proliferate, wireless use grows.
The iPhone, for instance, has driven data traffic on AT&T’s
mobile network up by 5,000 percent between mid-2006 and
mid-2009.

National Network
By 2013, about 90 percent of the nation may have access to
peak download speeds of more than 50 megabits per second,
according to the FCC, compared to the average (actual)
speed of about four megabits per second today. Advertised
and actual speeds depend, however, on infrastructure,
service take-up rates, and patterns of use. If everyone on a
circuit logs on, then speed can slow. When all is said and
done, the FCC’s goal is affordable 100-megabit-per-second
download speeds to 100 million homes by 2020 and one
gigabit-per-second connections to institutions — libraries,
schools, hospitals, military installations, and the like.
Digital Nation found that as of October 2009, 63.5
percent of U.S. households used broadband (technologies
faster than dial-up); 66 percent of urban and 54 percent of
rural households accessed broadband. Rural households
were more likely to use dial-up, 8.9 percent, than urban
ones, 3.7 percent. Also, U.S. households with children are
more likely to have Internet service than those without
children, so the per-household figures may understate use.
The FCC’s National Broadband Plan released earlier this
year outlines changes, not only to subsidize broadband
extension but also to auction underused broadcast spectrum
for mobile communications. The FCC wants to redirect the universal service funds that telecoms currently pay to subsidize rural telecommunications (including discounts for poor households and services to schools and libraries) toward broadband diffusion.
Wire-line services require large fixed costs, and while
reducing these costs could spur competition, that’s unlikely
to happen over vast geographical areas. Digging and burying
fiber — the preferred transmission method for the
foreseeable future — can cost $100,000 a mile, and so it
makes sense to deploy fiber simultaneously with water
or sewer pipes. Some communities have these build-out
policies in place.

And more competition could emerge from wireless by
cutting costs of entry and expansion through access to spectrum, according to the FCC plan. Economists Robert
Crandall and Hal Singer noted in a recent Brookings
Institution report that most U.S. households have at least
three broadband technologies from which to choose and, in
most service areas, even more suppliers.
Broadband deployment in the United States is nearly
ubiquitous, with the exceptions previously noted. And competition exists in most markets, a fortuitous accident
because coaxial cable for television and copper wires for
telephone developed separately. Both worked to deliver
broadband.
Today, most people can choose between two wire-line
platforms: 78 percent of housing units are located in census
tracts with two providers; 13 percent have only one, according to the FCC. However, data are inadequate to show
whether price and performance reflect enough competition
for a variety of reasons, including the fact that many people
buy bundled services from cable or telco providers.

Power Lines
When the federal government began to support power line
extension in 1935, barely 10 percent of farms had electricity
and 20 percent had telephone service. Private firms considered the remote investments unfeasible. Today, the U.S.
Department of Agriculture loans money to rural electric
cooperatives, and since 1949 the universal service fund has
subsidized telephone lines in remote areas. Telephone companies often charge customers a fee to recover that cost. The
idea is for customers in remote regions to receive service
priced similarly to service in urban regions.
In rural America, that last mile can be long. And expensive. For fixed broadband, the last mile can mean trenches, and digging represents most of the cost. Exclusive of any long-term spillover benefits, broadband so far has benefited its
private suppliers handsomely. Economist Shane Greenstein
and his co-author Ryan McDevitt, both of Northwestern
University, in a 2009 National Bureau of Economic Research
paper found that private investment diffused broadband
effectively. As broadband became faster, more reliable, and
available, households upgraded to speedier service, paying
more along the way. Internet access revenue reached $39 billion in 2006, with broadband accounting for $28 billion of that total; $20 billion to $22 billion was associated with household use. Of that amount, broadband’s deployment created approximately $8.3 billion to $10.6 billion of new GDP. In part, Greenstein and McDevitt found that price indices had undervalued the gains to users of broadband, and yet those gains are what motivated upgrades. In short, the authors’ recalculation of


conventional GDP estimates shows that the gains to
broadband suppliers from creating new revenue covered
investments in urban and suburban areas.
But reaching low-density locations may not be profitable.
“Once the costs exceed one or two thousand dollars per
household, then the profitability gets dicey. Prices have to
increase or payback periods have to increase,” Greenstein
notes.
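Greenstein’s threshold is straightforward payback arithmetic. A minimal sketch, with hypothetical numbers (the $2,000 build cost and $40 monthly margin are illustrative assumptions, not figures from this article):

# Rough payback period on last-mile construction.
# All inputs are hypothetical.
build_cost_per_home = 2_000   # dollars to connect one household
monthly_margin = 40.0         # monthly revenue net of operating costs

payback_months = build_cost_per_home / monthly_margin
print(payback_months)  # 50.0 months, roughly a four-year payback

At $1,000 per home the wait drops to about two years; push construction costs much higher and either prices or payback periods have to rise, which is where profitability “gets dicey.”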
No one knows that better than Maureen Kelley, who
formerly worked for Apple Computer. She now lives in rural
Nelson County, Va., where she serves as economic development director. The county has gotten $1.8 million in
broadband stimulus money to install 31 miles of fiber and
four wireless tower sites, ultimately connecting schools, a
library, seven county facilities, and the Blue Ridge Medical
Center, the local health clinic.
“What we are putting in is the infrastructure that
ISPs have not deployed in our very rural area,” she says.
Internet service providers will be able to lease strands from
the county-owned and operated network to connect homes.
Of the 8,000 households in the county, more than half now
use dial-up.
The county’s electric cooperative is deploying fiber over
existing power lines. “They have given us a sweet pole
attachment gift,” she says, referring to the cooperative’s fee
waiver. “This is so much like rural electrification.” While
underground fiber installation protects wires from weather,
aerial deployment is cheap by comparison.
While Nelson County is stringing fiber along power lines, another technology may also help diffuse broadband. After a shaky and unpredictable start, using the power lines themselves to carry data still holds promise. Conceived in part to create a smart
grid to monitor electricity use, the technology can transmit
data with speeds comparable to DSL and cable modem.
While power lines are installed everywhere, the technology
has yet to be widely deployed, as it continues to evolve.

Home-Grown Fiber
Wilson, N.C., and Salisbury, N.C., are investing in fiber systems. Wilson sold bonds to finance its “Greenlight” system
of cable, broadband, and telephone service. Bristol, Va.,
located in the southwest corner of the state, is often cited as
an example of the home-grown fiber initiative. Bristol
Virginia Utilities first deployed its OptiNet fiber in 1998
among substations and city offices for internal use, but soon
started serving businesses and homes. Since then, Northrop

Grumman Corp. has located a 90,000-square-foot computing center in Lebanon, Va., population 3,214. Although the
firm was driven to the remote region, in part, by the politics
of its contract to serve as the state’s technology provider, the
location would have been unworkable without broadband.
A Canadian IT services company, CGI, has also put down
roots in Lebanon.
Combined, the two companies employ about 700 people,
according to Larry Carr, executive director of the
Cumberland Plateau Co., the nonprofit formed to oversee
implementation in a multicounty area. “We tried to work
with the incumbents to put the fiber into these areas so we
would have a chance at recruiting Internet technology companies, but they weren’t interested,” he says, adding that
low-density populations in these hard-to-reach locations make profitability uncertain. Carr says his nonprofit has applied for a piece of the federal stimulus money for middle-mile infrastructure that can bring broadband closer to
residents on the last mile.
The federal dollars allocated for broadband won’t finish
the job of connecting every household. Also, regulatory
uncertainty hangs over FCC efforts. In April a federal
appeals court found that the FCC lacks authority to regulate
broadband services. The FCC had sought to ensure that all
Internet content is treated equally by providers, after
Comcast slowed customers’ access to BitTorrent, a program
used to share large video files. Comcast then challenged
FCC authority over broadband. The ruling allows providers
to control access to some content or price access to it. The
FCC chairman, Julius Genachowski, has proposed an alternative, but results at press time were unclear.
The effects of the ruling, if it stands, on future applications like the next YouTube are unknown. The Internet has
developed over the past 20 years without interference from
carriers. “That experience has yielded obvious growth,”
Greenstein says. “Part of the reason [for that growth] is the
Silicon Valley software developer doesn’t worry about who’s
delivering it in Boston or Dallas: Everybody has been
prevented from interfering with the message.”
So far, market-driven policies have diffused broadband
widely and quickly despite the pockets of people who
remain un- or underserved. Whether public efforts can ultimately solve that problem — and whether it actually is a
problem worth solving, given the costs — remains unclear.
As innovation flourishes, so does uncertainty as broadband
creeps toward its final frontier.
RF

READINGS
Connecting America: The National Broadband Plan. Washington, D.C.:
Federal Communications Commission, March 2010.
Digital Nation: 21st Century America’s Progress Toward Universal
Broadband Internet Access. Washington, D.C.: National
Telecommunications and Information Administration, U.S.
Department of Commerce, February 2010.


Greenstein, Shane and Ryan C. McDevitt. “The Broadband Bonus:
Accounting for Broadband Internet's Impact on U.S. GDP.”
National Bureau of Economic Research Working Paper no. 14758,
February 2009.
Hahn, Robert W., and Scott J. Wallsten. “An Economic Perspective
on a U.S. National Broadband Plan.” Policy & Internet, vol. 1, no. 1,
article 5.

BY BETTY JOYCE NASH

A recent explosion in a West Virginia coal mine in April killed 29 miners and injured two. As of press time, 31 miners had died in West Virginia’s underground coal mines in 2010.
These tragedies have intensified public scrutiny of the
industry, the labor market that serves it, and the regulatory
structure that has grown up around it. West Virginia also lost 23 miners in 2006, a number that includes, among other accidents, 12 killed in the Sago Mine blast near Buckhannon.
Workplace disasters raise legitimate questions about the
role of market discipline in workplace safety as well as the
effectiveness of regulation.
“Market discipline, if it works perfectly, produces an efficient amount of safety, not the maximal amount of safety,”
says Devra Golbe, an economist at Hunter College of the
City University of New York. “Thus, even in a perfect market, the choices firms and workers make are not likely to
result in an accident-free workplace, because safety is costly
and some industries, like mining, are inherently risky.”
The median number of days away from work due to illness or injury in underground coal mining was 34, compared to eight in all private industries, according to the U.S. Department of Labor’s latest available data, from 2008.
The recent blast happened at Performance Coal Co.’s
Upper Big Branch Mine, a subsidiary of Richmond, Va.-based Massey Energy. According to the U.S. Mine Safety and
Health Administration (MSHA), the mine appealed 77 percent of its “significant and substantial” violations from 2007
through 2009. Appeals have been increasing, in part,
because of federal rules legislated after the 2006 Sago
disaster. The laws hiked fines and the number of inspectors.
Fines rise with the number of violations, so companies have
a greater incentive to contest them. An appeal, however,
can keep mines from the “potential pattern of violation”
category, a status that could lead to a shutdown. Two-thirds
of penalties are now appealed, overloading administrative law judges at the Federal Mine Safety and Health Review Commission.
Appropriate laws can enhance safety, but may also reflect
prevailing politics. Laws may also fail to keep pace with
changing industry expertise. For instance, standards to prevent explosions that can occur in the presence of high levels
of combustible gases are said to be outdated.
Costs associated with accidents, in theory, give firms an
incentive for safety because of lost production time, lawsuits, workers’ compensation claims, increased insurance
costs, and possible stock market losses. Massey now produces six days a week at some mines to make up for reduced coal output from Upper Big Branch and meet contract obligations. Its stock value fell following the accident. The firm
also faces several lawsuits from pension fund investors. And


Massey has said it did not carry business-interruption coverage for Upper Big Branch. Two years ago, Massey agreed to
pay $4.2 million in criminal and civil penalties following the
2006 deaths of two miners after a fire at another Massey
subsidiary.
The market for workplace safety may be “quite imperfect,” Golbe says. “Moreover, in a labor market where jobs
are scarce, the price for avoiding a dangerous job may be
unemployment.”
Economic theory suggests mines will have trouble
attracting employees if they’re unsafe. But information
about accident risk may be unavailable or hard to decipher.
Although, given the strong mining tradition in West Virginia, the risks may be widely known.
The presence of contract workers also complicates
the issue. Mine operators are ultimately responsible for contractor safety, according to Ellen Smith, managing editor of
Mine Safety and Health News. “Percentage-wise, it’s safe to say
there are a higher number of injuries with contract workers,”
she says, citing the Blacksville No. 1 mine explosion in 1992,
also in West Virginia. Contractors sealing a mine shaft didn’t
realize they should have taken methane readings.
Higher wages can reflect job hazards, although this
compensating wage differential varies according to labor
supply. For example, in West Virginia, the annual mean wage
of explosives workers, roof bolters, extraction workers, and
continuous mining machine operators ranges from $42,320
to $50,500, according to the Bureau of Labor Statistics.
The wages are higher than for service jobs, and above
Raleigh County’s median income of $38,672, according to
the Census Bureau’s American Community Survey. Most
people there work in social or educational services, retail
trade, or other service jobs; 9 percent work in agriculture,
fishing, or mining.
Economist Clifford Hawley of West Virginia University
suggests that in some mining areas of West Virginia, employers may exert monopsony power. (A monopoly firm is a
single seller; a monopsony firm is a single buyer.) “Typically
among nearby mining opportunities, it’s rare that you have
much competition, and so the miners’ wages will be lower to
the extent that there is monopsony power in the labor
market,” he says. A dominant firm like Massey Energy may
also influence wages of smaller mines.
Mining jobs require skills but not college degrees. And in
some mining communities “there’s just not a lot for people
without a college education to do,” says Hawley. “Mine
workers are not mobile enough to say, ‘Well, I’ll move out
of West Virginia.’ People in West Virginia are very tied to
where they live; they are just not geographically mobile —
that’s by choice and by culture.”
RF


Justin Wolfers

Editor’s Note: This is an abbreviated version of RF’s conversation with Justin Wolfers.
For the full interview, go to our Web site: www.richmondfed.org/publications

Classical economists such as Adam Smith and John
Stuart Mill were interested in a wide range of issues that
for later generations of economists were thought to
be largely beyond the scope of their discipline.
What makes people happy? What gives our lives meaning? How ought we to organize ourselves as a polity?
Relatively recently, a number of economists have
started to revisit those questions, to place economics
squarely within the broader social sciences, where
it was once understood to belong, while at the same
time not eschewing the formal tools that have given
economics so much of its analytical power. The work
of Justin Wolfers, an economist at the University of
Pennsylvania, exemplifies this broadening scope of
inquiry. As stated on his faculty Web page, his research
interests include labor, macro, political economy,
economics of the family, social policy, law and economics, public economics, and behavioral economics.
One research area not listed is monetary economics.
However, he also has contributed to that field, both
through his academic research and his professional
activities. A native of Australia, he has worked
at the Reserve Bank of Australia and is currently
a visiting scholar at the Federal Reserve Bank of
San Francisco. Wolfers also is a nonresident senior
fellow at the Brookings Institution in Washington,
D.C., where he is co-editor of the Brookings Papers on
Economic Activity, and a research associate at the
National Bureau of Economic Research. Aaron
Steelman interviewed Wolfers at his office at the
University of Pennsylvania in May 2010.

✧
RF: Could you please talk about your work with
Betsey Stevenson on the recent decline in self-reported
happiness among women? What may explain that drop
and what does this tell us about subjective measures of
well-being?
Wolfers: We organize the alternative hypotheses into
three categories of explanations. The first is that women’s
measured happiness went down following the women’s
movement — and this shows that the women’s movement
was somehow a bad thing. The second is that our finding
tells us something about measurement problems with happiness research. If most of us believe that the women’s
movement was good for women, but the happiness data say
that it didn’t make women happier, then there is a problem
using subjective well-being to measure large-scale social


change. There are lots of versions of this story. One is that
the way women have answered the question over time has
changed. Another may be that when you ask people how
happy they are, they think about it in relative terms. Perhaps
back in the 1970s, women were reporting how happy they
were compared to the lonely housewife next door, and today
they are reporting how happy they are compared to the man
who has the corner office that they should have. Another
version would be that when you report how happy you are,
your report is heavily influenced by those domains of your
life where you feel that you are doing badly. This is Betsey’s
preferred explanation. The number of things that women
are involved in has greatly expanded over time, which means
that there are more chances of failing. The third category
suggests that there is a puzzle for social scientists. We
simply don’t know why women’s reported happiness has
fallen following the women’s movement. When you ask
most economists how things have changed for women over
the last 40 years, most will describe it as a triumph for
women. Wages have increased, social and legal protections
have improved, technological change has arguably been
gender biased in favor of women. The choice set of women
has expanded, and according to neoclassical economics this
is an unambiguously good thing. But it could be that our
finding tells us that there’s some other even more important


factor in the background. For instance, declining social
cohesion or rising risk could have had a disproportionate
effect on women relative to men.
Betsey and I are working on another paper that looks at
another great social movement of the second half of the
20th century: the civil rights movement. The women’s
movement coincided with a decline in self-reported happiness. But for African-Americans, self-reported happiness
increased greatly. There was an unconscionably huge gap
between the happiness of blacks and the happiness of whites
in the 1970s. Today that gap is large, but has declined very
substantially. This is interesting, because most of the major
civil rights legislation had already been passed by the 1970s,
the period where the data begin. So it suggests that changes
in attitudes — a decline in racism, for instance — have had a
very positive effect on the lives of black Americans.
RF: Should policymakers use happiness as a metric
when deciding policy or should they use other measures
that we tend to think of as more concrete and which we
have traditionally considered to be the proper things to
focus on, such as economic growth?
Wolfers: I think the first piece of advice is that policymakers should not abuse happiness research. There was a
view, for instance, that economic growth was unrelated to
happiness — or actually might impede happiness. That just
turns out to be false. So one useful role of social scientists
here is to knock over canards. That said, I am still optimistic
that there is something useful that can come from happiness
research. (Also, I should note that I prefer the term “subjective well-being” to “happiness” because I think it gives a
broader measure of how people perceive their circumstances.) The female well-being paper suggests that the
trend moved in a puzzling direction during one period of
time. But other results are more conventional. If you look across countries, it is absolutely astonishing how closely subjective well-being tracks objective measures. The correlation between the level of
GDP per capita and the average level of life satisfaction is
about .8, which is one of the highest correlations you will see
in the social sciences.
In his presidential address this year to the American
Economic Association, Angus Deaton made a somewhat
obvious but important point. What we normally think of as
objective measures of well-being are in some ways subjective. If we want to compare per capita GDP in the United
States to that in Burundi, it’s easy to measure the number of
dollars, but then we have to compare the different price
levels. And then do we use the consumption basket of a
typical person in the United States or the consumption
basket of a typical person in Burundi? And what is the social
meaning of owning what is considered a pretty standard
good in the United States compared to what is considered
a luxury good in Burundi? So there is a level of technical
difficulty in getting these things right.

A related point is that the objections we have to subjective measures of well-being are often quite similar to
objections we could raise about “objective” measures. How
do we measure subjective well-being? We go out and ask
people how they feel. How do we measure the unemployment rate? We go out and ask people. You might object that
happiness is a social construct. But if you ask someone if
they had gone out and looked for work in the last four
weeks, there’s a lot of ambiguity too. Similarly, corporate
profits sound like a pretty objective measure — until you
talk to an accountant. So the value of subjective well-being is
that it measures something we really care about. Those
measures may be flawed and you can point out how they
might be improved, but we should inquire whether people
are satisfied with their lives.
The first generation of people doing subjective well-being analysis was very motivated by it, and sometimes their
work has the feeling of religious revival. But the second
generation of people involved in this area of research
has been able to take a step back and ask some of the
difficult methodological questions we discussed. But why
should it necessarily interest economists? One answer is
market related: Some people are going to do it. Why not
economists? Our friends in psychology, sociology, and
political science are doing it. And it’s turned out to have
enormous political resonance; for instance, consider the
Sarkozy Commission. So this will be part of the policy
discourse and, as economists, we have to decide whether we
are going to be part of that policy discussion. I think we
bring two things to the table. We bring very precise and useful models of human behavior that can help us interpret
well-being data. And we bring some statistical savvy that,
frankly, has been missing.
RF: Your previous answer touches on this, but it may be
useful to ask it explicitly: What do you think of the
Easterlin Paradox — the idea, broadly speaking, that
increases in income are not particularly well correlated
with happiness?
Wolfers: In some sense, we all seem to want the Easterlin
Paradox to be true. We want to think that people are made
happier by seemingly loftier ideals than becoming wealthier.
As I noted, it turns out that it’s just not true. Income has a
huge effect on people’s happiness.
It’s also been asserted that there is some level of income
that satisfies most people’s desires — and that there is little
point in striving to get above that number because it won’t
make you happier. That number is often given as $15,000
annually. That’s a very widely held view, but as far as we can
tell there has never been a formal statistical test of that view.
So Betsey Stevenson and I went through every data set we
could find to test it, and there is no evidence that an increase
in income — at any point — stops making people happier.
That’s true for the very rich as well as the very poor. A 10
percent increase in income yields the same bump in


happiness, whether it’s from $400,000, $40,000, or $4,000.
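The equal bump at every income level is exactly what a logarithmic relationship between income and well-being implies. A minimal illustration in Python (the slope b = 1 is an arbitrary choice for display, not an estimate from Stevenson and Wolfers):

import math

# Under W = a + b*log(income), a 10 percent raise adds the same
# amount, b*log(1.1), at any income level. b is illustrative only.
b = 1.0
for income in (4_000, 40_000, 400_000):
    bump = b * (math.log(1.10 * income) - math.log(income))
    print(income, round(bump, 4))  # about 0.0953 in every case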
RF: You noted in a recent blog post that despite being the
“queen of the social sciences,” books by economists
are not frequently cited in scholarly journals across
the board. It is true that the economics profession
rewards publication of a book less than do many other
disciplines — for example, a few good journal articles are
more likely to help an economist get tenure than a book
— but, still, economists do publish books, both with
academic and large commercial houses. Why do you
think the citation count for those works is relatively low?
Wolfers: I still believe that economics is the queen of the
social sciences. But the metric that leads me to say that is its
influence on the world, which is what I think social science
should be about. When it comes to almost any public policy
problem, you call the economists. This is true in many areas
once thought outside of the domain of economics, such as
family policy and understanding politics. Economists have
been very successful moving into those fields and have
provided many important insights.
There are very few economics books that are widely cited
across all of scholarship. The likely explanation for this is that the body one calls “all of scholarship” is dominated by the
humanities, and people in the humanities don’t cite economics very often. But the humanities don’t have much
influence, either. There may well be a poet laureate of the
United States, but there is not a Council of Poetic Advisers.
So we are unpopular with those who don’t have much influence. I don’t see much problem or tension with that.
The broader issue — the reason why I believe economics
is the queen of the social sciences — is this movement of
economics beyond GDP. It is hard not to think of Gary
Becker as the founder of that, and this has been a very good
thing. In sociology, I think our biggest influences have been
on research about family or crime, as economists have done
a lot of empirical work on those topics. With political
science, on topics from election forecasting to political
economy, we tend to see quite good empirical work from
economists. That’s not to say that we should ignore other
research methods. In fact, I had a sociologist on my dissertation committee, Sandy Jencks. I used to joke with Sandy
— and he promised not to be offended — that sociologists
have great questions and economists have great answers.
What is interesting to think about are the terms of trade
between economics and all these other disciplines. We are
clearly a net exporter to political science and sociology. But
at this point the trade with psychology is almost all one way.
We are a near-complete importer. I wonder why we haven’t
been bigger exporters to psychology. I think it has to do with
the research method. Like political scientists and sociologists, economists are almost all about the analysis of
observational data. And then there are second-order differences. Formal political scientists write down a model before
they observe data; informal ones don’t. Ethnographers


observe four people; survey researchers observe 4,000. But
it’s all observational. But when I watch and speak with my
friends in psychology, very little of their work is about
analyzing observational data. It’s about experiments, real
experiments, with very interesting interventions. So they
have a different method of trying to isolate causation. I am
certain that we have an enormous amount to learn from
them. But I am curious why we have not been able to
convince them of the importance of careful analysis of
observational data.
RF: Becker and others have long argued that discrimination is costly to firms and that in order to engage in it
the leaders or shareholders of those firms must have a
“taste” for it. What does your research on the gender
composition of CEOs tell us about that claim?
Wolfers: The standard neoclassical approach doesn’t fully
allow for what I think most people really believe discrimination to be: a mistake. With mistake-based discrimination,
imagine that you go to evaluate the future profitability of a
firm. One of the things that you are going to look at is the
quality of the CEO. You probably have a mental picture of a
tall white guy in a pinstripe suit, and if the CEO doesn’t fit
that image you may have a less positive opinion of that firm.
If that is true, firms headed by women should systematically
outperform the market’s expectations. The first paper was
somewhat inconclusive; it wasn’t clear whether such firms overall outperformed expectations. Alok Kumar and I are
working on a follow-up paper that uses quarterly earnings
announcements, which gives us a lot of observations. It
turns out that female-headed firms beat analysts’ expectations each quarter much more frequently than similar
male-headed firms. If you look at which analysts are getting
things wrong, it’s disproportionately male analysts who have
inaccurately low expectations of female-headed firms.
That’s not true of female analysts; female-headed firms actually do not beat the expectations of female analysts. This,
then, suggests what we see are mistakes, not tastes. These
analysts do not want to get a reputation for poor forecasts;
they are not trying to lose money. In fact, one of the ways
you can test whether what we observe are mistakes is to ask
people if they would be willing to change their behavior
when presented with the data. And whenever I teach this
paper to my MBA students, many of whom are former analysts, they say that they are going to change their behavior
when they get back to the real world. So this is just a bias that
is in the back of their minds, and when they understand the
implications of that bias they want to rid themselves of it.
RF: Could you explain what a prediction market is —
and in which areas of business and policy you think that
prediction markets have the most promise?
Wolfers: It’s simply a betting market, really. You choose
an event and bet on whether it will occur. The simplest example is: Who is going to win the next presidential election? The value of this approach is that it is a way of eliciting expectations.
A lot of people ask: Are prediction markets accurate? I think a more useful question is: Are prediction markets better than the alternative? So, for instance, in presidential elections are prediction markets more accurate than the Gallup Poll? The answer is yes. In nearly every head-to-head comparison between prediction markets and some alternative, prediction markets have turned out to be at least as accurate.
Still, a lot of social scientists, policymakers, and businesspeople seem reluctant to use prediction markets. I think there are several barriers to their adoption. One is legal. Betting on events is generally not legal in the United States. So most of the interesting prediction markets are operated offshore. Another is that the United States does not have a gambling culture. In contrast, in Australia, my home country, we will bet on virtually anything. Betting on whether something will happen is simply a natural part of our language. Third, in order to listen to the results of a prediction market you have to be willing to accept that the market is smarter than you are. That requires a lot of humility — and a fair bit of knowledge of how markets work. When someone asks me who I think will win the next election and by how much, I look up the prediction market and I state that number exactly, which means I have to give myself no credit for knowing anything about politics beyond the info embodied in the prediction market price. Most people are not very good at this. They tend to be confident in their individual ability to predict outcomes, even in areas where they may not know much.
In order for prediction markets to be useful in business, for example, the CEO has to be willing to listen to them, and CEOs tend to be men of action who are quite reluctant to admit the limits of their knowledge. Also, think about what middle management is in most firms. They tend to be information monopolists. Their analysts do the research and report it to them and then they decide whether to present it to the CEO. With a prediction market, everyone on the shop floor could give an opinion and that information would go directly to the CEO. That would undermine middle management’s role as an information monopolist, so they are reluctant to adopt prediction markets.
As for where prediction markets are useful, I think there is a wide range of opportunities in business. Any business would like to forecast next year’s sales, and it appears that prediction markets are very useful at doing that. No company or policy organization has fundamentally changed its management or operation structure by using prediction markets. But there are some firms like Google that have people researching prediction markets and use them for some purposes. In policy, at the Federal Reserve I assume that Ben Bernanke has a Bloomberg terminal in his office and looks at what’s happening with interest rate futures. What are interest rate futures? They are a prediction market on the likely path of interest rates. Similarly, when economists at the Fed want to put together a macro model, they put in some assumptions about oil prices. In order to do this, they look at how oil futures are trading. What are oil futures? They are prediction markets on the future path of oil prices. The same is true with foreign exchange markets and so on. So prediction markets are being used, but we don’t necessarily call them prediction markets in these cases.

RF: If prediction markets are such a powerful tool, then why weren’t we able to use them to more effectively see that, say, the run-up in house prices was unsustainable or that (related) large problems in the financial markets were likely?

Wolfers: We should acknowledge that all mechanisms of aggregating information are imperfect. So you do see bubbles, manipulation, noise trading, volatility, and so on. Despite that, as an empirical statement, in every head-to-head comparison, prediction markets tend to do better than the alternative. As an illustration, I co-authored a paper a few years ago that looked at a short-lived market called the “economic derivatives market,” where you could bet on nonfarm payrolls, retail sales, unemployment claims, and business confidence. The way we normally forecast these things is we call 30 forecasters and we determine the consensus. It turned out that prediction markets did a better job than the consensus.
Would this be true in housing? I don’t know. We could run the experiment and find out. Still, we know that markets were wildly optimistic in predicting the future path of house prices. But think about the alternative: So were most of the analysts. If you had surveyed analysts rather than relying on markets, you would have run into the same problems. So it’s not clear to me that markets failed us in the case of housing considering the alternative. They didn’t do a great job, but they didn’t do worse than the alternative of asking analysts. The evidence so far suggests that markets are the least imperfect forecaster. There may be settings where that is not true, but I have not run across them.
RF

Justin Wolfers
➤ Present Position: Associate Professor of Business and Public Policy, The Wharton School, University of Pennsylvania
➤ Previous Faculty Appointment: Stanford University Graduate School of Business (2001-2004)
➤ Education: B.Ec. (1994), University of Sydney; A.M. (2000) and Ph.D. (2001), Harvard University
➤ Selected Publications: Author or co-author of numerous papers in such journals as the American Economic Review, Quarterly Journal of Economics, Journal of Political Economy, Journal of Monetary Economics, Quarterly Journal of Political Science, Journal of Legal Studies, and Science


ECONOMIC HISTORY
Intranational Trade
BY R E N E E CO U RTO I S

The Schechter brothers with attorney
Joseph Heller (center) celebrate the
1935 Supreme Court ruling in
A.L.A. Schechter Poultry Corp. v. United
States, which overturned fines against them and ushered in a
short-lived era of judicial limits
on congressional power.

How a century of legal precedent has shaped the government’s power to regulate commerce between states

The Constitution divides lawmaking authority in the United States between states and the federal government. State governments can pass laws governing anything except matters the Constitution says they cannot, whereas the federal government can regulate only the things the document explicitly says it can.
In regard to commerce, the authors of the Articles of Confederation — the first governing document of the United States — thought states should have autonomy to regulate within their own borders according to their industry and priorities. But uncertain economic times after the American Revolution made clear the need for a federal authority too. Severing ties with Britain also lost the colonies one of their primary trading partners, as well as their chief regulator of trade across state lines.
The states soon suffered from a simple collective action problem. They erected trade barriers to protect their own citizens, which no one state had incentive to unilaterally tear down while others left theirs up, even though everyone would have been better off with freer commerce between them. States’ protectionist policies grew so onerous and retaliatory that some even feared they would culminate in state-to-state combat.
For this reason, the Framers of the Constitution included the second enumerated power granted to Congress in Article I, Section 8 of the U.S. Constitution, which gives Congress the right to “regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes.” The middle provision — Congress’s right to regulate interstate commerce — became a hotly debated clause in the 20th century. The debate has been renewed today in light of recent federal legislation concerning health care reform, which requires citizens within states to undertake a specific form of commerce (i.e., purchase insurance).
It’s always an open question as to whether the Supreme Court will take up legal challenges to new legislation based on Commerce Clause grounds. Understanding the legal history of the clause can help the public put the current debate in context.

The Birth of Federal Regulation
The Commerce Clause was rarely invoked for the first 100 years of the nation’s history. During that time it was mostly used for the purpose the Framers envisioned: to mitigate state trade barriers that would hinder interstate commerce, such as taxes levied on goods produced in other states. But the industrial revolution made states more economically interdependent than ever. The stakes on interstate commerce were now higher and brought new questions about what constituted interstate commerce.
Not surprisingly, the rise of the railroads — then the literal vehicles of interstate commerce — became an early test of the boundaries of state versus federal regulation. In the late 1800s, the transport of bulk items like grain, lumber, and coal was the railroads’ main business. But it wasn’t that profitable. Competition from water carriers forced railroads to keep rates low, and railroads increasingly used profits from local delivery services to recoup fixed costs on less-profitable bulk transport services.
The growth of local delivery business also made it affordable for railroads to draw freight business from competitors by offering favored pricing to certain shippers and
localities, a practice called price discrimination. This bred
public frustration especially from farmers in far-flung
geographic areas who were on the losing end of the deal.
States tried to limit price discrimination through
regulation, but their rules could extend only as far as their
borders. In 1886 the Supreme Court ruled that the state
of Illinois had actually overstepped its bounds in regulating
railroads, and Congress intervened in 1887 by creating the
first-ever federal regulatory agency, the Interstate Commerce
Commission. The ICC allowed railroads to continue
charging a markup on local delivery services to recoup the
fixed costs of bulk transport, but they could no longer offer
discounted pricing and rebates to certain customers over others. The primary goal was to maximize access to services.
Though the ICC was a direct answer to widespread
public frustration with railroads, it is telling that railroads
supported the legislation. They sought an end to the price
wars, secret rebates, and price concessions offered to customers to garner business, but also hoped the ICC would
strengthen the railroad cartel. Indeed, the ICC helped the
railroad industry evolve from a private cartel to a publicly
managed one, noted the late economist Marcus Alexis of
Northwestern University in 1982.
The ICC is now regarded as a classic example of “regulatory capture,” in which regulators end up sympathizing with the regulated and enacting rules in their favor. For example, in
the Transportation Act of 1920, Congress allowed the ICC
to regulate minimum, not just maximum, shipping rates, as
well as control entry into and exit from the industry, among
other issues. Contrary to the original intention of Congress
to widen competition, the ICC eventually came to have the
opposite effect.

The New Deal Court
The ICC would not be the last example of public agitation
prompting federal regulatory action. Starting in 1933, a
sweeping batch of New Deal economic legislation was passed
under President Franklin Roosevelt to deal with the Great
Depression. Roosevelt, based in part on counsel from economists, thought the Depression was a product of unbridled
and “unfair” competition that kept wages low and suppressed demand. The answer, in his view, was a heavier
government hand in managing the economy.
The government created the National Recovery
Administration (NRA) to enforce price and wage controls,
in part by establishing “fair competition codes.” The codes
set maximum hours for the workweek, prohibited child
labor, and set minimum wages. Virtually no industry was
exempted.
The strict controls on competition proved difficult
to enforce. Producers began finding ways around the
codes, such as a group of immigrant brothers in Brooklyn
who ran a business slaughtering chickens and selling them
to retailers within the state of New York. The Schechter
brothers were charged with selling unfit and diseased
chickens at discounted prices, among other violations. They were convicted by the government and fined before they appealed the decision. Since their chickens were being sold solely within New York State lines, the brothers said, the federal government had no authority to regulate them through the NRA.
The Supreme Court agreed in 1935’s A.L.A.
Schechter Poultry Corp. v. United States. The Court interpreted
the Constitution to mean that Congress could regulate commerce between states; Congress could not, however,
delegate those authorities to the president. The Roosevelt
administration held that transactions which wouldn’t
ordinarily have a substantial effect on interstate commerce
may do so in an “emergency,” when the national economy
is more interdependent. But even though the national
economic emergency may justify extraordinary measures,
wrote Chief Justice Hughes in the Court’s ruling against the
government, it did not justify an expansion of the government’s constitutional powers.
The political implications became as evident as the legal
ones. After the Schechter ruling, Justice Louis Brandeis made
a point of pulling aside one of Roosevelt’s aides to warn that
the decision was “the end of this business of centralization.”
His words were prophetic, as 1935 and 1936 saw a series of
“Black Mondays” in which the Supreme Court repeatedly
struck down attempts by Congress to enact New Deal
programs.
But the president would not take this lying down. In early
1937 Roosevelt pitched a proposal to add another justice to
the Supreme Court for each existing justice over the age of
70, to ease the case burdens of the older judges, he said. The
real goal was to pack the Court with justices sympathetic to
New Deal policies.
Soon thereafter, a justice switched sides on another New
Deal constitutionality case and the Court ruled in favor of
the government. The justice’s change in position became
known as the "switch in time that saved nine." The justices maintained that Roosevelt's threat did not affect the outcome of
the case, but many legal scholars are not convinced.
In the years that followed, the seemingly chastened
Court overturned many of its previous rulings limiting
federal government power. An era was born in which the
Court deferred to Congress on all matters of economic
regulation. The new trend was amplified in 1942 in what was
arguably the single greatest expansion of federal regulatory
power in the history of Commerce Clause case law.
At the time, the nation’s wheat growers were restricted to
a limited crop size under a Depression-era policy created
to moderate (some say raise) national wheat prices. Roscoe
Filburn, a farmer in Ohio, exceeded the limit to feed his
livestock and family. He was fined and ordered to destroy
the extra wheat, but he appealed. The wheat was intended
for private use and would never come to market, he said, so
the government’s wheat limits should not apply.
The Supreme Court agreed with the government in
1942’s Wickard v. Filburn. The extra wheat Filburn grew
constituted wheat he would not buy commercially, the
Court said, and therefore affected the interstate wheat market. Furthermore, though Filburn’s actions alone were not
likely to have a noticeable effect on interstate commerce, if
many individuals followed suit the cumulative effect surely
would be substantial.
For the next 50 years, legislation passed by Congress
assumed a continually expanding interpretation of its
authority to regulate, and every related case taken by
the Supreme Court was decided in the government’s
favor. Not that there was much public objection to this
trend.
Congress’s broader interpretation of Commerce Clause
authority led to some widely lauded legislative achievements
such as the Civil Rights Act of 1964 and other enhancements
of civil liberties. The courts, by comparison, looked rudderless in commercial cases. Lower courts lacked a clear
framework by which to interpret the many cases that rested
vaguely on the Commerce Clause, and different federal
appeals courts reached conflicting conclusions. Congress
was effectively the arbiter of the lines between federal and
state power during this period.

Defining the Limits
A pivotal 1995 case came as a surprise. After a 12th-grade boy carried a loaded gun into his school, he was convicted under the federal Gun-Free School Zones Act of 1990 (GFSZA), which made it illegal for an individual to possess a concealed firearm within 1,000 feet of a known school zone.
The government had justified the GFSZA under its
authority to regulate interstate commerce. Yet the link
between gun violence and commerce, let alone the interstate
variety, was not obvious. Previous Commerce Clause rulings
had established that an activity would need to affect interstate commerce on one of three levels for Congress to
regulate it. First, the activity could relate to the channels of
interstate commerce, such as railroads, waterways, or
streets. Second, it could affect the “instrumentalities” of
commerce, or the people and things that are conduits of economic activity. That left only the third and hardest to define
class of activity: those with a “substantial effect” on interstate commerce.
This is where the government made its case. Guns likely
lead to violence, it contended, which would disrupt the
educational process and impair the future productivity of
affected children. If enough people did it, the health of the
economy as a whole would be impaired.
But under the federal government’s logic, the Court
argued, Congress would have the power to regulate any
activity that might conceivably lead to a violent crime. It was
hard to imagine anything that couldn’t meet this threshold.
It was not enough to “pile inference upon inference” to
connect an activity to interstate commerce. In actuality, the
GFSZA neither dealt with a commercial activity nor
required that the gun possession it prohibited be in any way
connected to interstate commerce. The Court ruled in 1995’s
United States v. Lopez against the government and the
GFSZA was invalidated.
A case in 2000 echoed the Lopez ruling. A female
student at Virginia Tech was sexually assaulted and sought
federal recourse under a portion of the 1994 Violence
Against Women Act (VAWA) that allowed victims of gender-motivated crimes to file a federal case against attackers.
The VAWA exceeded congressional power, the Supreme
Court ruled in United States v. Morrison, which named one of
the alleged attackers as the defendant. The violent act of one
party against another was not economic in nature — despite
the potential economic harm that might result for the
victim — and therefore had no conceivable impact on
interstate commerce.
Morrison also strengthened the Lopez result. In contrast
to the Lopez case, the VAWA legislation provided ample
evidence of the economic effects of gender-motivated
violence. But the Court held that Congress could not regulate noneconomic crime of one person against another
based solely on the possibility that the cumulative effect of
many similar acts of that crime could affect interstate
commerce. This differed from the Wickard case, in which
cumulative effects of economic activity were deemed
appropriate federal jurisdiction. To allow Congress to regulate any activity that in any remote way affects commerce
would be to confer onto Congress general police power over
the nation, the Court said. That could effectively eradicate
the federated structure secured by the Constitution.
Although the Court seemed to be ending its long-standing deference to Congress, remnants of that deference
remained. The Court ruled in 2005’s Gonzales v. Raich against
an ill California resident who had grown marijuana for
medicinal use, which was valid under California law but prohibited nationally. There was indeed an established, albeit
illegal, market for marijuana, the Court said. Like the
Depression-era wheat farming in Wickard, a booming
black market for marijuana could raise prices and draw
homegrown product into the market, counteracting the government’s efforts to limit commercial transactions in the
drug. Justice Scalia wrote in a concurring opinion that
Congress could regulate purely intrastate activities, even if
they don’t “substantially” affect interstate commerce, if they
could otherwise undercut its ability to regulate interstate
commerce.

What’s at Stake
Justice Sandra Day O’Connor’s dissent in Raich reiterated
that the Supreme Court’s role is to enforce the outer limits
of Congress’s Commerce Clause authority to protect state
sovereignty from a gradual encroachment of federal power.

It is difficult to imagine in advance how any precedent might
be applied in the future toward this end without knowing
the specifics of the cases that will arise.
Take, for example, health care legislation passed in March
2010, the most recent arena in which Commerce Clause
breaches have been alleged. The law requires all U.S. citizens
to purchase health insurance or be subject to a fine. Critics
point out that health insurance is strictly intrastate; it is
regulated by states and historically has never been purchased
across state borders.
The other side recalls the Wickard and Raich rulings, in
which the Supreme Court allowed Congress to regulate
activities that aren’t strictly interstate commerce but have
the potential to “substantially” affect interstate commerce,

or that impede Congress’s regulation of a market the
Commerce Clause might say is valid to regulate, such as that
for health care.
But the health care question contains something new.
The Commerce Clause says Congress has the right to regulate certain activities — but can it regulate the failure to
engage in an activity like the purchase of health insurance?
What if said inactivity “substantially” affects a regulated
class of interstate commerce? It’s not immediately clear how
the legal precedents established by the Supreme Court apply
in these examples.
Answering such questions may not be easy. Many of the
same debates held by the Framers over the proper balance of
authority are still very much alive today.
RF

READINGS
Alexis, Marcus. “The Applied Theory of Regulation: Political
Economy at the Interstate Commerce Commission.” Public Choice,
January 1982, vol. 39, no. 1, pp. 5-27.
Althouse, Ann. “Inside the Federalism Cases: Concern about the
Federal Courts.” Annals of the American Academy of Political and
Social Science, March 2001, vol. 574, no. 1, pp. 132-144.

Bork, Robert, and Daniel Troy. “Locating the Boundaries: The
Scope of Congress’s Power to Regulate Commerce.” Harvard
Journal of Law & Public Policy, Summer 2002, vol. 25, no. 3,
pp. 849-894.
Fine, Sidney. Laissez Faire and the General-Welfare State: A Study of
Conflict in American Thought, 1865-1901. Ann Arbor, Mich.:
University of Michigan Press, 1956.

CHARITABLE GIVING continued from page 23
endowed society’s most famous institutions. Those gifts
have also enabled prototypes, such as the nation’s 911 emergency response system and the Pell Grant program that
sends poor students to college. Nonprofit grants from
Carnegie and other foundations even gave the private, nonprofit National Bureau of Economic Research an initial leg
up in the 1920s. More recently, Warren Buffett announced
his gift of $31 billion to the Bill and Melinda Gates
Foundation. That’s more than twice — in 2006 dollars —
the combined amount Carnegie and Rockefeller gave in
their day.
While individuals make up three-fourths of charitable
giving, less than 2 percent of households actually give according to a traditional religious “tithe” — 10 percent of income.
The norm is 1 percent to 2 percent of average income.
Contributions to groups that supply basic needs, such
as homeless shelters or food banks, grew by 3.7 percent
after a decline the previous year. Religious giving barely
budged, with a 0.3 percent decline. "Combination organizations," such as United Way and the United Jewish Appeal, received more in contributions in 2008; giving to that
category fell by 4.2 percent in 2009.
People give money when they feel secure based on the
value of their assets, and the connection between changes in
the stock market and giving has strengthened. Estimates
associate a 10-point increase in the Dow Jones Industrial Average with $16 million more in charitable giving, and a $1 billion increase in personal income with $15 million more. "We particularly see the DJIA more
important in the post-World War era, as more households
own financial assets,” Osili says. “We are watching personal
income closely. Based on historical patterns of recovery,
personal income will have a robust impact on giving.”
The outlook for giving remains uncertain. Wider participation in financial markets affects philanthropy today more
than in previous downturns, and policy changes could also
inhibit gifts. But philanthropic professionals are pinning
hopes for recovery on other dissimilarities: higher per-capita income, a greater percentage of college graduates, and
more households supporting secular causes.
RF

READINGS
Fleishman, Joel L. The Foundation: A Great American Secret.
New York: Public Affairs, 2007.

Giving USA Foundation. Giving USA 2010. Indianapolis: The
Center on Philanthropy at Indiana University.

Garrett, Thomas A., and Russell M. Rhine. “Government Growth
and Private Contributions to Charity.” Federal Reserve Bank of St.
Louis Working Paper 2007-012, July 2009.

Gruber, Jonathan, and Daniel M. Hungerman. “Faith-based
Charity and Crowd-out during the Great Depression.” Journal of
Public Economics, June 2007, vol. 91, nos. 5-6, pp. 1043-1069.


DISTRICT DIGEST

Economic Trends Across the Region

The Great Trade Collapse:
Past, Present, and Future in Fifth District Export Activity
BY SONYA RAVINDRANATH WADDELL

For much of the past decade, and particularly as the
recession began to take hold in 2007, international
demand for U.S. consumer goods was hailed as a way
to replace declining domestic demand. In fact, until
October 2008, international trade was considered a bright
spot in the U.S. economy, with exports of goods and services peaking in that month at 13.2 percent of U.S. GDP. In
October 2008, however, U.S. exports began to plummet,
and over the fourth quarter alone exports fell nearly
11 percent. Although output was also falling, by the
second quarter of 2009 export activity had dropped to
10.6 percent of GDP.
Trade activity in the Fifth Federal Reserve District also
contracted notably during that period. In fact, while goods
exports in the nation fell 26.9 percent from the third quarter
of 2008 through the second quarter of 2009, Fifth District
exports fell 22.2 percent. This was the sharpest export contraction on record for the Fifth District. And, although
trade activity has recovered considerably since the middle of
2009, exports are still below their prerecession levels.
Analyzing export changes in the Fifth District over the past
two years requires an understanding of what happened to
trade on a national and global level. Any speculation on the
magnitude of export activity in the Fifth District going
forward, and its role in the Fifth District economy, will also
require a careful understanding of the industrial and
geographic makeup of Fifth District exports.


The Great Trade Collapse
U.S. export activity experienced an unprecedented contraction in the winter of 2008-2009. From the third quarter of
2008 to the second quarter of 2009, total real export values
fell at a 10 percent average quarterly rate.
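To see what that pace implies cumulatively (our arithmetic, not the article's): compounding a 10 percent average quarterly decline over the three quarters from 2008:Q3 to 2009:Q2 gives

$$(1 - 0.10)^3 \approx 0.729,$$

a cumulative fall of roughly 27 percent, consistent with the nearly 28 percent decline in real exports cited below.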

[Chart: Total Exports, percent change year-over-year, 1998-2010, for the Fifth District and the United States. SOURCE: BEA/Haver, WISER/Haver]

But the decline in
U.S. trade activity was really a decline in global trade. World
trade experienced its sharpest drop in recorded history and
deepest contraction since World War II. All 104 nations for
which the World Trade Organization reports data experienced contracting imports and exports during the second
half of 2008 and into 2009.
It is no coincidence that the trade contraction coincided
with a slump in global output. As a country’s economy slows,
demand for goods — including imports — will decline.
There is a close connection between trade and GDP: Falling
demand for imports in a country typically is connected to a
decline in export activity with the country’s major trading
partners, which will, all else equal, contribute to output
falling further. In fact, according to a 2010 paper by economists Rudolfs Bems, Robert C. Johnson, and Kei-Mu Yi, of
the 14 countries that collectively account for three-quarters
of world GDP, only India and China experienced growth in
the last quarter of 2008 and the first quarter of 2009.
It would be easy to conclude, then, that it was simply
falling GDP, and reduced demand for global goods, that led
to this unprecedented fall in trade activity. However, world
trade activity contracted considerably more than world
GDP — anywhere between four and 25 times more, depending on the source (and time period) chosen. In the United
States, for example, while real export activity fell almost 28
percent from the third quarter of 2008 through the second
quarter of 2009, over the same period real GDP declined
only 3.2 percent (at a 1.1 percent average quarterly rate).
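The U.S. figures just cited give a back-of-the-envelope sense of that multiple (our arithmetic): the export decline was

$$\frac{28\%}{3.2\%} \approx 8.8$$

times the GDP decline, squarely within the four-to-25 range quoted above.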
There are a number of theories as to why the trade contraction so considerably outpaced the drop in GDP. First,
the composition of GDP and the composition of traded
goods can be quite different. There is strong evidence that
the drop in demand was dominated by a narrow range of
“postponable” goods, such as consumer durables and investment goods. These goods make up a small share of world
GDP but a large share of world trade.
Bems, Johnson, and Yi cited data from the Bureau of
Economic Analysis that showed domestic demand for
durable goods decreasing by 18 percent, while demand for
nondurables decreased by only 1 percent. A contraction in
demand for manufactured goods would affect trade in the
United States and the Fifth District more severely than it
would tend to affect the overall economy of either. In the
third quarter of 2008, the manufacturing sector accounted
for less than 10 percent of employment in the United States,
but almost 80 percent of total goods exports. In the Fifth
District, the manufacturing sector accounted for almost 90 percent of exports in the same quarter, but less than 9 percent of payroll employment.
Another explanation for the global trade decline — or at
least for its synchronized nature — lies in the increasing
globalization of production processes, or the expansion of
“vertical linkages” in production. An increasingly large share
of trade involves goods at different stages of the production
process, and creating a final good involves many different
countries. These vertical linkages can propagate shocks
because a reduction in demand for a final good is felt
in every country with a role in the good’s production.
Negative demand shocks can also asymmetrically affect
industries whose production processes involve more
vertical linkages.
Finally, an explanation for the steep and sudden trade
decline lies in the nature of this particular recession. In
September 2008, a number of exceptional things happened:
The U.S. government put mortgage giants Fannie Mae
and Freddie Mac into conservatorship, the investment
firm Lehman Brothers filed for bankruptcy, and U.S. policymakers took action to prevent the failure of the insurance
company AIG. These events not only created uncertainty
about the future, forcing many households and businesses to
rein in spending, but they also led to a global credit market
freeze. The deteriorating credit conditions could have
affected trade finance, thus contributing to the sharp contraction in activity. However, research suggests that the
decline in demand for goods — which stemmed in part from
uncertainty about the economy — and the vertical integration of supply chains had a stronger impact on trade than
did a decline in credit availability.

The Fifth District in the Trade Collapse

To what extent was the decline in Fifth District export activity the result of the factors discussed above? To answer this question, it is important to explore changes in the economic environment faced by the District's major trading partners and explore the types of industries that faced the sharpest contraction in exports. It will also be instructive to better understand the makeup of District exports and how they differ from exports in the United States as a whole. Because there is not much state-level data on services exports, "exports" in this section refers to exports of goods. Goods exports make up about 70 percent of U.S. export activity, and the U.S. decline in goods exports was more severe (26.9 percent) than the decline in services exports (10.3 percent). Furthermore, our industry analysis includes only exports of manufactured goods, which make up about 80 percent of U.S. goods exports and 90 percent of Fifth District goods exports.

Clearly, a drop in international demand was a factor in the Fifth District export contraction. Of the top 20 importers of District goods, which together consume almost 80 percent of District exports, at least 15 saw notable declines in GDP from the third quarter of 2008 through the second quarter of 2009. Overall, the demand conditions faced by District exporters do not differ much from those faced by exporters in the United States as a whole, since the Fifth District's major export destinations are not significantly different from the major destinations of national exports. Of the top 20 export destinations of U.S. and Fifth District goods, only six destinations are not shared. (The Fifth District's major importers include the United Arab Emirates, Saudi Arabia, and Egypt, while those of the United States include Switzerland, Malaysia, and Colombia.)

Top 10 Export Destinations
        Fifth District        U.S.
(1)     Canada (18.1%)        Canada (19.5%)
(2)     China (8.1%)          Mexico (12.5%)
(3)     Germany (5.9%)        China (7.1%)
(4)     Mexico (5.8%)         Japan (4.9%)
(5)     U.K. (4.8%)           U.K. (4.2%)
(6)     Japan (4.7%)          Germany (3.9%)
(7)     Netherlands (3.4%)    South Korea (3.2%)
(8)     France (2.9%)         Brazil (2.6%)
(9)     Brazil (2.9%)         Netherlands (2.6%)
(10)    Belgium (2.8%)        Singapore (2.4%)
Total   59.4%                 62.9%
SOURCE: Bureau of the Census/Haver, WISER/Haver

The industrial makeup of Fifth District exports is also very similar to that of the United States. To measure the similarity between the sectoral concentration of Fifth District states' manufacturing exports and that of the United States as a whole, we calculate an export similarity index. We use the measure proposed by Finger and Kreinin (1979) and used in a similar manner by Coughlin and Pollard (2001). The index ranges from zero to 100, with zero indicating complete dissimilarity and 100 indicating that the state's sectoral distribution of exports is identical to the national distribution.

QUICK FACT: The Export Similarity Index is constructed by calculating a particular industry's share of a state's total exports and comparing that to the same industry's share of national exports. For each industry, we compare the state share to the national share, take the minimum, sum the 20 values, and multiply by 100.
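Written out, the construction described in the preceding Quick Fact takes the following form (a sketch in our own notation of the Finger and Kreinin measure; the article itself gives only the verbal recipe):

$$\mathrm{ESI}_s \;=\; 100 \times \sum_{i=1}^{20} \min\left(w_{i,s},\; w_{i,\mathrm{US}}\right),$$

where $w_{i,s}$ is industry $i$'s share of state $s$'s manufactured exports and $w_{i,\mathrm{US}}$ is the same industry's share of national manufactured exports. If the two sectoral distributions are identical, every minimum equals the common share and the index is 100; if they do not overlap at all, the index is 0.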

[Chart: Export Similarity Index, 1999-2010, plotted on a 0-100 scale for each Fifth District jurisdiction (DC, MD, NC, SC, VA, WV) and for the Fifth District as a whole. SOURCE: Calculated using data from WISER/Haver]

The Fifth District export similarity index has hovered around 80 for most of the past decade, indicating a sectoral distribution that is quite similar to the U.S. distribution. The two jurisdictions with consistently the lowest index — West Virginia and the District of Columbia — are also the two regions of our District that contribute the least to total manufactured exports (7.1 percent and 1.3 percent, respectively, in the first quarter of 2010).

Another interesting note about the Fifth District similarity index is that it has trended up in the last 10 years, indicating that the industry makeup of Fifth District exports is slowly converging to that of the nation. In the fourth quarter of 2008 and the first quarter of 2009, when exports were falling most severely, the export similarity index reached a series high of more than 85. At least part of the explanation for this convergence lies in the declining auto sector; transportation equipment's share of District exports fell notably in this period and began to match the national share.

Exports of transportation equipment did, in fact, make up the largest portion (34 percent) of the District export decline. In the second quarter of 2008, transportation equipment made up almost 24 percent of all Fifth District exports; that number had dropped to about 18 percent by the second quarter of 2009 and did not improve much in the ensuing quarters. This coincides with national problems in the motor vehicles sector that also helped to drive the collapse in total U.S. trade. Exports of transportation equipment in that year fell 38 percent in the United States compared to 34 percent in the Fifth District, but the industry's share of total exports remained around 19 percent in the United States.

Despite the transportation equipment industry's high share of total losses, five District industries saw export levels fall at a faster pace than the transportation equipment industry. Petroleum and coal products had the sharpest fall, followed by primary metals, beverages and tobacco, furniture, and apparel. In other words, firms across District industries suffered declining exports in this period; firms exporting transportation equipment did not dominate the trade collapse in our region. And, although all industries experienced accelerated export declines from the third quarter of 2008 through the second quarter of 2009, a few industries had been seeing falling exports for some time. Exports from the apparel industry, for example, fell at an average quarterly rate of 2.4 percent from the beginning of the decade to the third quarter of 2008 (at which point the decline accelerated to an average 14 percent quarterly). The beverages and tobacco industry exports also declined at a 3.3 percent average quarterly rate before the trade collapse, and 17.4 percent starting in the third quarter of 2008. Although no industry has yet recovered to the export levels seen before the collapse, only three industries have continued to see export declines. For two industries — printing and chemicals — the average quarterly decline has abated notably. Although declines in District exports of petroleum and coal products moderated, exports continued to fall at a 12.5 percent average quarterly rate since the second quarter of 2009.

Export Diversification

Globalized production processes almost certainly contributed to export declines in certain industries. However, it is outside of the scope of this article to examine the extent to which that was a factor in their decline. It seems likely that the role of various factors in the trade decline differed across industries; certainly the disproportionate decline in demand for durables played a role in the transportation equipment and furniture export sectors. We do, however, explore the extent to which the recent trade collapse might have altered the level of diversification of Fifth District exports. Were certain industries permanently affected by the trade collapse? To better understand the diversification of Fifth District exports and how that might have changed, we engage the Hirschman-Herfindahl (HH) index used by Gazel and Schwer (1998). We use the index to measure the relative concentration of tradeable sectors and individual export markets for the United States and for Fifth District states. See tables on page 39.

The HH index is the sum of squares of all market shares and therefore ranges from one, which indicates total concentration in one sector, to one divided by the number of sectors, which indicates complete diversification. Because we would like to be able to compare industry and export destination diversification within a state, we use the same number (20) of international markets as we had NAICS codes for manufactured exports.
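In symbols (our notation; the article gives only the verbal definition above): with $s_i$ denoting the share of export category $i$, whether a sector or a destination market, and $N = 20$ categories,

$$\mathrm{HH} \;=\; \sum_{i=1}^{N} s_i^{2}, \qquad \frac{1}{N} = 0.05 \;\le\; \mathrm{HH} \;\le\; 1.$$

Complete concentration in a single category gives a value of 1, while equal 5 percent shares across all 20 categories give 0.05, so the index values of roughly 0.09 to 0.14 in the accompanying tables sit much closer to the fully diversified benchmark than to single-market concentration.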

QUICK FACT: The Origin of Movement (OM) data contain export sales
(or free-alongside-ship costs if the good is not sold)
from U.S. states and territories to 242 foreign
destinations, classified by NAICS subsectors. The data
are published by the Census Bureau and the World
Institute for Strategic Economic Research (WISER). The
OM data reflect the transportation origin of exports,
not their origin of production, a limitation that has
deterred many academics and practitioners from using the data set. However, work by Andrew Cassey in 2006,
and Ron Cronovich and Ricardo Gazel in 1999, indicates
that OM data are usable for Origin of Production data
with the primary disclaimer that OM data can be
inaccurate for agricultural and mining exports. In order
to limit inaccuracy, we confine our analysis primarily
to data on manufactured goods and, for time-series
reliability, only to data collected after the institution of
NAICS categorization in 1997.

For every state, the top 20 export destinations accounted for at least 75 percent of all exports and as much as 92 percent in the District of Columbia.

On the whole, once again, the Fifth District and the nation look rather similar. Turning first to the HH indexes for export destination, it is clear that although District exports began the decade more concentrated than national exports, they later became less concentrated. This does not mean that the Fifth District had more export destinations, since in creating this index we constrained ourselves to the top 20 importers of District and U.S. goods. The lower index in the District simply means that regional exports were more widely spread among those top 20 export destinations than total U.S. exports. There is some intuition behind this finding — many states in the United States are geographically and culturally closer to some of our nation's major trading partners such as Mexico, Canada, and parts of Asia than Fifth District jurisdictions. Within the District, exports from Washington, D.C., are the most concentrated, with more than 50 percent of D.C. exports going to the United Kingdom or the United Arab Emirates. On the other hand, Maryland and, increasingly, Virginia have had the lowest export destination HH indexes among the Fifth District states.

Hirschman-Herfindahl Export Concentration Indexes: Export Destination
          U.S.     Fifth District
2000:Q1   0.137    0.141
2001:Q1   0.125    0.123
2002:Q1   0.130    0.117
2003:Q1   0.132    0.121
2004:Q1   0.128    0.115
2005:Q1   0.132    0.126
2006:Q1   0.131    0.127
2007:Q1   0.121    0.111
2008:Q1   0.115    0.103
2009:Q1   0.111    0.091
2010:Q1   0.113    0.097
SOURCE: Calculated using data from Bureau of the Census/Haver, WISER/Haver

The HH export destination index has been generally trending down. This index reached a low of 0.074 in the fourth quarter of 2008 and has since returned to first quarter 2009 levels. It is not clear, though, if we are going to see a reversal in the downward trend of the index. It is likely that at least part of the drop in the index can be attributed to the collapse in exports to Canada in the fourth quarter of 2008. Fifth District exports to Canada fell by almost half in the fourth quarter of 2008 as Canada's share went from 17 percent of total District exports to 10 percent. By the fourth quarter of 2009, however, exports to Canada returned to about 18 percent of District exports.

Turning to the industry concentration of exports, we find that until 2008, Fifth District exports were often notably less concentrated than those in the nation. Again, D.C. has a high HH index, but we also find South Carolina and Maryland to have notably high levels of sector concentration. Almost 50 percent of South Carolina exports are in machinery and transportation equipment, and an additional 17 percent are exports in chemicals. Maryland also has more than 25 percent of its exports in transportation equipment, and an additional almost 25 percent in chemicals. Almost 15 percent of Maryland's exports are in computers and electronic products.

Hirschman-Herfindahl Export Concentration Indexes: Export Sector
          U.S.     Fifth District
2000:Q1   0.135    0.097
2001:Q1   0.136    0.095
2002:Q1   0.139    0.106
2003:Q1   0.133    0.115
2004:Q1   0.136    0.118
2005:Q1   0.129    0.119
2006:Q1   0.128    0.109
2007:Q1   0.127    0.119
2008:Q1   0.118    0.124
2009:Q1   0.117    0.127
2010:Q1   0.115    0.126
SOURCE: Calculated using data from Bureau of the Census/Haver, WISER/Haver

It is not immediately obvious that the trade collapse had a notable effect on the concentration by industry of District exports. The industrial concentration of regional exports trended up for most of the decade, and although the last few quarters have seen slightly lower index levels than the index peak in the second quarter of 2009, it is not clear that we are facing a regime shift.

The Fifth District is remarkably like the nation in export concentration by both industry and destination. It is not surprising, then, to see expansion and contraction in Fifth District export activity that closely tracks that of the United States as a whole.

Looking Forward: The Great Trade Recovery?

The export industry in the Fifth District has started to recover following the great trade collapse. District goods exports grew at an average quarterly rate of 4.8 percent from the second quarter of 2009 through the first quarter of 2010. U.S. export activity also expanded over the period as goods exports expanded an average 5.7 percent each quarter.

Nonetheless, international demand remains weak and total exports are not yet back to their pre-collapse levels either in the Fifth District or in the United States overall. Given the diversity of District exports, however, and the great similarity between regional and national export makeup and growth trends, Fifth District firms are well-placed to benefit from a national and global return to normal patterns — and growth — in trade activity.
RF



State Data, Q4:09

                                             DC          MD          NC          SC          VA          WV
Nonfarm Employment (000s)                 702.1     2,499.1     3,890.9     1,809.4     3,602.5       735.4
  Q/Q Percent Change                       -0.2        -0.4         0.3         0.0        -0.4        -0.6
  Y/Y Percent Change                       -0.2        -3.1        -4.5        -4.3        -3.6        -3.4

Manufacturing Employment (000s)             1.4       117.5       434.0       207.5       232.9        49.5
  Q/Q Percent Change                        0.0         0.9        -0.9        -0.8        -0.9         0.3
  Y/Y Percent Change                       -6.7        -6.0       -12.9       -12.1        -9.7       -10.9

Professional/Business Services
Employment (000s)                         149.0       383.5       464.4       208.4       636.7        59.0
  Q/Q Percent Change                        1.5         0.4         1.9         4.0         0.4         0.0
  Y/Y Percent Change                       -1.3        -2.6        -5.0        -2.7        -2.9        -3.0

Government Employment (000s)              245.0       491.3       727.2       351.8       692.4       148.6
  Q/Q Percent Change                       -0.6        -0.4         2.3         0.6        -0.4        -1.2
  Y/Y Percent Change                        4.0         0.2         2.1         1.2        -1.0         0.4

Civilian Labor Force (000s)               332.5     2,960.5     4,521.7     2,172.7     4,147.3       788.5
  Q/Q Percent Change                        0.3        -0.6        -0.1        -0.3        -0.6        -1.1
  Y/Y Percent Change                       -0.4        -2.0        -1.3         0.4        -0.2        -1.8

Unemployment Rate (%)                      11.6         7.3        10.9        12.3         6.8         8.9
  Q3:09                                    10.8         7.2        10.9        12.1         6.9         8.6
  Q4:08                                     7.7         5.4         7.8         8.7         4.8         4.9

Real Personal Income ($Mil)            36,107.6   251,232.4   295,639.4   132,751.6   315,566.7    53,340.6
  Q/Q Percent Change                        0.3        -0.1         0.2         0.3        -0.1        -0.3
  Y/Y Percent Change                       -0.5        -0.3        -0.9        -1.1        -0.5        -0.4

Building Permits                            421       2,974       7,519       3,804       4,723         367
  Q/Q Percent Change                      158.3        23.3       -19.7       -12.9       -12.6       -50.1
  Y/Y Percent Change                      902.4        57.4        -6.7        10.5        -6.2        -8.9

House Price Index (1980=100)              572.8       442.0       327.7       333.5       420.6       225.3
  Q/Q Percent Change                        1.6        -1.7        -1.3        -0.7        -0.7        -0.4
  Y/Y Percent Change                       -1.5        -7.7        -3.4        -3.2        -4.3        -1.4

Sales of Existing Housing Units (000s)     10.4        87.6       162.8        81.6       120.4        32.8
  Q/Q Percent Change                       18.2        16.5        13.7        11.5        -3.2        13.9
  Y/Y Percent Change                       62.5        49.0        32.6        25.2        14.0        41.4

NOTES:
Nonfarm Payroll Employment, thousands of jobs, seasonally adjusted (SA) except in MSAs; Bureau of Labor Statistics (BLS)/Haver Analytics. Manufacturing Employment, thousands of jobs, SA in all but DC and SC; BLS/Haver Analytics. Professional/Business Services Employment, thousands of jobs, SA in all but SC; BLS/Haver Analytics. Government Employment, thousands of jobs, SA; BLS/Haver Analytics. Civilian Labor Force, thousands of persons, SA; BLS/Haver Analytics. Unemployment Rate, percent, SA except in MSAs; BLS/Haver Analytics. Building Permits, number of permits, NSA; U.S. Census Bureau/Haver Analytics. Sales of Existing Housing Units, thousands of units, SA; National Association of Realtors®.


[Charts: Fifth District economic trends, first quarter 1999 - fourth quarter 2009. Panels: Nonfarm Employment, change from prior year (Fifth District, United States); Unemployment Rate (Fifth District, United States); Real Personal Income, change from prior year (Fifth District, United States); Nonfarm Employment, Metropolitan Areas, change from prior year (Charlotte, Baltimore, Washington); Unemployment Rate, Metropolitan Areas (Charlotte, Baltimore, Washington); Building Permits, change from prior year (Fifth District, United States); FRB-Richmond Manufacturing Composite Index; FRB-Richmond Services Revenues Index; House Prices, change from prior year (Fifth District, United States)]

NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes (restated as formulas below).
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building permits: U.S. Census Bureau, http://www.census.gov.
House prices: Federal Housing Finance Agency, http://www.fhfa.gov.
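Restating the survey arithmetic in note 1 as formulas (a sketch; the symbols and the weights $w_k$ are ours, since the page does not report the actual weights):

$$D \;=\; \%\ \text{reporting increase} \;-\; \%\ \text{reporting decrease},$$
$$\text{Composite} \;=\; w_1 D_{\text{shipments}} + w_2 D_{\text{new orders}} + w_3 D_{\text{employment}}, \qquad w_1 + w_2 + w_3 = 1.$$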


Metropolitan Area Data, Q4:09

                              Washington, DC    Baltimore, MD    Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)          2,393.1          1,270.7              96.8
  Q/Q Percent Change                   0.1              0.2              -0.5
  Y/Y Percent Change                  -1.7             -3.2              -4.0
Unemployment Rate (%)                  6.2              7.6               9.4
  Q3:09                                6.2              7.7               9.1
  Q4:08                                4.3              5.4               6.3
Building Permits                     2,874            1,325               145
  Q/Q Percent Change                   2.6             20.2             -30.3
  Y/Y Percent Change                  -1.8             93.7             -14.7

                              Asheville, NC     Charlotte, NC    Durham, NC
Nonfarm Employment (000s)            165.7            806.6             284.6
  Q/Q Percent Change                   0.1              1.0               1.4
  Y/Y Percent Change                  -5.5             -6.1              -2.5
Unemployment Rate (%)                  8.8             12.0               7.8
  Q3:09                                8.9             12.1               8.3
  Q4:08                                5.8              7.7               5.4
Building Permits                       255            1,436               508
  Q/Q Percent Change                 -16.1            -28.0              27.6
  Y/Y Percent Change                  -3.0            -28.8              49.9

                              Greensboro-High Point, NC    Raleigh, NC    Wilmington, NC
Nonfarm Employment (000s)            343.0            500.1             137.8
  Q/Q Percent Change                   1.1              0.9              -0.5
  Y/Y Percent Change                  -6.2             -4.2              -4.8
Unemployment Rate (%)                 11.4              8.9              10.4
  Q3:09                               11.6              9.1              10.1
  Q4:08                                7.6              5.8               7.1
Building Permits                       428            1,228               402
  Q/Q Percent Change                 -22.2             -7.8             -31.2
  Y/Y Percent Change                 -26.7             -1.3             -20.4

                              Winston-Salem, NC    Charleston, SC    Columbia, SC
Nonfarm Employment (000s)            208.7            283.6             347.6
  Q/Q Percent Change                   1.0              0.3               1.0
  Y/Y Percent Change                  -4.5             -4.3              -4.0
Unemployment Rate (%)                 10.0             10.3              10.0
  Q3:09                               10.2             10.2               9.9
  Q4:08                                6.8              7.0               7.2
Building Permits                       142              694               959
  Q/Q Percent Change                 -56.8            -21.8              18.2
  Y/Y Percent Change                 -46.0            -13.0              55.4

                              Greenville, SC    Richmond, VA    Roanoke, VA
Nonfarm Employment (000s)            293.8            598.1             154.9
  Q/Q Percent Change                   0.6              0.0               1.0
  Y/Y Percent Change                  -6.2             -5.2              -4.6
Unemployment Rate (%)                 11.1              7.6               7.2
  Q3:09                               11.1              7.8               7.5
  Q4:08                                7.4              4.8               4.4
Building Permits                       352              816               103
  Q/Q Percent Change                 -11.3            -16.2             -12.0
  Y/Y Percent Change                  12.8            -21.9               0.0

                              Virginia Beach-Norfolk, VA    Charleston, WV    Huntington, WV
Nonfarm Employment (000s)            734.1            147.3             116.4
  Q/Q Percent Change                  -0.8             -0.3               1.4
  Y/Y Percent Change                  -3.4             -4.7              -3.2
Unemployment Rate (%)                  6.9              7.3               7.8
  Q3:09                                7.0              7.1               8.2
  Q4:08                                4.8              3.4               5.2
Building Permits                     1,255               47                 8
  Q/Q Percent Change                   5.6              0.0              14.3
  Y/Y Percent Change                  93.7            -17.5              60.0

For more information, contact Sonya Ravindranath Waddell at (804) 697-2694 or e-mail Sonya.Waddell@rich.frb.org


OPINION
Too Big to Fail and the Distortion of
Compensation Incentives
BY JOHN A. WEINBERG

We sometimes think of compensation for employment as being a pretty straightforward thing — you get paid a fixed rate for the amount of time you work. But many jobs involve choices that the worker makes on a daily basis — choices that affect the outcomes achieved but are hard for the worker's boss to directly observe or influence.

For instance, it becomes difficult to simply say "you do X and I'll pay you Y" when X involves managing a portfolio of assets. How do you know if the assets have been effectively managed? Of course, you can look at the results achieved — for instance, the returns on investments — and compensate the manager based on those returns. But the results are likely to depend both on choices made by the portfolio manager and on random factors beyond the manager's control.

In general, you would like to be able to base compensation on an indicator of whether the manager made sound choices, but such indicators are hard to come by. After-the-fact indicators, like the actual portfolio performance, although imperfect, may often be the best you can hope for. By rewarding performance after the fact, a compensation arrangement faces the trade-off between giving the manager an incentive to make good decisions and exposing the manager to risks beyond his control — risks that make the job less desirable to begin with.
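One standard way to formalize that trade-off (a textbook linear-contract sketch, not a model from this column; all symbols are our own illustration):

$$w \;=\; \alpha + \beta \pi, \qquad \pi \;=\; e + \varepsilon,$$

where $w$ is the manager's pay, $\pi$ the observed return, $e$ the quality of the manager's unobserved choices, and $\varepsilon$ noise beyond the manager's control. Raising the performance weight $\beta$ strengthens the incentive to improve $e$, but it also loads the variance $\beta^{2}\,\mathrm{Var}(\varepsilon)$ onto the manager, who must be compensated for bearing that risk; the chosen $\beta$ balances the two effects.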
institutions’ incentives for risk, regulation needs to replace
The trade-off between risks and incentives is the
the discipline that would otherwise come from market
fundamental problem in designing compensation schemes
forces. Whether that regulation is most effective when
in large organizations. The problem certainly arises in
applied to compensation practices or more directly to the
banks and other financial institutions, where pay policies
risk-taking activities of a firm is somewhat of an open queshave been argued to have increased incentives for taking
tion. But the effectiveness of either approach will be
large risks that contributed to the financial crisis.
enhanced by a recognition that the fundamental source of
And in the wake of the crisis, efforts have begun, both
incentive problems is not in compensation practices per se
in the United States and internationally, to increase the
but in the protections of the financial safety net.
RF
regulatory scrutiny of compensation practices in large
banks. But what exactly is the problem that regulation needs
to fix?
Designing compensation schemes is complicated,
because of the difficulties in measuring performance and
John A. Weinberg is senior vice president and director
tying it to the actions of employees. But typically, a firm
of research at the Federal Reserve Bank of Richmond.

W

44

Region Focus | Second Quarter | 2010

NEXT ISSUE



The Furniture Factor
Consumers typically defer furniture purchases during recessions, but more buyers than expected showed up last spring
at the High Point Market in North Carolina. The number
of registered attendees reached its highest level in two years,
raising hopes among manufacturers and retailers.

The Trying State of Public Pensions
Public pensions are facing shortfalls across states and municipalities in part because of accounting rules that allow some
administrators to apply optimistic assumptions that result in
lower funding levels. What fixes are being considered?

The Economics of Music
It’s no secret that the music industry has changed with the
growing popularity of both Internet purchases and file-sharing
software. As consumers begin accessing music in different ways,
recording labels and artists alike are preparing for a world
where the CD is no longer king. Firms must reconsider which
products they wish to offer — and how to market them.

Interview
Bruce Caldwell of Duke University will
discuss why the history of economic thought
matters for today’s economists, who typically employ significantly different methods
than their predecessors, but whose work
often builds on time-tested ideas.

Economic History
Many idealized societies took root in the
19th century against a backdrop of industrialization. Early efforts included Robert
Owen’s “village of cooperation” in New
Harmony, Ind. How did these experiments
fare, and what influence did they have on
seemingly similar communities today,
including one in central Virginia?

Jargon Alert
It’s commonly thought that firms’ profitability is hurt by economic regulation.
That’s often true — except for some firms
that use regulation to their advantage, by
stifling competition, for instance. Find out
how “regulatory capture” works and learn
about some prominent historical examples.

Visit us online:
www.richmondfed.org
• To view each issue’s articles
and Web-exclusive content
• To add your name to our
mailing list
• To request an e-mail alert of
our online issue posting

Federal Reserve Bank
of Richmond

PRST STD
U.S. POSTAGE PAID
RICHMOND VA
PERMIT NO. 2

P.O. Box 27622
Richmond, VA 23261

Change Service Requested

Please send subscription changes or address corrections to Research Publications or call (800) 322-0565.

This special issue of Economic Quarterly is devoted to Douglas Diamond and Philip Dybvig's seminal 1983 article on bank fragility and banking regulation, which continues to provide insights for today's policymakers and researchers. The articles in this issue explore the influence of Diamond and Dybvig's article on subsequent economic research, extend our understanding of current financial phenomena, and examine how the model might be used to evaluate new regulations and banking policies.