
THE FEDERAL RESERVE BANK OF RICHMOND
VOLUME 13, NUMBER 4
FALL 2009

FEATURES

12 Recessions and Entrepreneurship: Is necessity the mother of invention?
More than half of Fortune 500 firms were founded in recessions or bear markets. Well-known companies like Burger King, Hyatt, and Microsoft were either launched or conceived during a recession. Is there something special about economic hardship that spurs entrepreneurship?

16 The Price is Right? Has the financial crisis provided a fatal blow to the efficient market hypothesis?
Some observers have argued that the financial crisis has disproven what has become known as the “efficient market hypothesis.” But the conclusions of the hypothesis are more modest than its critics often suggest.

20 The Business of Higher Ed: Prices and costs of a college education
College sticker prices have outstripped inflation for three decades. That trend is likely to continue as long as demand remains strong and opportunities for institutions to increase their productivity are limited.

23 Questions Grow Along with Ginnie’s Portfolio
While the housing market struggles to recover, the Government National Mortgage Association’s business is booming. Yet a large share of the mortgage-backed securities it guarantees is from loans insured by the Federal Housing Administration, which has experienced an increasing default rate in its portfolio.

24 Jalopy Economics: How to judge the success of “Cash for Clunkers”
The Cash for Clunkers program seemed quite popular. But did it really provide a robust boost to economic growth, or did it simply spur vehicle consumption earlier rather than later?

DEPARTMENTS

1 President’s Message/The Importance of Financial Education
2 Upfront/Economic News Across the Region
5 Federal Reserve/The Evolution of Fed Independence
8 Jargon Alert/Prisoner’s Dilemma
9 Research Spotlight/How Forecasts Can Influence the Present
10 Policy Update/New Credit Card Rules Could Harm Some
11 Around the Fed/Lending Standards and the Foreclosure Crisis
26 Interview/George Kaufman
30 Economic History/A Tale of Two Virginias
35 Book Review/This Time is Different: Eight Centuries of Financial Folly
36 District Digest/Economic Trends Across the Region
44 Opinion/Why Efficiency Matters … Even if You Value Equality

Our mission is to provide authoritative information and analysis about the Fifth Federal Reserve District economy and the Federal Reserve System. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia. The material appearing in Region Focus is collected and developed by the Research Department of the Federal Reserve Bank of Richmond.

DIRECTOR OF RESEARCH
John A. Weinberg

EDITOR
Aaron Steelman

SENIOR EDITOR
Stephen Slivinski

MANAGING EDITOR
Kathy Constant

STAFF WRITERS
Renee Courtois
Betty Joyce Nash
David van den Berg

CONTRIBUTORS
Kartik B. Athreya
Daniel Brooks
Ann Macheras
Bob Schnorbus
Sonya Ravindranath Waddell

DESIGN
Big (Beatley Gravitt, Inc.)

CIRCULATION
Alice Broaddus

Published quarterly by
the Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org
Subscriptions and additional
copies: Available free of
charge through our Web site at
www.richmondfed.org/publications
or by calling Research
Publications at (800) 322-0565.
Reprints: Text may be reprinted
with the disclaimer in italics
below. Permission from the editor
is required before reprinting
photos, charts, and tables. Credit
Region Focus and send the editor a
copy of the publication in which
the reprinted material appears.
The views expressed in Region Focus
are those of the contributors and not
necessarily those of the Federal Reserve Bank
of Richmond or the Federal
Reserve System.
ISSN 1093-1767

PRESIDENT’S MESSAGE
The Importance of Financial Education
In October, I gave a talk to the Council for Economic
Education, addressing what educators might do to help
people make more informed financial choices. This
issue seems particularly timely in the wake of the financial
crisis, and in this column I would like to revisit the topic.
Every day, we are faced with financial decisions. But for
most of us, there are only a few key choices that have the
potential to fundamentally shape our financial future. In
those cases, it is important that we make the correct choice
— or, perhaps more importantly, avoid a really bad choice.
Among those decisions are: whether to pursue higher
education; whether to purchase a house and, if so, on what
terms; and how to best save for retirement. I believe that
financial education should focus on those key decisions, ones
that will have large and potentially long-lasting effects on
people’s lives.
Consider education. Wage inequality has been growing in
the United States since the late 1970s. While there are
numerous potential causes of this trend, many research
economists agree that one of the most significant is skill-biased technical change. That is, technological progress over
the last few decades has increased the productivity of skilled
workers more rapidly than it has of less-skilled workers. As a
result, the financial returns to accumulating skills have
grown sharply. Often — though not always — those skills are
obtained through higher education. Investments in human
capital can prove more useful if they are made at a relatively
early age, giving people the opportunity to recoup their
investment throughout the majority of their working lives.
In addition, such training tends to build on itself, because
acquiring skills early in life makes it easier to acquire additional skills later in life. The evidence is clear that, over
time, higher education has become a larger determinant of
lifetime earnings. This suggests that it is particularly important that high school students fully understand the returns to
human capital investment as they consider which path to
take following graduation.
Most households will purchase a home. It is crucial that
they know how much they can afford to spend and which
terms would be most desirable given the path they expect
their earnings to take over their lifetimes. Homeowners
should see the considerable benefit in gathering that information. But prudent regulation, such as requiring the
essential provisions of a mortgage contract to be clear and
explicit, can make the process easier. I think it is also important to note that homeownership itself is not a wise choice
for everybody. For example, for those who value mobility
and are apt to move relatively frequently to pursue better
job opportunities, the search and transaction costs of purchasing a house can be prohibitively large. More critically,
purchasing a home usually
means placing a considerable
amount of a household’s savings into a single asset that is
often costly to trade. This decision may be particularly risky
for a household with a variable
income stream and low levels
of savings, as is true of many
low-income households. When
one considers such factors,
I think it becomes clear that
renting does not necessarily
mean “throwing your money away,” as is commonly suggested. It is simply another way to obtain housing, one that
is appropriate for some households, just as buying is
for others.
Planning for retirement involves a hard set of choices as
well, which, if not done carefully, can lead to painful results.
In the aftermath of this recession, which has seen many
workers postpone their impending retirement, the importance of accounting for a range of plausible risks should be
clear. Given the differences in how people wish to live in
retirement — that is, whether they want more, less, or
roughly the same annual spending as when they were
working — it is hard to say much in the abstract about what
is an “optimal” retirement plan. But I think everyone would
benefit from understanding that there generally is good
reason to shift your retirement portfolio from more risky to
less risky assets as you grow older. This understanding might
have helped prevent some of the considerable losses in
retirement funds that some consumers have unfortunately
experienced.
People do not always make the best possible choice in
every situation. But we can learn which decisions are likely to
be especially consequential and to take appropriate care
when making them. Directing our financial education efforts
toward that goal, I believe, could help many people more
effectively pursue their ambitions and avoid costly mistakes.

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND


UPFRONT

Economic News Across the Region

Danville Works

New IKEA Factory Hums


One manufacturer decided to swim against
the outsourcing current and situate a
factory close to its market.


Swedwood, the manufacturing unit for IKEA, opened
its first U.S.-based manufacturing facility in Danville,
Va. By the time the factory opened in May of 2008,
there were more than 175 employees at the 930,000
square-foot plant. Swedwood expects to hire another
625 workers by 2013.
For Southside Virginia, the new factory brought
relief. Here the economic blues are nothing new.
Danville, home to 48,411 residents, according to the
2000 Census, has struggled with high unemployment
and a stagnant economy for a decade. In the current
recession, Danville’s unemployment jumped to 13.9
percent in January 2009, well above the 7.6 percent
nationwide unemployment rate. By October, the rate
had fallen to 11.2 percent, much higher than Virginia’s
6.3 percent rate.
Manufacturers have generally left Danville for the
cheaper labor found overseas. For example, textile
manufacturer Dan River Inc. was facing overseas competition and steadily decreased its production. The
textile mill once employed as many as 15,000. By 2008,
only a handful of workers remained when the company
filed Chapter 11 bankruptcy. The departure of the textile mill and smaller manufacturers crippled the
economy. For a city where about 15 percent of jobs were
in the manufacturing sector (compared to the 7 percent
Virginia average), the decline of manufacturing was
particularly devastating.
For some manufacturers, transportation makes up a
large percentage of total costs. This is especially true
for IKEA, known for low-priced furniture. Two years
prior to IKEA’s 2006 announcement that they would
open the Danville plant, crude oil had more than
doubled in price, from about $30 a barrel in 2004 to
$70 a barrel by August of 2006.
The Danville facility produces wood-based products such as bookshelves, and coffee and side tables.
According to Jorgen Lindquist, vice president of
Swedwood in North America, “this kind of lightweight
furniture is cheap to make, so transportation is a huge
part of the overall cost.” Having to ship goods from factories in Eastern Europe all but wiped out the benefits
of manufacturing in such low-wage economies.


IKEA’s Swedwood manufacturing plant in Danville, Va.,
specializes in wood-based furniture such as bookcases and
side tables.

Now IKEA’s products can be shipped to East Coast
distribution centers in Georgia, Maryland, and New
Jersey. Proximity to those centers was one reason why
Danville was chosen. A manufacturer also needs skilled
labor, and Danville had plenty who were eager for
employment.
The Danville Office of Economic Development
especially welcomed the arrival of Swedwood. After the
tobacco settlement in 1999, in which states received
$206 billion over 25 years from tobacco companies,
millions of dollars went to aid Danville’s weakening
economy. The city and its home county, Pittsylvania,
used that money to attract industry and built the 900-acre joint industrial park Swedwood now occupies.
Swedwood was also lured by the industrial park’s
communications capacity. Using money from the
tobacco settlement, the state initiated the Mid-Atlantic Broadband Cooperative (MBC) to construct a
fiber-optic network covering a large section of southern Virginia. This enables Swedwood to connect to
its home office in Sweden. According to Tad Derisa,
general manager of MBC, it cost less for Swedwood to
buy connectivity to Sweden through the cooperative
than through other carriers.
Because IKEA is not publicly traded, it is difficult
to gauge the impact of its manufacturing move to the
United States. With the value of the dollar declining
and the rise in oil prices at the time, other manufacturers were reviewing the United States for possible
sites. The outsourcing trend is unlikely to reverse, even
with a sharp increase in oil prices. But in some cases,
these high transportation costs may provide some
employment relief for small towns like Danville.
—DANIEL BROOKS

Troubled Asset Relief

Bank of America Repays Government Loans
Bank of America in December repaid the $45
billion it borrowed from the government’s Troubled
Asset Relief Program (TARP). Bank officials said in a
press release that the repayment clears a “significant
hurdle” and demonstrates the company’s strength
following the nationwide financial crisis.
More than half of the repayment came from cash,
with more than $19 billion coming from a new
issuance of stock. Bank of America is the nation’s
largest bank-holding company by assets and received
the second-biggest chunk of TARP money behind
Citigroup.
Most banks want to repay the money, in part to be
free of government influence over executive pay and
other operational issues. But it’s not easy. Banks must
prove they’re out of financially tumultuous waters
before using resources to pay back TARP funds. This
includes judging whether the weak economy could
pose further troubles.
For Bank of America, the decision likely also factored in possible financial challenges issuing from its
acquisitions of Countrywide in 2008 and Merrill
Lynch in 2009. “It may still have to stomach losses
from its portfolios associated with [those] acquisitions,” notes Daniel Indiviglio of The Atlantic.
TARP was initiated by the U.S. Treasury in
October 2008 following the onset of the financial crisis. The program gave eligible banks capital in
exchange for preferred shares
granted to the government. The only way for banks to
recoup those shares is to buy them back which,
in effect, is a repayment of the TARP money. But
they can only do that with government approval.

Treasury allows TARP repayment based on an institution’s soundness, capital, and ability to lend, all of
which are determined in concert with the bank’s
regulator.
Regulators also have a say in how banks finance
TARP repayment. Banks are required to hold a certain amount of capital against assets to protect
against possible future losses. Using existing cash to
repay TARP funds will lower the bank’s capital ratio,
all else being equal. A bank’s capital cushion helps it
weather strains stemming from the financial crisis.
That means repaying TARP funds could impair its
ability to withstand future turmoil. Alternatively,
banks can repay TARP funds by issuing new shares,
but that displeases current shareholders who want
shares to stay scarce and, hence, more valuable.
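To see the capital-ratio point in rough numbers (the figures below are illustrative assumptions, not any bank’s actual balance sheet), note that redeeming the government’s preferred shares with cash shrinks capital and assets by the same dollar amount, which pulls the ratio down:

```python
# Illustrative sketch with assumed round figures, not actual balance-sheet data.
capital = 100.0     # regulatory capital, in billions of dollars (assumed)
assets = 1000.0     # total assets, in billions of dollars (assumed)
repayment = 45.0    # TARP preferred shares redeemed using existing cash

print(f"capital ratio before repayment: {capital / assets:.1%}")   # 10.0%
capital -= repayment   # buying back the preferred shares reduces capital
assets -= repayment    # paying with cash reduces assets by the same amount
print(f"capital ratio after repayment:  {capital / assets:.1%}")   # about 5.8%
```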
The Federal Reserve’s Fifth District was home to
two of the nation’s largest banks prior to TARP’s
inception: Bank of America and Wachovia, both
based in Charlotte. However, San Francisco-based
Wells Fargo announced in the same month TARP was
created that it would merge with Wachovia. Wells
Fargo received $25 billion in TARP funds, and repaid
the money late in December.
Citigroup, the New York-based bank, was the
largest recipient of TARP funds at $50 billion.
Citigroup paid $20 billion back in December.
Repaying TARP funds and reassuming control over
executive pay likely also made it easier for Bank of
America to secure a replacement for CEO Kenneth
Lewis, who retired at the end of 2009. Brian
Moynihan, an internal pick, was chosen as the new
head of the banking giant.
— RENEE COURTOIS

Apples, Soybeans, Poultry

Virginia’s Exports to Cuba Grow
Cuba is an expanding market for Virginia agricultural products, with $45 million sold as of the third quarter of 2009.
Virginia ranks among the top five states in the value of exports to the Caribbean nation. Cuba was opened to medical
and agricultural exports in 2000 for the first time since 1962.
The Virginia Department of Agriculture and Consumer Services reports that the state’s agricultural exports totaled
$2.2 billion in 2008 to all corners of the globe to the mutual benefit of farmers and consumers. Virginia cattle, for
instance, are shipped to Turkey and this export of genetic stock improves Turkish stock, says VDACS marketing director
Charles Green. Virginia also sells $100 million in agricultural goods annually to China.
Farmers and VDACS representatives went to Cuba last November to promote and negotiate shipments of soybeans,
pork, poultry, and apples. The state has exported agricultural products to Cuba since 2003, when it sold $838,000 in

products. In 2008, that number had grown to $40 million.
The process takes patience, though. Transactions must go through a third-country bank. But the relationship is paying off. The Cuban government buys through its agent, Alimport. VDACS sets up meetings between exporters and government representatives. “They may relay a desire to quote on a particular product; we bring it back and put it out here to the industry to see if anyone is interested in providing a quote on that particular product,” Green says.

Chart: Virginia Agricultural Exports to Cuba ($ millions), 2003-2008, rising from $838,000 in 2003 to $40.7 million in 2008. SOURCE: Virginia Department of Agriculture and Consumer Services
Foreign consumers can help diversify markets for farmers. Apple growers
Philip Glaize of Winchester, Va., and Henry Chiles from Crown Orchards, for
instance, work together to supply necessary volume for their overseas buyers.
The Cuban government agency coordinates purchase requests of multiple agencies, and consolidates them
into contracts with exporters, Green says. The market could be expanded, he says, but current trade and travel
restrictions limit growth. There are other transaction costs as well. Because of everything from travel licensing
to the necessary export licenses from the U.S. Treasury and U.S. Department of Commerce, it’s not as easy as
going to Mexico City to visit a customer. But it has been worth the investment.
“This is one of those markets where it’s truly unique when you see someone who is American,” Green adds.
And when he does, it’s often someone from the South. “Geographically it makes sense — we have a competitive
advantage. It’s a three-day transit time to get a vessel down there with soybeans.”
— BETTY JOYCE NASH

Cash Economy

Underground Commerce Grows in Recession
Some transactions in the economy aren’t conducted on the
books. Some are simple cash transactions between people in
what analysts call the “underground economy.” Contrary to the
connotation, much of this activity is not illicit. Most of the
underground economy does not involve drugs, prostitution, or
other illegal activities but is more likely to involve paying a
neighbor to mow your lawn or shovel snow — tasks that, were
it not for the nonreporting of the earned income, would be
otherwise perfectly legal.
Recessions tend to be fertile periods for these deals. One
way to measure the underground economy is to evaluate the
ratio of unreported income to reported income. Edgar Feige,
professor emeritus of economics at the University of
Wisconsin-Madison, estimates that in 2008, unreported income
reached about $2 trillion, the highest since World War II. This
estimate puts the percentage of unreported income at between
22 percent and 24 percent of reported income.
As a first step to estimating the underground economy,
Feige took a look at currency in circulation, which at the start of
2009 hit $824 billion. His research shows that about a third of
U.S. currency is held overseas, less than earlier estimates. The
remainder, Feige notes, includes underground transactions.
During World War II, the calculated percentage of
unreported income rose dramatically, but fell off during the
post-war period and remained fairly stable until 1973. This
figure rose to a temporary peak in 1982, also a recession year.
Feige estimates that the 1980s and 1990s showed considerable
fluctuations in levels of reported and unreported income.
Yet during the past decade, levels of unreported income rose
substantially.
Lack of employment opportunities probably plays a role in
the rise of unreported income. “People do try to substitute
work in the unofficial sector for work in the official sector
where they’re losing their jobs,” Feige says.
When so much income is unreported, legislators have fewer
tax revenues at their disposal. His findings suggest that governments could be out more than $600 billion a year, which
includes money from overseas tax havens. That figure, called the
“tax gap,” was last published by the Internal Revenue Service in
2002, when it was about $345 billion. Collecting that money
could allow lawmakers to reduce deficits.
Feige also posits a close relationship between distrust of
the government and the growth in unreported income. Events
like the Vietnam War and Watergate scandal caused
an increase in government mistrust. That led to an increase in
noncompliance with tax laws, Feige says. He expects dissatisfaction with the Iraq War and other policies to drive increased
rates of government distrust. “I suspect that could be an important factor in the increase in noncompliance we’ve witnessed.”
— DAVID VAN DEN BERG

FEDERAL RESERVE
The Evolution of Fed Independence
How monetary policy and central bank autonomy came of age
BY STEPHEN SLIVINSKI

Today there is a consensus
that a central bank can best
contribute to good economic
performance by pursuing price stability — and that it should remain independent from political forces. In the
case of price stability, this understanding evolved over decades of experience. The notion of independence of
the central bank was more difficult to
fulfill.
The original conception of the
Federal Reserve System when it was
created in 1913 was meant to continue
the spirit of the “independent treasury
system” that existed in the pre-Fed
era. That system assumed that the U.S.
Treasury would store its gold and
assets in its own vaults lest it unduly
influence the markets for credit and
money. Ideally, Treasury meddling in
what passed for monetary policy at the
time was to be avoided.
Yet for most of the first four
decades of its existence, a lack of
independence was characteristic of
the Federal Reserve. The hand of the
executive branch of the U.S. government was ever-present when the Fed
began operations in November 1914.
The 1913 act that created the Fed
made the Secretary of the Treasury
and the Comptroller of the Currency
ex officio members of the Board.
In fact, the Treasury secretary
presided over all meetings in those
early days. The Board did not have
its own building — they held their
meetings in the Treasury building
instead.
Thus, the evolution of monetary
policy cannot be understood without
an understanding of the changes
and personalities involved in the
evolution of the Fed as an institution.
Throughout much of its history, the
struggle for independence has often
occurred in conjunction with changes
in policy — and these changes have
tended to reinforce each other.

Wars, Depression, and
Dependence
When the United States entered
World War I in April 1917, the Federal
Reserve almost instantly became the
primary vehicle for financing the war
effort. The main function of the Fed
during those war years was to lend
money to banks to purchase “Liberty
Loans” bonds from the U.S. Treasury.
They loaned the money at a discounted rate — not coincidentally lower
than the interest rate on the war bonds
— to entice bond purchasers. After
the war, the Federal Reserve Bank of
New York remained the official
fiscal agent of the U.S. Treasury
Department.
In the post-war years, the Fed
busied itself with maintaining the
newly reconstructed gold standard.
Its missteps in the wake of the stock market crash of October 1929 contributed
to the impression that the Fed was
powerless.
Political forces retained the upper
hand in the economic upheaval of the
Great Depression. As economist Allan
Meltzer points out in his
history of the Fed, monetary
policy would basically be
dictated by Congress and
the White House between
1933 and 1951. For instance,
after Franklin Roosevelt
became president in 1933 he
assumed emergency powers
that explicitly took the
United States off the gold
standard. Congress would
later that year mandate that
the Fed issue “reserve notes”
not backed by gold. The Fed
was forced to freeze its asset portfolio
and the monetary base was effectively
determined by the Treasury.
Although the Fed Board of Governors moved into their own building in 1937, their independence in monetary policy wasn’t established until 1951. The headquarters was named the Eccles Building — after the former Fed chairman influential in achieving Fed independence — in 1982.

Additionally, the Federal Reserve structure as we know it today is a byproduct of the policy actions taken during the Great Depression. Many of

them were motivated by a desire of policymakers to further
centralize control over monetary policy. When Roosevelt
went looking for a new head of the Federal Reserve Board,
he settled on Marriner Eccles, an assistant to his Treasury
secretary. Yet Eccles told the president he wouldn’t take the
job unless the Fed was reformed to give the Board more
power over the regional Fed banks. The 1935 amendment to
the Federal Reserve Act modified the FOMC and Eccles
became its chief. He would serve as chairman until 1948.
While the 1935 act took the Secretary of the Treasury and
the Comptroller of the Currency off the Fed Board, it didn’t
translate into a softer hand by the executive branch in monetary policymaking. When the United States entered World
War II, the Fed became again a mechanism by which the
government could more cheaply finance the war effort. In
April 1942, the Fed announced a policy of cooperating with
the Treasury to keep interest rates low. By 1947, the Fed was
summarizing its “primary duty” as “the financing of military
requirements and of production for war purposes.” In his
memoirs, Eccles even described his work during this period
as “a routine administrative job” as the Fed “merely executed Treasury decisions.” Allan Sproul, the president of the
New York Fed, lamented, “We are not the masters in our
own house.”

The Accord
After the war, Eccles — worried about inflation — began to
make strong statements behind the scenes to the effect that
the Fed should no longer support the prices of Treasury
bonds. The “peg,” as it was called, was the rule by which the
Fed would buy up those Treasury securities if prices fell as a
result of a sell-off. President Truman didn’t take kindly to the
Fed’s new stirrings of independence and told Eccles that he
would not be reappointed when his term as chairman was
over in 1948. He was replaced by Thomas McCabe but
stayed on the Board as vice chairman.
Yet many within the Fed, particularly New York’s Sproul,
were worried about the loss of Fed independence. For the
next two years, the Fed and the Treasury would cooperate
but in a rather tense and uneasy way. While the Treasury
bond peg remained mostly intact, the stage was slowly being
set for a showdown.
The spark that ignited the next consequential chain of
events was the Korean War, which began on June 25, 1950.
Although the first year of that war was financed mainly by
tax increases, the Treasury Secretary John Snyder made no
secret of his department’s commitment to keeping the Fed
in the business of maintaining the bond price peg.
The FOMC had other ideas in mind. Still wary of inflation, Fed policymakers were eager to raise the short-term
interest rate to reduce the money supply and stave off price
increases. That would also have an effect on government
financing of debt — it would drive down the price of bonds,
which is always inversely correlated with the interest rate,
and also increase the government’s cost of borrowing.
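As a simple illustration of that inverse relationship (a perpetual bond is used here purely for convenience, not as a claim about the specific securities being pegged), a bond paying a fixed coupon C forever is priced at

```latex
P = \frac{C}{r}
```

at market interest rate r, so raising r from 2 percent to 4 percent cuts the price of a $4 annual coupon from $200 to $100, and a falling bond price means a higher borrowing cost on new debt.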
After the August 1950 FOMC meeting, a movement was
afoot to persuade Snyder to accept a small increase in short-term interest rates. At that meeting, Sproul raised a
challenge: The FOMC “should not seek instructions” from
the U.S. Treasury. Eccles agreed and said that if the Fed is
“expected to survive as an agency with any independence
whatsoever [it] should exercise some independence.”
President Truman was also willing to try to influence Fed
policy. In early December 1950, he phoned McCabe at his
home and urged him to “stick rigidly” to the pegged bond
rates. “I hope the Board will not allow the bottom to drop
from under our securities. If that happens that is exactly
what Mr. Stalin wants.”
To help smooth relations and try to persuade the administration to change their stance on the bond peg, McCabe
met with Truman and Snyder at the White House on Jan. 17,
1951. The chairman stated the concerns of the Fed and the
meeting ended without a specific resolution. McCabe
seemed convinced that their conversation would continue
behind the scenes at a later date.
But the next day Snyder delivered a speech to the New
York Board of Trade in which he announced that McCabe
had agreed with the Treasury’s peg policy. This infuriated the
members of the FOMC. The minutes of the Fed meeting
record that McCabe reported to his colleagues that he had
made no such commitment.
The Fed decided to fight back. At their January 29 meeting, in a challenge to the Treasury, the Fed allowed the price
of the pegged government bond to drop. The action
prompted Truman to call the entire FOMC to the White
House to apply some pressure the next day. It was the first
time a U.S. president had done such a thing.
As Meltzer describes it, “The meeting with the president
smothered the conflict in ambiguity. Everyone seemed to
agree, but no one changed position.” Yet the FOMC members also were confident that nothing said at the meeting
could have been construed as an endorsement of the
Treasury’s policy position.
At noon on February 1, the White House released a press
statement that took the Fed policymakers by surprise: The
Truman administration announced that the Federal Reserve
Board had agreed to the peg policy. In his memoirs, Eccles
noted that if swift action was not taken, the Federal Reserve
would lose the independent status Congress meant it to have
and “would be reduced to the level of a Treasury bureau.”
The fight for Fed independence also began to hone the
thinking of Fed policymakers about the nature of inflation
and the consequences of pegging the interest rate of
Treasury bonds. By committing to a policy of buying those
bonds when the price fell below an arbitrary level, the
FOMC members began to understand that they were
expanding the money supply. Richmond Fed economist
Robert Hetzel and former Board of Governors economist
Ralph Leach suggest this marked an “intellectual watershed.” “Gone,” they write, “was the self-image of a central
bank that allows an ‘elastic currency’ passively to ‘accommodate commerce.’ The Fed moved toward the idea of control

of money creation to stabilize the purchasing power of the
dollar.”
The Fed forced resolution of the dispute on February 19.
That day it informed the Treasury that it “was no longer willing to maintain the existing situation in the Government
security market.” As Sproul recounted in congressional testimony a year later, the Fed also let them know that unless
there was someone at the Treasury who could work out a
prompt and definitive agreement with them, they “would
have to take unilateral action.”
The Treasury finally acknowledged the need to end the
public dispute by holding a meeting at the White House
between the president and other government policymakers.
Snyder, however, was not at the meeting. He was in the
hospital recovering from surgery. Instead, he left the negotiations in the hands of William McChesney Martin, assistant
secretary of the Treasury.
After a few days of negotiation, the parties involved
agreed on what became known as the Treasury-Fed Accord.
As ratified, it read: “The Treasury and the Federal Reserve
System have reached full accord with respect to debt-management and monetary policies to be pursued in furthering
their common purpose to assure the successful financing of
the Government’s requirements and, at the same time, to
minimize monetization of the public debt.”
This newfound independence by the central bank would
mark the start of a new era for the Fed. “For the first time
since 1934, the Federal Reserve could look forward to conducting monetary actions without approval of the Treasury,”
writes Meltzer. Now the Fed “faced the task of rediscovering
how to operate successfully.”
That task would have to be undertaken with a new leader.
In one final shot at the Fed, Truman told McCabe that his
“services were no longer satisfactory.” Even though his term
ran until 1956, McCabe agreed to resign — but only under
the condition that he be replaced with someone who would
pass muster with the FOMC. The president appointed
William McChesney Martin, one of the key figures in the
Treasury-Fed Accord negotiations. He would go on to serve
for almost 19 years once he assumed office on April 2, 1951 —
the longest term of any chairman to this day.

A Brief Detour
The Martin era is still seen today as a vital period during
which the Fed was established as a credible and autonomous
policymaking body. Part of the success of the Martin years
was the unwillingness of President Dwight Eisenhower to
meddle in Fed policy the way his predecessors had.
On the other hand, throughout his presidency Lyndon
Johnson was eager to get Martin to pursue an easy money
policy to assist him in funding both the Vietnam War and his
deficit-fueled increases in government spending. Johnson
frequently criticized Martin’s policies in private meetings
and asserted that he seemed intent on hurting Johnson
politically.
LBJ’s crusade to steer Martin was ultimately an ineffective one. Yet Johnson did reappoint Martin in 1966 for what
would become his last term as Fed chairman.
Another president for whom Fed policy was seen as a tool
to influence political outcomes was Richard Nixon. In his
memoirs he was quite outspoken about how he thought the
tight Federal Reserve monetary policy virtually killed his
chances of getting elected president in 1960.
In 1970, President Richard Nixon was intently pursuing a
political strategy that had as one of its goals increased
employment through easy money. He appointed Arthur
Burns as Fed chairman with the expectation — sometimes
explicitly stated — that he would be more sympathetic to
using monetary policy to pull unemployment down. During
an applause-filled interlude at Burns’ swearing-in ceremony,
Nixon famously turned to him and said: “You see, Dr. Burns,
that is a standing vote of appreciation in advance for lower
interest rates and more money.”
Burns was initially sympathetic, and that mutual expectation married a shift in monetary policy with a close
relationship between the White House and the Fed that
hadn’t existed since the pre-Martin days. Burns, like Nixon,
had a view of inflation that made him prone to believing that
hard-money Fed policies would be misguided in the 1970s.
He came to believe the “cost-push” model of price increases
in which inflexible labor union contracts were keeping
wages artificially high and contributing to inflation in
the price of goods that utilized that labor. In that model,
monetary policy was ineffective at battling inflation in the
short term.
The temporary weakening of Fed independence under
Burns wasn’t motivated only by the president’s steps toward
assuring a compliant Fed. They were also facilitated by
Burns himself who was quite willing to bargain with the
White House to achieve policy outcomes that he saw as critical to defeating cost-push inflation. Economic historians
acknowledge that he at least tacitly promised an easy money
policy to the White House in exchange for Nixon’s imposition of wage and price controls.
The consensus of the economics profession since then is
that such controls and the easy money policy that accompanied them were harmful to the economy. The high inflation
they created led to a period of economic stagnation that lasted
until the early 1980s.
What broke the cycle was the appointment of Paul
Volcker as chairman in 1979. Volcker was able to restore not
only a more Martin-esque monetary policy by taming inflation and slowing money growth but also the
independence and credibility of the Fed. Political support in
such an endeavor was also important, and the lack of meddling by both presidents Jimmy Carter and Ronald Reagan
was crucial to that success.

A New Accord?
Just as policy shifts in the past have been tied to shifts in Fed
independence, a new concern among some economists is
continued on page 33


JARGON ALERT
Prisoner’s Dilemma
BY DANIEL BROOKS
Rick and Kyle are two aspiring thieves with plans to
get rich from reselling antiques on the black
market. For their first heist, they set out to rob an
antique shop in the countryside. They break into the shop’s
back door. When the alarm is triggered, they head to their
getaway car. As they flee, a state trooper pulls the two over
for speeding. Noticing that the car matches the eyewitness description of the one fleeing the scene of the crime,
the trooper searches the car. He finds two unregistered
firearms, and the men are locked up in the county jail.
Rick and Kyle are placed into separate cells. No communication is allowed between them. The officer goes to each
cell and gives them the same options: Confess to the robbery or stay silent. If one confesses and the other stays
silent, the confessor will receive no jail
time, but the other must serve five years.
If both stay silent, each gets one year in jail
for the firearms charges. And if both confess, they each receive three years in jail.
Assuming that Rick and Kyle are self-interested, each will confess. Rick is
worried that Kyle will not stay silent. So, if
Rick stays silent, he gets five years, but if
he confesses he gets three years. Even if
Kyle decides to remain silent, Rick would
still confess because no jail time is better
than one year in jail. (Similar thoughts are
running through Kyle’s mind.)
The dilemma is that, for each suspect, confessing is the
better choice no matter what the other person does. But, as
a whole, they are worse off because they end up with a total
of six years in jail when they could have received a total of
two years if they both stayed silent.
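For readers who like to see the arithmetic, a minimal sketch (illustrative only; it simply restates the jail terms from the story above) confirms that confessing is each suspect’s best response no matter what the other does:

```python
# Illustrative sketch: the jail terms from the story above, in years,
# for (Rick, Kyle) under each pair of choices.
PAYOFFS = {
    ("silent", "silent"):   (1, 1),  # both stay silent: one year each
    ("silent", "confess"):  (5, 0),  # Rick silent, Kyle confesses
    ("confess", "silent"):  (0, 5),  # Rick confesses, Kyle stays silent
    ("confess", "confess"): (3, 3),  # both confess: three years each
}

def ricks_best_response(kyles_choice):
    """Return the choice that minimizes Rick's jail time, given Kyle's choice."""
    return min(("silent", "confess"),
               key=lambda ricks_choice: PAYOFFS[(ricks_choice, kyles_choice)][0])

# Confessing is Rick's best response to either choice Kyle might make, and by
# symmetry the same is true for Kyle -- so both confess and serve six years in
# total, rather than the two years that joint silence would have cost them.
for kyles_choice in ("silent", "confess"):
    print(f"If Kyle chooses '{kyles_choice}', Rick's best response is '{ricks_best_response(kyles_choice)}'")
```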
Countless variations of this “prisoner’s dilemma” story
have been pondered ever since mathematician Albert W.
Tucker first coined the term and formalized the game in
1950. Yet the punch line remains the same: Rational individuals acting in their own interest can produce suboptimal
outcomes in the aggregate.
A common application of the prisoner’s dilemma intuition is in the analysis of the conflict inherent in individual
and group decisions. Members of a group that act in their
own self-interest can end up making the group worse off
than if everyone were to cooperate.
Examples may be found in the real world. Assume, for
instance, that greenhouse gas emissions are responsible for
global warming and that, all else equal, it would be desirable
to curb climate change. Nations face a choice to either
reduce greenhouse gas emissions or maintain the status quo.
If enough nations cooperate, emissions will fall and the
temperature will stabilize. But since reductions in emissions
require costly actions, any nation could sit on the sidelines
and let other nations bear that cost while they enjoy the
benefit. With enough selfish nations on the bench, cooperation breaks down and everyone loses relative to what they
could achieve if they worked together. This might also be
recognized as a “free-rider” problem. Prisoner’s dilemmas
can be seen in this light as an example of such a problem.
The logic of the prisoner’s dilemma makes a big assumption about individuals: It presumes that they care mainly
about their self-interest. Economic experiments have tested
this assumption. When looking at a variety of prisoner’s
dilemma experiments from 1958 to 1992, Dartmouth
College economist David Sally found that
when participants played the games, on
average, they tended to act selfishly only
about half the time. A plausible explanation for this is that the players are less
prone to selfish behavior than economists
predict.
Another explanation might be that
cooperation is actually an optimal strategy
in the real world where people interact
with each other repeatedly over time. To
test this, Robert Axelrod, a social scientist
at the University of Michigan, organized a
tournament. Academic colleagues were
invited to submit a computer strategy for playing a prisoner’s dilemma that was to be repeated a number of times.
As it turns out, the exclusively selfish strategies did very
poorly. The one that fared the best was also the simplest:
the “tit for tat” strategy developed by a mathematical
psychologist. It required the player to cooperate on the first
move and then choose the same strategy that the opposing
player picked on the previous turn. While the strategy gives
the benefit of the doubt to the opposing player, it also lets
him know that a lack of cooperation will not go unanswered.
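Expressed as a rough sketch (a generic illustration rather than Axelrod’s original program), the rule needs nothing more than the opponent’s history of moves:

```python
# Illustrative sketch of the "tit for tat" rule described above.
def tit_for_tat(opponent_history):
    """Cooperate on the first move, then repeat the opponent's previous move."""
    return "cooperate" if not opponent_history else opponent_history[-1]

# Against a player who defects once, tit for tat answers the defection on the
# very next round and then goes back to cooperating.
opponent_moves = ["cooperate", "defect", "cooperate"]
history = []
for opponent_move in opponent_moves:
    print(tit_for_tat(history), "vs", opponent_move)
    history.append(opponent_move)
```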
The enforcement of this implied social norm and the nature
of reciprocal behavior over repeated rounds of the game
might explain the rate of cooperation in a variety of experiments.
Or, to put it another way, even inherently selfish individuals may tend to cooperate more often when benefits to
cooperation over the long run outweigh the benefits over
the short run. It is through these experiments that economists have been able to mine a wealth of economic and
social insight that arises from the hypothetical predicament
of two prisoners.
RF


RESEARCH SPOTLIGHT
How Forecasts Can Influence the Present
BY DANIEL BROOKS

“Can News about the Future Drive the Business Cycle?” by Nir Jaimovich and Sergio Rebelo. American Economic Review, September 2009, vol. 99, no. 4, pp. 1097-1118.

Would positive news about economic growth in the next quarter make you increase your spending or investment today? And, as a result, might this spending actually speed the growth of gross domestic product (GDP) more than the forecasters expected? Stanford University economist Nir Jaimovich and Northwestern University economist Sergio Rebelo ponder those sorts of questions in their new paper. They propose that “news shocks” about the economy’s future may, in fact, be a key driver of business cycles.

To economists, shocks are factors that unexpectedly increase or decrease output and employment. A news shock is a change in the expectation about the future derived from new information that can affect your investment, consumption, and work decisions today.

While the idea of news shocks can be traced as far back as the work of British economist William Beveridge in 1909, interest in news shocks revived after the U.S. tech stock boom and bust of the 1990s and early 2000s. The interest stems from a quite plausible story: Between 1995 and 2001, forecasts of the long-run growth rate of earnings for companies in the S&P 500 index rose rapidly, from 11.5 percent to 17.7 percent. Investment increased when the earnings forecasts went up. Yet investment in those companies, on average, went down when the realized earnings were reported. Jaimovich and Rebelo suggest that the initial news shock was driven by the prospects of new technologies, which then led to high expectations about earnings growth. But when those technologies or companies failed to live up to expectations, investment fell and a recession resulted.

Economists have grappled with business cycle theory for decades. Yet it remains difficult to fit news shocks into the standard neoclassical economic model. Business cycle data feature two forms of “comovement” — meaning, you can see the factors move together in the data. “Aggregate comovement” describes how major macroeconomic aggregates such as output, consumption, investment, hours worked, and real wages rise and fall together in all sectors of the economy. “Sectoral comovement” occurs when those aggregates rise and fall together in different sectors of the economy independent of whether the same aggregates rise or fall in other sectors. The trick is to find a model that can account for both types of comovement in response to shocks that include news shocks.

In a 1984 paper, Robert J. Barro of Harvard University and Robert G. King, now of Boston University and a visiting scholar at the Richmond Fed, showed that only a contemporaneous shock to total factor productivity (TFP), such as technological improvement, can produce aggregate comovement. In their model, Jaimovich and Rebelo go a step further to introduce three new elements into the neoclassical growth model to generate comovement in response to news shocks. The first assumes that firms can vary their means of production — this is called “variable capital utilization.” The second factor, “adjustment costs to investment,” takes into account the expense incurred from changing investment, such as scrapping plans to buy new machinery. (For example, if it’s less costly to change your plans sooner than later, you’ll have an incentive to act more quickly to news about the future.) The third factor is a short-run “wealth effect” on labor supply that assumes people will alter the number of hours they work in response to positive news.

With a model able to produce fluctuations from news shocks, the next question is whether the model can produce estimates that mirror the empirical data. Jaimovich and Rebelo also use data from the “Livingston Survey” of output forecasts. Started by Pulitzer Prize-winning financial columnist Joseph Livingston in 1946 and compiled by the Federal Reserve Bank of Philadelphia since 1990, this survey gathers the forecasts of different economic variables by professional forecasters. This provided the authors with two-quarters-ahead forecasts of GDP for a number of years.

Comparing the simulations of their model to the business cycle data from 1947 to 2004, the authors discovered that their model generates nine recessions compared to the 14 they estimate actually occurred during that period. However, the recessions in the model are less severe than those in the data. Jaimovich and Rebelo explain that a possible explanation for the discrepancy is that the model does not take into account other shocks to the U.S. economy such as a rise in energy prices.

Their results indicate that a neoclassical model can indeed generate business cycles without relying solely on negative productivity shocks. Instead, news about the economy’s potential future — and, in particular, estimates of variables such as future TFP — can heavily influence the pattern of economic growth.
RF

POLICY UPDATE
New Credit Card Rules Could Harm Some
BY DAVID VAN DEN BERG

The Federal Reserve and Congress have announced
plans to bar credit card issuers from some controversial practices and require card issuers to disclose
more information to card holders. Some of these changes
have already taken effect, while others are scheduled for
2010. While these tougher rules are designed to protect
consumers from some questionable practices, the policy
change could bring unintended consequences.
The Federal Reserve approved a set of changes to credit
card regulations in December 2008 after a lengthy review
process. In May 2009, Congress approved and President
Obama signed the Credit Card Accountability,
Responsibility, and Disclosure Act, which built on the Fed’s
rules. The Federal Reserve will implement the law, and
expects to complete that process by August 2010.
The Fed’s new rules bar credit card companies from using
“double-cycle billing.” This is a method used to calculate
interest for a given billing period. It takes into account not
only the average daily balance of the current billing cycle but
also the average daily balance of the previous period.
Consider a card holder who makes a credit card purchase
on January 10 for $1,000. When the bill arrives in February,
the customer pays $700, leaving a $300 balance. When the
March bill arrives, under double-cycle billing, the customer
would face interest charges dating back to the January purchase of $1,000 as well as on the remaining $300 balance.
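A rough numerical sketch of that arithmetic (the 18 percent APR and cycle length below are assumptions chosen for illustration, and the calculation is simplified rather than any issuer’s exact contractual formula) shows why the method stings:

```python
# Simplified illustration of the example above, with assumed card terms.
ANNUAL_RATE = 0.18        # assumed annual percentage rate
DAYS_IN_CYCLE = 30        # assumed length of one billing cycle
periodic_rate = ANNUAL_RATE * DAYS_IN_CYCLE / 365

prev_cycle_avg_balance = 1000.0   # January: the $1,000 purchase carried through the cycle
curr_cycle_avg_balance = 300.0    # February: $300 remains after the $700 payment

# Ordinary billing charges interest only on the current cycle's average balance.
single_cycle_charge = curr_cycle_avg_balance * periodic_rate
# Double-cycle billing also reaches back to the previous cycle's average balance,
# so the charge still reflects the original $1,000 even though $700 was repaid.
double_cycle_charge = (prev_cycle_avg_balance + curr_cycle_avg_balance) * periodic_rate

print(f"single-cycle interest charge: ${single_cycle_charge:.2f}")   # about $4.44
print(f"double-cycle interest charge: ${double_cycle_charge:.2f}")   # about $19.23
```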
The Fed also substantially restricted fees on subprime,
low-limit credit cards. These cards are known as “fee
harvester” cards because they have low credit limits yet
require relatively sizable fees from the consumer. In addition, the Fed initiated rule changes that: 1) require a
“reasonable amount of time” for consumers to make a
payment, 2) mandate that payments beyond the minimum
due be allocated to the balances with the highest interest
rate, and 3) ban annual percentage rate increases in the first
year except in certain instances such as when a customer is
more than 30 days delinquent.
The Fed created new disclosure requirements, too, mandating that key terms be stated clearly when an account is
opened. Credit card companies will be required to show not
just the amount of time it would take for the borrower to
repay the debt if he makes only the minimum payment each
month but also an itemization of interest charges for different types of transactions. Fees and interest charges will now
have to be grouped separately on statements, as will a tally of
the total fees and interest paid for the given month and for
the year to date.
Congress added more rules to those the Fed approved.
Among them is a mandate that promotional interest rates
must last at least six months. They also require college
students under the age of 21 to prove their ability to repay or
get an adult co-signer in order to receive a credit card.
Although many credit card practices are addressed in the
new regulations, notes Adam Levitin, a law professor at
Georgetown University, the new rules address only problems that are apparent today without solving the problems
of tomorrow. Levitin says this approach can start “to look
like a regulatory game of Whac-a-Mole. No sooner do regulators put the kibosh on one problematic practice, then
another one pops up.”
There could be other unintended consequences of the
regulation. The new rules limit some of the tools lenders
currently use to manage the risk they take on, argues
Kenneth Clayton, senior vice president and general counsel
of the American Bankers Association’s Card Policy Council.
Card issuers can either price risk for all consumers upfront,
or price for individual consumers as their circumstances
change.
The latter option, while leading to the evolution of certain practices that have been outlawed by the new
regulations, has arguably also allowed credit card companies
to offer lower rates and more credit to some borrowers.
Yet if credit card companies are sufficiently restricted
from charging credit card holders for the risk the bank is
taking, the lenders might operate under the assumption that
all borrowers are about equally likely to default. This could
manifest itself in lower credit limits for existing qualified
borrowers or a decrease in the number of credit opportunities for new borrowers.
What’s more, traditional fees often provide informational value to the consumer as well as the provider, notes a 2008
study by the Federal Reserve Bank of Chicago’s Sumit
Agarwal, the Federal Reserve Board’s John Driscoll, Harvard
University’s David Laibson, and New York University’s
Xavier Gabaix. The researchers studied fees assessed on
cash advances, over-limit purchases, and late payments.
Paying such a fee is a form of “negative feedback.” Paying a
fee in the previous month reduced the likelihood that a consumer paid a
fee in the current month by about 40 percent. The more
time that passes after a consumer pays a fee, the more likely
the consumer will be to forget about it. “However,” the
study concludes, “on net, knowledge accumulation dominates knowledge depreciation. Over time, fee payments
drastically fall.”
Once the rules take effect, policymakers should closely monitor the behavior of credit card issuers and
consider the long-term aggregate effects of the new rules on
both pricing strategies and the availability of credit.
RF

AROUND THE FED
Lending Standards and the Foreclosure Crisis
BY DANIEL BROOKS

“Decomposing the Foreclosure Crisis: House Price
Depreciation versus Bad Underwriting.” Kristopher Gerardi,
Adam Hale Shapiro, and Paul S. Willen, Federal Reserve
Bank of Atlanta Working Paper 2009-25, September 2009.

There are two competing theories to explain the sudden
outbreak of foreclosures from 2007 to 2009. One
theory centers on poor underwriting standards: Borrowers
had trouble making payments on their mortgages because
those loans were either unrealistically generous or because
borrowers were taking out loans based on little income and
bad credit. An alternative explanation suggests that housing
values were the main explanatory variable in the growth of
foreclosures. After all, the authors point out, subprime
mortgages performed well until 2006, when house prices
began falling.
Using deeds records from Massachusetts — including
residential mortgages, purchase and sale, and foreclosure
transactions between 1989 and 2008 — the authors create a
model to describe the explosion of foreclosures in
Massachusetts since 2005. To isolate the effects of the
underwriting standards, they estimated what the foreclosure
rate of subprime borrowers in 2005 would have been if the
price of the homes purchased were in line with the 2002
pricing levels. They discovered that the foreclosure rate
would have been vastly lower relative to the actual observed
foreclosure rate despite the larger percentage of subprime
borrowers that existed in 2005.
The authors conclude that “relaxed underwriting standards did severely aggravate the crisis by creating a class of
homeowners who were particularly vulnerable to the decline
in prices.” Yet, “that emergence alone, in the absence of a
price collapse, would not have resulted in the substantial
foreclosure boom that was experienced.”

“Vacancy Posting, Job Separation, and Unemployment
Fluctuations.” Regis Barnichon, Federal Reserve Board
Finance and Economics Discussion Series 2009-35, July
2009.

In this paper, Barnichon asks the question, “At the beginning of a recession, does unemployment go up because of
few hirings, more job losses, or both?” To provide an answer,
he suggests determining the relative importance of the two
main forces that drive unemployment — vacancy posting
(fewer hirings) and job separation (more job losses) — in
explaining the movement in the unemployment data.
He finds that, on average, vacancy postings drive unemployment during normal times. But if you look at the turning
points of the business cycle, as Barnichon describes, job
separation drives “rare but violent fluctuations in unemployment.” It’s responsible for almost all of the movements in
unemployment during the first two quarters after unemployment reaches a low or high point. (Vacancy postings don’t
become the main contributor until a year later.) The author
also concludes that previous studies which found the opposite could lead economists to “understate the breadth and
speed of adjustment of unemployment around turning
points.”
“Why do the Elderly Save? The Role of Medical Expenses.”
Mariacristina De Nardi, Eric French, and John Bailey Jones,
Federal Reserve Bank of Chicago Working Paper 2009-02,
July 2009.

In this paper, the authors address two important questions with respect to savings and the elderly: Why do
the elderly keep large amounts of assets until late in life?
And why do the wealthy elderly spend their assets more
slowly than the poor elderly?
In the paper, the authors describe their model of saving
by retired elderly singles. To develop their model, they use
data obtained from the Assets and the Health Dynamics of
the Oldest Old (AHEAD) survey conducted by the
University of Michigan. Based on this dataset, they
estimate the different processes for mortality and out-of-pocket expenses as dependent on sex, health, permanent
income, and age. The authors also take into account social
insurance programs such as Medicaid, which was not
included in the AHEAD data.
Their analysis shows that out-of-pocket medical
expenses grow at an increasing rate with both age and
permanent income. This leads the authors to conclude
that for many elderly people the risk of having to pay
expensive medical bills as a result of living longer is a more
important motivation to save than the desire to leave
assets to loved ones in a will — the “bequest motive,”
as economists call it. Indeed, the wealthy elderly in the
sample tend to live longer and have higher medical bills
than those below them on the income ladder.
The poor are faced with a different scenario. Because
social insurance programs help protect against catastrophic medical expenses, the poor tend to consume
rather than save, leaving little money behind late in life. Such programs can also benefit the wealthy elderly
because they offer protection against
catastrophic medical expenses that could otherwise
bankrupt them.
RF


Recessions and Entrepreneurship
Is necessity the mother of invention?
BY DAVID VAN DEN BERG

In 1986, at the age of 29, Gary Erickson started a wholesale bakery called Kali’s Sweets
& Savories. The bakery made Greek calzones and cookies, and sold them to specialty
food retailers in the San Francisco Bay area. By 1991, the company had 10 employees
and was generating more than $300,000 in annual sales, but failed to break even. Erickson
was working nights and driving the delivery truck on Tuesdays and Fridays, and working at a bicycle company by day.
During the 1990-1991 recession, Erickson started Clif
Bar, an organic nutrition bar company. He got the idea to
start the firm while on a 175-mile bike ride in November
1990. While eating an energy bar he brought with him,
Erickson decided he could make a better one. He spent
hours in his mother’s kitchen, and in 1992, shipped the first
Clif Bars to distributors. His business partner continued to
run the bakery for 15 months while Erickson focused on creating the energy bars.
As a startup entrepreneur, Erickson had the luxury of
focusing on only a few key tasks, such as product development, sales and marketing, packaging and distribution,
while his competitors faced economic challenges. The
product Erickson worked hard to create in his mother’s
kitchen has grown into a diversified firm offering the
Clif Bar energy bars, sports drinks, wine, and more. The
company took in $176 million in net revenue in 2008.
Clif Bar is just one high-profile firm launched or conceived during a recession. More than half the Fortune 500
were born either in a recessionary period or in a bear
market, according to research from the Ewing Marion
Kauffman Foundation, a Kansas City, Mo.-based nonprofit
that researches entrepreneurship. Notable startups in recessionary periods include Microsoft, Burger King, and Hyatt.
Like many startup entrepreneurs, Erickson launched Clif
Bar by applying lessons from prior experiences, receiving
help from family, and following his passion. Success was far
from certain. That’s true for business creators who launch
firms in economic expansions too. Entrepreneurship offers
challenges and opportunities in good times and bad, though
those challenges and opportunities may differ with changing economic conditions. But one thing remains constant:
Entrepreneurship soldiers on no matter the economic
circumstances.
The cost of new technology has fallen over time, opening up
new avenues for entrepreneurs-to-be. That’s true regardless
of whether the economy is in an expansion or a downturn.
However, only recessions offer entrepreneurs something
vital to the development of new firms: a larger than usual
supply of available skilled labor.
Starting a business requires capital. Financing through
bank loans, credit cards, or by venture investors may prove
harder to get in a recession, says Stephen Kaplan, a professor
of entrepreneurship at the University of Chicago Booth
School of Business. This will have a bearing on the success
rate of businesses. Looking at current and past recessions
can shed some light on how businesses are
born and how they survive.




Necessity vs. Opportunity
The Index of Entrepreneurial Activity, produced by the
Kauffman Foundation based on data from the Census
Bureau and the Bureau of Labor Statistics, documents new
business creation. Overall, the index shows the rate of business creation rose slightly in 2008 to 0.32 percent of the
adult population, meaning 320 out of 100,000 adults started
a business each month in 2008. That’s up from 2007’s rate of
0.30 percent. Since 1996, the rate of business creation has
fluctuated between 0.27 percent and 0.32 percent.
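As a back-of-the-envelope check on what that rate means in absolute numbers, here is a minimal sketch; the adult-population figure used below is an assumption for illustration and does not come from the index.

```python
# Convert the Kauffman index's monthly rate of business creation into an
# implied count of new entrepreneurs. The adult population figure is an
# illustrative assumption, not a number taken from the index itself.
def implied_monthly_starts(rate_percent: float, adult_population: int) -> int:
    """Number of adults starting a business in a given month."""
    return round(adult_population * rate_percent / 100)

adult_population_2008 = 230_000_000  # assumed U.S. adult population
rate_2008 = 0.32                     # percent of adults per month (Kauffman index)

print(implied_monthly_starts(rate_2008, adult_population_2008))
# -> roughly 736,000 new entrepreneurs per month, i.e., 320 per 100,000 adults
```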
In the Fifth District, Washington, D.C.’s entrepreneurship rate has been the most volatile. Its rate spiked sharply
from 1999 to 2000 and again from 2006 to 2007, but a steep
decline followed both surges. Yet Washington, D.C., had
a higher entrepreneurship rate in 2008 than in 1996.
As shown in the accompanying table, the District of Columbia
and Maryland are the places in the Fifth District that had
higher entrepreneurship rates from 2006 to 2008 than from
1996 to 1998.
Trends in the overall average business creation rate may
mask divergent patterns in business creation for different
types of businesses. For instance, rough economic times tend
to see the creation of more one-person businesses, while in
better times, multi-employee firms emerge, says Brian Headd,
an economist with the U.S. Small Business Administration.
One-person firms represent 3 of every 4 businesses in the
country, but their economic activity is relatively small. In
2002, of the 17.5 million one-person firms, only 3.5 million (20
percent) had annual receipts of $50,000 or more.
Not all one-person firms stay in that category, however.
Between a quarter and a third of firms that eventually
employ workers started as one-person businesses, says Dane
Stangler, an analyst at the Kauffman Foundation.
The survival rates for firms started in recessions are quite
similar to those founded in economic expansions. An average of 48 percent to 49 percent of all new firms between 1977
and 2001 survived to age five. The biggest drop-off occurs in
years one and two. This makes sense, Stangler writes, “when
we consider that there is remarkable consistency from year
to year in the number of new firms and establishments that
Americans start.”
Stangler notes that determining the effects of a recession
on any single company is difficult. “There is so much churn
and turnover, combination and recombination occurring
at any given time in the American economy, that it’s often

difficult to trace the effects of recessions or bear markets on
any one company. Within one company, moreover, there will
be multiple changes of business as management moves from
opportunity to opportunity.”

[CHART: Recession Reduces the Supply of Credit. Total consumer credit ($billions), monthly, July 2007 through July 2009. SOURCE: Federal Reserve Board of Governors]
The Kauffman index also groups businesses into three
categories of future income potential (based on past performance of firms in various industries): low, moderate, and
high. From 2007 to 2008, the two lowest income potential
categories drove the increase in business starts. During
recessions generally, the creation rates for these businesses
tend to be higher.
There are different motivations for entrepreneurial
activity, and some depend on the situation in
which a businessman finds himself. What analysts call
“necessity entrepreneurship” is the type that arises when a
displaced worker launches a firm because it is his only
option. That is distinct from “opportunity entrepreneurship,” which can occur when an entrepreneur sees a niche to
fill regardless of his employment status.
Both types of entrepreneurship can appear in recessions,
says Ted Zoller, a professor of entrepreneurship at the
University of North Carolina-Chapel Hill’s Kenan-Flagler
Business School. He says that you’d see necessity-based
motivations among people just trying to replace their wages.
Then you’d see opportunity motivations among individuals
displaced from large companies who have access to
markets being ignored because of the economic circumstances of the larger enterprises.
Small businesses create most of the nation’s new jobs,
employ about half of the nation’s private-sector work force,
and provide half of the nation’s nonfarm, private real gross
domestic product as well as a significant share of innovations. That’s all according to the U.S. Small Business
Administration in its 2009 report, The Small Business
Economy.
Stangler suggests that of the firms started in 2008 and
2009, it’s likely a tiny number may grow into the largest
companies in 2020 or 2030, and a few hundred may eventually receive accolades as fast-growing firms. “When two or
three dozen young firms hire four, six, or eight people at a
time for several years, it mostly goes unnoticed. Only when
they reach sufficient collective size do they begin to appear
in the public consciousness, even though they have been
regenerating the economy for several years,” he writes.
The big question still remains: How do you fund a new
business? The answer to that question will be influenced by
how much capital entrepreneurs can access. In a recession,
this entrepreneurial fuel may be hard to find.

A Great Future in Plastic?
Entrepreneurs have several different places to turn for
money to launch their firms. They can utilize credit cards,
take out other loans, get help from friends or family, or turn
to venture-capital investors. Venture investing is a high-risk
game with a high rate of failure but large returns. During a
recession, venture capitalists are more likely to invest in new
companies only after they have shown some promise and are
more likely to band together to make investments to share
risk, Zoller says.
Philippe Sommer, director of the entrepreneurship
program at the University of Virginia Darden School of
Business, formerly ran a venture fund and described the
thought process he used when picking projects. “We used to
talk about three kinds of companies: A companies, B companies, and C companies,” he says. “A companies” were firms
with the right people, the right approach to the market, and
a strategy that made sense to a funding source. It’s no surprise that those companies tended to get funded in both
good and bad times. But the “B” and “C” firms, some of
which might receive funding when the economy was doing

well, were less likely to receive money during recessions.
No matter how one gets the funding, creating a business
and reaping the benefits of entrepreneurship is costly.
Starting a business, on average, costs about $31,000, though
those costs can vary widely by industry.
Entrepreneurs often use credit cards to cover those costs.
From 2007 to 2008, more than 90 percent of small business
loans were for less than $100,000, and many of them were
on credit cards, according to Kauffman research. More than
half the firms surveyed by the foundation relied on credit
cards to launch, and nearly a third carried a balance.
However, every $1,000 in debt increased the likelihood of
firm failure by 2.2 percent.
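To put that estimate in perspective, here is a small, hypothetical sketch; reading the finding as a 2.2 percent relative increase in failure risk per $1,000 of debt, and the baseline failure probability, are both assumptions made only for illustration.

```python
# Illustrative only: treat the Kauffman finding as "each $1,000 of credit card
# debt raises the likelihood of failure by 2.2 percent (relative)". Both the
# interpretation and the baseline failure probability below are assumptions.
def failure_probability(baseline: float, debt_dollars: float,
                        increase_per_1000: float = 0.022) -> float:
    thousands = debt_dollars / 1000
    return baseline * (1 + increase_per_1000) ** thousands

baseline = 0.50  # assumed chance a new firm fails within five years
for debt in (0, 5_000, 10_000):
    print(f"${debt:>6,} of card debt -> {failure_probability(baseline, debt):.1%}")
# $5,000 of debt raises the assumed 50% baseline to roughly 56%.
```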
As the recent financial crisis worsened, you would expect
the supply of credit to tighten and consumers to become more
reluctant to take on debt. Federal Reserve data indicate
that the amount of revolving debt — which is overwhelmingly in the form of credit cards — plummeted in 2009.
The decline of more than 7 percent between January and
September is the largest over the last 30 years.
Existing credit card holders are finding their credit limits
have been cut too. According to a FICO study, 33 million
people, or 20 percent of the country’s card holders, had their
limits reduced between October 2008 and April 2009. Of
these, only 27 percent had their limits reduced because of
negative items in credit reports.
Credit card loans may be harder to get because of bank
fears over increased default risk during a severe recession.
The October 2009 Federal Reserve Senior Loan Officer
survey also notes that demand for loans continued to

weaken across all major categories except for prime residential mortgages. Just as banks may have more fear about
credit card holders defaulting, it’s possible the specter of
bankruptcy may cause consumers to hesitate to take on
additional debt.
Access to credit is “the No. 1 issue” for entrepreneurs,
Stangler says. Lack of credit may dampen new firm creation
in a severe recession. Zoller says tighter credit will make it
harder for the new firms that get started to survive. That’s
because new firms have to establish a durable cash flow and
need working capital to cover operations while economic
growth is slow. Without that, “I think you’re going to see
more failures during this recession than you’ve seen in
previous recessions,” Zoller says.
Financing troubles may affect employment growth at
small firms that do survive. David Altig, Federal Reserve
Bank of Atlanta senior vice president and research director,
notes that businesses with fewer than 50 employees account
for about a third of net employment gains in expansions.
They’ve also accounted for about 45 percent of job losses
since the current recession started. “Given that these are the
types of businesses most likely to be dependent on bank
lending — and given that bank lending does not appear
poised for a rapid return to being robust, the prognosis for
an employment recovery in these businesses is a question
mark,” he writes.
Despite the difficulties, recessions can reinforce the can-do attitude of many — and that can spur entrepreneurship
generally. “People are being called upon to take responsibility for their lives in different ways than they were 20 or 30
years ago,” says Jerome Engel, chair of the new venture program of the Haas School of Business at the University of
California at Berkeley. “The current economic downturn
makes increasingly apparent to people the need for
autonomous individual action, on a local and small scale, to
secure their financial futures.”
Those small-scale actions can lead to broader improvement in the economy over time. As Stangler writes, “despite
the pain of the current recession, there is reason for hope —
good things do grow out of recessions.” While the future of
any specific new firm is hard to predict, the overall net value
to the economy is likely to be positive in the long run. “Every
generation of startups is, often invisibly, both a renewal and
restructuring of the economy,” he notes.
RF

READINGS
Fairlie, Robert W. Kauffman Index of Entrepreneurial Activity.
Kansas City, Mo.: Ewing Marion Kauffman Foundation,
April 2009.

Stangler, Dane. The Economic Future Just Happened.
Kansas City, Mo.: Ewing Marion Kauffman Foundation,
June 9, 2009.

“How are Credit Line Decreases Impacting Consumer Credit
Risk?” FICO Insights, No. 22, August 2009.

The Small Business Economy: A Report to the President.
Washington, D.C.: Small Business Administration, 2009.

Average Monthly New Business Starts Per 100,000 Adults

State     1996-1998     2006-2008
D.C.         280           350
MD           250           270
NC           280           250
SC           300           230
VA           260           230
WV           190           150
U.S.         290           300

SOURCE: Kauffman Foundation

The Demographics of Entrepreneurship
One of the most iconic faces of entrepreneurship in
American history is one that appears on signs all across the
nation. “Colonel” Harland Sanders started franchising his
Kentucky Fried Chicken restaurant at age 65 with a family
recipe for fried chicken and a $100 Social Security check.
That was in 1955. Less than a decade later, Sanders had 600
franchises in the United States and Canada.
Today, one of the prominent faces of entrepreneurship is
Mark Zuckerberg, the 25-year-old billionaire and co-founder of Facebook, a social media Web site that is one of
the 10 most visited sites in the world. Zuckerberg took
Facebook live from his Harvard dorm room in 2004,
dropped out of school and relocated to Palo Alto, Calif.,
where he now runs the company.
Popular perceptions may suggest people like Zuckerberg
and his Facebook co-founders represent the future
of entrepreneurship. But they’re actually outliers, says Duke
University professor Vivek Wadhwa. He co-authored a paper
on technology entrepreneurship for the Ewing Marion
Kauffman Foundation, and found that the average and
median age of founders of technology firms was 39.
“Experience is the most important ingredient of success,”
Wadhwa says. “The stereotypes are inaccurate and a legacy
of the dot-com days.”
The age group that had the highest rate of entrepreneurship across all industries from 1996 through 2007 was 55 to
64. The age group of 20 to 34 had the lowest rate during this
period, which included the dot-com boom.
Entrepreneurship in the age group of 55 to 64 is hardly a
new phenomenon. People in that age range are far more
“experienced, balanced, and wiser,” Wadhwa says. They also
have less of something else — fear. “The strongest factor
that prevents people from becoming entrepreneurs is the
risk,” Wadhwa adds. “Once you’re in this age group, the risk
and the fear is much less.”
Other factors can explain the prevalence of entrepreneurship among older adults. For one, people 55 and older
may have more wealth they can use to launch a business,
notes Dane Stangler, a Kauffman Foundation analyst. Also,
some older adults may not possess the skills most in demand
in a rapidly changing economy so they find themselves turning to self-employment as a way to make a living. That fact
“lurks in the data” but is hard to tease out, Stangler says.
Stangler says these older and bolder adults could fuel
an entrepreneurship boom. The country is now experiencing rapid
growth in the numbers of people in the 45 to 64 age group, and life
expectancy keeps growing. By 2050, American life expectancy is
projected to be 83 years, compared to 78 now. If the rate of
entrepreneurship in this group stays constant, this could all
translate to a multitude of new companies in the future.
Rising entrepreneurship among older adults isn’t confined to one
type of business either. “The pattern generally holds across
industries,” Stangler says. “There’s no industry that stands out
for being where the young people are starting companies, and
another where the middle age and older demographic groups are
starting companies.”
While more senior entrepreneurs may be forming companies, how
likely is it those companies will produce major innovations like
Facebook? Might an older entrepreneur be more likely to start a
“lifestyle” firm, while a younger person more apt to start a
groundbreaking one? “Such a concern makes sense, but further
research needs to be done,” Stangler writes.
Kauffman Foundation research also demonstrates that immigrants
tend to be more entrepreneurial than native-born Americans. One in
four engineering and technology companies launched between 1995
and 2005 had an immigrant founder.
Immigrants are also more likely to start businesses of all
types. They had higher entrepreneurship rates across all
industries every year from 1996 through 2008. In that last
year alone, the gap between the overall entrepreneurship
rates of the two groups was the widest in the survey period:
The rate was almost twice as high for immigrants as it was
for the native-born population.
— DAVID VAN DEN BERG


Has the financial crisis provided a fatal blow to the
efficient market hypothesis?
BY RENEE COURTOIS

Business cycle fluctuations are costly, but they do
come with a small upside for economists: They serve
as a way to test how well prevailing economic
theories hold up to reality.
The recent recession is no different. Some have suggested that the long-standing “efficient market hypothesis”
(EMH) has been disproved once and for all by the financial
crisis. The EMH says that financial market participants act
as powerful information gatherers about an asset’s “true”
value, such that an asset’s price will generally reflect all information available about that asset.
But if financial markets are efficient, critics argue, how
could investors have gotten things so wrong as far as housing
and securitization markets are concerned? Housing prices
soared, and securities backed by risky subprime mortgages
were sold throughout the financial system at prices that, as
now seems apparent, didn’t reflect their true risk. “In short,
the belief in efficient financial markets blinded many, if not
most, economists to the emergence of the biggest financial
bubble in history. And efficient-market theory also played a
significant role in inflating that bubble in the first place,”
wrote Nobel laureate Paul Krugman in a September 2009
article in the New York Times Magazine.
Krugman’s concerns represent one side of the divide over
the EMH. The theory extends back to the birth of modern
finance. Before the 1950s, the world had few workable models for how asset prices are determined, but this changed
with the advent of computers. Statisticians began to study
stock market prices, an obvious area in which to apply their
new high-powered tools because the daily activity of stock
markets provides exceptionally abundant data.
The EMH emerged from this research. Economist
Eugene Fama first formalized the theory in his 1964 economics Ph.D. dissertation at the University of Chicago —
although the hypothesis was intuited by some scholars long
before economics itself became a discipline. The basis for
the theory is the self-interest of investors: Because they
want to earn a profit, investors will work fervently to expose
and trade on even the tiniest bit of new information relevant
to an asset’s intrinsic value, leaving no available information
left unexploited. This means if there is something knowable
about an asset, it will quickly be reflected in its price. The
more frequently trading activity takes place within a market
— stock markets, for example, enjoy nearly constant price
discovery through daily trading activity — the more likely
the asset’s price is to reflect everything there is to know
about its true risk and economic prospects.
Fama’s famous 1970 paper, “Efficient Capital Markets: A
Review of Theory and Empirical Work,” helped set off the 1970s as the
growth period of the EMH’s development. Economists
showed how equity prices reflected information about
economic fundamentals (like corporate earnings or, on the
macroeconomic level, interest rates and consumption) consistent with the “rational expectations” paradigm that was
beginning to dominate economic research at that time.

Efficiencies and Anomalies
Most of the early research found that stock markets do tend
to meet a certain degree of efficiency. Fama’s 1970 work
focused the EMH literature by defining possible degrees of
market efficiency. In general, where the costs
of gathering information exceed the benefits of trading
on it, a market will be inefficient in the sense that it will
not reflect that information. This is largely why “strong
form” efficiency — a market so efficient that an asset’s price
even reflects private information held by CEOs and
other insiders — is broadly acknowledged as being highly
unrealistic.
However, in a “semi-strong” efficient market, stock
prices reflect all information that is publicly available (think
price histories, publicly available accounting reports, and
other corporate announcements). A semi-strong efficient
market would require significant skill on the part of traders
to analyze and interpret information. The early research on
market efficiency generally suggested that stock markets
met this threshold. For instance, data indicated that
investors were unlikely to “beat” the market as a whole and
were better off putting their money in index funds that purchase the entire market rather than managed funds or
individual stocks.
But by the mid-1980s, economists had increasingly
uncovered “anomalies,” occurrences in which financial markets appeared to act in a way that was contrary to the EMH.
Some economists found statistically significant amounts of
predictability in stock prices. Such predictability can be exploited by simple trading strategies, which should not be possible in an
efficient market that exhausts all profit opportunities. One
of the most enduring anomalies is the idea of stock price
momentum: stocks that have done well over a
several-month period tend to keep doing well
over the next several months. The converse appears to be
true for underperforming stocks. This pattern, too, is predictable. These anomalies cast doubt on the idea that a
stock’s price reflects all relevant and available information.
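To make the momentum idea concrete, the sketch below forms “winner” and “loser” portfolios from simulated past returns and compares their subsequent performance; it is a hypothetical illustration, not the method used in the studies the article describes, and all figures are made up.

```python
# A minimal sketch (not from the article) of the momentum anomaly:
# rank stocks by their trailing return, then check whether past
# "winners" keep outperforming past "losers" in the next period.
# The return data below are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_months = 100, 24

# Simulate monthly returns with a small persistent component per stock,
# so past performance carries some information about the future.
persistent = rng.normal(0, 0.01, n_stocks)          # per-stock drift
noise = rng.normal(0, 0.05, (n_months, n_stocks))   # idiosyncratic noise
returns = persistent + noise                         # shape: (months, stocks)

# Formation period: trailing 6-month compound return for each stock.
formation = np.prod(1 + returns[:6], axis=0) - 1

# Sort into winner and loser groups of ten stocks each.
order = np.argsort(formation)
losers, winners = order[:10], order[-10:]

# Holding period: compound return over the following 6 months.
holding = np.prod(1 + returns[6:12], axis=0) - 1
spread = holding[winners].mean() - holding[losers].mean()

print(f"winner-minus-loser spread over the holding period: {spread:.2%}")
# Under the EMH such a spread should not be reliably positive once
# risk and trading costs are accounted for.
```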
Why would markets leave information on the table with
profit opportunities unexploited? Explaining this was a ripe
area for research. “There are a number of psychologists and
behavioral economists who started coming up with counter
examples to standard efficient markets results,” recalls economist Andrew Lo of M.I.T., who contributed to the early
research on anomalies. “Things like loss aversion, herding
behavior, mental accounting, probability matching. These
are experimental results where they would take test subjects
and give them various different kinds of gambles and these
individuals would behave in a manner that was not consistent with efficient markets.”
A key example is the phenomenon first documented by
Harvard economist David Laibson: that people follow the
path of least resistance when choosing whether to invest in
their 401(k)s. If enrollment requires employees to opt in, they tend not to do
it. But if enrollment is automatic and opting out requires
an extra step, they tend to stay invested. That is, people
appear to irrationally make investment decisions based on
factors other than their expected financial gains. By the mid-1980s, the field of behavioral finance had fully emerged,
applying psychology to financial markets to understand how
and why investors might sometimes make irrational trading
decisions.
It’s not investor irrationality itself that is at odds with the
EMH, but the claim that irrationality can easily bleed into asset
prices. Savvy investors have incentive to identify and trade
against investors who are ignoring fundamentals, driving
asset prices back to rational levels. Under the EMH, “it’s the
smart money that matters,” Lo explains. “Some people are
crazy all the time, and all the people are crazy some of the
time, but the smart money will
drive out all these behavioral
anomalies.”
Indeed, behavioral economics
has had a hard time turning documented anomalies and instances
of irrationality into models that
consistently explain movements
in stock prices. Many anomalies
disappear once you try to pin
them down with a model. Since
bouts of irrationality seem to
exhibit themselves randomly in
stock markets, they’re hard to
predict — which, of course, is

exactly what the EMH says about stock prices: You can’t
consistently predict them.
Behavioral economists aren’t dismayed by this, however,
since there’s nothing in psychology that suggests people
should under- or overreact in any consistent manner. “Of course,
we do not expect [behavioral] research to provide a method to
make a lot of money off of financial market inefficiency very
fast and reliably,” wrote Robert Shiller, one of the most prominent EMH critics, in a 2003 paper. “We should not expect
market efficiency to be so egregiously wrong that immediate
profits should be continually available.”
Fama takes that as a small victory. He says behavioral
economists have good evidence that people are sometimes
systematically irrational. “All that stuff I think is great. The
work that they do is really good in describing the kinds
of biases that people have, and how it shows up in their
behavior.” But to argue that irrationality can be systematic
enough to make prices predictable is something else, he says.
“Most of them don’t even make that jump. They don’t think
people can take advantage of whatever inefficiencies are
there. And my opinion is, they’re basically conceding that
for all practical purposes, markets are efficient.” In the
absence of an alternative model for how asset prices might
deviate from intrinsic values, much economic research
implicitly assumes the EMH holds by using stock prices
as a proxy for a firm’s or market’s true value.

The Bubble Debate
Even though assuming market efficiency is the dominant way
of modeling asset prices, the debate still looms over whether
this holds true for the economy as a whole — and no such
debate is more alive than the one over the nature of asset bubbles. “[M]arket efficiency can be egregiously wrong in other
senses,” Shiller continued in 2003. “For example, efficient
markets theory may lead to drastically incorrect interpretations of events such as major stock market bubbles.”
Many economists have agreed that the recent run-up in
housing prices was, in retrospect, unjustified by economic
fundamentals — the common definition of an asset price
bubble. One possible explanation from the behavioral camp
is herd behavior, in which some financial market participants mimicked the actions of
others.
The fact that everyone everywhere seemed to be profiting from
the housing and securitization markets may have validated to both
homeowners and investors the
belief that house prices would
continue to rise indefinitely.
Herd behavior could have led
both groups to dismiss the risks
associated with mortgage and securitization markets and invest in
them beyond the degree that fundamentals — such as their incomes,
balance sheets, and other macroeconomic characteristics —
would justify. (It should be noted that herd behavior is often
witnessed in sell-offs too. “The fight or flight response has
been very well documented as being a rather ancient part of
our neurophysiology. All of us are hardwired with a very simple set of reactions to fearful events. If you smell smoke and
you see a big fire in the room next to you, you will get scared
and you will run as quickly as you can,” says Lo. “If you’re on
the floor of the New York Stock Exchange and you just lost
20 percent of your wealth, you’re going to get scared and
you’re going to want to run like hell, but that’s not really
going to help your financial wealth.”)
But there can also be rational explanations for bubbles
that are perfectly consistent with the EMH. The EMH says
that investors’ financial decisions at a given time reflect their
perception of economic fundamentals. Yet there is nothing
in the EMH that says those perceptions are always correct.
To the extent that information about an asset is not widely
available or is costly to obtain and interpret, investors will
form expectations absent that information. It could also be
the case that investors fully suspect asset prices are inflated,
but believe they can “ride” the boom a bit longer, causing
them to rationally buy in. People’s perceptions also can be
affected by policy: If people believe unsuccessful risks will be
bailed out by government, then they will rationally take on
more risk than fundamentals alone would justify. Short of
being inside investors’ heads, rational bubbles are very difficult to distinguish from irrational bubbles.
The EMH says “that all the information that we have is
reflected in prices,” Lo summarizes. “EMH critics might say
that’s not true, because the very great risks were not in prices.
But, again, an efficient markets type would say, ‘Well, maybe
back then we didn’t have that information. Maybe back then
we didn’t fully appreciate just how dangerous some of the
toxic assets were. And so at that point in time, the best available information was incorporated into prices.’”
Both arguments may be right, according to Lo, but both
also misconstrue the concept of market efficiency: It is an
ideal, not something that either is or isn’t true. “It’s not that
the EMH is wrong, it’s that it doesn’t always work. Markets
are not always efficient all the time. Sometimes they are
efficient, sometimes they are not,” he says.
What determines whether a market is efficient are
factors like the degree to which information is available and
the frequency with which price discovery takes place. The
research on market efficiency has focused almost entirely on
stock and traditional bond markets in which both of those
conditions are likely to be met. In the market for mortgage-
backed securities and
other securitization
products, on the
other hand, information arguably is more
opaque, and concentrated within a
relatively small group
of investors. And the market for a home clears only once
every several years, Lo points out.
He suggests that engineers have a more constructive way
of thinking about efficiency. Engineers evaluate engines
relative to the ideal of being 100 percent efficient in terms of
how much energy goes in for the output it produces. Of
course, an engine of 100 percent efficiency is an unattainable fantasy — just like the idea that markets can be
perfectly efficient. “I think that is changing, slowly,” Lo says,
“but it will have to change a bit more before I think we have
a more complete view of market dynamics.”
Concerning the recent episode, maybe it’s who seemed to
be on the losing side of inflated asset prices that has intensified doubt over the EMH. Over the last couple of years it is
the “smart money” Lo refers to — the large, savvy investors
with lots of analytical tools at their disposal — that has
seemed to take the largest economic hits. Yet it is the smart
money that should have been able to identify and undo bubble behavior. “That’s exactly why the efficient market
hypothesis has a bit of a black eye. It’s because what was
supposed to have been the smart money ended up losing
tremendous amounts of money over the course of the last
couple of years. So it really calls into question the whole
premise of efficiency,” Lo says.

The Price is Right, Except When It Isn’t
The stakes in this debate reach beyond academic dispute.
If economists can find a way to identify and measure asset
bubbles in real time, then they might be able to prick them
before excessive damage is done.
The bursting of an asset bubble can be costly, as we have
seen, but pricking a bubble before it inflates too high — for
example, by the Fed raising interest rates — may bring about
recession. Policymakers who are deciding whether to act
must gauge whether the costs of a potential recession are
greater than the costs of a potential asset bubble bursting.
This gamble is highly uncertain, largely because policymakers would have to identify a bubble in real time and also
gauge by how much prices are overinflated. Yet, if profit-motivated market participants can’t gauge when an asset
bubble is occurring, should policymakers be able to do any
better?
According to some EMH critics, policymakers and financial market participants didn’t give enough credence to the
possibility that markets had gotten prices wrong. “Some
economists took the fact that prices were unpredictable to
infer that prices were in fact ‘right,’” wrote behavioral
economist Richard Thaler of the University of Chicago in

the Financial Times in August 2009. As early as 1984, Shiller
wrote that conflating the EMH with the idea that prices are
right has been “one of the most remarkable errors in the
history of economic thought.”
Under this view, the run-up in housing prices was dismissed by investors and policymakers alike with the efficient
market rationale that markets have greater wisdom than
individuals. Furthermore, critics claim the idea that markets
always get things right may have pervaded the very aspects
of modern finance that typically serve to dissuade excessive
risk-taking, from abiding by generally accepted accounting
standards to a casual approach to risk management.
Justin Fox, economics and business writer for Time magazine, thinks the EMH gradually evolved into the erroneous
view that markets should not be questioned. According to
Fox, there was a line of people believing that market-established prices literally are correct, and advocating that stance
broadly. “That permeated the teaching of finance in business
schools and in economics departments and elsewhere for a
couple of decades,” according to Fox. He describes how he
believes this evolution took place in his 2009 book, The
Myth of the Rational Market.
In Fox’s view, this interpretation of the EMH engendered
a complacent view of asset bubbles. Under the EMH “you
basically don’t believe in bubbles. When a bubble is going
on, you instead try to come up with all these rationalizations
for why prices must be that high, because they must be that
high for a reason.” In his experience observing the financial
community, he believes there has been a natural tendency
when markets are doing well for a long period of time for
market participants to start believing in what prices are
saying, rather than any other signals they are getting.
“Anybody out there who’s saying, ‘This is crazy, prices of
houses or tech stocks aren’t worth this much,’ is made to
look stupid for year after year as the bubble grows,” he says.
“Some elements of the EMH offered a theoretical basis for
believing those things.”
But, according to Fama, the EMH does not preclude
market mistakes. If the bubble can’t be easily pinpointed,
that actually reinforces the EMH. “Bubbles are 20/20
hindsight, basically. In my opinion, a bubble means that you
could predict when it’s going to break. I don’t think that
was the situation,” he says. Indeed, many investors convinced that they had identified the end of the tech and
housing bubbles lost a great deal of money prematurely
trying to short-sell (placing a bet on a decline) in those

markets — and many, of course, remained optimistic and
stayed in past their peaks.
Similarly, for policymakers it is not enough to know
whether a bubble exists. Policymakers must also decide by
how much prices are inflated, the likely magnitude of the
potential fallout, whether the tools they have in their arsenal
would be effective in reducing the bubble, and whether
pricking the bubble could cause the very economic contraction they are trying to avoid. These questions rely on far
more judgment than just whether prices are providing an
accurate signal of an asset market’s true value.

Market Mistakes vs. Market Failures
Also lurking behind discussions of the validity of the EMH
seems to be a latent debate over the desirability of relying on
markets in general. When prices aren’t “right,” they could
provide misguided signals and may therefore prevent capital
from being allocated to its best uses. This idea caused John
Maynard Keynes to complain that the capitalist system
leaves the country’s investments in the hands of a “casino.”
Few would advocate that markets be dissolved in favor of
government-managed capital allocation, but those who view
markets as a predominant source of harmful economic
fluctuations might advocate a stronger role for policy in
managing them.
For regulation to strike the right balance, policymakers
must understand the difference between market mistakes
and market failures. Market mistakes can be costly, as we
have seen, but trying to avoid them might be a poor goal for
policymakers. If such mistakes are indeed unpredictable,
it would be difficult or impossible to form policy based on
avoiding them. Market mistakes also are hard to identify
in real time with enough certainty to thwart them.
Intervening even when there is pretty good reason to believe
things are out of hand is still exceptionally risky, which is
why Fed policymakers have been hesitant to do it (although
some Fed policymakers have proposed revisiting that stance
in light of the fallout from the housing decline).
Market failures, on the other hand, involve some fundamental flaw in market functioning that policy might be able
to improve. It is not obvious that the crisis reflected a fundamental market failure. It could instead have reflected a
failure of regulation, for instance. A discussion about the
validity of markets should include recognition that policy
and regulation can play a role in the functioning of free
continued on page 34


The Business of Higher Ed
Prices and costs of a college education
BY BETTY JOYCE NASH

West Virginia Wesleyan College froze tuition
and fees for the current academic year, the
second time in four years the school in
Buckhannon has done so. The sticker price with room and
board comes close to $30,000. The college, like most institutions, offers discounts on tuition and fees for students
who qualify.
College sticker prices have outstripped inflation for three
decades. High prices can deter access and completion, especially for students whose parents never went to college, like
many at West Virginia Wesleyan. The school is located in the middle of
Appalachia, and most of its 1,400 students receive some financial
help; 30 percent qualify for federal need-based Pell grants.
The sheepskin typically brings benefits — wiser lifetime
choices and better lifetime earnings. Demand for higher
education has increased, spurred by public subsidies, including those for student loans made by private and government
lenders. (Until the credit crisis, unsubsidized private student
loans were also widely available.) But stagnant graduation
rates and middle-class incomes, rising prices, and now
reduced student lending have renewed conversations about
how institutions use resources and how transparent their
finances are.

Education Economics
Nationwide, the published in-state price of attending a
public institution went up by an inflation-adjusted 6.5
percent in 2009-2010 over the previous year, and the privates went up by 4.4 percent. But that doesn’t tell the whole
story. Net prices — which factor in financial aid and tax
breaks — crept up, on average, by 2 percent in the current
academic year 2009-2010, but fell between 2005 and 2009.
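To keep the two price measures straight, a small sketch of the arithmetic may help; the dollar amounts and the inflation rate below are hypothetical and are meant only to show how a sizable sticker-price increase can coexist with a much smaller change, or even a decline, in real net price.

```python
# Hypothetical numbers illustrating published (sticker) price versus net price.
# Net price = published price minus grant aid and tax benefits; a "real"
# increase deflates the nominal change by inflation.
def real_increase(nominal_growth: float, inflation: float) -> float:
    return (1 + nominal_growth) / (1 + inflation) - 1

published_last, published_now = 7_000, 7_400   # assumed sticker tuition and fees
aid_last, aid_now = 3_500, 3_850               # assumed grant aid plus tax breaks
inflation = 0.02                               # assumed CPI change (illustrative)

sticker_growth = published_now / published_last - 1
net_last, net_now = published_last - aid_last, published_now - aid_now
net_growth = net_now / net_last - 1

print(f"published price: {real_increase(sticker_growth, inflation):+.1%} in real terms")
print(f"net price:       {real_increase(net_growth, inflation):+.1%} in real terms")
# With these assumptions the sticker price rises about 3.6% in real terms
# while the net price actually falls about 0.6%.
```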
The difference between published prices and net prices
makes analysis of college costs difficult. For instance, one of
the fastest-growing budget items for institutions is financial
aid, especially for private schools. On average, tuition and
fees account for about two-thirds of the money families spend
to send a student to a private, four-year college, and a third
at a four-year public, according to the College Board’s 2009
“Trends in College Pricing.”
Because education retains the centuries-old model of students and teacher in a classroom, labor costs keep prices
high. When other firms substitute capital for labor, output
improves and wages do too. Real wages rise as fast as productivity. But in personal service industries such as higher
education and other “craft professions,” wages may rise
without the productivity shift. The phenomenon is known
as “cost disease.” By way of example, economists William G.
Bowen and William Baumol explained that a quartet takes
the same length of time and number of musicians to perform
a concerto as it did centuries ago. Yet the wages of the
musicians increase because they, like professors, have opportunities elsewhere in the market where productivity is
actually rising.
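A stylized bit of algebra, not from the article, makes the cost-disease mechanism explicit: suppose wages in teaching must keep pace with an economy-wide productivity growth rate g, output per instructor-hour in teaching is fixed at q, and productivity in the "progressive" sector grows from q_0 at rate g.

```latex
% Stylized cost-disease arithmetic. Wages in both sectors grow at the
% economy-wide productivity rate g; teaching productivity q is constant.
\[
  w_t = w_0 (1+g)^t, \qquad
  c_t^{\text{teaching}} = \frac{w_t}{q} = \frac{w_0}{q}(1+g)^t,
\]
\[
  \frac{c_t^{\text{teaching}}}{c_t^{\text{progressive}}}
  = \frac{w_t / q}{\,w_t / \bigl(q_0 (1+g)^t\bigr)\,}
  = \frac{q_0}{q}\,(1+g)^t .
\]
% Unit cost in the stagnant sector rises one-for-one with wages, so its
% relative price grows at roughly g per year even with no change in quality.
```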
Another possible explanation: Revenues may dictate
spending. Without shareholders to demand efficiency, institutions spend whatever funds they raise or receive to achieve
a break-even budget. This “revenue theory of costs” may
apply to nonprofits like colleges. If the buyers of a college
education paid the total freight, costs might be contained
through price competition. But because education consumers are subsidized, there’s the potential for revenues to
drive costs.
Higher education institutions also suffer from a
“principal-agent” problem. The agents are the faculty, staff,
administration, and governing boards who manage money
on behalf of the principals, the students, parents, and taxpayers. The agents may decide on the level of overhead
expenses that may or may not benefit principals, says Bob
Martin, an economist at Centre College in Danville, Ky.
Martin notes that “bundling” in higher education has
added to cost — services previously not included such as
spiffier accommodations, gourmet food, travel opportunities, and entertainment options. Such amenities enhance a
school’s reputation. Since U.S. News and World Report began
to rank colleges in 1983, competition among schools has
intensified. The more money that an institution spends per
pupil, on average, the higher its rank, although the magazine
doesn’t count spending on dorms, sports, or hospitals. Some
observers suggest the ensuing competition has contributed
to an arms race of sorts.
“There’s been a huge emphasis on the U.S. News ratings,”
says Patrick Callan of the National Center for Public Policy
and Higher Education. “Almost everything there is an input
that you can buy. You can even buy students with financial aid.”

Productivity Logjam
Public universities are shifting costs to families to make up
for declining state subsidies even as enrollment has
increased.
Although the price of a college education is rising, the
graduation rates are similar to those of the 1970s. The success rate
after four years of attendance in 2007 was 36 percent; five
years, 53 percent; and six years, 57 percent. The numbers
capture full-time, first-time bachelor’s or equivalent degree
seekers, according to the National Center for Education
Statistics.
Growing enrollments are also affecting costs. Between
1997 and 2007, enrollment grew by 26 percent. This largely
reflects increases in the number of 18- to 24-year-olds in the
United States. The proportion of that population enrolled in
college increased by 2 percent. Yet state tax appropriations
per student this year fell by 12 percent in inflation-adjusted
terms compared to a decade earlier. State general fund appropriations have fallen in South Carolina, for instance, from 15
percent in 1999-2000 to about 10 percent in 2008-2009.
As public subsidies decrease, tuition costs tend to rise.
One growing expense is not only salaries but also benefits,
says Jane Wellman of the Delta Cost Project, funded by the
Lumina Foundation for Education in Indianapolis. That is
especially true for public institutions. “For a while, when the
state retirement systems were making money hand over fist,
they started giving away more generous benefit packages.”
After adjusting for inflation, average benefit expenditures
for full-time instructional faculty on nine-month contracts
grew by 80 percent from 1977-1978 to 2006-2007.
Administration costs are rising too. “It’s hard to know
whether it’s the lawyer you had to hire, or increased campus
security because of legitimate needs to increase campus
security,” Wellman notes. “They also have to spend more
money on legal stuff; that’s the world we live in.”
The expenditures may be desirable — for instance, hiring
professional counselors to work with undergraduates can
benefit those students. “The problem is the patterns are not
examined and they occur without people being aware of
them,” Wellman says.
The Delta Cost Project bundles student services and
instruction in its data and shows declines relative to
increased spending on overhead and administration. And for
those institutions that compete for students on the basis of their residential character, “they would all say there’s been an arms race to
add those enhancements,” she notes.
To cope with the cost of instruction, more nonprofit
institutions have started online classes. Virginia Tech’s Math
Emporium, “a learning center for the study of mathematics,”
accommodates more than 500 students at a time, 24 hours a
day, seven days a week. The University of North Carolina at
Chapel Hill will offer Spanish 101 exclusively online starting
in spring of 2010. Traditional instruction for the typical 250-student enrollment would cost about $80,000, according to
Larry King, who chairs the romance language department.
The online course cost is estimated at about $50,000.
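The per-student arithmetic implied by those figures is simple; the sketch below just divides the article's cost estimates by the 250-student enrollment, a framing added here for illustration rather than one attributed to King.

```python
# Per-student cost of the traditional versus online Spanish 101 course,
# using the enrollment (250) and total-cost figures cited in the article.
enrollment = 250
traditional_total = 80_000   # dollars, traditional instruction
online_total = 50_000        # dollars, estimated online cost

traditional_per_student = traditional_total / enrollment   # 320.0
online_per_student = online_total / enrollment             # 200.0
savings = 1 - online_total / traditional_total             # 0.375

print(f"${traditional_per_student:.0f} vs ${online_per_student:.0f} per student "
      f"({savings:.0%} lower cost)")
```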
While eventually these enterprises may break “the
productivity logjam,” most students are educated the old-fashioned way — in a classroom by a professor at a board,
stimulating inquiry, and issuing grades. “We’re still in the
early stages of this,” says University of Virginia economist
David Breneman. “To the best of my knowledge there is no
documentation that somehow you will eliminate the need for
more faculty.” For better or worse, online learning sooner or
later may change the model among nonprofit institutions as
it has among for-profit schools.
However large an institution’s labor costs, spending on
faculty is not going up, Wellman notes. The share of spending devoted to instruction (which includes labor) declined by
1.4 percent to 63 percent from 1996 through 2006 in public

research universities. Institutions reduced instructional
spending per student between 1995 and 2006 but increased,
by similar amounts, spending on administrative support and
student services.
The percentage of tenured faculty has also dropped. About
half of full-time faculty was tenured in 2005-2006, according
to the Digest of Education Statistics, a decline from 56 percent in 1993-1994. Spending on faculty is a minority of total
spending in most institutions, and it’s been declining for the
past two decades, according to the Delta Cost Project.
The American Association of University Professors has
found that between 1976 and 2005, the number of full-time,
tenured and tenure-track faculty had grown by a scant
17 percent. Meanwhile, the number of full-time nonfaculty
professionals has more than tripled, an increase of 281 percent.
Full-time, nontenure track faculty grew by about 200 percent.
The number of administrators doubled over that time.
At the same time, tuition prices have grown faster than
education and general spending per student. This further
suggests that public and private schools are depending more
on tuition to pay for other functions such as research.
Effects vary by institution and year. The University of
Virginia’s state appropriations have been cut four times in
the last one-and-a-half years, Breneman says. “Frankly, we’re
not making all that up in tuition.” Endowment earnings
have fallen and there’s been little hiring, so the economic
cost of production is possibly stagnant or falling. But long
term, production costs trend upward. “We know the
production cost considerably exceeds the price even in the
private sector.”

Virginia Tech students can take certain math classes 24/7 on 531
computers at the Math Emporium. Tech inaugurated the online
learning center in 1997 to cope with growing enrollments.


Increasing prices also highlight the differential pricing
that occurs in higher education. Many students at private
schools don’t pay the sticker price. State schools can bring in
more revenue by enrolling nonresident students who pay
more than twice as much as residents. And professional
schools of law and business at the University of Virginia and
other top public institutions can raise tuition to market rates.

Tracking Costs
Kevin Carey, policy director of Education Sector, a think
tank funded in part by the Bill and Melinda Gates
Foundation, ticks off myriad tax incentives postsecondary
institutions receive either directly or indirectly: tax breaks
for individuals with children in college, nonprofit status that
allows them to pay no taxes on endowment earnings or
property, plus subsidized loans and grants. Such subsidies
and barriers to entry likely contribute to the cost problem.
Colleges and universities say, on average, students pay
much less than the full cost of their education. But the student share is rising. By 2006, students at public research
universities were covering close to half their educational
costs, up from about 39 percent four years earlier. Shares of
educational costs covered by tuition increased more slowly
in private schools. Research by the Delta Project shows
that students who pay full price, on average, pay close to the
full cost.
Some courses of study are more expensive than others,
even though all undergraduates may pay the same tuition.
Biology and chemistry cost more than English. And then
there are freshman classes: How much could it cost for an
adjunct to teach a 300-person lecture without receiving
health or retirement benefits? “There’s no way that’s not
profitable,” Carey says. “But they don’t organize their
finances in a way that would make that evident. They get
money from a lot of different sources, spend money on a lot
of different things, tend not to link revenues and expenditures in a way that allows you to calculate which are
profitable and which aren’t.”
The traditional four-year residential experience is a
shrinking share of the market, according to Guilbert
Hentschke of the University of Southern California School
of Education. But the market is exploding, so the “numbers
in the category are pretty robust.” He has studied for-profit
colleges and universities (FPCUs). The for-profits cultivate
customers in a demographic group who need a career, and
might not attend a traditional institution. They deploy the
model into different labor markets. And they’re flexible. If
there’s no demand for a class, then they won’t offer it. The
FPCUs offer instruction without athletics, and they’re not
invested in real estate. They also centralize curricula.

Nonprofits must cope with expenses beyond their control, such as state and federally mandated rules, but they are
also not likely to use infrastructure in a way that would
enhance productivity or to consolidate purchasing power. At
West Virginia Wesleyan, Bob Skinner says, the college walks
a “tightrope that all small colleges especially in rural areas
walk.” They make money when they rent out facilities in the
summer, and their president is “notorious for negotiating
the best rates when she travels.”
There are ways to lower the total costs students pay and
perhaps increase efficiency. For instance, an estimated 25
percent of additional cost is incurred because students take
more classes than are necessary to graduate.

Keeping the Education Advantage
The success of the United States over time will depend on
human capital, and that will require educating a bigger proportion of the population. Jobs require more skills than ever;
it typically takes more schooling just to maintain a standard
of living, never mind improve it. “Just being willing to work
hard in jobs doesn’t do it anymore,” says Hentschke.
Some of the very elements that make it hard to penetrate
higher education finance in the United States may be the
same ones that make the system the envy of the world.
For instance, the decentralized market structure makes
standardization of any kind difficult, even though schools
abide by appropriate accounting rules. And while the
scramble for student and faculty talent drives up costs for
universities (and leads to differential pricing), it also contributes to vigorous efforts to be the best.
Institutions’ numbers, diverse funding streams, and
autonomy create healthy competition, according to a working paper by Duke University public policy and economics
professor Charles Clotfelter. The United States has held a
“first-mover advantage” in higher education for more than
half a century, and has attracted global talent. But the financial crisis may jeopardize that standing as it affects
government spending and endowments, and college prices.
Other countries are catching up. The U.S. share of higher
education enrollments worldwide fell from 29 percent in
1970 to 12 percent in 2006. Its share of science and engineering doctorates is also likely to fall.
Getting first-generation students to college is a big deal,
particularly in Appalachia. And encouraging them to study
science and math is a good idea too. West Virginia Wesleyan
just opened a new science research center, funded in part
with $6.5 million in grants through U.S. Sen. Robert Byrd’s
office. “That’s the first time we’ve seen that kind of money,”
Skinner says. The research center, it is hoped, will continue to pay off for future generations.
RF

READINGS
Clotfelter, Charles, ed. American Universities in a Global Market.
Chicago: University of Chicago Press, forthcoming.

“Trends in College Spending: Where Does the Money Come
From? Where Does it Go?” Washington, D.C.: The Delta Project
on Postsecondary Education Costs, Productivity, and
Accountability, January 2009.

Questions Grow Along with Ginnie’s Portfolio
BY BETTY JOYCE NASH

The housing market may still be in recovery, but the
Government National Mortgage Association’s
business is booming. That growth has led some to
question whether it will be able to remain stable over the
long run.
Ginnie Mae, as the government agency is known, guarantees mortgage-backed securities issued by approved private
lenders and composed of federally insured or guaranteed
loans. Through November 2009, Ginnie had guaranteed
about $407 billion in mortgage-backed securities, compared
to $246 billion over the same period in 2008. Most of its
collateral consists of mortgages insured by the Federal
Housing Administration (FHA). While the FHA doesn’t
make loans, it insures lenders against defaults on loans
that meet its standards. The loans are then sold on the
bond market.
The volume of FHA loans has grown since 2008 as private lenders have retreated from risk. The FHA alone has
insured 75 percent more loans in fiscal 2009, which ended
Sept. 30, than the previous year. The FHA helps low- and
moderate-income families who might not meet conventional standards buy homes by lowering loan costs.
Down payments can be as low as 3.5 percent.
But as Ginnie’s portfolio grows, more of these government-insured mortgages are defaulting, prompting some to
believe that Ginnie will need help, too, just as Fannie Mae
and Freddie Mac have been sustained by a credit line from
the U.S. Treasury and a commitment by the Fed to buy up to
$1.25 trillion of GSE debt and mortgage-backed securities.
It’s unlikely that the Fed will hit that ceiling.
Ginnie Mae and Fannie Mae were offspring of the
Federal National Mortgage Association, formed in 1938 to
guarantee Uncle Sam’s mortgages. Fannie was designed to
serve conventional loans and Ginnie to support the market
for FHA, Veterans Affairs, Office of Public and Indian
Housing, and U.S. Department of Agriculture Rural
Development Housing and Community Facilities Programs.
When Fannie Mae was spun off from the federal government in 1968, its activities went off the federal government’s
balance sheet. Freddie Mac, formally the Federal Home Loan Mortgage Corporation, was chartered in 1970 and became a publicly traded, shareholder-owned corporation in 1989.
The idea behind all three entities was to create a
national — and global — market for housing capital by
selling bundled mortgage loans on the secondary market.
That allows lenders to free up cash for more loans. (On the
flip side, however, if investors weren’t buying securities,
they might place their funds in banks, which could then lend
that money.) But the government-sponsored enterprises,
Fannie and Freddie, also held on to more mortgages in their
own portfolios, according to the U.S. Government Accountability Office. That exposed them to interest rate
risk on outstanding debt.
However, Ginnie Mae retains no such portfolio of
mortgages. Nearly all (more than 95 percent) of Ginnie Mae-guaranteed loans wind up in pools of securities, but a small
percentage could be held in a lender portfolio or securitized
through another entity such as Fannie or Freddie, according
to a Ginnie Mae spokesperson.
Ginnie has sustained itself financially, and so has the
FHA. But some observers worry about FHA default levels.
Delinquency rates for FHA loans grew by 1.4 percentage
points between third quarter 2008 and the same period of
2009, according to the Mortgage Bankers Association. By
comparison, the rate was essentially unchanged over the prior year, between the third quarter of 2007 and the same quarter of 2008.
FHA-insured loans represent 18 percent of all mortgages
originated, up from 4 percent two years ago. But as the
FHA’s share has been growing, its capital reserve has not.
And the FHA’s recently released actuarial study found its
capital reserve ratio to be 0.53 percent, below the 2 percent
threshold required by law. An independent actuarial study
released in November says reserves fell to $3.6 billion as of
Sept. 30, down 72 percent from 2008. However, the FHA also holds roughly $30 billion in reserves set aside to cover expected losses over the next 30 years, according to the study.
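As a rough check on what those figures imply, the capital reserve ratio is essentially the FHA’s economic net worth divided by its insurance-in-force. The minimal Python sketch below simply backs out the insurance-in-force implied by the two reported numbers; that implied figure is a derived assumption, not one reported in the study.

# Back-of-the-envelope check of the FHA capital reserve ratio using the figures
# cited above. The insurance-in-force amount is implied from those two numbers,
# not taken from the actuarial study itself.
capital_reserves = 3.6e9       # economic net worth reported in the November study
reserve_ratio = 0.0053         # the 0.53 percent capital reserve ratio

implied_insurance_in_force = capital_reserves / reserve_ratio
required_at_2_percent = 0.02 * implied_insurance_in_force

print(f"Implied insurance-in-force: ${implied_insurance_in_force / 1e9:.0f} billion")
print(f"Reserves needed to meet the 2 percent threshold: ${required_at_2_percent / 1e9:.0f} billion")
# Roughly $680 billion of insurance-in-force, implying the FHA would need about
# $13-14 billion in capital reserves to satisfy the statutory 2 percent floor.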
An Inspector General’s report released in September faulted the FHA for its lack of controls over lender approvals. The
FHA also failed to obtain or consider negative information
on lenders from other Housing and Urban Development
offices, and to make sure supporting documents and application fees were collected. Despite approving three times as many lender applications in fiscal 2008 as in 2007, the FHA’s staffing has remained constant. The Inspector General’s report cited weak oversight as a significant problem.
For its part, the FHA has announced an expansion of risk
management efforts. For example, the agency is using more
extreme scenarios in its models, including ones in which
reserves drop below zero. And the FHA has tightened underwriting standards on refinancing and beefed up lender
oversight. Borrower credit scores have improved too. The
average FICO score today is 693 compared to 633 two
years ago.
In addition to providing a boost to the mortgage market,
Ginnie Mae securities (“Ginnies”) have become an attractive
investment option with commercial banks. That’s because
there has been a flight to security occurring in the overall
credit markets, says economist Tony Plath of the University
of North Carolina at Charlotte. The government guarantee
mitigates investor risk and Ginnies offer a better yield than
Treasuries. And banks have used Troubled Asset Relief
Program money to buy Ginnies, Plath notes. Total bank
holdings of Ginnies rose from around $40 billion to $120
billion between midyear 2008 and 2009.
RF


How to judge the success of “Cash for Clunkers”
BY RENEE COURTOIS

In the late summer months of 2009, a government
program helped nearly 700,000 owners of old cars
replace them with new vehicles. The Car Allowance
Rebate System (CARS), better known as “Cash for Clunkers,” is credited with stimulating auto sales and gross
domestic product (GDP) in the third quarter of 2009. It’s
an example of an economic stimulus program that attempts
to accelerate consumption — or, better yet, spur entirely
new consumption — to provide an immediate boost to
economic activity.
The program, in effect primarily in July and August of
2009, granted rebates between $3,500 and $4,500 for car
buyers who traded in older vehicles with low fuel efficiency
to purchase new vehicles with better gas mileage (plus a few
other criteria). Generally, the greater the improvement in
fuel efficiency from the swap, the higher the rebate granted.
The program required the clunkers to be destroyed,
getting relatively fuel inefficient cars off the road. Strong
demand quickly consumed the program’s $3 billion
budget, which ended the program on August 24, earlier than
anticipated.
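A minimal Python sketch of such a tiered rebate rule follows. The specific cutoffs — an 18-mpg ceiling on the trade-in, a 22-mpg floor on the new vehicle, and $3,500/$4,500 tiers at 4- and 10-mpg improvements — reflect the passenger-car criteria as commonly reported, and are included here as illustrative assumptions rather than the full statutory rules, which varied by vehicle class.

# Simplified sketch of the CARS rebate tiers for passenger cars. The thresholds
# below are illustrative assumptions; the actual statute had additional criteria
# (vehicle age, drivability, insurance history) and different cutoffs for trucks.
def cars_rebate(trade_in_mpg, new_vehicle_mpg):
    """Return the rebate (in dollars) a qualifying trade would earn."""
    if trade_in_mpg > 18 or new_vehicle_mpg < 22:
        return 0                 # trade-in too efficient, or new vehicle not efficient enough
    improvement = new_vehicle_mpg - trade_in_mpg
    if improvement >= 10:
        return 4500              # larger mileage gain earns the larger rebate
    if improvement >= 4:
        return 3500              # smaller qualifying gain earns the smaller rebate
    return 0                     # improvement too small to qualify

# Example: trading a 16-mpg clunker for a 27-mpg compact earns the full $4,500.
print(cars_rebate(16, 27))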
The program was popular, and without a doubt provided
a short-term boost to the economy. But that’s not enough
to know whether its benefits outweighed its costs.
Economists say its immediate stimulus should be weighed
with its medium- and long-term effects.
With a program like Cash for Clunkers, many economists
worry first about efficiency. By making cars artificially
cheaper to consumers, the program distorts the allocation of
resources. Economic theory suggests that prices derived
from freely functioning markets will coordinate buyers and sellers until all mutually beneficial transactions are exhausted. This outcome will be “efficient,” meaning no one can be made better off unless you take from someone else to do it. The catch is, to produce this powerful result, prices must be allowed to reflect how goods and services are truly valued.

Fifth District Clunkers: CARS rebate dollars by jurisdiction
D.C.: $67,500 (17 vouchers)
WV: $13,295,500 (3,143 vouchers)
SC: $36,750,500 (8,787 vouchers)
MD: $74,283,500 (17,671 vouchers)
NC: $77,630,500 (18,399 vouchers)
VA: $97,716,500 (23,231 vouchers)
SOURCE: Official Web site of the Car Allowance Rebate System (CARS), accessed Dec. 14, 2009
This basic idea can easily be applied to the Cash for
Clunkers program. The program’s rebates distorted that
powerful price mechanism. When that happens, resources
are less likely to be allocated to where society values them
most. Those resources include everything from car supplies
and labor to the energy it takes to produce a new car, all of
which arguably could have been used to produce something
that provided greater societal benefits.

CARS Costs and Benefits
In addition to economic stimulus, program onlookers
anticipated a host of desirable side effects, ranging from
environmental benefits to assistance to low-income groups.
Others noted distortions to secondary markets affected by
auto sales and what economists call an economic “payback”
effect later.
The array of possible short- and long-term effects makes
it hard to gauge the program’s success, but analyzing its
initial costs and benefits is one way to start. CARS had a
temporary stimulative effect on auto sales and economic
growth. Monthly auto sales jumped from a 9.5 million
annual rate in the first half of 2009 to 11.2 million in July and
14 million in August while the program was in effect.
Automakers ramped up production to make up for the
inventory depleted under the program, which provided a
boost to GDP. A report by the White House’s Council of
Economic Advisers (CEA) estimates the boost from Cash
for Clunkers to the auto sector directly added $3.6 billion to
GDP in 2009, and about 35,000 “job-years” (one job held for
one year) in the second half of 2009.
But this effect is temporary. Once the short-term production is exhausted, the demand for those jobs will likely
be too. Furthermore, cars purchased during the program
were cars that would have been bought at some point in
the future, whether months or years later. Automakers will
sorely miss that demand later when those purchases would
have taken place. Because CARS borrowed demand from
the future, auto sales and GDP will face a dip in those future
months that will tend to offset the boost in the third quarter
of 2009.
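To get a feel for the magnitudes, the back-of-the-envelope sketch below uses the roughly 700,000 program sales noted above; the one-third incremental share (a commonly cited guess discussed below) and the 12-month payback window are purely illustrative assumptions, not estimates from the CEA.

# Stylized payback arithmetic for CARS. The ~700,000 sales figure comes from the
# article; the incremental share and the payback window are illustrative assumptions.
program_sales = 700_000
incremental_share = 1 / 3      # assumed share of buyers who would not have bought otherwise
payback_months = 12            # assumed window over which borrowed demand is repaid

pulled_forward = program_sales * (1 - incremental_share)
monthly_drag = pulled_forward / payback_months

print(f"Sales pulled forward from the future: {pulled_forward:,.0f}")
print(f"Implied average monthly drag on future sales: {monthly_drag:,.0f}")
# About 467,000 sales pulled forward, or roughly 39,000 fewer sales a month over a
# year — small relative to a market selling around a million vehicles a month, which
# is one reason any payback is hard to detect in noisy monthly sales data.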
To be sure, the CARS program likely brought some clunker owners into the market who otherwise would have held on to their cars for years to come. Estimates of this number range widely, with many hovering around one-third of all CARS purchases. Nonetheless, to the extent that demand came from the near future, there will be what is called a “payback” effect on economic trends. The payback is the amount of consumption that was borrowed from the future, and therefore will be absent from sales in those future months. The trouble is, we can’t know for sure from what future date demand was borrowed, so the impact of the payback will be hard to measure. Even if auto sales dip after the program’s end, this will not necessarily be due to the payback effect because auto sales are notoriously volatile from month to month. And since CARS borrowed consumption from an unknown future date, it follows that any payback should be spread more benignly over many months or even years.

Yet if auto sales don’t dip, it could indicate a strengthening economy rather than proof that the payback is small. Vehicle sales for September, after the program’s close, dipped back to below-trend levels seen earlier in 2009, with initial signs of recovery in October and November. On the flip side, a Cars.com survey reports that consumers who participated in CARS planned to scale back holiday season shopping as a result, potentially revealing an unintended effect that will eat into the program’s boost to the overall economy. The CEA’s best estimate of the payback is a drop in GDP in the first half of 2010 that will more than reverse the boost provided by CARS in 2009.

Cash for Clunkers Boost to Auto Sales
NOTE: Total lightweight vehicle sales (import and domestic), millions of units, seasonally adjusted at an annual rate, 1990-2009
SOURCE: Bureau of Economic Analysis/Haver Analytics

Secondary Effects
CARS also may have borrowed demand from the used car market since some car purchasers would have been in the market for a used car instead. That implies less of a payback
for the new car market, but pain for used car sales (as well as
used car supply, since many clunkers would have gone into
the used car market). This could have significant distributional effects. It was suggested by some commentators
that the program would benefit primarily lower-income
people, who would seemingly be the predominant owners of
clunkers. But this may not have been borne out.
“I think for the most part the people who partook of this
largesse by the government were people who drove clunkers
by choice, not economic necessity, because if you were driving a clunker by economic necessity, you did not have the
money to go into the market,” says economist George
Hoffer of Virginia Commonwealth University.
In the months leading up to the program, reports of credit difficulties pervaded the auto industry. Sales reportedly
fell through because financing was scarce. Before Cash for
Clunkers, the No. 1 problem for new car sales was credit,
continued on page 34

The Auto Industry in the Fifth District
The transportation industry in the Fifth District includes
manufacturers, automotive parts suppliers, and the biggest
used-car retailer in the nation, CarMax, headquartered in
Richmond, Va. Manufacturers include the BMW plant
in Spartanburg, S.C., and Toyota Motor Manufacturing in
Buffalo, W.Va., which produces engines, automatic transmissions, and gears. However, the moribund vehicle market
has affected profits and, in some cases, the very existence of
several suppliers.
In Virginia, Alcoa Wheel Products in Lebanon and steering-parts maker JTEKT in Daleville have announced
closings. Nevertheless, the firms are continuing to produce
in the short-term because the federal Cash for Clunkers
program generated a short burst of demand, according to
Mike Lehmkuhler of the Virginia Economic Development
Partnership. Others haven’t been so lucky. GM plans to
close its Fredericksburg powertrain plant by the end of 2010. But some suppliers in the state are weathering the
downturn. For instance, Dynax in Botetourt County

remains in the business of producing clutch/friction plates
for automatic transmissions.
Transportation-related manufacturing employment has
dropped dramatically in South Carolina. At the end of first
quarter 2009, the sector employed about 27,000, down from
32,537, the annual average for 2007, according to Steve
McLaughlin, a labor analyst at the S.C. Employment
Security Commission. North Carolina has seen layoffs in the
transportation sector too. Annual average transportation-related employment was 34,773 in 2007. First quarter 2009
employment in the sector, however, fell to 26,095, according
to the N.C. Employment Security Commission.
In West Virginia, however, transportation sector employment is stable, according to Joe Doran of Workforce West
Virginia. Most of the firms are small, with the exception of
the Toyota plant in Buffalo. Employment in the first quarter
of 2009, when compared to the same period in 2008,
declined 3.4 percent, from 2,059 workers to 1,989.
— BETTY JOYCE NASH


INTERVIEW

George Kaufman
Editor’s Note: This is an abbreviated version of RF’s conversation with George Kaufman. For the full interview, go to our
Web site: www.richmondfed.org/publications.
It will be many years before economists have a comprehensive understanding of what caused the financial
crisis. But policymakers need to act in real time to help
resolve such crises and to take steps that will improve
the overall stability of the financial system.

George Kaufman has spent his professional career, now ranging over five decades, studying the financial industry. His work has spanned both “theory” and “practice” — or, perhaps more precisely, has connected the two. He has brought academic rigor to bear on important policy questions. Like those of all economists who endeavor to influence policy, some of his research findings have been heeded while others have not. Indeed, Kaufman has long maintained that the financial system would benefit from greater market discipline. But the lack of such discipline arguably was one of the major factors contributing to the onset and severity of the crisis — and remains an issue that policymakers must confront in the wake of the safety net protection that was recently extended to numerous institutions.

Kaufman worked as an economist at the Federal Reserve Bank of Chicago from 1959 to 1970. He then
Reserve Bank of Chicago from 1959 to 1970. He then
spent the following decade at the University of Oregon,
before returning to Chicago in 1981 to teach at Loyola
University and to direct its Center for Financial and
Policy Studies. Kaufman is the founding editor of the
Journal of Financial Stability, serves as co-chair of the
Shadow Financial Regulatory Committee, and is a consultant at the Chicago Fed.
Aaron Steelman interviewed Kaufman in December
2009.


RF: There are signs that the economy may have turned
the corner. Looking back at the financial crisis from our
current vantage point, what are the major lessons that
policymakers should take from it?
Kaufman: There are a number of important lessons. First,
capital matters for banks. It is not everything, but with too
little, banks are likely to freeze up and fail, and contagion is
likely if losses at one bank wipe out capital at other banks in
chainlike fashion. Capital should be the primary concern for
any prudential regulatory system.
Second, asset price bubbles are dangerous to the economy
and the longer they last, the more dangerous they become.
More attention needs to be devoted to them, including
whether to include asset prices in the measure of prices
targeted by policymakers and how to protect financial
institutions against their bursting. In addition, the implications of low interest rates on asset prices as well as on goods
and services prices and employment need to be carefully
studied.
Third, planning and preparation for tail events, such as
financial crises and insolvency of large financial institutions,
are very important. These plans should be made public so
everyone understands the ground rules. Part of the reason
for the inconsistency in public policies attacking the current
crisis was the lack of advance planning for the measures that
were announced publicly. The inconsistency in policy
increased uncertainty in the market and intensified the turmoil. Strategies adopted should be consistent through time
so that participants can make plans. Inconsistent actions
lead to inconsistent and unpredictable responses.
Fourth, simplicity trumps complexity.


RF: In March of 2008, you remarked, “Everybody
knows Santayana’s saying that those who fail to study
history are condemned to repeat it. Those who study
financial history are condemned to first agonize over
the patterns they recognize and then repeat it anyway.”
Do you think that will be true this time as well?
Kaufman: Yes, very much so. Many of the policy actions
taken were the same or similar to the actions in past crises —
say, in the S&L crisis of the 1980s — but even larger in scale.
They focused on bailouts and forbearance. In part, this
reflects a combination of being caught by surprise, lack of
preparation, need to act quickly (frequently over a weekend)
with no grand plans, extreme risk aversion, and political
pressure. Thus, moral hazard is likely to be stronger coming
out of the crisis than going in.
RF: That leads me to a broader issue, one that you may
have not directly addressed, but I imagine you have considered: Why is there often such a large gap between the
recommendations of academic economists and the
actions of the policymakers they seek to influence or
may even directly advise? And in which areas do you
think economists have been most successful in bridging
that gap?
Kaufman: There is a gap because policymakers are in the
hot seat and under pressure from various constituencies,
many of whom focus on the short run, while academic
economists focus primarily on long-run efficient solutions.
The academic economists would act more like the policymakers and vice versa if there was a role reversal.
Policymakers are likely to respond more favorably to advice
from academics and other outsiders when the leading
constituencies are out of favor or discredited. For example,
the prompt corrective action and least-cost resolution
provisions of the Federal Deposit Insurance Corporation
Improvement Act, which were designed with the help of
academics, were enacted in 1991 over the objection of most
bankers and bank regulators, whose credibility had been
tarnished by the S&L crisis. In contrast, in the current financial crisis, while bankers may have had their credibility
tarnished again, regulators appear to have maintained theirs
better and academic proposals have not advanced as far. But,
as Keynes concluded:
The ideas of economists and political philosophers,
both when they are right and when they are wrong,
are more powerful than is commonly understood.
Indeed the world is ruled by little else. Practical men,
who believe themselves to be quite exempt from any
intellectual influences, are usually the slaves of some
defunct economist. Madmen in authority, who hear
voices in the air, are distilling their frenzy from some
academic scribbler of a few years back.

RF: Given the expansion — both implicit and explicit —
of the federal financial safety net during the crisis,
what practical steps could policymakers take to restore
meaningful market discipline?
Kaufman: Very few in the short run. Based on the experience of the last two years, most market participants
believe that by exerting sufficient pressure on the government and regulators they can receive a wide range of
guarantees on their deposits and other creditor securities.
Only actual losses through time will dissuade them. But
losses, if any, are likely to be permitted by regulators only in
noncrisis periods. To build their credibility, regulators
must be willing to let market discipline operate and permit
losses on all de-jure uninsured deposits and other liabilities
over a number of years. In a crisis atmosphere, such as
recently, market discipline is again likely to be an early
casualty. Thus, the long-run cost of recent bailout programs
in terms of weakening market discipline through time is
very high.
RF: In a perfect world, to what extent would you limit
the safety net? For instance, is there good reason to
do more than simply guarantee small depositors at
commercial banks?
Kaufman: No.
RF: Many in the public — and a nontrivial number of
economists — believe that the financial system is inherently fragile and requires significant regulation to
reduce systemic risk. How would you respond?
Kaufman: There is a difference between fragility and
breakage. For example, fine wine glasses are more fragile
than ordinary drinking glasses, yet, at least in my household,
the ordinary drinking glasses break more frequently. That is
because they are handled more carelessly. The same is true
with banking. Before the introduction of the Fed, banks
operated with lower capital ratios than nonbanks, as they
do now, yet their failure rate was no higher on average.
However, banks did fail more in clusters, as their high
leverage makes them more sensitive to tail shocks that affect
them in common. But, in the absence of a government
safety net, bankers are likely to handle their banks with care,
taking only as much ex-ante risk as is consistent with their
capital. Of course, ex-post risk may exceed ex-ante risk,
particularly in a crisis. Then the central bank needs to
provide liquidity, but not to protect creditors of insolvent
banks. Such a strategy is only self-defeating. The greater the
protection provided, the greater the risks bankers take, and
the greater the number and cost of failures.
RF: Some blame the financial crisis on financial innovations that went astray. What are your thoughts?


Kaufman: I believe that innovation, both in methodology and application, has been and continues to be an integral
part of finance and that, on the whole, both finance and the
macroeconomy have benefited from it. Numerous empirical
studies have shown convincingly that political jurisdictions
which have deeper and more sophisticated financial sectors
have experienced faster economic growth. But there are
costs as well as benefits to having large financial sectors.
When things go right in the financial sectors, the economy
benefits. But when things go wrong, they have adverse
consequences for the economy — and the larger the financial sector, the more serious the damage.
The rapid growth in financial institutions and markets in
the United States in recent years has in part been driven
by innovation. Because finance basically involves information collection, storage, processing, and distribution,
innovations in computer and telecommunication technology have shortened the time necessary to perform these
functions and reduced their costs. This has encouraged
innovations that permit financial products to be tailored
more to the unique needs of existing or potential participants in financial markets.
Innovations of any kind are risky and their lasting value
should be judged on the basis of their benefits relative to
costs. Some may not work as advertised and possibly do considerable damage at high cost. Others may work but require
a long learning curve, during which time the costs exceed
the benefits but are then reversed. And some may generate
benefits immediately.
Many of the world’s greatest innovations required
lengthy learning curves to gain the full benefit. Early application of the steam engine to railroads and ships resulted in
numerous explosions that killed or maimed users. And the
bigger the engine, the more deadly the accidents. Likewise,
early flying machines, including those that preceded or
immediately followed the Wright Brothers, had a poor
safety record and the higher the flight, the greater the
severity of injury. Those that did not fly high did not produce
many injuries but they achieved little.
The great advances in computer technology and telecommunications in recent years have encouraged the
development of increasingly complex financial instruments.
Some of the innovations were so complex that they outran
the ability of both users and regulators to understand them
quickly. Thus, they had the potential for misuse with resulting serious damage. And the potential was realized.
An example is subprime residential mortgages. They
were designed to increase the flow of mortgage credit to
households that previously had not qualified for regular
mortgages because their credit rating and income were too
low. Thus, they had been shut out of homeownership. But, as
we now know, these mortgages were often misused to provide credit to those who could not afford them or did not
fully understand the conditions of the mortgage contract.
So, the default rate was unexpectedly high and subprime
mortgages have almost disappeared from the market. But
undoubtedly some low-income or credit-challenged households are using them successfully to purchase homes
that they otherwise would not have been able to.
Nevertheless, it appears that, as of now, the costs have
exceeded the benefits.
I believe that in the not-too-distant future subprime
mortgages will reappear, but probably under a different
name and with an improved design. I am reminded of the
development of corporate junk bonds in the 1980s. They
were the subprime mortgages of their day. Junk bonds were
subprime corporate bonds that opened capital markets to
risky, often younger, corporations. And like subprime mortgages, they were misused at first. Judging their risks required
different analytics than for regular corporate bonds and they
experienced high default rates. Indeed, they resulted in the
bankruptcy of the investment banking firm Drexel
Burnham Lambert, which was the largest underwriter of
junk bonds, and in a prison term for the firm’s Michael
Milken, who was the primary champion of junk bonds. As
Drexel was also the largest market maker for junk bonds, its
demise almost shut down the junk bond market. But the
need for bonds to service this underserved part of the corporate market remained, and investors learned in time to
understand them and use them correctly. Junk, or more tactfully, high-yield, bonds made a comeback and now comprise
some 20 percent of the corporate bond market and no
longer raise eyebrows.
The feeling that financial innovation has gone too far is
quite widespread. Recently, for instance, former Fed
Chairman Paul Volcker expressed concern over the value
added by such innovation. I admire and respect Paul Volcker
greatly. He was one of the world’s great central bankers and
is now one of the truly wise men in finance. But here I
believe he may overstate the negative. The innovated
securities did not cause the crisis but magnified its impact.
The basic cause was the bubble in home prices, which provided the base for many of these securities. Would we have
been better off if we had banned the steam engine and the
airplane because of the high casualty rate at their births? I
don’t think so. But one can come back and argue that the
aggregate cost of the financial accident was much higher.
And they may be right, but the cost of correcting the problem I believe is also far less. And this includes not only
subprime mortgages but also the more complex securitized
products like collateralized debt obligations and credit
default swaps. If used correctly, they show great promise in
adding to our future economic welfare by diversifying risk
over a broader base of investors and thus increasing the flow
of funds and investment. I believe that in time market participants will climb up the learning curve and at least
partially resuscitate the less complex of these innovations
and use them more safely. But it will take time.
RF: How would you define an asset price bubble? And
when we believe that one is emerging or has emerged,
what, if anything, should policymakers do in response?

Kaufman: Asset price bubbles are
difficult to identify. One person’s
bubble is another person’s fundamental value. As I noted in my
answer to the first question, the best
protection against damage from the
bursting of a bubble is to fortify the
financial system, which, as also noted
above, is highly leveraged and fragile
to tail shocks. Higher capital ratios
would cushion and absorb the
adverse impact and reduce systemic
risk. Alternative policies of incorporating asset prices in the inflation
target or leaning against the bubble
are insufficiently researched to date.
RF: What do you think of the
recent rules aimed at limiting
executive compensation?

George Kaufman
➤ Present Position
John F. Smith Jr. Professor of Finance and
Economics, and Director of the Center
for Financial and Policy Studies, Loyola
University of Chicago
➤ Previous Faculty Appointment
University of Oregon (1970-1980)
➤ Education
B.A. (1954), Oberlin College; M.A. (1955),
University of Michigan; Ph.D. (1962),
University of Iowa
➤ Selected Publications
Author of The U.S. Financial System: Money,
Markets, and Institutions; editor or coeditor of more than 20 books on financial
economics; author or co-author of
numerous papers in such journals as the
American Economic Review, Journal of
Political Economy, Journal of Monetary
Economics, and Journal of Finance

Kaufman: I understand the public
backlash against the outlandish
bonuses paid by some financial institutions, but trying to
stop the practice is both costly and likely to be unsuccessful.
In a competitive environment, limiting compensation in
some firms or industries is likely to be costly to those firms
or industries as their best talents are bid away. Ironically, this
may hurt the government if the affected firms are those that
the government aided and received ownership in. Moreover,
compensation regulation is relatively easy to circumvent.
For example, in response to a similar public outcry against
high executive compensation in the mid-1990s, the government imposed a ceiling of $1 million on the deduction that
corporations could take on cash compensation by their top
executives. The response was an increase in compensation in the form of stock options rather than cash, leading to an increase in risk-taking. One promising avenue, however, is increased
emphasis on deferred payments.
RF: How important do you think independence from
the political branches is for the conduct of sound monetary policy by the Fed? And if you think it is desirable,
are you concerned that some of the proposals in
Congress could compromise that independence or
might they shed useful light on the Fed’s actions?

Kaufman: I think Fed independence for monetary policy is
very important. At times, to achieve favorable longer-term
outcomes in employment and price stability, temporary
short-run outcomes may be politically unpopular — say, high
interest rates or high unemployment. If politics made it
difficult to permit these short-run outcomes, the desired
long-run results may be more difficult to achieve. Permitting
the Government Accountability Office to audit the
activities of the Federal Open Market Committee (FOMC)

to increase transparency would lead
to second-guessing of its actions and
introduce an additional element of
uncertainty. An interesting alternative proposal suggested recently by
the Shadow Financial Regulatory
Committee, which I co-chair, is to
have the FOMC speed up the
release of the transcripts of its
meetings from the current five-year
delay to three or four weeks, in line
with the release of the summary
minutes. This would achieve transparency without providing a
platform for second-guessing by
another government agency.
RF: Please tell our readers a little
about the Shadow Financial
Regulatory Committee. For
instance, why was it founded, who
serves as its members, and which
issues has it been considering?

Kaufman: The Shadow Financial Regulatory Committee,
which is now entering its 25th year, is a group of independent experts on the financial services industry and its
regulatory structure. The purposes of the Committee are: (a)
to identify and analyze developing trends and ongoing
events that promise to affect the efficiency and safe operation of the financial services industry; (b) to explore the
spectrum of short- and long-term implications of emerging
problems and policy changes; (c) to help develop private,
regulatory, and legislative responses to such problems that
promote efficiency and safety and further the public interest; and (d) to assess and respond to proposed and actual
public policy initiatives.
The results of the Committee’s deliberations are intended to stir debate and to increase the awareness and
sensitivity of members of the financial services industry,
policymakers, the media, and the general public to the
importance and implications of current events and policy
initiatives affecting the efficiency and safety of the industry.
The longevity of the Committee attests to its success in
achieving its objectives. Perhaps its most lasting contribution to date is its role in developing the prompt corrective
action and least-cost (structured early intervention and
resolution) provisions of the Federal Deposit Insurance
Corporation Improvement Act of 1991.
Members of the Committee are drawn from academic
institutions and private organizations and reflect a wide
range of views. The only common denominators of the
members are their public recognition as experts on the
industry and their preference for market solutions to problems and the minimum degree of government regulation
consistent with efficiency and safety.
RF


ECONOMIC HISTORY
A Tale of Two Virginias
BY STEPHEN SLIVINSKI

The economics of West Virginia’s secession

What we know today as the state of West
Virginia used to be 55 counties that were
part of Virginia. By the Civil War, the
economic and demographic differences
between many of these counties and the
rest of the Old Dominion had become so
pronounced that the secession was seen
by many at the time as inevitable.


On April 17, 1861, the convention called by the Virginia
General Assembly to consider secession from the Union met
in secret in Richmond. This meeting
was viewed with suspicion by some
attendees — namely, many members
of the delegation from the western
counties of Virginia. When the final
vote was tallied, the secession measure
had passed: 88 to 55. At the convention, most of the “no” votes came
from 32 delegates from what we know
today as West Virginia, mainly from
the northern, western, and central
parts.
Three days after the convention
adjourned, 22 of the delegates opposed
to Virginia’s secession found themselves in another secret meeting to
contemplate another secession. The
site of the meeting was the room of
Sherrard Clemens at the Powhatan
Hotel near the capitol. Clemens was a
U.S. congressman who was famous for
challenging the governor’s son to a
duel in 1859 over actions taken during
a gubernatorial race. (He sustained
a near-fatal wound then.) The attendees of the Powhatan Hotel meeting
decided they would “oppose secession
to the last,” wrote Charles Ambler,
one of the most
prominent historians of this period.
And they also went
one step further:
They agreed that it
was time to promote secession of
the western counties of Virginia —
or, at least, as many
of them as possible
— from the Old
Dominion and cast their lot with
the Union.
This was not a rash decision but
instead the peak of an emerging “sectionalism” that saw the western counties as too distinct to fit with the rest of Virginia. Part of the difference stemmed from attitudes toward the morality of slavery. Another was
demographic — the ethnic composition of the new immigrants to the
western counties was different than
that of the eastern ones.
The main divergence was largely
over economic issues. Many within the
western counties viewed the attitudes
and policies supported by many in
Virginia as inhospitable to the prosperity of non-slaveholder farmers and
businesses. Ambler makes the case
that the split was inevitable in his history of the period. Historian Barbara
Rasmussen notes: “West Virginia
statehood was long in the making and
had its start in politics driven by
economic interests, not abolition.”

The Seeds of Secession
The economic differences between
the western counties that eventually
seceded from Virginia and the rest of
the state were long-standing and based
on a series of specific factors. The
economic differences that arose from
western Virginia’s unique geography
were indeed large factors. These
western and northwestern areas
of Virginia were mountainous and
rugged, and winters came early.
None of this made the area conducive
to the production of tobacco, a cash
crop for the eastern counties. The
plantation system that was typical
of tobacco farming never took hold
and, as such, neither did widespread
slave ownership. The counties were
instead characterized by collections of
yeoman farmers.
Some of the wealth generated after
1812 also came from the mining of
natural resources. The production of
salt was lucrative, and later came the
dominance of iron and coal mining.

On the farms, hogs, corn for whiskey, sheep, apples, and
lumber were the main agricultural emphasis. And having
the Ohio River along their western border gave the
western counties of Virginia better initial access to
the interior waterways of America to transport all of their
products — an endeavor made all the more profitable by the
invention of the steamboat.
The demographics of the western counties also differed
from those of eastern Virginia. Scotch-Irish and German
immigrants tended to move west to where the nonslave jobs
were. Many other workers also moved west, mainly from
Pennsylvania, New Jersey, New York and New England —
none of which had ingrained loyalties or ties to the Old
Dominion. Meanwhile, the eastern counties were largely
characterized by bloodlines that could be traced to original
colonists and over the ocean to England. As historian
Charles Ambler wrote in his classic 1910 work, Sectionalism
in Virginia from 1776 to 1861, as population moved westward
and became more diverse in nationality, the “contrasts and
conflicts between the older and newer societies became
more pronounced.”
One of the most pronounced differences was over the
issue of slavery. In addition to the relatively small benefit
of slave labor in the more mountainous counties, the
new immigrant population tended to be religiously and
ideologically opposed to slavery. Couple that with the aforementioned lack of slaves in the western counties generally,
and it is easy to see the budding schism. Such was identified
at the time by many Virginia residents, including those
in the slaveholding eastern counties, like the areas of
the Tidewater along the Chesapeake Bay.
Yet, as volatile as the slavery issue was — and although
there was indeed a small enslaved population in the western
counties — other pressing political questions had a more
direct economic influence on the western counties. Of
primary importance in the early 19th century was access to
capital. At that time, the only two legally chartered banks in
all of Virginia were located in Richmond. Notes issued by these banks were too scarce to serve as a suitable medium
of exchange out west. So private citizens created unincorporated banks to issue notes, which circulated freely based on
the reputation of the issuer.
After westerners petitioned the state legislature to open banks in the region, it agreed to charter banks in
Winchester and Wheeling in 1817. Yet other banking centers
— Baltimore, Pittsburgh, and Philadelphia, in particular —
had already gained a foothold in the region and continued
to fund much of the commercial activity afterward.
While the western counties became more prosperous
and populous, the political dynamic in matters of East versus
West was dictated by a political system that Thomas
Jefferson noticed as early as 1790 was unbalanced. The
imbalance was driven largely by a voting system that gave
more weight to slaveholding counties. The attempt to
address the inequities resulted in a constitutional convention in 1830. But the outcome was not to change the

apportionment rules that favored slave owners. Instead, the
agreement added more seats to the state’s Senate and House
which were awarded arbitrarily to the western counties.
This hardly ended the sectional strife. In fact, it set off a
short-lived and unsuccessful movement among the northern
panhandle counties to consider a merger with Pennsylvania.
Another important sectional conflict arose with the advent of rail travel. The Baltimore and Ohio Company wanted to extend its railroad through Virginia, heading
west to the Ohio River by passing through the Kanawha
River valley. The General Assembly was concerned about the
proposed route running too far south and causing the state
to lose trade coming from the west as it could be rerouted to
Baltimore and Philadelphia instead. So the legislature initially vetoed the railroad’s request, opting instead to support other projects — both rail and waterway — that it reasoned would direct more trade eastward toward
Richmond and Norfolk.
Instead, the B&O Railroad eventually extended west out
of Washington, D.C., through Cumberland, Md., and back
into what is now West Virginia. Over time, it made cities
along its route, such as Grafton and Fairmont, into industrial centers. It also contributed to making Wheeling a vibrant
center of commerce, rivaling Pittsburgh. “With the building
of the [B&O Railroad],” wrote historian James Morton
Callahan in 1923, “the trans-Allegheny Northwest became
independent of Richmond. Trade could no longer be
diverted from Baltimore to Richmond.” He concluded that
this indicates that the “line of business separation was drawn
a quarter of a century before the act of political separation
was accomplished.”
Of course, many railroads and road improvement
projects didn’t come cheap and many were financed at least
partly by tax revenue. But the counties of western Virginia
were wary of any project that the eastern legislators might
have been likely to support that mainly benefited the
southern and eastern parts of the state.
The tax code tended to favor slave owners at the expense of western farmers by exempting slaves under a certain age
from taxation and nominally taxing the others. Meanwhile,
taxes on land and livestock, assessed at their full value, hit
those in the West hardest. Also, the convention called in
1850 to reform the state constitution — like the one in 1830
— yielded a change that prohibited the state government
from pledging the credit of the state to defray the obligations of any company or corporation. This effectively put an
end to some government-supported “internal improvements,” such as certain road projects, that the western
delegations were demanding. Later, the tax inequality was
further exacerbated in 1860 when the General Assembly
increased taxes on wool — raised mostly in western Virginia
— while leaving eastern tobacco and wheat crops untaxed.

The Wheeling Conventions
The year after the wool tax was passed as a way to finance
the state’s military mobilization, the state of Virginia voted
to secede from the Union. Yet, as expected, the support for
severing ties with the North was far from unanimous.
Indeed, there were even some differences of opinion in the
western counties.
When the “Cotton States” like South Carolina proposed
secession, opinions in the western part of Virginia were
generally skeptical. Here economic concerns seemed again
to play a role. As Ambler wrote in his 1933 history of the
state: “To West Virginians, Constitutional guarantees were
generally considered sufficient protection for property
rights of all kinds and for other rights as well.”
The chief concern among many was that, in the case of
Virginia’s secession, their own land would be seen as a battleground frontier by the competing armies. Council meetings
in some of the counties resulted in resolutions stating
adherence to the Union, invoking the Constitution as an
important protection of their prosperity.
It is worth noting that there was hesitancy over secession
in the eastern parts of Virginia as well. Seven southern states
seceded in late 1860 and early 1861. During this time, eastern
newspapers urged state leaders to take the lead in securing
concessions from the North before considering secession.
This was, reports historians Otis Rice and Stephen Brown,
“a view shared by many Virginians, even those east of the
Blue Ridge.”
In early 1861, when the secession convention that had
been called by Governor John Letcher had adjourned and
the western delegates had finished their meeting in the
Powhatan Hotel, the fate of Virginia was left in the hands of
voters. The secession ordinance was to come to a vote on
May 23. But some western leaders were urging a preemptive
meeting to consider plans to secede from Virginia and adhere to the Union if the ordinance passed.
Then, almost as if the intent was to further alienate the
western counties, the Virginia governor instituted on May 11
a ban on shipping flour, grain, pork, beef, or bacon to Ohio
or Pennsylvania. This further drove a wedge between the
economic interests of western Virginians and the political
reality of staying dependent on Virginia.
The counties that largely supported creating a new state
chose the city of Wheeling as the site of their convention
which took place May 13 to May 15. Turning back proposals
to declare their intent to secede before the May 23 vote, the
delegates settled on reconvening on June 11 after the result
of the secession referendum had been established.
Most of the counties of northwestern Virginia voted substantially against joining the Confederacy, perhaps by as
much as a 3 to 1 vote. There were, however, 11 counties —
mostly in the center of what is known today as West Virginia
— that didn’t favor the Union. These counties were sparsely
populated, but their inclusion in the final boundaries of the
new state wasn’t a foregone conclusion at the time.
As the war progressed, Union forces drove Confederate
soldiers out of the Kanawha and Monongahela valleys by July.
The most pro-Union of the delegates to the final stage of
what became known as the Second Wheeling Convention —
which had initially begun in June — now had momentum at
their back. At its end, the delegates elected Francis Pierpont,
a delegate from Marion County, to be the nominal “governor” of the new state.
The convention adjourned in late August after deciding
to submit the statehood referendum to a vote on October
24. It passed overwhelmingly — 23 votes in favor for every 1
opposed. All told, 39 counties approved the formation of a
new state.

A New State
The boundaries of the new state were still a sticking point
when delegates to a convention called for the purpose of
writing the state’s constitution met again in Wheeling starting in late November 1861. In addition, there were some
sectional issues that arose during this final Wheeling convention as a result of each county’s economic concerns.
The route of the Baltimore and Ohio Railroad was a key
factor in the inclusion of the counties that would become
West Virginia’s eastern panhandle, even though these counties were generally supportive of the Confederacy. Those
counties — Jefferson, Berkeley, Morgan, Hampshire, Hardy,
and Pendleton — were added only on the condition that voters approve admission into the new state. Of 11,000 voters,
only 1,610 cast ballots. Only 13 of those votes were against
statehood, leading some to speculate that pro-Confederacy
sentiment was suppressed. (Later legal challenges to the
boundaries of the state — mounted by the eastern panhandle counties — were rejected by the Supreme Court
in 1871.)
Moreover, the counties of the southern part of the state
that had supported Virginia secession from the Union “were
included even against their wishes,” suggests Ambler.
To safeguard against their future political influence,
however, he reports that when the constitution was finally
written, their representation in the state legislature was
reduced to a minimum.
All told, 50 counties were included in the state boundaries. Five counties (Mineral, Grant, Lincoln, Summers, and
Mingo) were added after statehood.
Another issue concerning government support of infrastructure projects seemed to mimic the political debates
about why West Virginia should secede from Virginia in the
first place. Delegates from the Kanawha Valley wanted road
and railroad improvements and argued in favor of constitutional authorization for the state to issue bonded debt for
such projects. The northern counties indicated no desire to
include such a provision. When the vote was taken on the
amendment to allow bonding, it was rejected by a vote of
25 to 23. A last-minute compromise that allowed the state to
support infrastructure in other ways, including a mechanism
that allowed the creation of specific taxes to pay off new
projects, allowed the convention to end on a note of
harmony. The constitution also included provisions to eliminate classifying property of different types for the sake of
taxation — a response to the offense many took to the favoring of slave property over other forms of property in the antebellum days.
In April 1862, voters of the then-fledgling state approved
the new constitution, and in May the new “Restored
Government of Virginia” petitioned the U.S. Congress for
recognition of the state. As Congress deliberated, the Union
was effective at holding the line in West Virginia despite a
few attempts by the Confederate army to capture territory.
Indeed, when the de facto legislature of West Virginia sent
to the Virginia General Assembly a request to secede in May,
it was granted. When Congress finally granted approval in
December and President Lincoln concurred, the only step
to be taken was a referendum terminating slavery in their
territory, which passed handily.
The state of West Virginia was accepted into the Union

on June 20, 1863. It has the distinction of being one of only
two states formed during the Civil War (the second was
Nevada). Additionally, it was the only state to form by seceding from a Confederate state (though similar proposals were
debated in other states, including North Carolina and
Tennessee).
Yet, while many of the debates about secessions are
largely looked upon as epic battles over abolition, West
Virginia’s secession was mainly the result of economic concerns. As Rasmussen notes, those most eager to secede from
the Old Dominion were acting on “an extremely rational
expression of enlightened self-interest.” In retrospect, it’s
no mystery why the western counties sought to leave
Virginia. Perhaps a more difficult question is why the
marriage persisted as long as it did.
RF

READINGS
Ambler, Charles H. Sectionalism in Virginia from 1776 to 1861.
Chicago: University of Chicago Press, 1910.
____. A History of West Virginia. New York: Prentice-Hall, 1933.
Link, William A. “‘This Bastard New Virginia:’ Slavery, West
Virginia Exceptionalism, and the Secession Crisis.” West Virginia:
A Journal of Regional Studies, Spring 2009, vol. 3, no. 1, pp. 37-56.

Rasmussen, Barbara. “Charles Ambler’s Sectionalism in Virginia:
An Appreciation.” West Virginia: A Journal of Regional Studies,
Spring 2009, vol. 3, no. 1, pp. 1-35.
Rice, Otis K., and Stephen W. Brown. West Virginia: A History.
Lexington, Ky.: University Press of Kentucky, 1993.

FEDERAL RESERVE continued from page 7
whether current Fed actions may jeopardize Fed independence in the future. This recession has spurred new
expansions in the Fed’s loan portfolio, opening up its lending
window to institutions that were not privy to Fed funds
before the economic downturn. Indeed, some have argued
that this continues a long-standing shift in Fed credit policy that started with lending meant to prop up the Penn Central Railroad in 1970 and continued with the infusion of liquidity the Fed provided
to the failing Continental Illinois National Bank in
1984, and the engineered bailout of Long-Term Capital
Management in 1998.
Consequently, economist Marvin Goodfriend, formerly
of the Richmond Fed and currently of Carnegie Mellon
University, has proposed a “new accord” for Fed credit
policy. Meant to mimic what the Treasury-Fed Accord did
for monetary policy, the goal would be to place explicit
boundaries on actions that could harm Fed independence.
“It’s important to appreciate the difficulties to which the
Fed exposes itself in the pursuit of credit policy initiatives

that go beyond traditional last resort lending to banks,”
notes Goodfriend. Not only does it open the door for more
congressional pressure to lend to some and not to others,
but it also puts the Fed in an untenable position when the
Fed must cooperate with the Treasury on items such as
banking regulation and payments system policy. “This interdependence exposes the Fed to political pressure to make
undesirable concessions with respect to its credit policy initiatives in return for support on other matters.”
Only time will tell whether the recent expansion in Fed
lending will be temporary or not. In the meantime, it’s
important to understand the historical experience of the
Fed. The independence of the Fed is something that Fed
policymakers still tend to guard closely. Yet it’s not always
the case that independence is taken away all at once as it has
been in previous decades, particularly during wartime. Some
Fed observers and policymakers worry that actions that may
seem well-intentioned and short-lived today could chip away
at Fed autonomy over the long term.
RF

READINGS
Meltzer, Allan H. A History of the Federal Reserve, Volume 1: 1913-1951.
Chicago: University of Chicago Press, 2003.
Hetzel, Robert L. The Monetary Policy of the Federal Reserve:
A History. New York: Cambridge University Press, 2008.

Hetzel, Robert L., and Ralph F. Leach. “The Treasury-Fed Accord:
A New Narrative Account.” Federal Reserve Bank of Richmond
Economic Quarterly, Winter 2001, vol. 81, no. 1, pp. 33-55.
Timberlake, Richard H. Monetary Policy in the United States:
An Intellectual and Institutional History. Chicago: University of
Chicago Press, 1993.


EMH continued from page 19
markets and the forming of investor expectations in both
positive and negative ways. Regulations like disclosure laws
can help markets become more efficient by making information widely available. But a too-large public safety net that
convinces market participants they will not have to
bear all or most investment losses can induce investors to
rationally take risks they otherwise would not have.
Financial market participants may have taken market efficiency for granted, as Fox believes. The only scenario that
would be at odds with what the EMH really says would be one
in which information had been accessible and market participants just didn’t use it. Yet the vast majority of economists,
policymakers, and financial market participants did not see
the financial crisis coming, perhaps indicating that such infor-

mation about the true risk was not there for the taking. Or
perhaps parties who ignored information about the risks were
rationally responding to perverse incentives to do so.
Economists don’t yet fully understand all the factors
that might cause markets to occasionally get prices wrong.
To explain this, you can favor behavioral theories built on
psychology and investor biases, errors of regulation, or
perhaps just a pervasive difficulty of accessing information
due to characteristics of the market in question. But none of
these explanations are inherently at odds with the EMH.
Studying the financial crisis with the benefit of hindsight
will help economists, investors, and policymakers better
understand the causes behind fluctuations in asset prices for
which there is no easy explanation.
RF

READINGS
Fama, Eugene F. “The Behavior of Stock Market Prices.”
Journal of Business, January 1965, vol. 38, no. 1 pp. 34-105.
____. “Efficient Capital Markets: A Review of Theory and
Empirical Work.” Journal of Finance, May 1970, vol. 25, no. 2,
pp. 383-417.
Fox, Justin. The Myth of the Rational Market. New York, N.Y.:
HarperCollins, 2009.

Jones, Steven, and Jeffry Netter. “Efficient Capital Markets.” In
David R. Henderson (ed.), The Concise Encyclopedia of Economics.
Indianapolis: Liberty Fund, 2008.
Lo, Andrew. “Efficient Markets Hypothesis.” In Steven N. Durlauf
and Lawrence E. Blume (eds.), The New Palgrave Dictionary of
Economics, Second Edition. New York: Palgrave Macmillan, 2008.
Shiller, Robert J. “From Efficient Markets Theory to Behavioral
Finance.” Journal of Economic Perspectives, Winter 2003, vol. 17,
no. 1, pp. 83-104.

CLUNKERS continued from page 25
Hoffer says, yet “there was not one word about credit
problems during Cash for Clunkers.” This may imply CARS
participants had good credit, large down payments, or both.
All are consistent with a higher-income population.
Richmond, Va.-based CarMax lobbied Congress unsuccessfully to include used cars in the program. Had the
program included used cars, it might have benefited the less wealthy, who tend to be more active in the used-car market,
Hoffer says. “It would have been more income-neutral.”
Environmental benefits were a selling point for the program too. But they’re not as straightforward as they appear.
Many vehicles scrapped under the clunkers plan would have
gone into the used-car market, so CARS removed older polluting cars from the road. All else equal, this should have
reduced emissions. CARS participants enjoyed a 9.2 MPG
increase in fuel efficiency, on average. This will certainly
be a direct benefit to drivers of those cars: Consumer Reports
estimates the improvement will save owners $720 apiece in annual fuel costs.
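The arithmetic behind a figure like that is simple: gallons of gasoline saved per year times the price of a gallon. A minimal sketch follows; the annual mileage, fuel price, and before-and-after fuel economy figures are illustrative assumptions, not the inputs Consumer Reports used.

```python
# Back-of-the-envelope fuel-savings arithmetic. All inputs are illustrative
# assumptions, not the figures behind the Consumer Reports estimate.
miles_per_year = 12_000
price_per_gallon = 2.60           # dollars per gallon of gasoline
old_mpg, new_mpg = 15.8, 25.0     # roughly a 9.2 MPG improvement

gallons_saved = miles_per_year / old_mpg - miles_per_year / new_mpg
annual_savings = gallons_saved * price_per_gallon
print(f"{gallons_saved:.0f} gallons saved, about ${annual_savings:.0f} per year")
```

With these assumptions the savings come to roughly $725 a year, in the neighborhood of the published estimate; the answer is quite sensitive to how many miles the replacement car is actually driven.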
But scrapping the clunkers produces carbon, as do new
car production processes. Perhaps more important, many of
the clunkers likely were driven less than the new replacements will be. These owners now have more comfortable
fuel-efficient cars that are cheaper to drive and thus likely to
be driven more. This will eat into emission savings. Hoffer
believes it could even produce more emissions for a number
of cars, not less. The bottom line is that assessing the environmental benefits of CARS requires looking deeper


than just the car-for-car improvement in fuel efficiency.

Jaws of Life for the Auto Industry
Like any economic stimulus, CARS is likely to be more
effective when there are idle economic resources, a description that certainly matched the economy in 2009. But it
matters why resources are idle. By most accounts, the auto
industry has faltered because its products are not highly valued relative to competitors. The program may have
provided only a temporary reprieve to an industry facing a
long-term structural decline. And since two of the Detroit
Three were effectively closed for the summer, when the
vehicles started selling, they couldn’t take advantage of the
sales momentum, Hoffer notes.
Moreover, the program used valuable economic
resources to replace still-functioning cars. Destroying those
productive assets represents a loss of welfare to society.
That’s why a true estimate of the program’s net benefits
must also subtract the value of the destroyed assets.
It is not easy to quantify this welfare loss. One could even
argue that the cost is small, since the program affects a small
number of cars relative to the total number on the road. But
more important, if policy broadly used artificially low prices
to affect individual decisionmaking in an attempt to subsidize industries precisely because they are not highly valued,
then the distortions and unintended consequences could
produce losses that may overwhelm the gains.
RF

BOOK REVIEW
The Precursors of Financial Crises
THIS TIME IS DIFFERENT:
EIGHT CENTURIES OF
FINANCIAL FOLLY
BY CARMEN M. REINHART
AND KENNETH S. ROGOFF
PRINCETON: PRINCETON
UNIVERSITY PRESS, 2009, 463 PAGES
REVIEWED BY STEPHEN SLIVINSKI

The subtitle of this new book by University of
Maryland economist Carmen Reinhart and
Harvard University economist Kenneth Rogoff
gives the reader the best hint to the content inside.
The heart of the book consists of a wealth of new data
and analysis — much of which appears in a nearly
100-page appendix — on public indebtedness, currency
crises, and financial meltdowns over the past eight centuries. The scope of the data — much of which have never
been collected before — is impressive and the book stands
as a testament to the scholarship of its authors. Such a
treasure trove should not go unnoticed by economic
historians.
The broader theme of the book is indicated by the main
title. In their tour of the data, the authors highlight empirical regularities that tend to correlate with the onset of
financial crises. Most of the book deals with the levels of
government indebtedness that predate financial crises.
The run-up in sovereign debt (particularly “external debt”
issued to bondholders who reside outside the country) has
often presaged defaults of one form or another by the government. This is seen more frequently in developing
economies that tend to be more affected by swings in export
prices and international financial conditions.
Yet, while developing nations can fall into the trap
of serial default on bond debt, all nations, including developed economies, have at one time or another indulged in
another, less severe, form of default: the inflation of the
nation’s currency. Reinhart and Rogoff rightly spend some
time seeking to counter the notion, all too common in
conventional discussions of the topic in the media, that
currency debasement is a distinctly different creature than
debt default. Instead, the authors suggest that inflation is
akin to defaulting on a bond or restructuring a debt because
the outcome is the same: The government forces the bondholder or currency holder to accept a payment lower in real
terms than the original value of the debt. Reinhart and
Rogoff call it “default through debasement” and note that if
“serial default is the norm for a country passing through the
emerging market stage of development, the tendency to


lapse into periods of high and extremely high inflation is an
even more striking common denominator.”
The other empirical regularity they encounter as a precursor to financial crises is a run-up in asset prices —
particularly housing prices — above a long-term trend.
More often than not, the authors argue, borrowing fuels the
asset price bubble. The bursting of the bubble then tends to
precipitate a banking crisis that drags down lending institutions. In addition, recessions that accompany banking
crises tend to be deeper and harder to recover from than
other recessions.
The authors close, however, with a chapter that perhaps
inadvertently points out the principal shortcoming of
the book itself. Once they have firmly established how
often leverage and asset price appreciations precede crises,
they go one step further to suggest that the regularities
may be a basis for a real time “early warning system” for
policymakers. So, for instance, since housing prices are at
the top of their list of “reliable indicators” of an impending
banking crisis, real-time collection of housing price data
might help regulators anticipate potential crisis scenarios.
Yet the authors don’t weave a robust narrative to suggest
how the variables interact. Although excessive leverage, a
capital windfall from abroad, and faster-than-usual housing
price appreciation do tend to correlate with financial crises,
determining which variable might potentially burst the
bubble is much harder. And, contrary to the spirit of the
book’s title, it’s plausible that different crises can be precipitated by all three aspects occurring in various sequences.
The authors do acknowledge that the metrics identified
by their analysis will not provide “an obvious indication” of an
impending crisis. But any early warning system that is built on
a theory-free framework is bound to be problematic.
It’s also important to acknowledge that the empirical
regularities don’t occur in a policy vacuum. Institutions
matter, and the embedded rules of financial markets and
the regulations that govern financial firms and depository
institutions play a role in economic outcomes. The case for
an early warning system for policymakers is weakened by
this lack of analysis — and, in the end, a sober analysis of
how regulatory institutions work and the limitations on
what government can effectively do might even topple the
case for codifying these early warning metrics. Foremost
among the concerns may be the notion that any governing
body given power to act on threats to systemic stability may
actually encourage the type of behavior it was designed to
discourage.
Despite such caveats, Reinhart and Rogoff have written
a very valuable book, one that can be read profitably by a lay
audience, policymakers, and academic economists alike. RF


DISTRICT DIGEST

Economic Trends Across the Region

Location, Location, Location: The economic differences
between rural and metro areas in the Fifth District
BY ANN MACHERAS

What does it mean to be a “rural” area in today’s
economy and, more specifically, in the Fifth
District? We may visualize open fields and farm
equipment, or dense woods with scant development, or
small towns connected by country roads, or no towns at
all — just open space.
Although there are some official definitions, there is no
consensus on precisely what classifies an area as rural. Many
of us may imagine “rural” to be the opposite of “urban,” thus
rural areas are often described as those that are not part of
an officially designated metropolitan statistical area. A more
careful definition will be explored in this article — one that
will allow us to examine the degree to which economic
performance differs across areas in the Fifth District to
the extent they are more rural or more urban. Common
measures of economic prosperity, including employment
growth, income, and poverty can be used to assess differences across regions. Another important determinant of
economic growth, the mix of industries, can help explain the
variation in economic performance revealed by comparing
more rural with more urban regions.


Defining Rural Areas
The temptation to use the metropolitan versus nonmetropolitan distinction as a way to categorize places as rural and
nonrural (or urban) is understandable. After all, there are
much more economic data available for metropolitan statistical areas than for other area types, such as counties. This
results in a dichotomy in which an area such as a county
must be considered either entirely rural or entirely urban.

[Map: Fifth District Counties by Rural-Metropolitan Level — counties at each level, A through G, grouped into the metropolitan sphere, the rural-metropolitan interface, and the rural sphere. SOURCE: Indiana Business Research Center and U.S. Census Bureau]

Yet rural areas can differ widely in both structure and
complexity. As an example, Amelia County, which is officially
part of the Richmond, Va., metropolitan area, has a total population of 12,808 and a population density of 32 persons per
square mile. Compare that to Henrico County, also part of
the Richmond metro area, which has a population of 292,599
and a population density of 1,102. Nearby Nottoway County,
similar in population and population density to Amelia
County, is not part of the Richmond metro area. So, by the
simplest definition, Amelia County would be labeled as
“urban” while Nottoway County would be labeled as “rural.”
In fact, Amelia County and Nottoway County share more
characteristics than either does with Henrico County,
despite the fact that one is urban and the other is rural.
Fortunately, recent research has improved the way we
define rural places. One measure, the Index of Relative
Rurality (IRR), was developed by Purdue University agricultural economist Brigitte Waldorf. It uses as its base four
characteristics of rural places that are commonly used in
existing definitions of rurality: population, population
density, extent of urbanized area, and distance to the nearest
metropolitan area. The IRR then combines these four characteristics of rural areas and generates a single index measure
ranging in value along a continuous scale from 0 to 1, with
smaller numbers assigned to the least rural areas and larger
numbers for the most rural areas.
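The article does not reproduce Waldorf’s exact formula, but the basic construction — rescale each of the four components onto a 0-to-1 range and combine them so that larger values mean more rural — can be illustrated with a minimal sketch. The scaling bounds, the logarithmic transforms, and the equal weighting below are illustrative assumptions, not the published IRR.

```python
# Illustrative sketch of a relative-rurality index in the spirit of the IRR.
# Scaling bounds, log transforms, and equal weights are assumptions, not
# Waldorf's published formula.
import math

def rescale(value, lo, hi, invert=False):
    """Map a raw value onto [0, 1]; invert when larger raw values mean less rural."""
    x = (min(max(value, lo), hi) - lo) / (hi - lo)
    return 1 - x if invert else x

def rurality_index(population, density, pct_urbanized, miles_to_metro):
    components = [
        rescale(math.log10(population), 2, 7, invert=True),    # larger population -> less rural
        rescale(math.log10(density + 1), 0, 4, invert=True),   # denser -> less rural
        rescale(pct_urbanized, 0, 100, invert=True),           # more urbanized area -> less rural
        rescale(miles_to_metro, 0, 100),                       # farther from a metro -> more rural
    ]
    return sum(components) / len(components)   # 0 = least rural, 1 = most rural

# A small, low-density county far from a metro core scores toward the rural end.
print(round(rurality_index(population=12_808, density=32,
                           pct_urbanized=0, miles_to_metro=35), 3))
```

However the components are scaled, the key design choice is that the result is a continuous measure rather than a rural/urban dichotomy — which is what allows counties like Amelia and Nottoway to be compared directly.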
Returning to our comparison of Amelia and Nottoway
counties, the Index of Relative Rurality defines Nottoway as
the more urban county (IRR=.486) and Amelia as the more
rural county (IRR=.630), even though Amelia is part of the
Richmond metro area.
One obvious difference between Amelia and Nottoway
counties is Amelia County’s relative proximity to the amenities of the Richmond metro area, which provides its residents
with easier access to shopping, airports, and cultural opportunities. Perhaps more important than access to these
amenities is the advantage that businesses derive from
“agglomeration economies,” or the benefits of access to a
critical mass of suppliers, labor pools, and entrepreneurial
networks. Although proximity to a metro area is one of the
metrics embedded in the index, it is useful to explicitly highlight accessibility to a metro area in combination with the
rurality index when describing a rural-metropolitan sphere.
This helps us to examine the differences among rural areas.
A research team developed the rural-metropolitan sphere definition for a project designed to explore rural competitiveness in Indiana. (The team included researchers from Purdue University, Indiana University, and the Strategic Development Group, Inc.) The rural-metropolitan sphere consists of seven levels ranging from levels A and B — which contain highly urban metropolitan core counties that differ only by population — to level G, which contains nonmetropolitan counties that are not adjacent to a metropolitan area. The metropolitan sphere contains levels A and B, but also adds the outlying metropolitan counties that are less rural (IRR<0.4). In contrast, the rural sphere contains only level G, the most remote counties.

By far, the most revealing levels bridge together the rural and metro spheres with the rural-metropolitan interface (levels D, E, and F), where we find a range that accounts for both rurality and remoteness — from less rural to more rural and from the most metropolitan to the more remote adjacent counties. The hybrid rural-metro group (known as the “rural-metro interface”) defines Amelia County, Va., as an outlying metropolitan county with IRR > 0.4 (level D) and Nottoway County, Va., as a nonmetropolitan county adjacent to a metropolitan area with IRR > 0.4 (level F).

The Fifth District has a wide variety of different types of counties ranging from least rural to most rural and varying by distance from the officially defined metropolitan statistical areas. The map summarizes the number of Fifth District counties (and, in the case of Virginia, independent cities) at each level of the rural-metropolitan sphere.

Not surprisingly, most of the population in the Fifth District resides in the metropolitan sphere, which also contained some of the fastest-growing areas from 2000 to 2008. The areas in the rural-metropolitan interface generally grew more slowly and accounted for 23 percent of population in 2008. The more metropolitan and less rural counties in level C grew the fastest, at 18 percent, while the outlying metro counties that were more rural (level D) grew faster than any other areas in the rural-metropolitan interface. However, outlying metro counties grew by 11 percent, which is not as quick as their less rural counterparts. The rural sphere, level G, experienced the smallest increase in population since 2000, growing just under 3 percent. Even based solely on the summary information for population shares and population growth, the need to differentiate among metropolitan areas and among rural areas becomes clear.

Indicators in the Rural-Metropolitan Sphere
Economic prosperity plays out differently depending on the degree of rurality and proximity to metropolitan areas. The economic indicators in the table provide some common measures of income, employment opportunity, and poverty for the Fifth District.

[Table: Economic Indicators by Rural-Metropolitan Levels — mean population share and growth, median household income, average wage, total covered employment, unemployment rate, and poverty rate for levels A through G, grouped into the metropolitan sphere, the rural-metropolitan interface, and the rural sphere. SOURCE: Indiana Business Research Center, Bureau of Labor Statistics, U.S. Census Bureau]

The largest urban counties have the highest median household income and the highest average wages, although the cost of living would presumably be higher relative to more rural areas. Compared to other areas, large metropolitan counties also have the lowest unemployment rate and among the lowest poverty rates. Abundant employment opportunities in the metropolitan-sphere counties contributed to the relatively high wages and incomes in these areas. Employment growth was strongest in the more urban outlying metropolitan counties, which also averaged the lowest poverty rate. In contrast, the most rural counties (levels F and G) have the lowest median household income, the lowest average wages, and the highest poverty rates. Employment growth in these areas has been negligible or has declined since 2000, leading to higher average unemployment rates than any of the areas in the metropolitan sphere.

What explains the difference in economic performance? While a definitive answer remains the subject of much academic and policy debate, the composition of economic activity clearly matters. Industry composition is one important determinant of regional economic growth, although regional competitiveness also depends heavily on the availability and quality of labor and the innovation capacity that allows firms to adopt new technologies and develop new products and services to meet changing market demand. Industry mix differs across the rural-metropolitan sphere and, more important, certain industries grow at different rates — some industries become economic drivers while others become a drag on growth. By focusing on three major industry sectors: 1) manufacturing, 2) professional, scientific and technical services, and 3) health care and social assistance, we can compare regional concentration in sectors that have declined as well as in sectors that have grown over the past decade.

Nationally, the manufacturing sector accounted for nearly 10 percent of total employment in 2008 and has declined by an average annual rate of 3 percent from 2000 to 2008. Over the same period, professional, scientific, and technical services grew at an annual average rate of 2 percent, but accounted for only 6 percent of total employment in 2008.

[Chart: Employment Shares: 2008 — percent of total employment in manufacturing; professional, scientific, and technical services; and health care and social assistance, for the metropolitan sphere (A, B, C), the rural-metro interface (D, E, F), and the rural sphere (G). SOURCE: Bureau of Labor Statistics]

Of these three sectors, health care and social assistance accounted for the greatest share of national employment, nearly 12 percent. It also had the highest average annual growth rate, at just under 3 percent from 2000 to 2008. National employment growth rates provide important information because industry growth within a region often follows national or even global trends, especially in the case of the manufacturing sector, where the market for products extends well beyond the region.
As measured by the share of employment, the rural-urban
interface counties have twice the concentration of manufacturing employment when compared to the metropolitan
counties and the most rural counties (see chart). Adjacency to
the more populated metropolitan sphere allows the manufacturing sector to tap into the available labor force while still
having access to developable land. Interestingly, the most
rural areas have only half the concentration of employment
in manufacturing as the counties of the rural-metro interface.
This may be because rural counties not adjacent to metropolitan areas lack the critical infrastructure and transportation
networks that connect manufacturers with their supplier
network and customer base.
Since 2000, employment in the manufacturing sector has
declined broadly across every level of the rural-metropolitan
sphere, but the greatest contraction has occurred in the
rural-metro interface counties (see chart). The high concentrations of employment in manufacturing resulted in a
declining or very low rate of overall employment growth
since 2000. This may explain the lower wages and income
levels characteristic of these areas.
Over the past decade, the professional, scientific, and
technical services sector has been a driver of economic
growth across all areas in the rural-metro sphere, but
employment in this sector is twice as concentrated in the
urban and outlying areas of the metro sphere as it is in the
rural-metro interface or the rural sphere. In fact, this sector
accounts for more than 12 percent of employment in the
most populated metropolitan areas (level A), presumably
because the concentration of potential customers attracts
companies that can operate on a large scale.
However, the fastest growth in the professional,
scientific, and technical services sector has occurred in the


outlying metropolitan counties. Counties in the rural-metro interface and the rural sphere also experienced high
growth in this sector, albeit from much lower levels of
industry concentration. With less than 3 percent of employment in the professional, scientific, and technical services
sector, it should be no surprise that high rates of employment growth fall short of counteracting the much larger
and deteriorating employment in manufacturing.
The health care and social assistance sector depends
much more on regional demographic trends than the other
two industry sectors discussed here, which experience
the effects of national and global trends more acutely.
Therefore, the prospect for growth of the health care and
social assistance sector depends on both the current and
future needs of the regional population.
Health care and social assistance employs significant
shares of total employment in the metropolitan sphere, but
also employs a sizable share even in the most rural areas.
Within the rural-metro interface and the rural sphere, only
retail trade, accommodation and food services, and the manufacturing sectors employ more people than the health care
and social assistance sector. Moreover, health care and social
assistance employment growth in the rural-metro interface
and rural sphere outpaced growth in the metropolitan
sphere from 2000 to 2008. Thus, while growth in population in the more rural areas was relatively slow compared to
more urban areas, clearly the demand for health care and
social assistance services grew in response to demographic
shifts such as the aging of the population.
Differences in economic prosperity between the more
urban and more rural areas are real, but many rural areas have implemented strategies that leverage connections with higher-education institutions to foster innovation, diversify the regional economy, and train a more highly educated work force to promote economic growth. As rural areas develop ways to diversify into growing
industry sectors, thereby increasing the share of their work
force engaged in high-growth industries, the result will be an
improvement in their economic prosperity.
RF

[Chart: Employment Growth: 2000–2008 — average annual percent change in employment in manufacturing; professional, scientific, and technical services; and health care and social assistance, for the metropolitan sphere (A, B, C), the rural-metro interface (D, E, F), and the rural sphere (G). SOURCE: Bureau of Labor Statistics]

Country Pork
Swine producers fight low demand and high costs
BY BETTY JOYCE NASH

Livestock and crops have fed eastern North Carolina by
generating jobs and spending. Sampson and Duplin
counties are the top hog-producing counties in the nation.
In fact, the world’s largest pork processing plant is in
Tar Heel, N.C., owned by Smithfield Foods. That shows up
in the Sampson County seat of Clinton, N.C., for instance,
which now has underground power lines and a revitalized
downtown.
But this mainstay of rural eastern North Carolina counties has fallen on tough times for the past two years because
of what some are calling the most severe crisis in the pork
industry’s history.
Coharie Hog Farm in Clinton, N.C., the nation’s 22nd
largest, filed Chapter 11 in November, as did three other hog
operations. The Coharie bankruptcy threatens the livelihood of about 80 “contract farms” that provide swine barns,
management, and maintenance for raising company-owned
animals at various stages in return for a per-pig price.
Pork producers nationwide are losing money on each
animal as they cope with low market prices and rising production costs. As the nation’s second-largest pork producer,
North Carolina is feeling the pain. Here’s the problem: A
swine diet consists mostly of corn. Ethanol-driven demand
for corn boosted prices from $6.50 to $8 per bushel. Corn
prices are forecast to be in the $4 per bushel range. Recent
rains are causing more consternation because some of this
year’s crop may be inedible. Add the declining price for pork
products — wholesale hog prices fell to about 51 cents a
pound in August (a six-year low) but have since fluctuated
around 61 cents per pound.
Producers have lost about $21 per hog since October
2007, according to N.C. State University economist Kelly
Zering. Premium prices for corn and soybean meal, also used
in feed, continue to damage not only the swine industry but
also the broiler chicken and turkey producers, also critical to
Tarheel agriculture.
Four of the nation’s top pork producers are in the Fifth
District: Virginia-based Smithfield Foods and, in North
Carolina, the now-bankrupt Coharie Farms along with
Prestage Farms and Goldsboro Hog Farm. About 18 million
North Carolina pigs have been sold annually over the past 10
years, according to Zering. The pork industry generates
about 46,657 jobs in the state, directly and indirectly.
Effects of fears about the H1N1 flu virus and subsequent
ban on pork imports by China and Russia briefly dragged
down demand, says Deborah Johnson of the N.C. Pork
Council. “When that happened, we saw consumption drop
for several weeks,” she says. “We are seeing it recover.”
Another factor affected supply: Producers widely
adopted a vaccine in 2007 for a disease that had thinned


herds. The vaccine improved output, driving down prices.
In addition, profits were good and that encouraged more
production.
But as pork production increased, U.S. consumption
began to decline — from 50.8 pounds to 49.5 pounds per
person in 2008 and 49.1 pounds in 2009. Consumption in
2010 is projected at 46.5 pounds. The decline is largely
attributed to the global downturn.
From almost 22 billion pounds in 2006, production went
to 23.3 billion in 2008, driven by strong export demand. Pork
exports through July 2009 have fallen 10 percent from the same period of 2008 but remain higher than in 2007.
Pork producers also blame, in part, the misperception
that the H1N1 virus has something to do with pork. For the
record, the respiratory illness cannot be contracted via pork
consumption or handling. Hog prices, however, have been
depressed all year, according to U.S. Department of
Agriculture economist Mildred Haley. A dip in early May
could be attributed to the virus panic, she says, but prices
recovered to previous 2009 levels shortly thereafter.
Wholesale prices had declined even before the H1N1 virus
outbreaks emerged in late April and early May.
And although exports to China, Japan, and Canada fell,
exports to Mexico have risen. Haley points out that exports
in 2008 rose by 48.6 percent over 2007. “U.S. pork exporters
shipped 4.7 billion pounds of pork,” she says. That compares
to exports of 3.1 billion pounds in 2007. China imported
more pork than usual in 2008 because of disease problems in
herds in 2008 and “they also had the Olympics and didn’t
want any shortages.”
The recent slack demand in exports has diverted product
back onto domestic markets, and further depressed wholesale prices. To help, the U.S. Department of Agriculture says
it will buy $50 million in pork products for the nation’s
school lunch program. Producers are shrinking herds now,
and the declining supply will boost prices eventually.
Demand will improve with the economy, Haley says, but
right now “people have less money to allocate to their food
budget.”
Nationwide, the average number of hogs per farm has
grown, while the number of farms has declined. In North
Carolina, the hog population grew from 2 million hogs in 1982 to almost 10 million by the late 1990s.
With that growth came controversy over the industry’s
animal waste and its effect on waterways, especially after
Hurricane Floyd inundated eastern North Carolina in 1999.
A state moratorium on hog farms that use waste lagoons
remains in effect. By December 2008, there were about 9.6
million hogs in the state, about 15 percent of the 67 million
raised in the United States.
RF


State Data, Q2:09

                                              DC          MD          NC          SC          VA          WV
Nonfarm Employment (000s)                  702.9     2,543.7     3,943.3     1,852.8     3,672.9       738.0
  Q/Q Percent Change                        -0.6        -0.6        -1.2        -0.6        -0.5        -1.5
  Y/Y Percent Change                        -0.1        -2.5        -4.9        -4.6        -2.5        -3.0
Manufacturing Employment (000s)              1.3       123.5       450.2       216.9       241.4        51.0
  Q/Q Percent Change                         0.0        -0.9        -3.8        -3.0        -4.1        -4.4
  Y/Y Percent Change                       -22.0        -4.3       -13.5       -11.6        -9.7       -10.3
Professional/Business Services
Employment (000s)                          149.3       395.4       465.1       213.7       641.2        58.5
  Q/Q Percent Change                        -2.1        -1.4        -2.2         3.9        -0.5        -0.8
  Y/Y Percent Change                        -2.5        -1.0        -8.3        -5.1        -2.4        -3.9
Government Employment (000s)               237.0       492.1       717.5       342.6       703.2       146.7
  Q/Q Percent Change                         0.5         0.8        -0.4         0.6         0.5         0.3
  Y/Y Percent Change                         1.2         1.2         2.0        -1.4         1.7         0.3
Civilian Labor Force (000s)                327.8     2,958.9     4,567.1     2,199.0     4,166.0       792.9
  Q/Q Percent Change                        -0.9        -0.4         0.1         0.5         0.3        -0.3
  Y/Y Percent Change                        -1.3        -1.2         1.0         2.7         1.3        -1.8
Unemployment Rate (%)                       10.5         7.1        10.9        11.8         7.0         8.4
  Q1:09                                      9.6         6.6        10.4        10.9         6.5         6.0
  Q2:08                                      6.6         4.1         5.9         6.3         3.8         4.3
Real Personal Income ($Mil)             34,454.2   252,578.7   295,370.1   133,149.2   313,095.7    54,114.4
  Q/Q Percent Change                        -0.3        -0.1         0.1         0.4        -0.1         0.9
  Y/Y Percent Change                        -3.8         0.9        -1.9        -1.8        -0.8         2.7
Building Permits                              35       2,554       9,929       4,092       5,789         424
  Q/Q Percent Change                       -86.5        21.9        36.4        14.2        24.1        14.9
  Y/Y Percent Change                       -81.5       -34.7       -36.2       -49.3       -21.2       -49.5
House Price Index (1980=100)               586.5       462.5       340.2       323.6       434.1       228.7
  Q/Q Percent Change                        -3.3        -3.9        -1.6        -1.1        -2.4        -1.2
  Y/Y Percent Change                        -5.7        -8.7        -1.3        -0.9        -4.4        -1.5
Sales of Existing Housing Units (000s)       7.6        66.8       124.0        67.2       110.4        24.4
  Q/Q Percent Change                        18.8        15.2         8.4         7.0        -0.7         7.0
  Y/Y Percent Change                         5.6         4.4       -26.5       -20.8        -1.4        -6.2

NOTES:
Nonfarm Payroll Employment, thousands of jobs, seasonally adjusted (SA) except in MSAs; Bureau of Labor Statistics (BLS)/Haver Analytics.
Manufacturing Employment, thousands of jobs, SA in all but DC and SC; BLS/Haver Analytics.
Professional/Business Services Employment, thousands of jobs, SA in all but SC; BLS/Haver Analytics.
Government Employment, thousands of jobs, SA; BLS/Haver Analytics.
Civilian Labor Force, thousands of persons, SA; BLS/Haver Analytics.
Unemployment Rate, percent, SA except in MSAs; BLS/Haver Analytics.
Building Permits, number of permits, NSA; U.S. Census Bureau/Haver Analytics.
Sales of Existing Housing Units, thousands of units, SA; National Association of Realtors®.


[Charts: Fifth District and United States indicators, first quarter 1999 - second quarter 2009 — nonfarm employment (change from prior year), unemployment rate, and real personal income (change from prior year); nonfarm employment and unemployment rate for the Charlotte, Baltimore, and Washington metropolitan areas; building permits (change from prior year); house prices (change from prior year); and the FRB-Richmond manufacturing composite and services revenues indexes.]

NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Metropolitan area data, building permits, and house prices are not seasonally adjusted (nsa); all other series are seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building permits: U.S. Census Bureau, http://www.census.gov.
House prices: Federal Housing Finance Agency, http://www.fhfa.gov.
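As the note describes, each survey index is a diffusion index — the share of respondents reporting an increase minus the share reporting a decrease — and the manufacturing composite is a weighted average of the shipments, new orders, and employment indexes. A minimal sketch of that arithmetic follows; the response tallies and the component weights are illustrative assumptions, not the Bank’s published figures.

```python
# Minimal sketch of a diffusion index and a weighted composite, following the
# note above. Response tallies and component weights are illustrative assumptions.

def diffusion_index(increase, no_change, decrease):
    """Percent of firms reporting an increase minus percent reporting a decrease."""
    total = increase + no_change + decrease
    return 100.0 * (increase - decrease) / total

# Hypothetical monthly tallies for three manufacturing survey questions.
shipments  = diffusion_index(increase=40, no_change=35, decrease=25)   # +15
new_orders = diffusion_index(increase=30, no_change=40, decrease=30)   #   0
employment = diffusion_index(increase=20, no_change=50, decrease=30)   # -10

# Composite as a weighted average of the three component indexes
# (placeholder weights, not the Bank's published ones).
weights = {"shipments": 0.33, "new_orders": 0.40, "employment": 0.27}
composite = (weights["shipments"] * shipments
             + weights["new_orders"] * new_orders
             + weights["employment"] * employment)
print(round(composite, 1))   # about 2.2 with these illustrative tallies
```

A reading above zero means more firms reported expansion than contraction, which is how the survey charts in this issue should be read.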


Metropolitan Area Data, Q2:09

                               Washington, DC    Baltimore, MD    Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)             2,409.0          1,290.9             99.4
  Q/Q Percent Change                      0.7              1.4              0.8
  Y/Y Percent Change                     -1.4             -2.7             -2.7
Unemployment Rate (%)                     6.1              7.5              9.7
  Q1:09                                   5.9              7.3              9.3
  Q2:08                                   3.4              4.1              5.0
Building Permits                        2,863            1,054              175
  Q/Q Percent Change                     -4.9             74.2              8.0
  Y/Y Percent Change                    -22.7            -15.8            -45.1

                               Asheville, NC     Charlotte, NC    Durham, NC
Nonfarm Employment (000s)               169.4            813.9         285.6
  Q/Q Percent Change                      0.8             -0.6          -0.2
  Y/Y Percent Change                     -4.7             -6.1          -2.3
Unemployment Rate (%)                     9.2             11.9           7.9
  Q1:09                                   9.2             11.3           7.6
  Q2:08                                   4.5              5.7           4.5
Building Permits                          324            2,088           606
  Q/Q Percent Change                     -7.2             34.4          -4.6
  Y/Y Percent Change                    -40.4            -46.4          -5.5

                               Greensboro-High Point, NC    Raleigh, NC    Wilmington, NC
Nonfarm Employment (000s)                          347.1          505.1           142.0
  Q/Q Percent Change                                -0.4            0.4             1.1
  Y/Y Percent Change                                -6.1           -3.3            -4.7
Unemployment Rate (%)                               11.6            8.8             9.9
  Q1:09                                             11.2            8.5            10.3
  Q2:08                                              5.8            4.5             4.9
Building Permits                                     669          1,551             784
  Q/Q Percent Change                                38.5           89.6            72.3
  Y/Y Percent Change                               -27.4          -51.1           -29.3

                               Winston-Salem, NC    Charleston, SC    Columbia, SC
Nonfarm Employment (000s)                  212.4             295.9           362.5
  Q/Q Percent Change                         0.1               1.6             0.9
  Y/Y Percent Change                        -3.3              -3.0            -1.7
Unemployment Rate (%)                       10.2               9.4             9.1
  Q1:09                                      9.9               8.9             8.6
  Q2:08                                      5.3               4.8             5.2
Building Permits                             422               915             862
  Q/Q Percent Change                       197.2              66.1            -6.6
  Y/Y Percent Change                         0.5             -30.1           -31.2

                               Greenville, SC    Richmond, VA    Roanoke, VA
Nonfarm Employment (000s)               313.0           612.6          160.5
  Q/Q Percent Change                      0.6             0.9            0.9
  Y/Y Percent Change                     -3.1            -3.5           -1.7
Unemployment Rate (%)                    10.2             7.9            7.5
  Q1:09                                   9.5             7.5            7.0
  Q2:08                                   5.0             3.9            3.6
Building Permits                          380             812            105
  Q/Q Percent Change                     -5.9            51.2           31.3
  Y/Y Percent Change                    -65.7           -31.5          -44.4

                               Virginia Beach-Norfolk, VA    Charleston, WV    Huntington, WV
Nonfarm Employment (000s)                           766.8             150.4            118.2
  Q/Q Percent Change                                  1.9               1.0              0.3
  Y/Y Percent Change                                 -0.9              -1.1             -0.8
Unemployment Rate (%)                                 7.0               7.4              8.1
  Q1:09                                               6.9               5.7              7.3
  Q2:08                                               3.8               3.7              4.9
Building Permits                                    1,387                38                9
  Q/Q Percent Change                                 18.5              35.7             50.0
  Y/Y Percent Change                                -20.0             -32.1            -25.0

For more information, contact Sonya Ravindranath Waddell at (804) 697-2694 or e-mail sonya.waddell@rich.frb.org


OPINION
Why Efficiency Matters…Even If You Value Equality
BY KARTIK B. ATHREYA

How do you judge whether the outcomes delivered
by a market or another economic system are good
or bad? One concept that economists use is Pareto
efficiency. To understand Pareto efficiency, it is useful to
first define a Pareto improvement. A Pareto improvement
is a change in outcomes that leaves no one worse off and
at least some better off. A Pareto efficient allocation, then,
is simply one from which there are no Pareto improvements: To make someone better off, you would have to
hurt another. Similarly, a Pareto inefficient outcome is one
where Pareto improvements can be made.
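The definition is mechanical enough to check directly. A minimal sketch follows; the utility numbers for three hypothetical people are invented purely to illustrate the comparison.

```python
# Minimal sketch: is allocation B a Pareto improvement over allocation A?
# The utility numbers are hypothetical and only illustrate the definition.

def is_pareto_improvement(before, after):
    """True if no one is worse off and at least one person is better off."""
    no_one_worse = all(b >= a for a, b in zip(before, after))
    someone_better = any(b > a for a, b in zip(before, after))
    return no_one_worse and someone_better

allocation_a = [10, 20, 30]   # utilities of three people before a change
allocation_b = [12, 20, 31]   # after the change: two gain, nobody loses

print(is_pareto_improvement(allocation_a, allocation_b))   # True
print(is_pareto_improvement(allocation_b, allocation_a))   # False: someone would be made worse off
```

An allocation is Pareto efficient when no such move to another feasible allocation exists.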
Yet, some object to using Pareto efficiency as a guide
for policymaking, in part because it does not place clear
limits on inequality. And, in fact, there are many reasons
why you might generally prefer a more even distribution
of resources. First, and perhaps most obvious, inequality
might strike you as ethically wrong. No person should be
able to live off his riches while others have to labor long
hours in difficult conditions just to get by, you might argue.
Another objection involves social stability. You might
claim that a society with considerable income inequality is
unlikely to avoid internal conflict over the long run. In
both cases, you might say that to reduce inequality you
would be willing to give up a little efficiency.
In this column, I will suggest that efficiency is still a
useful guidepost for economists and policymakers. First,
inefficient outcomes are by definition unambiguously
wasteful. At least one party could be made better off
without hurting anyone else. Second, there are good
reasons to believe that a substantial portion of observed
inequality stems from inefficient trading arrangements.
Therefore, improvements in the efficiency of markets —
particularly those markets that help people insure against
certain types of risks such as poor health or other events
that may be difficult to foresee — would likely lower
inequality and make all better off. As a result, there
are important classes of situations in which there is no
inherent trade-off between equity and efficiency. Third,
over the long run inefficiency is the single biggest source
of inequality. The most profound sort of inequality today
exists between nations. Those countries that have pursued
efficiency-enhancing policies are generally rich and those
that haven’t are not. Thus, trading efficiency for greater
equality is not always as easy as it might seem.
Inefficiency can cause inequality more locally as well.
Consider a tax on luxury boat purchases, with all revenues
used to fund public expenditures for the poor. On the face
of it, it sounds like if anyone will be hurt by this policy, it


will only be a set of wealthy households who can afford to
pay the taxes. The problem is that such taxes will reduce
the number of luxury boat purchases as buyers opt instead for, say, luxury cruises or some other activity that is a close substitute.
This means that some boat workers will now have to
search for new employers. And in the event that labor
and equipment cannot be reallocated seamlessly,
the consequences are potentially much larger. Therefore,
inefficiency harms not just the rich, who either pay
the tax or opt for a less-preferred option, but their
trading partners as well, most of whom are not rich.
Meanwhile, the revenues — and, hence, resources with
which to make transfers — may not amount to much, since
the demand for luxury items, such as yachts, is sensitive
to price.
More generally, concerns about “fairness” or equality
have led to many wasteful interventions — and noninterventions — in market function. For example, on the
one hand, we often employ inefficient subsidies and forms
of taxation, while on the other hand, we routinely fail to
charge people for congesting roads or emitting carbon
dioxide. Each of these policy choices either creates or
abets inefficiency — and represents a foregone opportunity to make all of us better off. What’s worse, in the
cases where prices are hamstrung (for example, by rent
controls), inequality may increase. Price-based allocation
may be supplanted with “influence”-based allocation, and
the latter almost by definition will favor the wealthy.
There is a better way. Competitive markets generally
work well. Most of us are routinely able to make the purchases and sales we plan on, at prices that we usually are
not surprised by. A central result of economics is that
under ideal conditions the outcome of trade in such
settings will be efficient. And under realistic conditions,
it is still likely to deliver a serviceable approximation for
the allocation of many goods and services. This in turn
suggests that we should focus public policy on efforts that
fall into one of three categories: 1) those that remove the
malfunctions in markets that can raise inequality, 2) those
that do not directly alter prices in otherwise well-functioning markets, and 3) those that allow most people to gain
from any given Pareto improvement. Efficiency and
equality are not necessarily at odds; in the pursuit of the
former, society may actually find that it has more of
the latter.
RF
Kartik Athreya is a senior economist at the Federal Reserve
Bank of Richmond.

NEXT ISSUE

Jobs and Economic Recovery
Many economists are predicting a jobless recovery in which GDP grows but employment remains stagnant. Has the economy’s “natural rate” of unemployment permanently changed for the worse? We’ll explore the future of labor markets.

Counting America
The U.S. Census, a decennial count mandated by Congress, is nearly always controversial because it’s hard to count every person in America. But it’s important because the numbers determine congressional representation and the formulas used to disburse federal funding. How will Census 2010 differ in its use of statistical sampling and counting methods?

Abandoned Neighborhoods
Higher foreclosure and vacancy rates have left some neighborhoods a shadow of what they once were. Can policymakers realistically hope to address the decline in demand for real estate in these areas?

The Consumer Economy
Consumption spending accounts for two-thirds of GDP. Yet much of that is not what we traditionally think of as retail purchases. Join us on a tour of the history and economics of consumption.

Federal Reserve
We’ll discuss the history of the Federal Reserve System’s decentralized structure and the importance of the regional banks.

Interview
David Friedman of Santa Clara University discusses the importance of applying economic analysis to the law.

Economic History
The Jamestown colony is generally considered the first permanent English settlement in what is now the United States. But the colony struggled until it adopted a reasonable set of economic incentives.

Jargon Alert
It’s common for people to ask, “What if?” When economists do it, they are probably positing a “counterfactual.”

Visit us online: www.richmondfed.org
• To view each issue’s articles and Web-exclusive content
• To add your name to our mailing list
• To request an e-mail alert of our online issue posting


Monitoring Trends in Fifth District Business Activity

For nearly two decades, the Richmond Fed has conducted monthly surveys of manufacturing, retail, and services firms in the Fifth District. Participants are selected to ensure that the type, size, and location of survey respondents reflect the distribution of firms in the District. The surveys collect information about current and future (expected) levels of business activity. Over time, respondents’ assessments have proven to be an accurate gauge of economic conditions in the region.

[Chart: FRB — Richmond Manufacturing Composite Index, January 2004 - December 2009]

Manufacturing Surveys
Manufacturers provide information on current activity, including shipments, new orders, order backlogs, and inventories. Respondents also supply information on employment conditions, prices, and expectations of business activity for the next six months.

[Chart: FRB — Richmond Services Revenues Index, January 2004 - December 2009]

Service Sector Surveys
Representative retailers and services-providing firms are also polled each month to provide their assessment of current and expected activity. Retailers provide information on sales revenues, big-ticket sales, inventories, and shopper traffic. Services firms also report on revenues. Both sets of respondents provide information about employment, wages, and prices at their firms, and express expectations regarding customer demand for the coming six months.

For more information on District manufacturing and service sector
conditions, check out the Regional Economic Surveys at:
http://richmondfed.org/research/regional_economy/surveys_of_business_conditions/index.cfm

To become a participant in one of our surveys, please contact Faye Ball at 804-697-4490.