
THE FEDERAL RESERVE BANK OF RICHMOND
VOLUME 14, NUMBER 1
FIRST QUARTER 2010

COVER STORY

12 The New Normal? Economists ponder whether the “natural” rate of unemployment has risen
Economists often speak of a natural rate of unemployment that the economy tends to gravitate toward over the long run. But if the natural rate has risen lately, as some economists suspect, then unemployment may not fall anytime soon to the levels seen before the recent downturn.

FEATURES

17 Shoppers for the Long Haul: The past, present, and future of consumption
Consumer activity still represents the biggest chunk of the nation’s output, but the economic downturn has dampened the enthusiasm of U.S. shoppers. What has influenced previous patterns of consumption and how might that help inform expectations about future consumption trends?

20 The National Headcount: Census emphasizes outreach to improve accuracy
It’s difficult to count everyone during the nationwide decennial census. The Census Bureau, using federal stimulus money, has decided to expand its outreach to increase the response rate of those who have been overlooked in past counts.

22 Nobody’s Home: Weighing the prospects for neighborhoods hit hard by foreclosure
A historically large number of homes are vacant across the Fifth District, some from recent foreclosures, some from long-term decline in local economies. How to explain and to cope with the potentially deleterious effects these homes may have on localities has become an important question for community leaders and policymakers.

DEPARTMENTS

1 President’s Message/Expectations and Monetary Policy
2 Upfront/Economic News Across the Region
4 Federal Reserve/Central Banking and the Merits of a Federated Structure
8 Jargon Alert/Counterfactual
9 Research Spotlight/A Tale of Two Fed Banks
10 Policy Update/Federal Debt Limit Raised to Cover Shortfalls
11 Around the Fed/Stock Market Investing Is a Family Affair
27 Economic History/The Lessons of Jamestown
30 Interview/David Friedman
34 Book Review/A History of the Federal Reserve: Volume 2
36 District Digest/Economic Trends Across the Region
44 Opinion/Deregulation Should Not Be Blamed for the Financial Crisis

PHOTOGRAPHY: GETTY IMAGES

Our mission is to provide authoritative information and analysis about the Fifth Federal Reserve District economy and the Federal Reserve System. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia. The material appearing in Region Focus is collected and developed by the Research Department of the Federal Reserve Bank of Richmond.

DIRECTOR OF RESEARCH: John A. Weinberg
EDITOR: Aaron Steelman
SENIOR EDITOR: Stephen Slivinski
MANAGING EDITOR: Kathy Constant
STAFF WRITERS: Renee Courtois, Betty Joyce Nash
CONTRIBUTORS: Daniel Brooks, Shannon McKay, Robert Schnorbus, Sonya Ravindranath Waddell, Kimberly Zeuli
DESIGN: BIG (Beatley Gravitt, Inc.)
CIRCULATION: Alice Broaddus

Published quarterly by the Federal Reserve Bank of Richmond, P.O. Box 27622, Richmond, VA 23261, www.richmondfed.org

Subscriptions and additional copies: Available free of charge through our Web site at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565. Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Region Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Region Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 1093-1767

PRESIDENT’S MESSAGE
Expectations and Monetary Policy
The cover story of this issue explores the possibility that there may be a new and higher “natural”
rate of unemployment. The natural rate notion
itself has historically been linked to another important
mid-20th century idea — the relationship between inflation and unemployment known as the Phillips curve. The
early analysis of this relationship posited a negative correlation between unemployment and inflation: When one
of these variables went up, the other went down.
The Phillips curve relationship heartened ambitious
policymakers in the 1960s and 1970s. If they wanted to drive
unemployment lower, the reasoning went, all they needed to
do was tolerate a bit more inflation.
The article in this issue explores some of what we know
about the Phillips curve trade-off today and whether we can
realistically expect that statistical relationship to remain
stable over time. These are important questions — ones that
captivate the attention of academic economists and
policymakers, and rightly so. The debate over these issues
has been intense not just within the economics profession
but also at Federal Open Market Committee meetings.
This is especially important because of the way the traditional Phillips curve relationship broke down in the 1970s.
The Federal Reserve allowed somewhat higher inflation in
the late 1960s in an effort to keep unemployment low. But
inflation kept rising, forcing policymakers to change direction and tighten policy, causing a recession that pushed
unemployment rates into double digits. The result was that
both inflation and unemployment were high and variable
throughout the 1970s. That was not supposed to happen
according to the classic Phillips curve story.
So what happened? People eventually reoriented their
expectations about inflation. If the Fed wanted to try to
drive unemployment down further, it needed to increase the
money supply even more the next time. So that’s what it did.
Meanwhile, a rethinking of these policies had begun in
earnest in the 1960s by economists such as Milton
Friedman, Edmund Phelps, and Robert Lucas. Each in his
own way, along with others pursuing similar research in this area,
came to the conclusion that the expectations of market
participants mattered a great deal. In fact, these expectations could alter the traditional Phillips curve relationship.
Money was, as economists say, “non-neutral,” but only in the
short term. In other words, inflation would produce more
employment only if the Fed were able to surprise markets
with a higher-than-expected inflation. After folks caught on,
the effect on employment would go away.
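In modern textbook notation (a standard formalization, not necessarily the exact specification the author has in mind here), that logic is often written as an expectations-augmented Phillips curve:

\[ u_t = u^{*} - \alpha \left( \pi_t - \pi_t^{e} \right), \qquad \alpha > 0 \]

where \(u_t\) is unemployment, \(u^{*}\) the natural rate, \(\pi_t\) actual inflation, and \(\pi_t^{e}\) expected inflation. Unemployment falls below the natural rate only while actual inflation exceeds what people expect; once expectations catch up and \(\pi_t = \pi_t^{e}\), unemployment returns to \(u^{*}\) no matter how high inflation has become.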
The Fed lost credibility in the 1970s as an institution
committed to keeping the money supply in check.
A persistently high rate of inflation was seen as a fact of life.
The classic Phillips curve
trade-off evaporated. This reality even caused some to
speculate that monetary policy
might be unable to influence
employment growth unless, for
instance, the pressure of labor
unions to raise wages could be
resisted. In retrospect, it may
not be surprising to note that
those who shared this “cost-push” view of inflation, in
which the labor input costs to
production were the primary drivers of general price
increases, were missing the real story.
The good news is that there is a broad consensus today
that this monetary policy experiment of the 1970s failed.
While many economists and members of the FOMC still
use a fundamentally Keynesian framework to view the
relationship between elements of the economy, they all
generally acknowledge that the traditional Phillips curve
story cannot stand on its own.
It’s also helpful that the Fed has well-trained economists
employed in studying these important macroeconomic
relationships over time. And the level of economic knowledge on the FOMC itself has gotten stronger over the past
20 years as more professional economists have ended up on
the Board of Governors and as regional Fed presidents. This
is a change from previous decades when FOMC members
often didn’t have much formal economic training or might
have even been distrustful of economists.
There are still differences of opinion over how to
interpret economic trends, of course. But the tone of the
discussion is always tempered by the knowledge that it’s
important for policymakers to take into account the expectations of market participants when making monetary
policy decisions. That insight will be especially important in
the near term. The Fed will at some point change its stance
on the federal funds rate, and it has started closing the lending facilities it created to provide liquidity during the recent
recession. These “exit strategies” will be executed best if
they are firmly based on what history has taught us about
market expectations.
RF

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND


UPFRONT

Economic News Across the Region
The 1960 Greensboro Sit-In

Famed Woolworth’s Now Houses Civil Rights Center
The five and dime closed in 1993, but the brass letters above the former store
are polished to perfection, as is the interior. Fifty years to the day after the
first sit-in there, the International Civil Rights Center & Museum opened.

[Photo: The former store where black students sat at the whites-only lunch counter opened as a civil rights museum in downtown Greensboro, N.C. PHOTOGRAPHY: INTERNATIONAL CIVIL RIGHTS CENTER & MUSEUM]

The museum also may house a proposed Joint
Center for the Study of Civil and Human Rights. The
effort took 17 years and $23 million in private contributions and historic tax credits sold to private investors.
The idea for the museum took hold when news spread
that the former store would be razed to make way for a
parking deck.
On Monday, Feb. 1, 1960, four black North Carolina
A&T University students shopped for toothpaste and
school supplies at F.W. Woolworth Co. on South Elm
Street, Greensboro’s main drag. Then they seated themselves at the lunch counter reserved for whites. They
were met with silence. A waitress finally told them the
store didn’t serve blacks seated at the counter. They
stayed anyway and eventually left without incident.
Elsewhere in the South, racial barriers were being
tested and protested in schools and on buses. But those
actions were less common in private businesses.
Historical estimates say Woolworth’s lost $200,000
during the five months of on-and-off peaceful demonstrations that disrupted business and likely deterred
some shoppers. That amount of money has the same
buying power as $1.5 million today.
The actions of the four Greensboro students defined
“a pivotal moment in the civil rights movement,”
according to Melvin “Skip” Alston, chairman of the center and museum. The freshmen who planned and
executed the protest were Ezell Blair Jr., Franklin
McCain, Joseph McNeil, and David Richmond.
As television and radio stations joined the newspapers in coverage, more students, black and white, from
the city’s five colleges joined the four and by Saturday of
the same week, all seats at the counter were occupied in
protest. A picket line stretched down Elm.
The sit-in set off a chain of similar protests at
Woolworth’s stores in Charlotte, Winston-Salem,
Fayetteville, and Durham. By July, after months of negotiation, the Woolworth’s and S.H. Kress store, also
located on Elm, formally announced their lunch counters would be integrated. The first black people to eat at
the now-famous lunch counter were Woolworth’s own
kitchen workers.
— BETTY JOYCE NASH

Exit Signs

The Fed Pulls Back from the Mortgage-Backed Security Market
Along with better weather and the new baseball season, this
spring brought the end of the Federal Reserve’s agency mortgage-backed securities (MBS) purchase program. Launched in
early 2009 to ease financial market conditions, the program’s
end was no surprise; the Fed announced six months prior its
intention to wind down the program as economic and financial conditions improved. The real question has been how
mortgage rates would respond.
The Fed originally pledged to purchase up to $1.25
trillion in securities in the troubled MBS market by the end of
the first quarter of 2010. The goal was to support the availability of credit in the housing market as mortgage rates shot
higher during the economic downturn. Financial institutions
failed to find willing buyers for the MBS on their books, and
the prices of those assets fell, causing mortgage rates to spike.
The Fed’s purchase of these securities should have put upward
pressure on asset prices and thus downward pressure on mortgage rates.
The program appeared to work: The spread between 30-year fixed-rate mortgages and Treasuries fell significantly after
the program went into effect. Brian Sack, head of the New
York Fed’s Markets group, said in a March speech that he
believes the program was indeed responsible. He points out
that mortgage spreads remained low even after the Fed began
to slow its MBS purchases. In his view, the total amount of
MBS held in the Fed’s portfolio had eased mortgage market

conditions, not the volume of the Fed purchases. If this is true,
then the program’s end shouldn’t have a large effect on mortgage rates since there are no plans to unload that stock of
securities from the Fed’s balance sheet.
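As a stylized illustration of the before-and-after comparison described above (the numbers below are invented, not actual market yields, and the 10-year Treasury is just one common benchmark choice):

```python
import numpy as np

# Hypothetical weekly yields in percent, purely illustrative.
mortgage_30yr = np.array([6.10, 6.30, 6.20, 5.30, 5.10, 5.00, 5.05, 4.95])
treasury_10yr = np.array([3.70, 3.80, 3.70, 3.40, 3.30, 3.30, 3.40, 3.35])
program_start = 3  # index of the first observation after purchases began

# The mortgage-Treasury spread is the object of interest.
spread = mortgage_30yr - treasury_10yr

pre = spread[:program_start].mean()
post = spread[program_start:].mean()
print(f"average spread before: {pre:.2f} points, after: {post:.2f} points")
```

A persistently lower post-program average, even in the weeks after purchases slowed, is the pattern Sack points to.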
Researchers Johannes Stroebel and John Taylor of Stanford
University have a slightly different view. They argue in a 2009
paper that the decline in mortgage spreads since the start of
the program stems from other factors affecting the mortgage
market, such as declines in prepayment and default risks.
But the implication of Stroebel and Taylor’s view is much the
same: The end of the MBS purchase program shouldn’t have a
large effect on mortgage rates.
But the jury is still out. Stroebel and Taylor emphasize that
the government intervened in a variety of markets during the
time period in question. That will make it difficult to isolate
the effect of the MBS program.
Mortgage rates exhibited no large reaction to the Fed’s
announcement that the purchases would cease at the
end of the first quarter. Regardless, the mortgage-rate
response will also depend on another significant wind-down
decision: when and how the Fed will unload the MBS from its
balance sheet. A good portion of the securities will mature
and thus exit the balance sheet naturally. The Fed will decide
when and at what pace to draw down the remainder of its
MBS holdings based on how well the economy and financial
markets recover.
— RENEE COURTOIS

Proposed Training Center Draws Fire

Maryland’s Eastern Shore Targeted for Stimulus-Funded Facility
The American Recovery and Reinvestment Act of 2009
included $70 million toward construction of a center to
consolidate diplomatic security training, now performed at
19 scattered locations. The Eastern Shore county targeted
for this facility currently ranks No. 1 in Maryland in corn,
wheat, and soybean production.
A farm near Ruthsburg, Md., has been chosen from 30 sites
in five states, although the 2,000-acre site on former grain
fields has not yet been purchased as of press time. Proximity
to Washington, D.C., within 150 miles, is a requirement for
this proposed Foreign Affairs Security Training Center (FASTC).
The site was also chosen because the land is easy to develop,
available, and meets mission requirements, according to
project literature.
It is also next to the Tuckahoe State Park. An environmental assessment has been delayed until late spring. But
the Environmental Protection Agency has recommended
that the General Services Administration begin preparing an

environmental impact statement because of the center’s
proximity to sensitive natural areas. The GSA oversees federal construction.
Reactions to the project are mixed. Though it’s impossible
to determine who will be hired, or where the workers will
come from, an estimated 350 to 500 jobs will be created in
the first phase of construction. FASTC may employ more
than 400 permanent employees in the future, according to
project literature.
The campus will feature high-speed driving tracks and
indoor and outdoor firing ranges. It will also house improvised explosive device (IED) training ranges and simulated
urban areas for counterterrorist drills. Fences and vegetation would buffer the site. The project’s timetable and cost
estimates are still being determined, according to
spokeswoman Gina Gilliam of the General Services
Administration. As a federal project, the center requires no
local government approvals.
— BETTY JOYCE NASH


FEDERAL RESERVE

Central Banking and the Merits of a Federated Structure
BY SHANNON MCKAY AND KIMBERLY ZEULI

The recent recession has caused many to question
the role of the Federal Reserve. The reforms under
debate in Congress could significantly alter the
regulatory responsibilities of the Federal Reserve or change
the current decentralized nature of Fed policymaking.
Many proposals, however, do not consider why the
Federal Reserve’s policymaking process might be well-served by its organizational structure as a federated system.
The logic of a decentralized Fed is an important
framework for analyzing competing reform proposals.
Additionally, federated structures are not unique to central
banks. The value added to other types of institutions and
industries can highlight the advantages of a decentralized
Federal Reserve System.

What is a Federated Structure?
The federated structure is prevalent in many sectors of the
economy: agriculture, wholesale purchasing, and nonprofit
service organizations. In agriculture, federated cooperatives
have a long history of strategic importance in the United
States as well as other countries and often command a significant share of their markets. For example, CHS Inc. is one of
the largest farm supply businesses in the United States and
is a Fortune 100 company. Internationally, Colombia’s
National Federation of Coffee Growers (the owner of the
famous Juan Valdez logo) dominates that country’s coffee market. In
the nonprofit sector, the YMCA and the Red Cross are

some of the largest community service organizations in the
United States and the world.
In a federated structure, a group of autonomous organizations with local or regional representation is part of an
alliance under the umbrella of a national- or international-level organization. The local or regional organizations
(often referred to as affiliates) retain independence over
their internal affairs and are at least partially self-governing.
Certain powers are, however, ceded to the national or
centralized coordinating body, which is wholly or partially
owned by all of the affiliates.
In the Federal Reserve System, there are 12 semiautonomous regional Reserve Banks, each operating in a
distinct geographical territory, referred to as a district. The
Board of Governors, which is a federal government agency,
provides general supervision and regulatory oversight of the
operations of the regional Banks. The Board comprises
seven governors appointed by the President of the United
States. The Board of Governors and five regional Bank presidents constitute the Federal Open Market Committee
(FOMC), which sets the nation’s monetary policy. The
Board also approves the appointments of presidents and
first vice presidents at each Bank. Each regional Bank has its
own Board of Directors representing member banks and the
general public. The regional Banks implement other functions of the Federal Reserve System, including payment
processing, currency distribution, bank exams, discount
window operations, and certain banking operations for the U.S. Treasury.
The regional Banks earn their primary revenue from interest on
securities and fees for services provided to depository institutions. Service
fees are set at the System level so as to
cover the costs of providing these services. The net revenue is first allocated
as fixed dividend payments to member
banks and then to maintaining an adequate surplus. The remainder, which
has historically been approximately 95
percent of the total, is paid to the U.S.
Treasury.
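As a toy numerical illustration of that allocation order (invented round figures; the actual dividend formula and surplus policy are set by statute and Board rules):

```python
# Hypothetical System-wide income figures, in billions of dollars.
net_revenue = 50.0
member_bank_dividends = 1.5  # fixed dividends to member banks come first
surplus_contribution = 1.0   # then the amount retained to maintain surplus

# Whatever remains is remitted to the U.S. Treasury.
to_treasury = net_revenue - member_bank_dividends - surplus_contribution
print(f"to Treasury: ${to_treasury:.1f}B "
      f"({to_treasury / net_revenue:.0%} of net revenue)")
```

With these made-up figures the remainder works out to 95 percent of net revenue, in line with the historical share noted above.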

Historical Background:
The Populist Influence
[Map: Federal Reserve Districts 1-12, the Board of Governors in Washington, D.C., and the Federal Reserve Bank cities. SOURCE: Board of Governors of the Federal Reserve System]


When it was created in 1913, the structure of the Federal Reserve System was
a political compromise. The original
idea of some legislators was to construct a national banking system led by a strong central (and
centralized) bank. However, the country’s relatively recent
experiences with the First and Second Banks of the United
States (1791-1811 and 1816-1836) tainted the public’s opinion
of a central banking system. As a result, policymakers
explored decentralized banking systems and central bank
models in other countries.
One difficulty in imposing a truly centralized structure
on the banking industry in the early 20th century was related to the industry’s size. A survey taken on April 28, 1909,
reported 22,491 banks in existence in the United States and
its island possessions. At the time, the banking industry was
composed of national banks, state banks, mutual savings
banks, stock savings banks, private banks, and loan and trust
companies. National banks are distinct from state banks
because their charter comes from the federal government
rather than a particular state. The federal government
thus had regulatory control over the national banks, but not
state banks.
In the banking industry of 1909, state banks vastly outnumbered national banks (11,319 and 6,893, respectively).
However, in terms of assets, national banks dominated state
banks. The national banks that participated in the survey
reported a total of more than $9.3 billion in assets compared
to roughly $3.3 billion for the state banks. This difference in
assets partly reflects the role of national banks as depository
institutions for bonds from the U.S. Treasury.
The banking industry’s characteristics also varied by
geography. In the New England and Eastern states, the number of national banks dwarfed the number of state banks.
The opposite trend prevailed in the Southern, Middle
Western, Western, and Pacific regions.
Decentralization was the predominant characteristic of
the U.S. banking industry. Yet the banking system reforms
designed by Senator Nelson W. Aldrich, which arose out of
the work of the National Monetary Commission of 1908 to
1912, advocated a more centralized system than what was
finally passed in the Federal Reserve Act of 1913. The
purpose of the commission was to study the existing U.S.
financial system and its history as well as look to other
nations for ideas on appropriate currency and banking
reforms that would prevent or lessen the damage from
events like the Bank Panic of 1907. Senator Aldrich, chairman of the commission and a Republican from Rhode
Island, politicized the reform effort by identifying the
Republican Party as the supporters of a central bank plan.
The Democratic Party platform of 1912 opposed a central
bank. The Aldrich Plan introduced in the Senate
on Jan. 9, 1912, would have established a “National Reserve
Association” which was “strictly a bankers’ bank with
branches under the control of separate directorates
having supervision over the rediscount operations with
member banks.”
The element of centralization in the Aldrich Plan came
from the establishment of one central body that controlled
the system and a membership board that would be chosen by

both the banking sector and the federal government. The
system was to be composed of 15 districts with a branch of
the National Reserve Association in each district. The
Executive Committee of the National Reserve Association
would be in charge of operations. However, the banking
industry would be given more of a voice on the board than
government officials. In recognition that his plan leaned
more heavily toward banking interests, Aldrich sought to minimize this dominance by limiting the powers of the National
Reserve Association and spreading membership on the
directorate board across the geographic banking regions.
Aldrich’s approach was described as “fifteen chapels united
by a solid dome.”
Because of the political climate of an election year,
Aldrich’s bill never received full congressional consideration.
The congressional and presidential elections of 1912 placed
the Democrats as the party in power both in the Congress
and the White House and they began to fashion their own
banking and currency reform legislation. The Democratic
effort was spearheaded by Congressman Carter Glass of
Virginia and Senator Robert Owen of Oklahoma. Both men
represented a departure from Senator Aldrich whose work
on banking reform was viewed as tainted by moneyed interests because John D. Rockefeller, Jr., the son of the founder
of Standard Oil, was his son-in-law. Glass’ background was in
journalism as a newspaper reporter, editor, and owner while
Owen had worked as a teacher and lawyer before organizing
the First National Bank of Muskogee, a small bank in
Oklahoma.
To ensure its passage, any piece of banking reform legislation put forth by the Democrats needed William Jennings
Bryan’s stamp of approval. Bryan, the Secretary of State
appointed by President Wilson in 1913, was born and raised
in rural Illinois and Nebraska and represented the latter as a
U.S. Representative between 1891 and 1895. His Democratic
Party base was composed of newly arrived immigrants,
agrarian reformers, and supporters of women’s suffrage.
William Jennings Bryan became an overnight sensation in
Democratic circles while still in his 30s and was that party’s
nominee for president in 1896, 1900, and 1908. His platform
included breaking up perceived monopolies, fighting big
banks and railroads, and generally promoting populist ideas.
In 1896, Bryan was also the Populist Party’s presidential
nominee. That party, established in 1892, grew out of the
Panic of 1873, which began in September of that year with
the failure of Jay Cooke & Company, an investment bank
heavily involved in the financing of railroad expansion. Its
failure triggered the collapse of other banks which led to the
temporary closure of the New York Stock Exchange. The
effects of the panic were felt across the nation and led to the
Depression of 1873-1879. During this period, a constrained
money supply led to deflation, resulting in plummeting agricultural prices. Many farmers believed the
government’s monetary policy was being controlled by the
large banks and industrial monopolists on the East Coast.
They strongly advocated the abolition of national banks


and the control of the currency by the “people” instead of
bankers.
The Federal Reserve Act that was ultimately passed
in 1913 allowed between eight and 12 regional Banks. This
approach gained support for two reasons. First, the system
needed to be able to adapt to the economic conditions
occurring in the different regions of the country, particularly with regard to setting discount rates. Each regional central
bank could set the appropriate rate for its region rather than
trying to have one central bank maintain several different
discount rates. Second, there was a desire to break up or
weaken the control that New York banks had on the money
market. The ability for another “money trust” to develop
and dominate the financial sector would be curtailed if
economic power was more decentralized across the country.
The Federal Reserve Act also departed from its predecessors in terms of the distribution of power in the system. The
Aldrich Plan was severely criticized for the perceived dominance that business interests could have over the National
Reserve Association, so the Federal Reserve Act went in the
opposite direction by including a stronger voice for the
federal government in the system. The government’s influence is embodied in the fact that the members of the Federal
Reserve Board are nominated by the President and subject
to congressional approval.
The interjection of politics into the Board’s membership
was one of the necessary changes made in order to gain
Bryan’s support for the Federal Reserve Act. As Bryan wrote
to Glass in August 1913, “the bill provides for Government
control of the issue of this money …This is another distinctive triumph for the people, one without which the
Government issue of the money would be largely a barren
victory.” The Republicans and the banking industry were
opposed to such governmental interference in banking. As
Glass wrote at the time, “I also told the President his proposition would put the whole scheme into politics and that he
could not expect a powerful Republican minority in the
Senate to sit quietly by and permit the creation of a banking
system, the absolute control of which, to begin with, would
be in the hands of men all appointed by a Democratic
President.”
The banking industry was presumed to have a voice in
the boards of directors of the regional Federal Reserve
Banks, which represent their member banks. Each regional
board consists of nine members. The Federal Reserve Board
of Governors controls the appointment of three directors.
The remaining six directors are elected by their member
banks with three directors representing the interests of
stockholding banks and the other three as representatives
of nonbanking activity like agriculture, commerce, or industrial sectors. E.W. Kemmerer, a Princeton University
economist, argued in 1922 that the term “federated” was
applied to the structure of these boards because they were
organized in a way that would, “1) recognize the public’s
dominant interest in matters of broad policy; would 2) recognize the dominant interest of the bank and the banker’s
business customer in the narrower banking questions, such
as the goodness of paper against which advances were to be
made, the amounts to be loaned individual member banks,
the quality of open-market investments, and the like; and
would 3) permit of a democratic control among the member
banks of this banking business.”
Bryan and other populist Democrats at the time also
would have been familiar with the federated agricultural
cooperative structure, which was particularly prevalent
in the Midwest. In 1915, there were more than 5,400 agricultural cooperatives in the United States with more than
650,000 members. This type of organizational structure was
promoted by Edwin Nourse, an economist trained at the
University of Chicago. Nourse, who grew up on a small farm
in Illinois and eventually served as chairman of the first
Council of Economic Advisers under President Truman, was
staunchly opposed to monopolies and believed that local
cooperatives could force agribusiness firms to behave more
competitively by achieving scale through a federated system.
The division of power in the Federal Reserve Act
between the Federal Reserve Board and the regional Federal
Reserve Banks is a distinctive feature of the U.S. central
banking system. Since 1913, the System’s inherent regional
structure has been able to remain in place with only a few
revisions. Some restructuring occurred during the Great
Depression. The Banking Act of 1933 redefined the Federal
Reserve’s powers and the Banking Act of 1935 established
the FOMC. The “accord” of March 3, 1951 between the U.S.
Treasury and the Federal Reserve solidified the notion of
the independence of the Federal Reserve System within
the government.

Federated Tensions and Resilience
The relationship between the local and national organizations in any federated structure is complex and contains
inherent tensions. The sustainability of federated systems
requires that all local organizations remain “loyal” to the
system. For example, if local cooperatives conducted most
of their business outside of their federated system, it would
threaten the viability of their regional organization and
ultimately the entire federated structure.
In the case of the Federal Reserve, this loyalty manifests
itself as speaking with one voice on policy decisions after
they have been made. Although FOMC votes are recorded
and dissents are noted, the final decision is formally
supported by all Reserve Bank presidents. Paul M. Warburg,
a member of the first Federal Reserve Board, wrote in 1930,
“A regional system that is to operate successfully must
remain a balanced system. That is to say, the Reserve System
must be under the leadership and direction of the Reserve
Board; but with a generalship on the part of the Board
that does not rest on the assertion and bureaucratic or
dictatorial exertion of its legal powers but on the reserve
banks’ full confidence in the competence, fairness, and
impartiality of the Board, and on the clear recognition by
the reserve banks of a coordinating leadership by a Board

seeking their harmonious cooperation as indispensable to
the successful and undisturbed functioning of the System.”
The fundamental tension in any federation stems from
the potential incompatibility between maximizing benefits
derived from the national organization and maximizing local
benefits. If local benefits can be increased by doing business
outside the federation, the local organization has to weigh
those potential gains against possibly smaller benefits
derived from a weaker federated system. In the extreme
case, if a local organization fails to derive substantial benefits from the federated system, it is better off operating
“disloyally,” which is to say as a truly independent organization. This extreme case is hard to imagine in the case of the
Federal Reserve System, because the regional Banks are
legally bound to the System — they cannot set their own
monetary policy, for example. However, it is also important to
distinguish between loyalty to the policy decisions made by the
System and allowing a difference of opinion about monetary
policy and theory to be expressed by each regional Bank.
As Warburg recognized in 1930, the relationship between
the Board and the Reserve Banks is complicated by all of
the factors that made the European central bank model
inappropriate to the United States, such as the “vast expanse
of our country; the immensity and diversity of its resources
and interests; the complexities of our political life and of a
decentralized system of thousands of individual banks; and
the existence of stock exchanges and industries of towering
strength, standing outside of the System’s immediate
control.”
The federated structure, however, has some significant
comparative advantages in the face of diverse local conditions. The primary advantage in comparison to alternative
centralized structures is that it allows the local affiliates or
organizations to retain their flexibility when serving their
unique local markets. For example, in the nonprofit sector, a
federated model of governance has allowed YMCAs in
different countries and communities to offer diverse
programs that meet local needs. In the case of the Federal
Reserve System, it allows each Reserve Bank to respond to
local conditions when regulating its member banks and
providing technical assistance to local communities.
The federated structure also supports the flow of local,

independent information and opinions upward within the
organization. In the case of the Federal Reserve, each
Reserve Bank collects its own economic data and information that is used to define an independent monetary policy
perspective. Each Reserve Bank president provides policy
opinions at FOMC meetings. A centralized structure
would not provide the same incentives for independent
information and opinions from each region. Instead, policies
would be made centrally and funneled down through the
organization. Although this type of decisionmaking may be
less costly in some sense, such centralized policymaking
might not generate or accommodate diverse opinions as
effectively as the current structure and thus might result in
uninformed policies.
Federated structures are also often criticized for operational inefficiencies. For example, local affiliates may
operate their own IT and payment systems or maintain different accounting standards. These types of inefficiencies
are avoided in centralized structures where uniform systems
are typically adopted by headquarters and all branches. In
the case of the Federal Reserve, some system-wide operations have been adopted. For example, all 12 banks share the
same payment, contracting, and IT platforms.

Why a Federated Structure Still Matters
During a time of crisis, it is common to want to undertake
major policy changes in order to prevent another crisis from
occurring. However, in a rush to reform the national banking system, there may be a tendency to dismiss the broader
rationale behind the central bank’s organizational structure.
Arguments for keeping a federated structure for the United
States’ central banking system still have the same credibility
in 2010 as they did in 1913 when the structure was created.
Each regional Reserve Bank in the Federal Reserve
System has a unique culture and perspective that reflects its
district. The federated structure has allowed each regional
Bank to maintain its unique policy voice while also
realizing the efficiencies of consolidated operations. The
diversity of opinion within the Fed continues to
generate solid, consensus-driven policy decisions and can be
seen as one of the strongest arguments in favor of the
current structure.
RF

READINGS

Aldrich, Nelson W. Special Report from the Banks of the United States. Washington, D.C.: National Monetary Commission, 1909.

Kemmerer, E.W. “The Purposes of the Federal Reserve Act as Shown By Its Explicit Provisions.” In King, Clyde L. (ed.), The Federal Reserve System — Its Purpose and Work. Philadelphia, Pa.: The American Academy of Political and Social Science, vol. XCIX, January 1922.

The Federal Reserve System: Purposes & Functions. Washington, D.C.: Board of Governors of the Federal Reserve System, 2005.

Warburg, Paul. The Federal Reserve System: Its Origin and Growth. New York: The Macmillan Company, 1930.

Willis, Henry Parker. The Federal Reserve System. New York: The Ronald Press Company, 1923.


JARGON ALERT
Counterfactual
BY RENEE COURTOIS
The nation’s unemployment rate continued to grow
in 2009 despite a $787 billion fiscal stimulus
package passed that February. Does this mean the
stimulus was a failure? Comparing unemployment today
to when the stimulus was passed won’t tell us. Had the
stimulus not been implemented, employment would not
likely have stayed exactly where it was in February 2009
— the economy would have either worsened or improved
due to other factors. A more accurate assessment of the
program would ask a hypothetical question: Where would
employment be today if no stimulus had been passed?
That hypothetical what-if scenario is called a “counterfactual.” Many academic disciplines use counterfactual
scenarios to help understand the impact on the world of
some event or policy. Counterfactual historians, for example, imagine what the
world would look like had the alliance
between Germany, Japan, and Italy prevailed in World War II, or if the United
States hadn’t purchased Alaska and its
rich oil reserves from Russia in 1867.
In economics a counterfactual often
refers to a numerical estimate of how
some economic variable would have performed had some policy action been
different. The more accurately analysts
can estimate what the counterfactual
scenario would have been, the better picture we’ll have of the policy’s effects.
There are generally two tools for estimating a counterfactual to a macroeconomic policy: statistical estimates and
theoretical economic models. To generate a statistical estimate, an economist will create the forecast he would have
made before the stimulus affected the economy. He’ll use
regression analysis to estimate how the economic variables
in question have tended to behave in the past and therefore
what levels they were likely to achieve today without a stimulus. Comparing the counterfactual estimate of where
employment would have been to actual employment is one
way to gauge the stimulus’s effect on jobs.
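As a stylized sketch of that procedure (with invented employment numbers and a deliberately crude linear trend standing in for a full regression model):

```python
import numpy as np

# Hypothetical quarterly payroll employment (millions), purely illustrative.
# The first eight quarters are "pre-policy"; the policy takes effect afterward.
pre_policy = np.array([134.2, 133.8, 133.1, 132.0, 130.6, 129.4, 128.5, 127.9])
actual_post = np.array([127.2, 126.8, 126.9, 127.3])  # observed after the policy

# Fit a simple linear trend to the pre-policy period only.
t_pre = np.arange(len(pre_policy))
slope, intercept = np.polyfit(t_pre, pre_policy, 1)

# Project that trend forward: the statistical counterfactual.
t_post = np.arange(len(pre_policy), len(pre_policy) + len(actual_post))
counterfactual = intercept + slope * t_post

# The estimated policy effect is the gap between actual and counterfactual.
effect = actual_post - counterfactual
print("counterfactual:", counterfactual.round(1))
print("estimated effect:", effect.round(1))
```

A real exercise would use a fuller regression with additional explanatory variables and error bands, but the logic is the same: compare actual outcomes to a projection of pre-policy behavior.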
Statistical analysis tends to rely more on history than economic theory. The method does require making a few
important assumptions about how variables relate to each
other. But one needn’t construct a full model of how the
economy operates, which requires taking a more explicit
stand on potentially unresolved issues, such as how likely
households are to spend after a tax cut.
The statistical approach is relatively straightforward but
it does have significant drawbacks. Since the forecast cuts
off data starting from when the policy in question was
implemented, this method will lump together all the factors
that have affected employment since then and attribute
their effects to the stimulus. This includes other policies
designed to help the economy, such as efforts by the Federal
Reserve and other agencies to provide liquidity to credit
markets, or perhaps fluctuations in international conditions
that also affect employment in the United States.
Relying too heavily on statistical estimates may assume
too much of historical relationships. The economic variables
in question might not behave during the recession the way
history, and thus statistical models, would predict. Perhaps
the recession and financial crisis have hampered employment to an unprecedented degree, or new policies
implemented since the onset of the recession have changed
the usual relationships between variables.
Indeed, the policy being studied could
itself have changed people’s behavior
in such a way as to make statistical relationships diverge from their historical
patterns.
That’s where theoretical models may
usefully supplement the analysis. A theoretical model of the economy is a detailed
story of how economic variables relate to
each other based on the theories the
economist finds most convincing —
theories designed to be consistent with
statistical relationships. For example, if
they think households are likely to have an
unusually weak reaction to tax cuts, they can tweak a theoretical model to include that effect.
Such models will not only tell economists what the counterfactual scenario would likely have been without a given
policy, but may also shed more light on which underlying
factors in the economy have reacted to produce that outcome. And because of this feature, the theoretical method
for estimating a counterfactual might allow a richer analysis
of the trade-offs involved with a policy. The downside of
imposing many theoretical assumptions on a model is there
can be as many estimates of the counterfactual as there are
theories of how the economy operates. To avoid this pitfall,
economists seek to discipline their use of theories to those
that fit data across a variety of applications.
Of course, any model is likely to miss some real-world
detail and that can skew the results. That’s why using both
statistical and theoretical tools when analyzing macroeconomic policy often provides the most complete picture of a
policy’s effects. Using many estimates simply comes with the
territory when trying to estimate what the world would be
like in an alternate scenario.
RF


RESEARCH SPOTLIGHT
A Tale of Two Fed Banks
BY DANIEL BROOKS

“Monetary Intervention Mitigated Banking Panics during the Great Depression: Quasi-Experimental Evidence from a Federal Reserve District Border, 1929-1933.” Gary Richardson and William Troost. Journal of Political Economy, December 2009, vol. 117, no. 6, pp. 1031-1073.

Economists continue to debate whether the Federal Reserve would have been able to mitigate the banking crisis that preceded the Great Depression. Some believe that regardless of what the Fed might have been able to do, banks would have continued to fail because the economy contracted so dramatically. Others believe that the Fed could have served as a lender of last resort in response to the widespread run on the banks and avoided their collapse.

Even with the right data, properly evaluating the role of monetary policy — and public policy generally — during the crucial years before the Great Depression poses several challenges. Both federal and state governments changed policies often in light of economic conditions. Additionally, shocks to markets were transforming the economic landscape. These dimensions make discerning the impact of Federal Reserve policy difficult.

In order to overcome such obstacles, Gary Richardson of the University of California, Irvine, and William Troost of the University of Southern California set out to find a group of banks within an economically similar environment which were subject to the same state regulations but influenced by different monetary policies. Banks in Mississippi fit the bill. In 1913, the state was split evenly into two Federal Reserve districts. The top half of the state was placed in the Eighth District presided over by the St. Louis Federal Reserve Bank. The lower half was part of the Sixth District, which was the domain of the Atlanta Fed.

“Mississippi was homogeneous economically and demographically,” write the authors. “Unemployment rates were low. Farm debt hovered around one-third to one-fifth of farm value. Rural counties concentrated on cultivating cotton.” Yet the approach to monetary policy taken by the Fed bank in each district could not have differed more. The Atlanta Fed followed a policy of lending based on “Bagehot’s rule.” According to that doctrine, the central bank should act as a lender of last resort and provide credit to troubled institutions based on good collateral and at a penalty interest rate. By contrast, the St. Louis Fed adhered to the “Real Bills” doctrine. Under that view, monetary policy should allow the supply of credit to contract as the economy contracts because less credit is demanded during times of weak economic activity.

In order to assess the outcome of this natural “quasi-experiment,” the authors needed a wide range of sources to provide the basis for their historical analysis. Archives of the Board of Governors detail communication between the Board and both regional banks and illustrate the approach of each. A wide variety of Census Bureau sources allowed them to control for the differences between Federal Reserve districts.

Although a number of different statistical methods were used to analyze the data, the results tell very similar stories. In the Sixth District — where the Bagehot intuition governed policy — the rate of bank failure was lower than in the Eighth District.

The authors note that one criticism of this type of analysis is that the results may apply only to this region during the time period studied. Yet there are real lessons that can be drawn from such a natural experiment. The evidence in this study is important to understanding the link between banking panics, monetary policy, and the real effect of both on the economy. In fact, Richardson and Troost look deeper at the economic outcomes in these two Fed districts and discover that commerce slowed down less in the Sixth District as a result of a comparatively stronger credit market in the southern part of Mississippi that resulted from the Atlanta Fed’s actions. The drop in the number of wholesale firms, which relied on available credit, was about half as much in the Sixth District portions of the state as it was in the Eighth District portions. Additionally, net sales did not drop as much in the Sixth District portion as they did in the Eighth District portion.

All in all, this paper supports other studies that suggest stopping bank panics could have led to a smaller contraction for the economy as a whole. It also reinforces the idea that Federal Reserve banks missed an opportunity in the 1930s to stabilize the banking sector and potentially avoid the severe downturn that followed. Whatever caveats can be ascribed to a historical study of the sort authored by Richardson and Troost, this paper is a strong addition to the body of research detailing the failures of monetary policy in the 1930s. Such lessons are important today, particularly as they relate to how the Fed can best perform its role as lender of last resort.
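In spirit, the border comparison is a difference-in-differences calculation. A minimal sketch of that logic, with invented illustrative figures rather than the paper’s actual data:

```python
# Hypothetical bank-failure rates (percent), purely illustrative numbers.
# "pre" = before the 1930 banking panic, "post" = after it.
sixth_pre, sixth_post = 2.0, 6.0     # Atlanta Fed side (Bagehot-style lending)
eighth_pre, eighth_post = 2.1, 11.0  # St. Louis Fed side (Real Bills doctrine)

# Change within each district...
sixth_change = sixth_post - sixth_pre     # 4.0 points
eighth_change = eighth_post - eighth_pre  # 8.9 points

# ...and the difference-in-differences: the extra deterioration on the
# St. Louis side, attributed to the differing lender-of-last-resort policy
# (given the paper's argument that the two halves of Mississippi were
# otherwise economically similar).
did = eighth_change - sixth_change
print(f"difference-in-differences: {did:.1f} percentage points")
```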
RF


POLICY UPDATE
Federal Debt Limit Raised to Cover Shortfalls
BY STEPHEN SLIVINSKI

On February 12, President Obama signed into law a
$1.9 trillion increase in the federal debt limit. The
new debt limit sits at $14.3 trillion.
Over the past year, lagging revenue and spending programs created to shore up the banking system and to
respond to various other elements of the recession spurred
the issuing of new Treasury debt for auction to the public.
The amount of outstanding federal debt subject to the limit
was rapidly closing in on the $12.4 trillion cap at the time the
president signed the increase. If the limit had not been
raised, the Treasury would have had no legal authority to
issue additional debt to finance the spending.
An immediate consequence of not raising the debt limit
is that it could cause operational problems, such as an inability to pay for the day-to-day expenses of government
agencies, which might spur disruptions in a variety of
federal programs. A potential but arguably improbable outcome is that the federal government could default on debt.
That could result in a loss of confidence by investors in the
U.S. government and sharply raise the cost to the government of financing debt in the future as lenders demand
higher interest rates to compensate for new risk.
How likely these outcomes might be is open to debate.
For instance, it’s quite unlikely that pressure to raise the
debt level would be resisted by policymakers. Since the late
1950s, the debt limit has been raised by Congress and approved
by the president almost every year except in the five-year
span between fiscal years 1998 and 2001. Those were years in
which the federal government actually ran budget surpluses
and didn’t need to issue any debt. In fact, the government
was able to buy back some bonds and marginally reduce its
debt load.
The genesis of the debt limit can be found in the Second
Liberty Bond Act of 1917. This law allowed the Treasury to
issue long-term debt to finance the military expenditures of
the United States during World War I.
Before the war, Congress would have to authorize specific loans or debt instruments on a case-by-case basis, as when
it approved the debt to build the Panama Canal, for
instance. The limit in the act applied to both certificates of
indebtedness and Liberty Bonds. This was meant to allow
some discretion and flexibility to the Treasury to meet its
needs.
In the next two decades, however, Congress would pass
separate limits on other categories of debt that included traditional Treasury bonds. In 1939, Congress eliminated these
separate limits and created the first aggregate limit that covered nearly all federal debt.
The debt limit as we know it today covers publicly held
debt — bonds that are sold by the Treasury at auction and
are purchased by foreign governments and individual private
investors, just to name a few. The federal government can
also hold debt that is subject to the limit. Since the mid-1980s, the Social Security program has collected more in
revenue than it has paid out in benefits. This surplus has
been committed to current spending on other programs by
Congress. In its place, Treasury bonds have been issued to
the Social Security account.
Some argue that a debt cap so frequently raised hardly
seems like a constraint. The importance of fiscal restraint,
however, isn’t entirely absent from policymakers’ minds. Some of the legislative debate over the recent debt
limit hike centered on the need to restrain the rates of
government spending and to limit the amount of new
debt needed. But a number of amendments to place some
constraints on the budget process were voted down.
Many analysts argue that debt levels should be viewed in
relation to the size of the economy. Today, debt held by
the public is equal to about 60 percent of GDP. The
Congressional Budget Office (CBO) estimates that, under
current policies, the level of publicly held debt could reach
66 percent of GDP by 2020. Enacting new spending proposals in the president’s budget could increase that figure to 90
percent in the next 10 years, according to the CBO. In contrast, that figure never rose above 50 percent between 1970
and 2008.
Others argue that the important number to keep in mind
is the amount of interest the federal government needs to
pay on the national debt. As long as the interest rates on the
bonds — the cost of carrying that debt — are low, there will
be less real fiscal strain. Today the interest on the debt
equals 1.4 percent of GDP. Even in the worst-case scenario
currently projected by the CBO, that figure will equal
3 percent of GDP in 2020.
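The two measures are connected by a back-of-the-envelope identity (the arithmetic here is ours, not the CBO’s):

\[ \frac{\text{interest}}{\text{GDP}} = \frac{\text{debt}}{\text{GDP}} \times \bar{r} \]

where \(\bar{r}\) is the average interest rate on outstanding debt. With debt at roughly 60 percent of GDP and interest at 1.4 percent of GDP, the implied average rate is about \(0.014 / 0.60 \approx 2.3\) percent; the burden grows if either the debt ratio or interest rates rise.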
Many observers also note that the high cost of federal
benefits to be paid to retirees in the future should be cause
for concern. As members of the baby boom generation begin
to retire, the money to fund their benefits will have to come
from the current revenue stream because the Social Security
accounts are filled not with cash but with Treasury securities. Pressure to issue even more debt or to raise taxes
will likely increase. An alternative would be to cut benefits
or raise the retirement age, but it’s unclear how those
proposals would fare politically.
If the past is an indication of future political will to
restrain budget deficits and maintain the debt limit, we may
be in for much higher debt levels — and higher debt limits
— in the years to come.
RF

AROUND THE FED
Stock Market Investing Is a Family Affair
BY STEPHEN SLIVINSKI AND DANIEL BROOKS

“Information Sharing and Stock Market Participation:
Evidence from Extended Families.” Geng Li, Federal Reserve
Board Finance and Economics Discussion Series 2009-47,
September 2009.

In this paper, Geng Li of the Federal Reserve Board of
Governors suggests that sharing of information about
the stock market between members of a family plays a large
role in influencing each family member’s stock market
participation. Existing literature along these lines tends to
focus on the transmission of knowledge from parents to
children. Li suggests that the relevant transmission mechanism is a two-way street and parents can learn from the
stock market experiences of their children.
Li concludes that if a parent or child entered the stock
market during the previous five years, the chances that a
member of the same family will enter the stock market
within the next five to six years increase by 30 percent.
Additionally, even investors older than 65 years of age — a
group often found to have lower stock ownership generally —
are significantly influenced by their children’s past stock
investment. Information sharing among siblings, however,
doesn’t seem to influence stock market entry in a statistically
significant way. To show that the phenomenon observed isn’t
just a coincidence — or that it’s simply a reflection of members of a family having similar preferences — Li studied the
sequence of stock market entry among family members.
If the entry was simply a matter of upbringing, he argues, you
might see each member of the family enter the stock market
at similar stages of their respective life cycles. Instead, Li’s
analysis implies that the entry of one family member will positively influence the entry decision of another member who is
at a very different stage in his life cycle.
Li concludes his analysis with a discussion of whether any
of this can be explained by simple “herd” behavior. He looks
at stock market exits by the same family members. As it
turns out, exit of one family member does not necessarily
precipitate the exit of others, suggesting that herd behavior
does not dominate and lends credence to the idea that information sharing between family members is a more potent
motivator of stock market investment decisions.

“Boomerang Kids: Labor Market Dynamics and Moving
Back Home.” Greg Kaplan, Federal Reserve Bank of
Minneapolis Working Paper No. 675, October 2009.

Stories in the popular press have provided anecdotal
accounts of “boomerang kids.” These are young adults
who have moved back in with their parents after having
initially moved out of the home. Greg Kaplan of the
Minneapolis Fed looks at not only the empirical prevalence of this phenomenon but also how economic activity
may affect such choices.
Kaplan examined the National Longitudinal Survey of
Youth 1997. This survey provides information on labor market behavior and educational outcomes, as well as detailed
information on the youths’ family and community background. Kaplan’s paper examines a sample of young adults
who completed high school but did not attend college.
Among that group, about 51 percent of males and 49 percent
of females returned home for at least one month by age 23.
The intensity of the boomerang effect was strongly related to trends in the labor market. Males who moved out,
became employed, and then unemployed were 64 percent
more likely to return home than those who remained
employed. For females in the same situation, the figure was
72 percent. Kaplan suggests that a careful examination of the
movement characteristics of the college educated would be
a useful addition to his paper and to the anecdotal reports
that have largely focused on this group.

“The Long Run Effects of Changes in Tax Progressivity.”
Daniel R. Carroll and Eric R. Young, Federal Reserve Bank of
Cleveland Working Paper 09-13, December 2009.

Previous studies often lend support to the notion that
flattening the tax code — in essence, making the
income tax less progressive — would result in gains for the
economy. These gains tend to be a result of more efficient
allocations of capital.
Daniel Carroll of the Cleveland Fed and Eric Young of
the University of Virginia have constructed a model in which
households can more fully insure against economic risk, a
feature missing from many previous models. (An example of
such insurance might be the ability to borrow in the present
based on expected future income.) They find that in such a
world more progressive, though revenue-neutral, tax schedules can actually lead to steady states with as much as 47
percent and 40 percent greater capital and labor input,
respectively. Progressivity increases labor output in simulations of their model because it reallocates labor from less
productive to more productive agents — and this is true
despite a decrease in the total number of hours worked.
Carroll and Young also find that increased progressivity
generally lessens income inequality but raises wealth
inequality.
RF


BY RENEE COURTOIS

Many economists believe that the recent recession is technically over. But it may not feel that way on Main Street. This recession brought the largest post-war upswing in the unemployment rate, rising from a pre-recession low of 4.4 percent to about 10 percent in recent months. Many economists predict a “jobless recovery,” in which gross domestic product — the foremost measure of the economy’s overall output — rises, but employment continues to fall or remains stagnant. Is some of this unemployment here to stay?

Economists often speak of a “natural” rate of unemployment that the economy will gravitate to after working through business cycle fluctuations. There will always be some positive level of unemployment. Firms continually create and destroy jobs in response to supply and demand conditions. Moreover, at any given time some industries are declining while others are expanding. The supply of labor, too, changes with people graduating, retiring, moving between jobs, and choosing to work more or less throughout their lives, and it can take time for job seekers to locate opportunities.

The natural rate of unemployment — or, conversely, the level of “full employment” — is the rate that exists due to this constant churning even when the economy is running smoothly. Before the recession, the Congressional Budget Office, which produces the most widely used estimate of the natural rate, judged it to be about 5 percent. A current unemployment rate of almost double that implies the economy has a long way to go before reaching full employment. But if the natural rate has risen, as some economists suspect, then we may not expect unemployment to fall anytime soon to the low levels seen before the recession.

A Moving Target for Policy
Promoting employment is half of the Federal Reserve’s
mandated policy objective. But when unemployment is
especially low, labor markets are tight and that puts upward
pressure on wages and, therefore, inflation. This poses a
problem for price stability, the other half of the Fed’s
mandate. In general, the Fed’s monetary policy tools push
inflation and unemployment in opposite directions. This
inverse trade-off reflects the famed Phillips curve.
To know what level of unemployment the Fed can
reasonably expect to achieve without igniting inflation, Fed
policymakers must have in mind some estimate of the
natural rate. This is a challenge because the natural rate is
not an observable statistic — it must be inferred from other
data — and it changes over time. The natural rate is determined by features of the economy that are more or less
permanent, like the flexibility of labor markets and the
policies and laws that affect it.
Though these are usually deeply embedded features of an
economy that change slowly over time, this doesn’t mean we
should too easily assume the current natural rate will remain
the status quo. “The medium term natural unemployment
rate can dart around just like any other economic variable,” says Edmund Phelps of Columbia University, who won the
2006 Nobel Prize in economics in part for his work on the
natural rate. For example, he says the natural rate is partly a
function of the values that entrepreneurs and investors put
on business assets. “If that takes a jump, your best guess
about the medium term natural unemployment rate
takes a jump too,” and the actual unemployment rate will
eventually follow. Perhaps the best example of this was in
the late 1990s tech boom that endowed the economy with
lasting productivity gains and, as a result, arguably lowered
the natural rate of unemployment. But it is hard to definitively know in real time whether the natural rate is changing.
Some economists, such as Stanford University’s Robert Hall,
have gone as far as suggesting that the natural rate is too
variable to be useful in policymaking.
The Phillips curve relationship, too, is far from stable.
When it was first documented by New Zealand economist
A.W. Phillips in 1958, economists initially believed the
relationship presented a relatively simple trade-off for
policymakers: If low unemployment was the priority, they
could “buy” it in each period by printing money (or, similarly, through fiscal expansion), fooling employers into thinking
demand for their products had increased, leading them to
hire more workers. This meshed well with the Keynesian
view of the day that endorsed the government’s ability to
manage demand to produce high employment.
But such a policy trade-off was too simple to be true since
it would rely on tricking people indefinitely, as Phelps and
Milton Friedman, also a Nobel laureate, pointed out in their
respective research during the 1960s. Eventually, people
would figure out that the boost in demand was only an illusion created by the increased money supply. Workers would
be unwilling to work at their old wages since inflation had
eroded their purchasing power, and nominal wages would
have to rise at a magnitude equal to the increase in inflation,
bringing unemployment back up to the natural rate. In the
medium run — a period long enough for the economy to go
through this learning process — the result of this attempt at
expansion would be a higher price level with no change in
any “real” economic variable like unemployment or production. “The natural unemployment rate idea is all about how
demand doesn’t matter in the long run,” says Phelps.
Moreover, the inflation trick would only boost employment once or twice before the public grew to expect it.
Then, Friedman explained in 1968, only perpetually higher
and higher inflation surprises would produce the short-run
boost in employment. It wasn’t that the natural rate corresponded to a particular rate of inflation — say, 3.5 percent.
Instead it corresponded to no change in inflation. When
unemployment was equal to its natural rate, inflation could
be expected to be relatively stable. That’s how the natural
rate of unemployment got a rather awkward nickname: the
non-accelerating inflation rate of unemployment, or
NAIRU.
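In standard textbook notation (an illustration using the conventional expectations-augmented Phillips curve, not a formula appearing in this article), the idea can be written as

    \pi_t = \pi_t^e - \beta (u_t - u^*), \qquad \beta > 0,

where \pi_t is inflation, \pi_t^e is expected inflation, u_t is the unemployment rate, and u^* is the natural rate. When u_t = u^*, inflation equals whatever people already expect, so it neither accelerates nor decelerates; that stability is precisely the property that defines the NAIRU.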
[FIGURE: Constant Churning. Actual unemployment and the natural rate, in percent, 1950-2010: the natural rate of unemployment (CBO estimate) and the civilian unemployment rate. NOTES: Shaded areas correspond to recessions. Data through March 2010. SOURCE: Bureau of Labor Statistics/Congressional Budget Office]

Then in the 1970s policymakers learned painfully that high inflation from easy monetary policies doesn't translate to low unemployment. Additionally, oil price spikes made
inflation a more consistent phenomenon. Rising inflation
was simply the norm, so it no longer had the beneficial effect
on unemployment. Stagflation, or simultaneously rising
inflation and unemployment, was the result.
The Great Moderation of the 1980s, 1990s, and 2000s
also challenged conventional thinking about the Phillips
curve. The economy performed well during this period, and
both inflation and unemployment were low and stable relative to their historical averages. When macroeconomic
variables don’t vary much, it is harder to identify a statistical
relationship between them. Here the Phillips curve
appeared to be a less concrete description of the short-run
trade-off between inflation and unemployment. A 2001
study by University of California, Los Angeles economists
Andrew Atkeson and Lee Ohanian documented that since
the mid-1980s, the short-run Phillips curve relationship
had not been very stable, and therefore had limited use
for policymakers.
But the Phillips curve may redeem itself when the
variables move to extremes. It is in steep recessions that the
inflation-unemployment relationship seems strongest, argue
San Francisco Fed economists Zheng Liu and Glenn
Rudebusch in a January 2010 analysis. That implies the
Phillips curve relationship, though inconclusive, could be a
useful tool for monetary policy as the economy recovers
from the recent severe recession. If the natural rate has not
risen from its pre-recession level of about 5 percent, then
unemployment is currently much too high, and this could
potentially be addressed by sustained accommodative
monetary policy. But if the natural rate has risen, then the
point at which accommodative monetary policy becomes
inflationary should occur sooner.

Prospects for a Jobless Recovery
That means economists have to turn to the difficult task of
gauging whether the natural rate of unemployment has
changed, and if so, by how much. This depends on the labor market conditions currently contributing to unemployment,
their magnitude, and how permanent they are likely to be.
No two recessions are alike in this regard. In recessions
immediately following World War II, up through that of
the early 1980s, the economy experienced a sharp boost
in employment soon after each recession’s trough. But the
more recent recessions of 1990-91 and 2001 were characterized as jobless recoveries with sluggish or nonexistent job
growth even as GDP recovered.
At first blush the recession that began in 2007 shares
labor market characteristics of both modern and older
recessions, according to New York Fed economist Aysegul
Sahin. Two important factors contribute to an increase in
the unemployment rate: how many workers lose their jobs
and flow into unemployment, and how many find new ones
and flow out of unemployment. The recent recession began
with a large number of layoffs, causing an increase in inflows
into unemployment like the also-steep recessions of the
1970s and 1980s. In addition, even after layoffs subsided,
unemployed workers continued to have difficulty finding
new jobs, causing a drop in outflows, similar to the
recessions of the early 1990s and 2001. As a result, the unemployment rate more than doubled from 5 percent at the start
of the recession in December 2007 to a high of 10.1 percent
in October 2009, and still remains at elevated levels.
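A rough way to see how these two flows pin down the unemployment rate is the standard steady-state approximation from the worker-flows literature (the numbers below are illustrative, not figures from the article). If a fraction s of workers flows into unemployment each period and a fraction f of the unemployed flows out into jobs, unemployment settles near

    u \approx \frac{s}{s + f}.

With s = 2 percent and f = 40 percent, u is roughly 0.02/0.42, or about 4.8 percent. If the outflow rate halves to 20 percent, u jumps to roughly 0.02/0.22, about 9.1 percent, even with no further surge in layoffs. That is why the behavior of outflows is the crux of the recovery.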
So which employment recovery will the current recession
resemble? As with all recessions, inflow rates have receded
as the economy has begun to pull out of the recession, so the
mystery is how outflow rates will behave going forward.
Outflows depend on two factors: job creation and the
labor market’s ability to match job seekers with openings.
Job creation should be relatively swift if much of unemployment is cyclical — that is, a result of the business cycle —
writes Chicago Fed economist Ellen Rissman in a 2009
study. Laid-off workers can simply be called back to work
when demand for goods and services picks back up. But
unemployment resulting from structural realignment — in
which some industries decrease in size for good — tends to
hang on longer. Affected employees must find new industries, which in some cases will mean moving to new locations
or acquiring new job skills.
It’s not clear how much of the current unemployment
rate comes from structural realignment. Some industries
have been hit harder than others in the recent recession,
most notably those that expanded as a result of the housing
and lending boom. The so-called FIRE sector — finance,
insurance, and real estate — as well as construction, have all
declined more than employment as a whole. Total nonfarm
employment has contracted by more than 5 percent since
the recession’s start, while the FIRE sector has lost more
than 6 percent of its jobs. In construction, more than 25 percent of jobs have been eliminated.
Much of the unemployment within construction and the
FIRE sector is indeed being caused by large structural
reallocations, according to Rissman. But so far structural
realignment has not added much to the economy's overall unemployment rate, she found in her 2009 study. Economist
Rob Valletta and analyst Aisling Cleary of the San Francisco
Fed also examined the role of sectoral imbalances in late
2008. Based on updated analyses using data through the end
of 2009, Valletta reports that labor demand imbalances
across industries — which require a reallocation of workers
— did not appear to be adding much to overall unemployment. In fact, they found the imbalances had begun to
dissipate in the second half of 2009. Their findings imply
that the increase in unemployment during 2008-2009 was
primarily cyclical rather than structural.
One possible explanation for this is that the sectors economists think might be permanently shrinking represent a
relatively small component of total employment.
Manufacturing, for example, was about 10 percent of total
nonfarm employment before the recession and also was hit
hard during the downturn. Though the hit to construction
has been severe, the sector constituted just 5.4 percent of
total employment going into the recession.
But the nature of layoffs may provide some evidence of
structural realignment. “Starting from the 1990s, firms’ use
of temporary layoffs declined a lot. As a result only a tiny
fraction of unemployed people today are on temporary
layoff,” Sahin says. A job’s permanent eradication may be a
harbinger of a permanent shift away from that industry.
Even if permanent layoffs don’t reflect a structural
realignment, they can still hint at a slower employment
recovery. “Temporary layoffs are very easy to reverse because
you basically have the desk, the computer, and now you just
call the worker back,” Sahin says. “It’s much cheaper than
actually setting up a new position and investing in the
capital and posting a vacancy.”
To the extent that a worker’s old job has permanently
gone away, re-employment will have to come from new job
creation. Once demand starts to pick up, firms still may be
hesitant to hire until they are sure to be out of the woods
economically — especially if they can tap into other means
of producing more with the same number of workers in the
meantime. Some firms may increase the number of hours
their current employees work without hiring new workers,
or they call on temp workers when possible. “They can just
push the existing workers even more because the quit rate is
very low,” Sahin says. “Less people produce more as a result.
In those recessions productivity increases a lot.”
This seems to be the case now, since productivity
has stayed strong in this recession. In the 2001 recession
productivity growth never dropped below 2 percent, note
Cleveland Fed economists Paul Bauer and Michael Shenk. So
far in this recession productivity growth dropped to a low of
1.4 percent before surging well above 5 percent at the end of
2009. Historically, productivity growth would fall or even
turn negative in recessions. Most economists point to “labor
hoarding,” in which firms hang on to workers through the
recession so as to not lose good workers familiar with their
production. In the recent recession, productivity coming
from more intense use of capital has increased even though investment has not, implying that employers are indeed finding ways to produce more with the same number of workers.

[FIGURE: History of the Jobless Recovery. Employment recoveries have been slower since the 1990s: year-over-year percent change in nonfarm employment, 1970-2010. NOTE: Shaded areas correspond to recessions. SOURCE: Bureau of Labor Statistics]
Regardless of how many jobs are created, the labor
market’s efficiency at matching available workers to jobs has
gone down in this recession, Sahin says, which puts a crimp
in employment recovery. There are several explanations
behind this lower match efficiency. Workers’ skills may not
sync well to the jobs opening up, especially if the jobs are in
new industries. “It could be that lots of people lost manufacturing jobs and there are many jobs in the health sector, but
they are not good matches so they need retraining,” Sahin
says. Rissman suggests that workers in the FIRE sector, for
instance, may have skills that are more easily transferable to
other industries whereas workers in construction have skills
that are not as easily adaptable. Also, those who have been
unemployed for long durations can experience skill depreciation. And even if a worker’s marketable skills are still largely
intact, long spells of unemployment may appear as a negative signal to would-be employers.
The weak housing market may also limit the ability or
willingness of some unemployed to relocate to new jobs. It’s
not clear how quantitatively important this effect could be,
but it could be geographically concentrated in areas that are
economically struggling and have weak housing markets.
Economists Fernando Ferreira and Joseph Gyourko of the
University of Pennsylvania and Joseph Tracy of the New
York Fed found in a 2009 study that negative equity in one’s
home reduced mobility of affected households between 1985
and 2007, making them one-third less mobile on average.
Their results do not cover the current housing downturn,
but their evidence would be consistent with the most recent
U.S. Census estimates that the “mover rate” — a measure that captures the mobility of households — fell in 2008
to the lowest level since the data were first collected in
1948. The proportion of movers who have stayed within the
same county has spiked, while the proportion who have
moved out of state has fallen to the lowest level since
the mid-1990s.

The Role of Demographics in the Natural Rate of Unemployment
“[M]any of the market characteristics that determine
[the level of the natural rate of unemployment] are manmade and policy-made,” Milton Friedman said in 1968. But
one of the most important determinants of the natural rate
of unemployment is entirely out of the control of policymakers: the composition of the labor force. Changes in
demographics, particularly concerning the average age of
workers, account for the bulk of the shifts in the natural
rate over time. Younger workers are more likely to change
jobs than middle-aged people who have a mortgage
and other responsibilities, and they are more prone to
unemployment.
Thus, the biggest contribution that demographics
makes to the natural rate is the proportion of the work
force aged 25 or less. It is easy to understand why when you
look at unemployment rates by age group. Before the
recession, the 16-19 age group had an average unemployment rate in excess of 15 percent, compared with about
8 percent for 20- to 24-year-olds, and well under 4 percent
for workers aged 25 and above.
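A stylized calculation shows the mechanics (the shares and rates here are illustrative, only loosely inspired by the figures above). The aggregate natural rate is a share-weighted average of group rates,

    u = w \, u_{young} + (1 - w) \, u_{older}.

If younger workers face a 10 percent rate and older workers a 4 percent rate, a labor force that is 15 percent young implies u = 0.15(10) + 0.85(4) = 4.9 percent; raise the young share to 25 percent and u = 0.25(10) + 0.75(4) = 5.5 percent, with no change in any individual worker's behavior.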

This explains why the natural rate of unemployment
rose in the 1980s, when a large crop of baby boomers
entered the labor market. Then in the 1990s, after that
major component of the labor force had aged some, the
natural rate fell. Now many baby boomers are retiring,
lowering the average age of labor force participants.
Nobel laureate Edmund Phelps of Columbia University
says it is the policy response to retiring baby boomers
that could most affect the natural rate. “I’ve been bracing
for a rise in the natural rate for a long time on the
thinking that as we get nearer to 2020, when spending for
Social Security, retirement pensions, and Medicare and so
forth reaches full force, markets will start to factor that
in and that will mean expectations of higher tax rates
sooner or later to pay for those entitlements.” That should
depress business asset values, reducing the return to
capital investment and innovation, he says. That slower
innovation could set a new higher floor for the natural rate
of unemployment.
— RENEE COURTOIS


[FIGURE: Some Worse Off than Others. Job losses in selected industries since the start of the recession: cumulative job losses in thousands, December 2007 to March 2010, for information; leisure and hospitality; FIRE (finance, insurance, real estate); construction; and manufacturing. SOURCE: Bureau of Labor Statistics]

The federal government's expansion of unemployment benefits also could temporarily reduce the labor market's match efficiency. More generous unemployment insurance (UI) regimes have been known to contribute to unemployment for two reasons: Workers receiving UI can be more selective about the job they choose to take, and some unemployed workers who otherwise may have stopped looking for jobs (and therefore would no longer be included in unemployed numbers) may keep searching to continue receiving
benefits. But this effect is not likely to be large or persistent.
“Even with the temporary increase in generosity, benefits
are still pretty stingy by the standards of other advanced
economies,” says economist Larry Ball of Johns Hopkins
University. “This is not only temporary but, even while it
lasts, a pretty small step in the direction of the welfare
state.” Sahin adds, “People expect benefits to go down as the
economy recovers.”

Is High Unemployment the New Normal?
While it is relatively easy to lay out the risk factors that
could point to a jobless recovery, it is highly uncertain how
important each of these factors might be. “We believe there
are temporary factors that are causing the unemployment
rate to be higher than suggested by the stable relationships
in the U.S. economy,” Sahin says, summarizing a recent
analysis of the behavior of employment in the recent recession with coauthors Bart Hobijn and Michael Elsby, of the
San Francisco Fed and the University of Michigan, respectively. These factors could contribute to the risk of a jobless
recovery, she says. But there is good news: The U.S. labor
market is exceptionally dynamic and flexible, which might
help it work through these temporary issues faster.

Would a slow employment recovery mean the natural
rate of unemployment has risen too? That is a much trickier
issue than simply deriving an outlook for employment.
Economists think of the natural rate of unemployment as a
function of structural and permanent features of the
economy. That said, the difference between a shock that
takes a long time to work through and a rise in the natural
rate is, to a degree, a matter of semantics. “There’s not a
clear distinction between what’s really permanent and
what just takes a long time,” Ball says. “One way to think
about a change in the natural rate is something that lasts
substantially beyond two or three years.”
The outlook for the natural rate and actual unemployment will also depend on prospects for the economy’s
future. Though a theoretical premise of the natural rate of
unemployment is that demand doesn’t affect the level of
employment in the long run, Phelps says, “that’s not to say
that the structure of demand doesn’t matter.” In particular, he
says, investment demand has a large impact on the natural
rate. “I think you’re going to have an overhang of people
whose careers were tied to investment-like activities who are
not going to get picked up again for employment unless, and
until, there’s a revival of business investment and more
generally of forward-looking projects in companies.”
Though economists can’t with certainty pin a number to
the changing natural rate in real time, Phelps ventures what
he admits is a rough estimate: “I think the new normal is
somewhere between six and a half and eight percent,” he
says, an estimate which ranges from “very rosy” to “maybe a
little too pessimistic.”
One could also try to pin down the natural rate in terms
of the Phillips curve — that is, in terms of what’s simultaneously happening with inflation. “It seems very unlikely that
we’re going to be back down to five percent unemployment
or very close to that in the next five years,” says Ball, “and
inflation seems very stable. And the definition of the
NAIRU is the unemployment rate consistent with stable
inflation,” he reminds. “So if we see unemployment staying
well above five percent without inflation continuously
falling, then by definition the NAIRU has risen.” That is, he
adds, if our basic model of the Phillips curve is right.
RF

READINGS
Ball, Laurence, and N. Gregory Mankiw. “The NAIRU in Theory
and Practice.” Journal of Economic Perspectives, Fall 2002, vol. 16,
no. 4, pp. 115-136.
Elsby, Michael, Bart Hobijn, and Aysegul Sahin. “The Labor
Market in the Great Recession.” Unpublished manuscript,
March 2010.
Friedman, Milton. “The Role of Monetary Policy.” The American
Economic Review, March 1968, vol. 58, no. 1, pp. 1-17.


Lacker, Jeffrey, and John Weinberg. “Inflation and Unemployment:
A Layperson’s Guide to the Phillips Curve.” Federal Reserve Bank of
Richmond Economic Quarterly, Summer 2007, vol. 93, no. 3,
pp. 201-227.
Liu, Zheng, and Glenn Rudebusch. “Inflation: Mind the Gap.”
Federal Reserve Bank of San Francisco Economic Letter, 2010-02,
Jan. 19, 2010.
Rissman, Ellen. “Employment Growth: Cyclical Movements or
Structural Change?” Federal Reserve Bank of Chicago Economic
Perspectives, Fourth Quarter 2009, pp. 40-57.

Shoppers for the Long Haul
The past, present, and future of consumption
BY BETTY JOYCE NASH

Family Dollar Stores Inc. has changed its product
mix to reflect preferences for consumables such
as canned food or bread or paper towels. The
efforts of the Matthews, N.C.-based retailer have paid off.
December sales grew 4 percent above the same month in
2008, even in stores open for at least a year. They’re now
attracting higher-income shoppers they hope will return
even when consumers start spending more freely.
In this recession, people have cut back on purchases of
everything from new clothes to homes to refrigerators. In
2009, overall personal spending declined, particularly on
vehicles and other household durable goods. Spending on
services was more stable, but people ate out less and cut
back on vacations, even though disposable income rose by
1.1 percent, largely because of tax cuts and rebates.
In contrast, the savings rate for the year reached 4.3 percent, up from 2.6 percent in 2008. This makes economic sense after a
shock. People will “rein in their spending aggressively in
order to increase their buffer stock of savings,” says Satyajit
Chatterjee, an economist at the Philadelphia Fed.
Consumer activity still represents the biggest chunk of
the nation’s output, as it does in most countries. But today
people are spending less for many reasons. Perceptions
about future wealth are one of the biggest influences on
spending.


Income and Consumption
How people view future income growth potential affects
spending patterns. But for decades economists took a view
of consumption that didn’t account for these expectations.
John Maynard Keynes introduced the first “consumption
function” in 1936 to chart the relationship between annual
disposable income and consumer spending. His “absolute
income hypothesis” suggested that consumption depends
on current income only. Available household data at the time
seemed to bear this out: Higher-earning households tended
to spend more than poorer households although the portion
of wealthier households’ income consumed was smaller.
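In textbook form (a conventional rendering of the Keynesian consumption function, not notation used in the article), the absolute income hypothesis can be written as

    C = a + bY, \qquad a > 0, \quad 0 < b < 1,

where C is consumption and Y is current disposable income. Because the average propensity to consume, C/Y = a/Y + b, falls as Y rises, the formula matches the household pattern described above: richer households spend more in dollars but consume a smaller share of their income.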
The real test of this theory would be whether it held for
aggregate consumer spending and aggregate income at different points in time. In Keynes’ day, data on aggregate
spending and income for different years weren’t available,
explains Chatterjee. The spending-income relationship has
since been studied as scholars have developed better
datasets and statistical techniques. Two newer theories add
important insights.
Both theories are based on assumptions of “rational
choice.” Both estimate how a household will act over time
in the face of uncertainty about future income. Nobel Prize-winning economist Franco Modigliani explained that people
spend and save according to expected lifetime income, not
current income. “In a rational choice context, current
spending need not respond to a change in current income if
that change is fully anticipated in a previous period,”
Chatterjee writes of Modigliani’s research with his student
Richard Brumberg. An implication of the model is that economic growth raises average lifetime incomes, leading people to expect to become richer over time; this suggests that aggregate spending will grow proportionally.
Yet Milton Friedman may have been most responsible for
showing how the relationship between consumption and
income would be borne out in real life. In his 1957 book,
A Theory of the Consumption Function, he developed the
“permanent income hypothesis.” People are more likely to
consume more when permanent income — expected lifetime income — rises. He argued that if income jumps and
there’s a reason to believe it’s temporary, people may
consume more but not as much as if they had received a
permanent raise. A similar relationship can be inferred from
a drop in income: Becoming temporarily poorer may not
inhibit consumption in the short-term as much as an
expected long-term decline might.
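A stylized example captures the logic (illustrative numbers, ignoring interest and uncertainty). A household that receives a one-time $1,000 windfall and smooths it over, say, 40 remaining years raises annual consumption by only

    \Delta C \approx \frac{\$1{,}000}{40} = \$25 \text{ per year},

whereas a permanent $1,000-per-year raise would lift annual consumption by nearly the full $1,000. The same asymmetry, run in reverse, is why an income drop perceived as temporary need not crush current spending.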
In each of these theories, the relevant comparison is
between consumption and expectations of lifetime income
or total wealth. Empirical tests of these theories have
yielded mixed results. But the element that survives in most
studies is the underlying approach that recognizes household response to expectations about the future.
In the 1990s, work by economists Christopher Carroll of Johns Hopkins University and Angus Deaton of Princeton University added the idea of “precautionary savings” to such models. Simulations reveal that people often accumulate a buffer stock of savings to protect their households
from unforeseen circumstances.
These theories may explain why people who make more
money are shopping at discount stores such as Family Dollar,
and also may explain rising savings rates in the first half of
2009. Expectations about the future would have to be a part
of any explanation for this renewed savings behavior. Theory
helps explain consumer response in the current economic
climate but can’t forecast when people will start spending.
That will occur as people pay down debt, accumulate rainy
day funds, and regain confidence about future income.

The Contours of Consumer Spending
Decreases in consumer spending like this haven’t been seen
since the recession of the early 1980s. This time around, the credit market conditions and the growing home equity that softened the 2001 recession are missing.

[FIGURE: The Change in Goods and Services Consumption (Selected Categories). Percent of total personal consumption expenditures, 1950 versus 2009, for services; nondurable goods; durable goods (motor vehicles/parts and furnishings/household equipment); housing/utilities; health care; groceries; and clothing/footwear. SOURCE: Bureau of Economic Analysis]
As people consumed in line with anticipated wealth based on rising house prices, they helped fuel economic
expansion until house prices fell and triggered the financial
crisis. Then spending slowed and so did growth in the
nation’s gross domestic product (GDP). These personal consumption expenditures (PCE) dominate GDP and are said
to drive the economy.
Consumers’ share of GDP has grown over time. In 1951,
PCE was 61.5 percent of GDP, where it stayed for three
decades until it rose to 63 percent in 1980. By 2008, it comprised 70.1 percent. Today, spending on services ($6.8 trillion)
is almost twice as large as spending on goods ($3.3 trillion).
In 1950, services represented about 40 percent of spending, but today the share hovers around 70 percent (see figure above).
Since that time, real incomes have grown, along with
household wealth. From 1959 to 2000, real per-capita disposable personal income grew at an average annual rate of 2.3
percent. More women entered the work force, which encouraged even more services purchases — think day care
and eating out. During that time, there was also the addition
of government payments for health care via Medicare and
Medicaid. As workers matured, they also earned more to pay
for such services. Increasing household wealth through
homeownership and individual stock purchases also fueled
discretionary spending on vacations, vehicles, and electronics. Technological change also expanded the field of
consumer options for both goods and services.
Growth in services has partly come from sectors that
have seen rapid technological innovation such as communications. In 1995 purchases of communications services
accounted for $89.3 billion (in 2005 dollars), or about 1.5 percent of PCE that year. In just 14 years, the category has
grown to 2.4 percent of PCE, and more than doubled in real
terms ($231 billion).
Meanwhile, people spent less of their overall budget on
nondurable goods such as food to prepare at home, and less on durable goods like automobiles and furniture. The
decline in clothing expenditures partly reflected falling
relative prices.
By far the biggest services category is health care — $1.4
trillion, or more than 15 percent of PCE in 2009. That’s
roughly the same share of PCE as it was in 1995, but more
than five times what it was in 1950. That includes doctor
and dental services as well as home health care and expenditures on medical labs.
The durable goods category is often considered a barometer of economic activity because it includes items such as
appliances, cars, and electronics — goods that wear out
over time. Purchases of durable goods tend to rise with
economic expansions and fall in economic contractions.
In fact, the variance in durable goods purchases is much
higher than that of personal consumption expenditures
generally. For example, spending on household durables
went from $820 billion in 2000 to almost $1.2 trillion in
2007 in real terms, which reflects the run-up in housing,
before declining in 2008 and 2009. During the recession of
1960, durable goods purchases fell by 12 percent. In the 1980
recession, they fell by 13.4 percent, and by 10 percent in 1990
through the first quarter of 1991 (see figure below).

How Personal Are Consumption Expenditures?
Spending money on cell phones, on eating out, or on refrigerators is fairly easy to understand, but other elements of the
PCE are less transparent. The PCE tracks money households pay for everything from products such as carpets, tools, computers, cereal, meats, jewelry, and therapeutic appliances to services such as health care and dry cleaning. The category also
includes money consumers don’t spend except indirectly
through taxes.
Health care goods and services, for instance, include government payments to physicians and hospitals. The total health care category represents about 15 percent of GDP. Yet only about 15 percent of that health care spending is out of
pocket, according to economist Michael Mandel, formerly
of Business Week. The remainder comes from government or
employee health plans.
There’s also the amount of “imputed rental value” for housing. The government assumes an imputed rent on housing
even if a house is paid off. This figure is included to help capture in the PCE data the amount of money spent on shelter.
Then there’s an amount imputed for financial services
such as interest-free checking accounts. That's not an explicit cost but an opportunity cost: the bank pays no interest yet gets use of the money. Had customers invested those savings elsewhere, they might have earned a return
on the money. Other items, like the net income of nonprofits, are also not strictly consumer-driven but are included as
consumption expenditures in the national accounts.
Another 12 percent of GDP is represented by imports
such as computers and televisions. These are goods manufactured elsewhere and the only contribution to U.S. GDP is
perhaps through the money spent to transport or sell those goods. Taking that into account changes the math. Mandel suggests that domestically produced goods and services drive less economic activity than is commonly cited.

Household Behavior
Overall spending barely dipped in the 2001 recession, as
people were able to consume via credit card borrowing or
home equity loans. “Interest rates were low and people were
able to take advantage of that and borrow more; there was a
lot of cash-out refinancing that made for a mild recession”
in 2001, says Karen Dynan, currently of the Brookings
Institution and formerly an economist at the Federal
Reserve Board of Governors.
In the past two decades, credit innovation and increasing
incomes allowed people to align spending with long-term
income prospects. Wide participation in equity markets and
increasing home prices added to household wealth.
Aggregate household wealth stayed at about four times
aggregate personal income from 1960 through the mid-1990s. It then grew to 5.25 and 5.5 times personal income in
1999 and 2006, respectively, according to Dynan in a 2009
paper published in the Journal of Economic Perspectives.
Meanwhile, the debt-to-income ratio has grown.
Households as a group came into this recession more highly
indebted than in other recent recessions, Chatterjee
explains. And that’s worked against spending. Credit card
firms have responded to the downturn “by slashing credit
limits and raising interest rates because the loans look more
risky to them now.” And that led to curtailed spending as
households aggressively paid down debt, another way of
increasing savings. Consumers are also saving another way:
cash-in refinancing. That's when people pay down mortgage principal when they refinance, reducing the loan balance and future payments. The government-sponsored housing finance company Freddie Mac reported that in the final quarter of 2009, cash-in refinancings grew to a third of all refinancings.
Spending will certainly return, but how quickly? The
drop in consumer spending in the early 1980s was followed
by a spending spree. “After recession ended in 1982, we saw a
real snapback in consumption — 5 percent or 6 percent
growth in the first year. That is unlikely to happen in this
episode,” Dynan says. The unemployment rate in that recession reached the same level as that of the current downturn,
but the damage to household balance sheets will linger.
While consumers “have recovered some, on net they are still
down by a substantial amount, about 20 percent, and they
lost tremendous wealth through their homes.”

[FIGURE: Durable Goods Expenditures. Year-over-year percent change in real personal consumption expenditures and real durable goods expenditures, 1960-2009. NOTE: Shaded areas correspond to recessions. SOURCE: Bureau of Economic Analysis]

Savings rates, however, are going up. Between 1959 and 1990, savings averaged almost 9 percent, but the rate went negative in the third quarter of 2005. Soon after, in the
second quarter of 2006, household borrowing peaked at
nearly $1.4 trillion before falling to a negative $279 billion by
third quarter 2008. Households are now de-leveraging by
paying down (or defaulting on) their debt, according to a
2009 paper by economist Riccardo DiCecio and research
associate Charles Gascon of the St. Louis Fed. Savings rates
have climbed since 2008.
That has an upside since long-term economic growth
stems from innovation and capital investment. By examining
savings rates relative to GDP growth since 1948, St. Louis Fed
economist Daniel Thornton found that the savings rate grew
from 6 percent in the late 1940s to 12.5 percent in the second quarter of 1975, and fell to 1.2 percent by the fourth quarter of 2007.
“Over these same periods, output grew at rates of 3.8 percent,
3.2 percent, and minus 2.4 percent respectively.” Personal
savings and growth may correlate, and a higher savings rate
may not hinder an economic expansion, although these
results are not conclusive. More savings, besides buffering
economic shocks to households, could flow to increased capital investment. Mainly, these results serve to underscore that
the relationship between household savings and broader
macroeconomic growth is more complicated than it may appear.
As households rebuild wealth and feel secure about
employment, spending will likely resume. But will the
decline in current wealth be seen by households as a permanent change or a temporary one? Stores like Family Dollar
aren’t waiting for an academic consensus on that. They are
planning future product mixes on the assumption that they
will keep their new customers in the future — the ones who
traded down when the purse strings tightened.
RF

READINGS
Chatterjee, Satyajit. “The Peopling of Macroeconomics:
Microeconomics of Aggregate Consumer Expenditures.” Federal
Reserve Bank of Philadelphia Business Review, Quarter One, 2009,
pp. 1-10.
Dynan, Karen. “Changing Household Financial Opportunities and Economic Security.” Journal of Economic Perspectives, Fall 2009,
vol. 23, no. 4, pp. 49-68.
Thornton, Daniel. “Personal Saving and Economic Growth.”
Federal Reserve Bank of St. Louis Economic Synopses no. 46,
December 2009.


The National Headcount
Census emphasizes outreach to improve accuracy
BY BETTY JOYCE NASH

Counting the nation's diverse and mobile population
can be difficult and contentious. The final numbers
will determine the state and local funding allocations for many federal spending programs for a decade, and
can also reassign seats in the 435-member U.S. House of
Representatives. South Carolina, for instance, may gain a
seat after 2010’s final tally.
But this year’s census — or that of any year — couldn’t
possibly count households everywhere with 100 percent
accuracy. For one thing, in 2000, the final response rate was
only 67 percent. That’s down from a high of 78 percent in
1970 but better than 1990’s rate of 65 percent. And imagine
the possibilities for error. You might count your college student on the form but the college does too. Parents of
multiple children may not list the baby. Fear can prevent the
poor or undocumented from naming people in the household.
One way to increase response rates is to increase awareness of the national headcount. Some of the economic
stimulus money has been used to double the resources
devoted to publicizing Census 2010. This effort will help but
may not completely solve the disproportionate undercounts
of minorities and overcounts of whites first identified in 1940. In 2000, for instance, whites were overcounted by an
estimated 1 percent.
The emphasis on advertising will help reach people who
are hard to count. This is an alternative to relying on statistical adjustments and allocation to the local level after the
fact, ever a controversial practice. Even though Census 2010
numbers will not be statistically adjusted, the debate
remains unresolved over how to account for those who
might be missed.


Census 2010
With $14 billion in expenditures, 1.4 million temporary
employees, and 500 field offices, the decennial census is the
nation’s biggest peacetime undertaking. Census data underpinned some $430 billion in federal assistance to states in
fiscal 2008, according to a 2009 Brookings Institution analysis. That’s money disbursed for Medicaid and education and
many other programs. People in business also use the data to
make investment, location, capital expenditure, and
employment decisions. Migration, commuting, and housing
patterns as well as education, income, and information
about poverty emerge from the data.
For Census 2010, the federal government is spending
roughly $400 million, including $250 million in stimulus
funds, to advertise and promote the count. The ad campaign
even included a Super Bowl TV spot. This is only the second
paid ad campaign in its history — the first ever was in 2000.


Previous censuses relied on public service announcements,
typically aired at times like 2 a.m., when most people aren’t
watching television.
Census 2010 also hit the road and the Internet to generate buzz. Representatives traveled to communities like
Gaffney, S.C., where an event included a performance by a
Hispanic dance troupe. The road show also went to
Huntington, W.Va.’s Marshall University to remind students
to list Cabell County as their primary residence since they
live there more than six months of the year.
To simplify the process this year, the Census Bureau has
switched to a short form with fewer questions, leaving
detailed information to the timelier, monthly American
Community Survey, introduced earlier in the decade. ACS is
a rolling sample of 250,000 households designed to provide
detail. Aggregated over the decade, ACS will in theory provide the same number of interviews captured by the long
form in years past.
Completing and returning a census form is required by
law, and the Census Bureau follows up with nonrespondents
by telephone or in person. Still, final response rates vary
from state to state. For instance, in South Carolina, it was 58
percent in 2000.
With improved mapping technology and geo-coding,
workers canvass neighborhoods using handheld computers
to verify addresses. Technological glitches, however, have
prevented the use of handhelds in the follow-up visits to
nonresponders.
Contacting the least reachable is the goal: the poor,
minorities, children, and immigrants who comprise the
undercounted. “That’s where the resources have shifted
instead of working on a technical adjustment process,” says
Margo Anderson, a professor at the University of Wisconsin,
Milwaukee. Anderson and co-author Stephen Fienberg
of Carnegie Mellon University have written widely about
census and statistical sampling controversies, including
the 1999 book, Who Counts? The Politics of Census-Taking in
Contemporary America.
In every city, committees have been formed to tap grassroots groups to publicize and demystify the census. Carmen
Morosan is a Baltimore city planner who is coordinating
efforts to ensure a successful count. In 2000, the census
missed less than 1 percent of Baltimore’s population of
651,000. Mail responses are the most accurate, yet in 2000,
Baltimore’s mail response rate of 53 percent was the lowest
in the nation among cities with similar populations, according to Morosan.
Education about the purpose of the census is critical
because the counters on foot with clipboards may not fare
any better. “When someone comes to your door, you might

not want to answer questions for a stranger,” Morosan says.
The city has characteristics typical of hard-to-count areas: a high percentage of residents on public assistance, a high ratio of renters to owners, and a higher-than-average number of unoccupied housing units, among others. Surveys in general
don’t do well there, Morosan says, and the census is no
exception. “There’s a lack of understanding about the
purpose and benefits.”

[PHOTO: The Census Road Show stopped in Gaffney, S.C., and the rally featured the Ballet Folklorico Internacional of Greenville, S.C. PHOTOGRAPHY: U.S. CENSUS BUREAU]

Census and Statistical Sampling
The first census was held in 1790, mandated by the
Constitution, with federal marshals directed to count
people. This involved hiring reputable assistants who would
canvass towns and territories. Assistants sometimes tallied
on court day, the day people came to town. “They were told
to visit each home, but obviously in the frontier world in
much of the 18th and 19th century, that was hit or miss,”
Anderson says. And in the mid- to late-19th century, enumerators were provided army escorts in frontier areas. “At
no point in the nation’s history was there a physical count of
each person in the country.”
In 1940, a natural experiment revealed the level of what’s
known as the “differential” undercount when 453,000 more
men registered for the draft than had been recorded by
the previous April’s Census. Though the results varied by
region and race, 13 percent of draft-eligible black men
had been missed. Nationally, 229,000 more black men
registered for the draft than would have been expected from
Census estimates. Overall, the net undercount in 1940 was
5.6 percent, 10.3 percent for all blacks and 5.1 percent for
nonblacks.
While there had been complaints about the census
before, it wasn’t until the development of large-scale data
systems that alternative estimates could be compared to
census numbers. Until the 1960s, the undercount and methods to evaluate the work of the Census Bureau held interest
for few besides statisticians. The increasing flow of taxpayer
money through urban renewal, highway, public health, and
other government programs, though, upped the ante on the
census count.
That was the era of Great Society programs and equal
protection laws, when funds began to be disbursed according to the headcount data. Voting rights tests hinged on
population numbers in voting precincts. And in 1962, the
Supreme Court decided a case that set off a chain of reapportionment lawsuits. More than ever, accuracy counted.
By 1970, coalitions of state and local officials and private
citizens had started to challenge methods through lawsuits.
The government usually won. A case over the potential undercount in the 1990 Census, brought in 1988 by a coalition of city and state governments led by New York City, reached the Supreme Court, which ruled in 1996. The plaintiffs sought to
reinstate a statistical sampling plan that had been developed
by panels from the National Academy of Sciences as well
as private and government researchers. The issue was over
post-enumeration surveys that could estimate population in areas of high undercount. Ultimately, the Commerce
Department, the agency in which the Census Bureau is
based, opted against adjustment. The Supreme Court
upheld the department’s decision.
In the 1999 case of Department of Commerce v. House of
Representatives, the Supreme Court disallowed sampling
but only for congressional apportionment. The court decisions, however, didn’t end the sampling controversy.

Adjusting the Count
Since the 1950s, the Census Bureau has used probability-based evaluations of population subsets, in addition to other
demographic tools, to assess accuracy. One type of demographic analysis takes vital records data and immigration
records and projects the size of any particular cohort.
“So, we can make an estimate of how many white females
aged 40 to 44 there are in the country. Then you look
and see what number comes out in the census,” Anderson
notes. But that doesn’t reveal the location of those over- or
undercounted.
The second method is capture-recapture, first used to
count wildlife. The idea is to combine two estimates to
generate one that is closer to the actual number. In the census, the traditional count is the “capture” phase and a second
nationwide survey serves as the “re-capture” phase. That
allows an estimate to be extrapolated. This year, the instruments to allocate population to local jurisdictions based on
the derived estimates have not been put in place. It would
take, according to Anderson, a large-scale sample size to
ensure accuracy. This estimated allocation was planned for
the 1990 and 2000 censuses, but did not happen and will not
be part of Census 2010 either.
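The arithmetic behind capture-recapture is the classic Lincoln-Petersen estimator (a textbook formula, not one spelled out in the article). If the census counts n_1 people in an area, an independent follow-up survey counts n_2, and m people appear on both lists, the implied population is

    \hat{N} = \frac{n_1 \, n_2}{m}.

For illustration, if the census finds 90 people, the survey finds 80, and 72 appear in both, the estimate is (90 \times 80)/72 = 100, implying the census missed about 10 people.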
As it turns out, Census 2000 overcounted the population
by several million. While over- and undercounts are not
unusual, on net, until 2000, there was always an undercount.
In 2000, proposals for sampling in the case of follow-up
(when people can’t be reached or don’t return the survey
form) met with resistance and were eventually abandoned
continued on page 26


Nobody’s Home
Weighing the prospects for neighborhoods hit hard by foreclosure
BY RENEE COURTOIS

In some of the nation's neighborhoods, the height of grass is a serious
economic indicator. “The first indication we have that a house is vacant is
that a neighbor will call about tall grass and weeds,” says Liz Via-Gossman,
director of community development for the city of Manassas, Va.


The eyesore of unkempt lawns has helped the city detect
properties that have become vacant as a result of the housing downturn and foreclosure crisis. The City of Manassas
received about six tall grass complaints in 2007. By halfway
through 2008, when foreclosures had really begun to mount
around the country, it had received 277.
Manassas is one of the areas hit hardest by the foreclosure crisis in the Fifth District, together with the nearby city
of Manassas Park and surrounding Prince William County.
All are located just southwest of Washington, D.C. Prince
William’s foreclosure rate is about double that of Virginia as
a whole.
What has been left behind by the nationwide housing
downturn is a record number of vacant homes throughout
the country: nearly 19 million housing units, according to
Census estimates, about 14 percent of total housing units.
To be sure, many of those empty homes are for sale through
the usual process, while others are seasonal homes or those
listed for rent.
But by all accounts, a larger number of homes than ever
before are vacant as a result of the severe housing downturn.
Most notorious are those in the process of foreclosure or held as real estate owned (REO) by banks or other financial institutions. REO homes are foreclosed properties from
which the borrower has been fully extricated. The title has
reverted to the lender or, increasingly, a loan servicer hired
to manage a pool of properties on behalf of investors in
a mortgage-backed security. Yet, in some cases, homes
become vacant prior to foreclosure because the resident has
simply given up on the mortgage and walked away.
Many foreclosed homes are listed for sale, sometimes in a
market saturated with other for-sale properties, which can
depress local house prices if there are enough of them. The
health of the local housing market largely determines
whether a foreclosed property is maintained: If the cost of
maintenance exceeds the net return from the sale of the
property, the home runs a greater risk of being neglected.
The net return depends on the value of the house and the
loan-to-value ratio.
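As a stylized decision rule (an illustration of the incentive just described, not a formula from the article), the holder of a foreclosed property weighs upkeep against the net return from sale, roughly

    \text{net return} = \text{expected sale price} - \text{outstanding loan balance} - \text{selling and maintenance costs}.

When expected prices are low relative to the loan balance (that is, when the loan-to-value ratio is high), the net return can fall below the cost of upkeep, and neglect becomes the privately rational choice.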
Such homes may lack cosmetic maintenance or exhibit
more serious structural damage. Others can be damaged by
normal seasonal patterns. Because some are not properly
winterized, pipes can freeze and burst in cold weather.
Vacant homes without electricity won't have operating sump pumps, and the undrained rainfall can result in mold growth.
The surrounding neighborhoods can also suffer from
foreclosure since empty and decaying homes impose an
externality. High numbers of vacancies may bring down the
value of surrounding properties. If not maintained, these abandoned homes may send potential buyers a negative signal about the area’s desirability. Some research has also
linked high numbers of foreclosures to crime, fire, and the
resulting strains on local government resources — though
the connections here may be tenuous in some cases.
It’s not that foreclosure necessarily leads to deterioration
of a home and the neighborhood in which it resides. In areas
that haven’t experienced a large number of foreclosures, a
foreclosed property is more likely to be cared for and
resold before long. Many foreclosures concentrated in one
area, on the other hand, can imply broader economic strain,
residents’ declining commitment to the neighborhood,
or a misalignment of lenders’ and borrowers’ incentives to
produce the best possible outcome for the home and neighborhood. Those are the places in which a local foreclosure
problem is more likely to result in neighborhood blight.

A Tale of Two Crises
There are generally two types of areas heavily affected by
foreclosure. The first is the archetype of the housing boom
and bust: areas like Phoenix, Las Vegas, and interior
California that experienced massive amounts of new
building. Many buyers in these areas based their purchases
on what seemed at the time to be reasonable expectations
about future income and house price appreciation, but those
expectations often did not prove correct.
The second story is one of structural economic decline
that started long before the housing boom and bust, with
populations abandoning urban cores and, in many cases, the
region as a whole. In areas like Cleveland and Detroit, the
foreclosure incidence can be timed not with the fall in
housing prices, but with the acceleration of the decline
in manufacturing employment around the turn of the
millennium. The Fifth District is home to both types of
areas, including supply-ridden Charleston, S.C., and parts
of the Washington, D.C., area, and regions with longer-term
struggles like Baltimore and much of South Carolina.
This is a simplification, of course, since many areas are
affected by both the housing bust and economic woes. The housing downturn surely exacerbated the economic problems of depressed Ohio and Indiana. And job loss due to the
recession has been a surefire way to push Phoenix and
Miami homeowners into delinquency. In both types of areas,
subprime lending and the credit boom helped get many borrowers into houses they couldn’t sustainably afford.
For areas in longer-term economic decline, today’s abandonment problems may feel a lot like those of the shrinking industrial centers of the late 1960s through the 1980s.
For these areas, the decline has been merely exacerbated by
the foreclosure crisis and recession. But the nation’s more
recent neighborhood abandonment problems also have
some new elements. For the first time, abandonment seems to be affecting neighborhoods whose long-term economic prospects are mostly viable and that are still intrinsically desirable places to live.
Though there is abundant research on the problems that
abandoned homes can pose for surrounding communities,
there is a lack of hard data about where vacancies and foreclosures have begun to degrade neighborhoods. “The
problem is, there’s no way to really track this kind of information except going out physically with a clipboard or
handheld computer and documenting it,” says Alan Mallach,
a senior fellow at the Brookings Institution. “You can track
foreclosures because every time somebody forecloses a
record is created. But nobody creates records about vacant
and abandoned properties. So there’s no overall way of getting a handle on it.”
Still, the best sense of the scope of the blight problem comes from looking at cities where the foreclosure incidence is highest (see map). The comparison isn’t perfect, however, since not all foreclosures result in long-term abandonment and decay. But what’s clear is that
foreclosures, which can be associated with long-term vacancies, are no longer occurring primarily in decaying urban
cores: They are simultaneously affecting small cities, suburbs of big cities, and rural areas.
In the absence of hard data on the problem, local practitioners do the best they can to keep track of and treat vacant
and decaying properties, Via-Gossman says. “I would suspect when the spring rolls around and we hit that grass
season, then we’ll get this year’s picture about where we are
with vacancies.”


The Choice to Foreclose
The severity of the housing crisis is redefining what economists know about the causes of foreclosure.
When economists think about a homeowner’s decision
to default on a mortgage, they borrow from what is called
“option pricing theory.” An option is a security that entitles
the owner to buy or sell an asset in the future at a predetermined price. A “put” option is useful when one expects the
price of the underlying asset to decrease: The holder can sell the asset at the predetermined price rather than the lower price available on the
market. A homeowner’s ability to default on a mortgage is
like a put option when the home’s value is falling, since one is effectively selling the house to the lender.

[Photo: When no foreclosure has been completed, complaints about overgrown lawns can be the first signal to cities that a home is abandoned, like this Prince George’s County, Md., property.]

[Photo: Multiple adjacent homes await buyers in this Fairfax, Va., neighborhood.]
Applying the simplest option model to the housing
market would lead to the conclusion that every homeowner
would default on his mortgage the instant the market value
of his house falls below what he owes on it — a seemingly
alarming prediction given estimates that a quarter of all
homeowners are currently “under water” on their mortgage.
The reason we haven’t seen defaults quite that catastrophic
is that in reality there are transaction costs to walking away,
as well as a number of personal considerations.
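That threshold logic is easy to make concrete. The sketch below is purely illustrative: the function and every dollar figure are hypothetical, and the single walk_away_cost parameter stands in for all of the transaction costs and personal considerations just described.

def should_default(house_value, loan_balance, walk_away_cost):
    # Equity is negative when the borrower is under water.
    equity = house_value - loan_balance
    # The default "put" is worth exercising only when the shortfall
    # exceeds the combined costs of walking away.
    return equity < -walk_away_cost

# A borrower 10 percent under water on a $250,000 loan stays put:
print(should_default(225_000, 250_000, walk_away_cost=40_000))  # False
# A borrower 30 percent under water walks away:
print(should_default(175_000, 250_000, walk_away_cost=40_000))  # True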
But many of the deterrents from walking away are less
present in areas where local house prices are still falling or
the neighborhood has already seen a large number of foreclosures. This has led to a growing number of “strategic
defaults” in which borrowers are technically able to afford
the mortgage, but choose to walk away because they view
the home as a losing investment that will never pay off.
Moral obligation, the stigma attached to default, and
emotional attachment to one’s home all may prevent someone from walking away. But all three are being eaten away
by the extreme conditions of the current housing market,
says economist Luigi Zingales of the University of Chicago.
Some people may feel less of a moral obligation to make
good on their debts if they are severely under water. (“There
is a price to morality, and when the price becomes too high
you default anyway,” he says.)
[Map: Percent of Mortgages in Foreclosure/REO in the Fifth District, by zip code (legend: 0-1.5, 1.5-3.0, 3.0-4.5, 4.5-6.0, 6.0+ percent). NOTE: Uncategorized zip codes have fewer than 100 loans or no data available. SOURCE: Federal Reserve Bank of Richmond estimates using data from Lender Processing Services (LPS) Applied Analytics (November 2009) and Mortgage Bankers Association (2009:Q3)/Haver Analytics.]

People who bought homes as an investment, more common during the housing and lending boom, especially in
areas currently most afflicted by housing oversupply, don’t
face the transaction and emotional costs of moving. “Second
homes and especially investments are where people look at
their choice to buy a house in a strictly rational or financial
way, and it’s much more likely that they will walk away when
there’s not the emotional investment that there is in their
main residence,” Zingales says.
And people are more likely to walk away when they know
one or several people who have defaulted strategically. “If
everybody does it then it becomes socially acceptable,”
Zingales says. A 2009 study by Zingales and economists
Luigi Guiso and Paola Sapienza collected survey data that
indicate people who know someone who has defaulted
strategically are 82 percent more likely to say they’d be willing to also default strategically should they find themselves
severely under water. “There might be some critical point
beyond which the social stigma breaks down, and then
defaults spread like wildfire,” he says.
Given that the costs to borrowers of walking away appear
to be diminishing, it is not surprising that lenders face a
tough choice when deciding whether to help a borrower
avoid foreclosure by modifying the terms of their original
mortgage. It is often in the best interest of the counterparty
to a mortgage to avoid foreclosure. So if foreclosure seems
imminent under the existing mortgage terms, lenders may
be willing to modify them for the borrower, by reducing the interest rate or principal, for example.
Of course, modifying a mortgage reduces the expected return from the mortgage for the lender, but under certain
circumstances it may still be worthwhile. If the state does
not have “recourse” laws — which allow the lender to go
after a borrower’s other assets in the event of default — then
lenders may be more willing to modify the mortgage to avoid
foreclosure. This has an influence on the borrower’s
incentives too. Research by City University of New York
economist Andra Ghent and Richmond Fed economist
Marianna Kudlyak found that recourse laws are a powerful deterrent to default since they increase the cost to borrowers of walking away.
But lenders who do offer modifications face an adverse
selection problem since mortgages that are modified tend to
have a high re-default rate, according to a study by Boston
Fed economists Manuel Adelino, Kristopher Gerardi, and
Paul Willen. Some loan modifications will simply delay the
inevitable. Since lenders know this, and there is no way to know in advance which borrowers will re-default, it may reduce their willingness to modify loans for everyone. This may
partially explain why relatively few mortgages nationwide
have been modified so far.
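The adverse selection problem can be sketched as a simple expected-value comparison. The numbers below are hypothetical, and the function illustrates the reasoning rather than the model in the Boston Fed study:

def modify_is_worthwhile(foreclose_now, modified_value,
                         redefault_prob, foreclose_later):
    # Expected payoff of modifying: the modified loan's value if the
    # borrower performs, or a delayed, lower-value foreclosure if the
    # borrower re-defaults anyway.
    expected_mod = ((1 - redefault_prob) * modified_value
                    + redefault_prob * foreclose_later)
    return expected_mod > foreclose_now

# With a 55 percent re-default rate, modifying loses its appeal:
print(modify_is_worthwhile(foreclose_now=150_000, modified_value=180_000,
                           redefault_prob=0.55, foreclose_later=120_000))
# False: 0.45 x 180,000 + 0.55 x 120,000 = 147,000, less than 150,000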

From Vacancy to Blight
The health of the local housing market in part determines
the fate of foreclosed homes in an area. Therefore, whether
an area’s economic prospects are generally weak or strong
is a key determinant of whether a high foreclosure rate
evolves into a broader problem of neighborhood decay.
Unfortunately, the lenders’ incentives once foreclosure is
imminent are not always aligned with the health of the
neighborhood.
“What the lender wants to do is dispose of the property
in a way that maximizes its return,” says Guhan Venkatu, an
economist at the Cleveland Fed. A foreclosed property
generates no income and is costly to maintain. So in most
cases, the lenders’ incentive is to minimize the amount of
time that a vacant property is sitting on their books.
Expediting foreclosure is not always an option for the
lender. Some states have a “judicial” foreclosure process in
which foreclosure requires a court action. In those states
foreclosures take much longer and are administratively costlier to the lender. Perhaps more importantly, Venkatu says, the judicial process leaves the lender less flexibility to make the decision that maximizes the value of the property.
Dragging out foreclosure, on the other hand, may also be
an option. In areas hit by many foreclosures, the prospects
for reselling property at a reasonable price may appear more
dismal to lenders, providing an incentive for them to delay
the foreclosure process.
“They’re just letting the process slow down,” Mallach
says. Instead of initiating foreclosure when a home becomes 90 days delinquent, when foreclosure legally becomes a possibility, he says lenders might wait 180 or 210 days, or longer, to help them wait out the market. And since in these cases
eventually selling the property is the goal, he says the lender
sometimes doesn’t want it vacated and subject to decay. “I’ve heard in some cases that some lenders will actually call up the homeowners and say, ‘Please don’t leave the house. Stick around. We know you’re not paying.’”

Mallach says it lacks a catchy name so far, but the phenomenon is what those in the industry are calling a “shadow REO inventory.” “That means this huge inventory of properties that could become REO properties but aren’t because the servicers are not pushing to finalize the process that would turn them into REO properties.” Some estimates claim these properties number in the millions.

Observers in the industry have also noted a new occurrence in which lenders and loan servicers who have acquired properties through foreclosure withhold those homes from the market for fear of depressing prices further in an already supply-saturated area.

Dragging out foreclosure and withholding REOs from the market can make the housing statistics appear rosier since it could lead to an underestimation of supply. It also lengthens the market adjustment process, and masks how many homes are, or will eventually be, on banks’ balance sheets.

In the very worst areas, mortgage defaults become the city’s problem. If the prospects for reselling the property are extremely poor, the cost of foreclosure and maintaining the property may exceed the value the property holds for the lender. In the case of so-called “toxic titles,” the tenant has defaulted and vacated the house, but the lender never completes the process to take legal ownership. This leaves the property in a state of legal limbo: The lender holds the lien, but the tenant technically owns the home. Meanwhile, the house sits empty and often deteriorating or subject to vandalism, with no steps being taken to bring it back to market. Toxic titles are most likely to be found in very low-value areas where the market is essentially nonexistent, Mallach says.

The Status of Distressed Homes

Delinquent: Mortgages for which the borrower has ceased mortgage payments. Typically, foreclosure can be initiated after 90 days of delinquency.

In Foreclosure: Properties for which the foreclosure process has been initiated. There is no national standard for length of the foreclosure process.

Bank owned or REO (real estate owned): Properties that have gone through the foreclosure process (i.e., the homeowner is fully extricated from the loan) and are now owned by the lender or the trust that has purchased the mortgage as part of a mortgage-backed security.

Vacant: A home that, for whatever reason, is not legally occupied. Most foreclosed and REO properties are vacant, although not all vacant homes are in foreclosure or REO. Sometimes the borrower has given up on the mortgage and vacated the property.

Toxic titles: Homes for which the borrower has ceased mortgage payments and vacated, but for which there is no immediate intention to foreclose. The foreclosure process costs more than the home is worth on the current market, so for a lender or loan servicer it makes better financial sense to walk away than to complete the foreclosure. The result can be homes with unclear legal ownership. These exist in areas where the housing market is especially weak.

Stabilization and Revitalization
Neighborhood stabilization is the key to neighborhood recovery, according to Mallach. He defines a stable market as one in which homeowners and potential buyers feel their investment is secure. “If you have a neighborhood that has been losing ground and people in it are basically just hoping to move out, and nobody in particular is eager to move in, if you start to see a significant number of foreclosures it’s likely to have a much more dramatic effect than the same number in a neighborhood that is still seen by its residents, and by other people in the region, as being desirable.”

The federal government has devoted funds to help deal with vacant and abandoned homes. The federal Neighborhood Stabilization Program (NSP) directs funds (about $6 billion in total under the 2008 and 2009 economic recovery acts) to local governments and community developers to try to stem falling home values in areas already hit hard by foreclosure.

The funds could be used for a number of things that try to maintain or improve the condition of abandoned properties. Funds can help community organizations purchase and rehabilitate deteriorated homes, for example, or establish “land banks” (local legal entities that hold vacant, abandoned, and foreclosed properties and transfer them back to productive use). The money also can fund incentives to get homeowners into empty homes.

Community organizations and local governments have found it hard to implement NSP funds, however, for a variety of reasons. Among the largest is difficulty identifying the current owner of the mortgage. Only the original lender is listed in the loan documentation, whereas most mortgages today are sold after initiation in secondary mortgage markets and ultimately to large groups of unnamed investors. The loan servicers who represent these investors may be located in a completely different region, and thus have no expertise on the local housing market in question, and may have any number of other distressed properties they are also managing. And in the case of toxic titles, it takes legwork and legal expertise to determine who owns the property.

Even when the loan servicer is identified, they often have limited discretion when dealing with resale of the property. Service agreements obligate them to maximize the value of the trust for the investor owners, but these agreements were written in a dramatically different housing market environment in which it would have been hard to imagine the discounted sales of distressed properties to local governments and nonprofits.
There is no reason to believe that overbuilt neighborhoods must stay empty forever. Venkatu sees an analogy
between fundamentally desirable neighborhoods and the
vast expansions and oversupply of fiber-optic capacity during the late 1990s tech boom. “It’s not like that stuff doesn’t
get used,” he says. “It gets used — it just gets sold at a loss.”
At some point these new developments — which for now
look more like movie sets than neighborhoods — will start
to look attractive to buyers.
Not all homes will be candidates for resale. In economically declining areas that are rapidly losing both jobs and
residents, the strategy of community organizations buying
and rehabbing homes nicely and trying to sell them would
almost certainly be a failure. “And in fact, it should be,”
Mallach says, “because it’s crazy to spend that kind of money
or try to entice people into a neighborhood that may be

already three-quarters empty.”
That may be where there is potential to find alternative
uses for vacant homes, from rental units to office space or, at
the extreme, razing the property to use the land for something else. But this requires new strategic plans for the
community at a time when local governments are being
stretched thin. When faced with the choice of spending
resources to convene local community organizations and
neighbors to gain consensus on the direction of an abandoned property, or funneling those resources to programs
that attract jobs, the latter often seems to be the priority.
And perhaps that’s for good reason. Job opportunities are
a large part of what will make neighborhoods hit hard by
foreclosures once again desirable places to live. For many
areas affected by foreclosure, economic recovery that brings
strong employment prospects and income stability, as well as
a well-functioning housing market, may be the quickest path
to community revitalization.
RF

READINGS
Adelino, Manuel, Kristopher Gerardi, and Paul S. Willen. “Why
Don’t Lenders Renegotiate More Home Mortgages? Redefaults,
Self-Cures and Securitization.” Federal Reserve Bank of Boston
Public Policy Discussion Paper No. 09-4, July 6, 2009.
Armstrong, Amy, Vicki Been, and Josiah Madar. “Transforming
Foreclosed Properties Into Community Assets.” New York
University Furman Center for Real Estate and Urban Policy White
Paper, December 2008.

Guiso, Luigi, Paola Sapienza, and Luigi Zingales. “Moral and Social
Constraints to Strategic Default on Mortgages.” National Bureau
of Economic Research Working Paper No. 15145, July 2009.
“Community Perspective: Implementing the Neighborhood
Stabilization Program.” Federal Reserve Bank of Richmond
Marketwise, Fall/Winter 2009.

CENSUS continued from page 21
along with proposals to adjust the population figures afterward using statistical sample results.
“The methodology in 2000 could not be shown to be an
improvement over the actual enumeration,” Anderson says.
“You have to argue that it is better, not just that it’s different,
and there are big arguments over what better means —
numerical accuracy, distributive accuracy, all of which get
tangled up in the politics of it.”
Anderson further explains: “There were two approaches;
one was that the Census Bureau should improve the method
to reach everybody and the other was that it’s more efficient
and potentially more accurate to use statistical methodology.” Many statisticians and demographers believe sound sampling techniques could provide
more accuracy, but the two main political parties have
staked out opposite positions. That has made compromise
unworkable. Not surprisingly, the census has become a political issue. Republicans have tended to favor unadjusted
counts while Democrats tend to support adjustments. Most
Americans, Anderson says, don’t object to sampling, but do
worry about the possibility of political machinations behind
technical matters that are hard to understand.
In any case, methods for counting people are likely to
always be an imperfect way to capture the scope of a constantly evolving nation. The controversies over how we
count may be as recurrent as the decennial census.
RF

READINGS
Anderson, Margo, and Stephen Fienberg. Who Counts? The Politics
of Census-Taking in Contemporary America. New York: Russell Sage
Foundation, 1999.
____. “To Sample or Not to Sample? The 2000 Census
Controversy.” Journal of Interdisciplinary History, Summer 1999,
vol. 30, no. 1, pp. 1-36.

2010 Census Communications Campaign Has Potential to Boost
Participation. Washington, D.C.: U.S. Government Accountability
Office, March 2009.

ECONOMIC HISTORY

The Lessons of Jamestown
How land privatization benefited one of the earliest British colonies
BY STEPHEN SLIVINSKI

When British royalty and
men of commerce looked
westward to the New
World in the late 16th century they
saw the benefit that a permanent
colony could provide. Yet they had
different notions of why such a
settlement could prove worthwhile.
Queen Elizabeth, and her adviser
Sir Walter Raleigh, had an interest in
establishing a colony in North
America for the sake of keeping
Spanish outposts there in check. In
1587, Raleigh attempted to settle a
colony on Roanoke Island in the
Outer Banks area of present-day
North Carolina. The ongoing sea war
with the Spanish empire, however,
tied up the ships that would have
replenished the colony. It would be
three years before a resupply voyage
was made to Roanoke. Upon arrival,
the captain of the resupply vessel discovered that the settlement was
deserted. Historians today still debate
what caused the colony’s demise.
The next effort at starting a permanent colony in North America had
its roots in 1606, under the reign of
James I. This time the colony was not
seen as a launching-off point to conquer and plunder the outposts of
England’s rivals. That was strictly forbidden by the new king, in fact, as he
had recently made peace with Spain.
Instead, it would be an endeavor
that was privately funded and motivated mainly by commerce. As historian Edmund Morgan described it, the
investors had “hopes of finding
precious metals or minerals, of discovering valuable plants for dyestuffs and
medicines, and perhaps of opening a
northwest passage to the Pacific. But
they were prepared to settle for glass,
iron, furs, potash, pitch, and tar, things
that England needed and mostly had
to import from other countries.”
They knew generating a profit would take time and were taking a long
view of the investment. After all,
this new settlement wasn’t meant to
resemble the trading posts that
England had established in other
countries where English goods were
unloaded in exchange for native products. Instead, the settlers would have
to produce original articles of trade
that could not or would not be produced at home.
Thus, the Virginia Company of
London came into existence in 1606,
composed of a group of investors led
by the well-known merchant Sir Thomas
Smythe. The company was granted a
charter by the king that awarded them
the right to establish a colony of
100 miles square somewhere approximately between Cape Fear and Long
Island Sound.
In December 1606, three ships —
the Susan Constant, the Godspeed, and
the Discovery — left England carrying
the first settlers, just over 100
mariners and adventurers in all, to the
shores of Virginia. When the
captains of those vessels
looked for a proper place to
drop anchor, they used a formula devised by Richard
Hakluyt, a writer and geographer who was an adviser to
King James and an investor in
the Virginia Company. It was
a simple recipe: Find a place
near the entrance of a navigable river that could be easily
defended.
They found a preferred
spot on May 14, 1607, along
what is called the James River
today, about 60 miles from
the opening at the Chesapeake Bay,
placing it at a great enough distance to
give the settlers ample warning of any
invasion by sea. The settlement was
also situated on a peninsula that made
it defensible by land, and the river was navigable for another 75 miles into the interior of Virginia — as far as present-day Richmond.

[Photo: Archaeological excavations of the Jamestown colony have identified a severe drought as a problem that beset this early British settlement. Economists and historians point to the collective ownership of farmland as another.]
Jamestown seemed as if it was off to a good start. The
land was hospitable to farming, and two of the colony’s
leaders, John Smith and Christopher Newport, were relatively successful at opening initially cordial relations with
the nearby Powhatan Indian tribe. By the end of June,
Newport was able to return to England with some of the
exports the colonists had already created.
Yet, by Newport’s return in January 1608, nearly two-thirds of the settlers were dead. For the entire official life of
the colony, an average of one out of every four settlers would
survive. Part of the hardship was beyond the direct control
of the colonists. But another element — the labor and land
polices of the colony — exacerbated the difficulties the
settlement faced during the first 10 years of its existence.
In 1624, the Virginia Company would lose its charter.
Historians are often quick to refer to Jamestown as a failed
settlement: Until the rapid revenue growth generated by
tobacco farming that started around 1613, the colony didn’t
turn a profit. Yet, the years before then were an important
learning experience for the residents of Jamestown.

A Grim Beginning
Although it was not meant to be primarily an agricultural
endeavor, the Jamestown settlers anticipated the need to
establish a strong agricultural output. But that would take
time. So they initially based their sustenance on the trade
that would result from maintaining a good relationship with
the Indian tribes in the area. While they did trade for food
with the nearby Powhatan tribe, it was a sporadic and unreliable arrangement as the relations with the Indians were
often tense and occasionally broke down.
In any case, the trading activity would not likely have
yielded enough to sustain the settlers for a long period of
time. Instead, they tended to rely on supplies sent from
England. It could not be assured that the supply ships would
always arrive on schedule, however, especially since the
vessels sometimes encountered harsh weather. (In 1609, for
instance, one of the supply ships was temporarily shipwrecked in Bermuda. The incident became the inspiration
for William Shakespeare’s play The Tempest.)
Malnutrition was one of the biggest drivers of the high
mortality rate in those early years. Weather patterns had a
profound effect on the rate of malnutrition. An analysis of
800-year-old bald cypress trees at the colony site conducted
in 1997 indicated that lack of sufficient rainfall in the period
of 1606 to 1612 produced a severe drought that has not been
matched since. Historians and scientists note that not only
did this put a large strain on the Jamestown crops but it also
made the food harvested by the Powhatan more valuable to
the tribe itself and most of it was not traded to the settlers.
Additionally, the colony didn’t have a freshwater well
until 1609. This forced them to rely on water from the James
River. That water was much too salty and consumption of it
surely contributed to the deteriorating health of the colonists. The lack of rainfall during the drought only exacerbated this problem.
Beginning three months after the landing at Jamestown,
historian Philip Bruce explains that until the fall “hardly a
day was unmarked by death.” New York University historian
Karen Kupperman notes that virtually every letter written
during this period by colonists from Virginia speaks of “the
helplessness the colonists felt before the phenomenon of
widespread deaths.” Of the 104 people who had left London
in 1607, all but 38 were dead within six months of arriving in
Jamestown.
The winter of 1609 was particularly hard. Although the
Virginia Company had sent 500 new recruits to the colony
that summer, the population was reduced to about 60 only
six months later. The period is known in the histories of the
colony and first-hand accounts as the “starving time.”

The Problem of Adverse Incentives
The drought and harsh winters, while dealing a massive blow
to the settlers’ ability to sustain themselves, were not
the only contributors to the colony’s low agricultural productivity. The legal status of workers and property — and the
adverse incentives this created for the colony’s workers —
played a role in the colony’s failures through 1619.
University of Chicago economist David Galenson
explains that the Virginia Company treated workers “as
bound servants of the company for lengthy terms.” This
indentured servitude worked in a very specific way. The
laborers would sign a contract that pledged them to work for
the recruiting agent — in this case, the Virginia Company —
for a specified period. The company then paid for the
servants’ transportation to Jamestown, housed them in barracks, provided them with rations and clothes, and put them
to work.
The incentive this created for the workers seems obvious,
at least in retrospect. Once arriving at the colony, there
wasn’t necessarily a reason that the servants wouldn’t be
better off trying to escape. The company realized this and
instituted martial restraints on the servants. That did not,
however, stem the frequent escape of a servant to the countryside to start his own settlement or even to live within the
protection of the nearby Indian tribes.
Additionally, even those who remained under contract
saw little reason to be productive. The contracts were set for
a period of seven years and the terms were rigid. A servant
was unlikely to be able to terminate his contract early if he
worked hard. As Morgan describes it, “The work a man did
bore no direct relation to his reward. The laggard would
receive as large a share in the end as the man who worked
hard.” Thus, the general tendency of the servants was to
work less or less efficiently.
By 1611, the company thought simply providing more
manpower might solve many of the problems the colony
was facing. (The colony, for instance, was still relying on
corn obtained from the Powhatans, something the investors
thought would not bode well for the long-term prospects

of the colony.) The company sent Sir Thomas Dale, a
British naval commander, to take over the office of colony
governor in 1611.
Yet, upon arrival in May — a time when the farmers
should have been tending to their fields — Dale found virtually no planting activity. Instead, the workers were devoted
mainly to leisure and “playing bowls.” “[T]he settlers did
not have even a modified interest in the soil, or a partial
ownership in the returns of their labor,” explains historian
Philip Bruce.
Another fundamental problem built into the Virginia
Company’s original plan for the colony was its treatment of
property. All land was owned by the company and farmed
collectively. The lack of private property in that case encourages people to use up a resource faster because nobody has
an incentive to preserve it for future use. Instead, the collective property of the Jamestown colony reinforced the
adverse incentive structure of the indentured labor arrangement: The workers would not hope to reap more
compensation from a productive farming of the land any
more than the farmers would be motivated by an interest in
making their farming operations more efficient and, hence,
more profitable.
Seeing this, Dale decided to change the labor arrangements: When the seven-year contracts of most of the
original surviving settlers were about to expire in 1614, he
assigned private allotments of land to them. Each got three
acres, 12 acres if he had a family. The only obligation was
that they needed to provide two and a half barrels of corn
annually to the company so it could be distributed to the
newcomers to tide them over during their first year.
Dale left Jamestown for good in 1616. By then, however,
the new land grants had unleashed a vast increase in agricultural productivity. In fact, upon returning to England with
Dale, John Rolfe — one of the colony’s former leaders —
reported to the Virginia Company that the Powhatans were
now asking the colonists to give them corn instead of vice
versa. A letter written from one stockholder to another at
the time noted that “the worst of that colony is past”
because the colonists “were well victualed by their own
industry.”
The reform was so successful that the company decided
to further expand the land grants in 1618. Those who had
arrived before 1616, referred to as “Old Planters,” were
awarded 100 acres apiece whenever their terms of service
were up. (If a colonist had paid his own way to the colony, he

would immediately receive his 100 acres.) Shareholders in
the company also received 100 acres for every share they
owned. Settlers who arrived after 1616 got 50 acres. The
reform was also used as an enticement to attract settlers to
the colony: Anyone who came on his own thereafter would
receive the “headright” of 50 acres, as would anyone who had
paid for the transportation of a new settler.
The labor arrangements were also modified to attract
more workers, particularly impoverished English laborers
who could be persuaded to take a chance in the New World:
Anyone sent to the colony at company expense would be
assigned some land to work then as sharecropping tenants
and turn over half of his earnings to the company for seven
years. At the end of those seven years, the laborer would also
receive 50 acres of his own.

Colonial Parallels
The experience of Jamestown seemed to have a parallel in
the Plymouth colony farther north. That settlement was
founded in 1620 and financed by the Plymouth Company, a
joint stock company granted a charter by James I identical to the one granted to the Virginia Company for the mid-Atlantic zone.
It soon became obvious that the Cape Cod colony could
hardly feed itself. The governor, William Bradford, diagnosed the problem as a lack of incentive to produce
efficiently resulting from the common property restriction.
“For this community (as far as it was),” he wrote in his
memoirs, “was found to breed much confusion and discontent and retard much employment that would have been to
their benefit and comfort. For the young men, that were
most fit and able for labour and service, did repine that they
should spend their time and strength to work for other
men’s wives and children without any recompense.” Private
property was allowed in the Plymouth colony in
1623, only three years after the settlers landed on shore.
In Jamestown, tragedy struck in 1622 when an attack by a
Powhatan tribe destroyed the settlement and killed many of
the colonists. For the next two years, the company officials
and the British government were at odds over whether the
colony could survive as a commercial endeavor. In 1624,
King James decided to revoke the Virginia Company’s charter to the Jamestown settlement, after which the colony came under direct control of the crown. But it is in Jamestown that the general presumption that private land ownership is a key to prosperity in the New World had its earliest roots.
RF

READINGS
Bethell, Tom. The Noblest Triumph: Property and Prosperity Through
the Ages. New York: St. Martin’s Press, 1998.
Bruce, Philip Alexander. Virginia in the Seventeenth Century.
New York: Macmillan, 1896.

Galenson, David W. “Labor Market Behavior in Colonial America:
Servitude, Slavery, and Free Labor.” In Galenson, David W. (ed.),
Markets in History. New York: Cambridge University Press, 1989.
Morgan, Edmund S. American Slavery, American Freedom:
The Ordeal of Colonial Virginia. New York: W.W. Norton, 1975.


INTERVIEW
David Friedman

David Friedman is one of the leading figures in the law and economics movement, that group of scholars who use economic analysis to better understand legal systems and to consider reforms that may lead to more efficient outcomes. Unlike many in this field, Friedman’s formal training is neither in law nor economics. Rather, he holds a Ph.D. in physics from the University of Chicago.
Friedman’s first full-time academic appointment was at Virginia Polytechnic Institute and State University. At VPI, he was colleagues with James Buchanan, Gordon Tullock, and other members of the “public choice” school, who employ economics to analyze political decisions and institutions. By that time, Friedman had published his first book, The Machinery of Freedom, in which he made the case that a society without a state could effectively handle such classic “public goods” as police, courts, and national defense. His “anarcho-capitalism” distinguished him from his father, Milton Friedman, who, while an eloquent spokesperson for a market system, believed that some goods could not be produced privately and argued instead for a minimal state.
David Friedman currently teaches at Santa Clara University. In addition to his academic work, he has recently written two novels, Harald and Salamander, which consider many important economic, legal, and philosophical issues. He is also the author of Hidden Order: The Economics of Everyday Life, published in 1996, one of the first of several relatively recent books to explain economic reasoning and policy in nontechnical language for a popular audience. Many of Friedman’s papers and books are available on his personal Web site: www.daviddfriedman.com. Aaron Steelman interviewed Friedman in April 2010.
RF: Your formal academic training was in the physical
sciences. How did you become interested in economics?
And what was your path toward an academic career in
economics?
Friedman: While I was working on my doctorate in physics
I was also writing columns on political and economic topics
as the token libertarian columnist for The New Guard, a conservative student magazine. While a post-doc in physics at
Columbia, I did a piece on population economics for the
Population Council as well as writing and publishing The Machinery of Freedom. I concluded that I was a better
economist than physicist — had better intuition for the field.
Julius Margolis, who ran the Fels Center for State and
Local Government at the University of Pennsylvania, saw
some of my work and offered me a post-doctoral position at
his center as an opportunity to switch fields. I accepted,
spent two years as a post-doc and one as a lecturer, and wrote
my piece on an economic theory of the size and shape of
nations, which was eventually published in the Journal of
Political Economy.
At some point I met James Buchanan and found that we
had similar ideas about the application of economics to
understanding political institutions. Buchanan was the
dominant figure in the economics department at VPI. He
arranged for me to be hired there as an assistant professor. He
also, I think deliberately, arranged for me to be assigned to
teach quite a large part of the total syllabus over the course of
the next few years, thus filling in some of the gaps in my economic education; teaching is a good way of learning.
RF: In your first book, The Machinery of Freedom, you
make the case for a polycentric legal system. How would
such a society overcome classic public goods problems,
such as courts and, perhaps an even more difficult issue,
national defense?
Friedman: Courts don’t produce a public good, since they
can choose not to settle disputes between people who have
not paid them for the service. In the book, I describe how
rights protection and the resolution of disputes could be
produced as private goods.
National defense, on the other hand, is a public good, and
perhaps the most serious problem for the sort of society I described. There are, however, a variety of ways in
which public goods are (imperfectly) privately produced —
consider radio broadcasts, or the public good of encouraging
taxi drivers to do a good job by tipping them, or the production of open source software such as Linux, or the public
good of painting my house and so benefitting my neighbors.
Imperfect production means that a public good may be
worth more than it costs but still not get produced.
Whether that will happen with the public good of national
defense depends on a variety of things, including how much
an adequate defense costs. I was more pessimistic about
doing it when the United States was facing a hostile power
with a thermonuclear arsenal than I am now. A discussion of
some of the imperfect ways in which it might be possible to
produce an adequate amount of this particular public good
would take more space than I’m inclined to give it here.
RF: In The Machinery of Freedom, you also make the case
for reforming the way students pay for their university
educations. Could you explain that proposal — and do
you think it could gain support in a world where tuition
and associated costs seem to be rising considerably
more rapidly than inflation?
Friedman: I proposed something along the lines advocated
by Adam Smith — a university where professors were paid
directly by students. This would involve disaggregating the
variety of services that current universities perform. There
is no strong reason why the same institution should be running hotels (called dormitories) and restaurants (dining halls) while also producing schooling, testing, and certification.
I think something along the lines I described may happen
online, where some of the difficulties of doing it in real space
disappear — no need to have everyone housed in the same
area, for instance. The main requirement is some form of
certification, or substitute for certification, that lets a
student learn a subject from whomever he wants and then
get his learning certified by a credible testing agency. I find
it hard to believe that the actual education part of what
current colleges produce could cost as much as half of what
they now charge.
RF: You also have written about monetary policy.
In your opinion, what would be the most desirable way
to construct a monetary system?
Friedman: What I proposed was a system of competing
fractional reserve banks, each guaranteeing its currency with
a commodity bundle. If you bring me a million Friedman
dollars, I agree to give you in return a bundle of commodities: 500 pounds of grade X steel, 200 bushels of grade B
wheat, four ounces of gold, etc. Familiar mechanisms will
result in a price level, measured in my dollars, at which the
total value of the bundle is just a million.
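The redemption arithmetic can be sketched in a few lines. The bundle quantities below come from Friedman’s example; the market prices, quoted in Friedman dollars, are hypothetical placeholders:

# Friedman's example bundle, redeemable for one million Friedman dollars.
bundle = {"grade X steel (lb)": 500,
          "grade B wheat (bu)": 200,
          "gold (oz)": 4}

# Hypothetical market prices, quoted in Friedman dollars.
prices = {"grade X steel (lb)": 900.0,
          "grade B wheat (bu)": 1_200.0,
          "gold (oz)": 70_000.0}

bundle_cost = sum(qty * prices[good] for good, qty in bundle.items())
print(bundle_cost)  # 970,000 at these prices, below the 1,000,000 par

# If the bundle costs more than 1,000,000 Friedman dollars, redeeming
# notes for commodities is profitable, and arbitrage shrinks the note
# supply until prices fall back to parity; if it costs less, the issuer
# can profitably expand the supply, pushing prices back up.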
The most important argument for that system is that
competing private money issuers have the right incentives.

Their profit-maximizing strategy generates stable money,
since if their money is not stable, people will choose not to
use it, depriving them of the seigniorage — probably in the
form of interest — that is their source of income. That is not
true for a government monopoly money. The advantage of a
commodity bundle over a single commodity is that it is less
subject to unpredictable fluctuations in value due to changes
in demand for or supply of individual commodities. Readers
interested in a more detailed account can find an article
I wrote for the Cato Institute titled “Gold, Paper, or … Is
There a Better Money” on Cato’s Web site.
RF: The law and economics movement has made considerable inroads in law schools and academia generally
over the past 30 years. What do you think are the big
remaining questions that law and economics scholars
ought to address?
Friedman: One of them is how to include legislators, judges,
and enforcers in the theory. In a consistent version of the
economic analysis of law, they have to be treated as rational
self-interested actors, just like criminals, victims, tortfeasors, and everyone else.
Another problem that has not been adequately dealt
with, at least in anything I have seen, is the effect on optimal
enforcement theory of differences in offenders, most obviously the fact that different offenders face different
probabilities of apprehension.
A final big question is the technology of judging — what
courts can do how well. If you assume a perfectly wise court
system, efficient law is easy — just severely punish anyone
who takes any inefficient action. If courts have no ability at
all to detect truth, on the other hand, they are of very little
use. Where between those points actual courts are, what
sorts of questions they can or cannot get reasonably correct
answers to, is a major element in figuring out what the law
should be.
RF: This is an intentionally broad question to which I
would like to get your reaction: How close does the
American common law come to approximating an efficient legal system? Where does it go right? Where does
it go wrong?
Friedman: I discuss that at some length in Law’s Order,
in the context of evidence for and against the Posner conjecture. My favorite example of inefficiency is that in the
common law of tort the value of life is zero, since if you tortiously kill someone, his claim against you dies with him.
That cannot be the right answer.
In many other cases, one can make arguments for either
of two different legal rules, sometimes more than two.
It’s tempting to observe what legal rule exists and then produce the argument to show that it’s efficient — but if there
were a different rule you could produce an argument for
it instead. My own view is that there are elements of efficiency in Anglo-American common law, but that a good deal
of it does not fit that pattern.
RF: How would you suggest we think about the tradeoffs associated with our current intellectual property
regime? Does the system go too far in protecting the
rights of creators? If so, how far and what would be a
more desirable scheme?
Friedman: The immediate issue is not how far the rights of
creators should be protected but how far they can be. As
more and more intellectual property takes digital form,
easily reproduced and distributed, and more and more
people have access to fast Internet connections, it becomes
increasingly difficult to enforce laws giving creators control
over the intellectual property they create. At some point,
the sensible response is to abandon the attempt and shift to
other mechanisms for rewarding creators.
One of the subthemes of Future Imperfect, my most recent
nonfiction book, is that the proper response to technological change is not to ask how we can keep doing what we are
doing that has become harder to do — enforce copyright
law, for instance. It is rather to ask how we can best achieve
our objectives under the new circumstances. Sometimes
that means abandoning the approach that has become
unworkable, and shifting to a different approach that the
new technology makes more workable than before.
RF: Many people seem to believe that less crime is
always more desirable, by definition. Could you briefly
explain your theory of the efficient level of crime and
what the policy implications of that theory might be?
Friedman: The optimal level of traffic accidents would be
zero if we could eliminate them without giving up anything
else we value. But the obvious way of eliminating all traffic
accidents is to stop driving, and few of us consider that a net
improvement.
If law enforcement were costless, we would still not want
to eliminate all illegal acts; consider the driver who is breaking the speed limit on his way to the hospital with his wife in
the back seat going into labor, or Posner’s example of
the hunter lost and starving who comes across a locked
cabin containing canned food and a telephone. Some such
“efficient offenses” can be permitted by modifying the law to
make them legal — for instance under the doctrine of necessity. In other cases the facts that make the offense efficient
cannot readily be demonstrated to an outside party such as a
court, so the best solution may be to set a penalty reflecting
the damage done by the offense and then let the potential
offender decide whether it is a price worth paying. The
result would be a level of offenses above zero.
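A toy version of that pricing rule, with hypothetical numbers (an illustration of the logic, not a model drawn from Friedman’s work): set the penalty equal to the damage done, and only offenses that benefit the offender more than they harm the victim go forward.

def offense_occurs(offender_benefit, victim_harm):
    # Price the offense at the damage it causes...
    penalty = victim_harm
    # ...and let the potential offender decide whether the price
    # is worth paying.
    return offender_benefit > penalty

print(offense_occurs(offender_benefit=50, victim_harm=200))   # False
print(offense_occurs(offender_benefit=500, victim_harm=200))  # True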
In the real world, law enforcement is not costless. Once
one allows for that, it may be desirable to set levels of
enforcement at which some inefficient offenses — offenses
that harm their victims by more than they benefit the offender — occur because preventing them costs more than
it is worth. Less obviously, it may be desirable to deter some
efficient crimes, because deterring them saves us the cost of
punishing them. For details see my Law’s Order.
RF: You have written a fair amount about the economics
of population growth. As you note in Law’s Order, there
seems to be a fairly widespread notion that “babies are a
bad thing.” How would you address that concern?
Friedman: Babies don’t arrive with a deed to a per-capita
share of the world’s resources clutched in their fists; if they
want land to build on or gasoline to drive with they will
have to give those who own those resources something in
exchange worth at least as much. So to a first approximation,
adding another person to the world’s population makes
existing people better off, not worse off. It’s worth noting
that there is very little relation between how rich a country
is in resources per capita and how rich its population is;
the most important productive resource isn’t land or oil
but people.
Of course, people sometimes produce negative externalities. But they also produce positive externalities. There is no
general reason to expect the negative effects to be larger
than the positive; when I tried to estimate both long ago (in
the article “Laissez-Faire in Population: The Least Bad
Solution,” which is available on my personal Web site) I
concluded that I could not tell whether the sum was positive
or negative.
RF: Much of the recent debate regarding medical care
legislation seemed to be predicated on the idea that
medical care should not be treated like other goods and
services. How would you respond?
Friedman: There are strongly felt emotional attitudes
toward medical care that do not exist for many other goods
and services, and that makes it more difficult to treat it as an
ordinary commodity. But I think it is hard to offer rational
reasons in support of those attitudes, or good economic
arguments against having it produced on the free market
like food or housing. For details, see my article “Should
Medical Care be a Commodity?” available on my
personal Web site.
RF: What do you think of “behavioral economics”? And
what does law and economics have to contribute to it?
Friedman: The observation that humans are not perfectly
rational is neither novel nor, I think, very useful. The point
at which it becomes interesting is when one can create a
theory of how and why they are irrational, and hence just
how the choices they make will differ from the choices that
conventional economics predicts that they will make. I have
an article that attempts to do that, based on evolutionary
psychology; it’s titled “Economics and Evolutionary Psychology” and is available on my personal Web site.

David Friedman
➤ Present Position
Professor of Law, Santa Clara University
➤ Previous Faculty Appointment
University of Chicago (faculty fellow), Tulane University, University of California at Los Angeles, and Virginia Polytechnic Institute and State University
➤ Education
B.A., Harvard University (1965); M.S., University of Chicago (1967); Ph.D., University of Chicago (1971)
➤ Selected Publications
Future Imperfect: Technology and Freedom in an Uncertain World (2008); Law’s Order: What Economics Has to Do with Law and Why It Matters (2000); Hidden Order: The Economics of Everyday Life (1996); The Machinery of Freedom (1973); author of numerous papers in such journals as the American Economic Review, Journal of Political Economy, Journal of Law & Economics, and Journal of Legal Studies

RF: Since you wrote Hidden Order, several other popular economics books have been published by commercial houses, some to great fanfare and success. To what do you attribute the demand for and popularity of such books?

Friedman: Part of it is that people want to understand economics, part is that economics is inherently interesting.

RF: What issues are you working on currently?

Friedman: My next nonfiction book will probably be based on a seminar I have taught for some years under the title of “Legal Systems Very Different from Ours.” It will describe a considerable range of legal systems, including those of modern gypsies, imperial China, ancient Athens, the Cheyenne Indians, Jewish law, and other arrangements, and attempt to identify common threads that run through many or all legal systems, and look at the different ways in which different societies have dealt with their common legal problems.

RF: Please tell us about your recent novels. What do you think science fiction and fantasy can teach us about economics and alternative legal systems? And why are those types of fiction particularly good forums for discussing such issues?

Friedman: I’ve published one novel — marketed as a fantasy, but more nearly a historical novel with made-up history. A second, Salamander, is finished and webbed, but has not yet found a publisher; that one is a real fantasy, complete with my own, I think original, version of magic. I’m currently working on a sequel to it.

The published novel, Harald (from Baen Books), is a story not a treatise, but it contains a good deal of implicit economics and political philosophy. One implied message is that political structures are about relations between persons, not formal tables of organization; the mess my protagonist has to deal with in the early part of the book grows out of the acts of a young and inexperienced king who thinks people can be trusted if and only if they are in allegiance to him, and so tries to convert, by force, allies into subjects.

The economics is mostly about the implications for the project of raising and using armies of the fact that soldiers don’t want to be killed. My protagonist is a military leader from a semi-stateless society, faced with the problem of raising and supporting an army without taxes, a draft, or feudal obligations. One consequence is that he is very sparing with the lives of his troops, since if fighting for him turns out to be too dangerous he won’t have many volunteers next time. Another is that he is reasonably sparing with the lives of his enemies, since if he forces them to surrender instead of killing them he can ransom them back to the Empire, the antagonist he is fighting, for the money he will need for his next campaign. A lot of the military strategy ends up being aimed at putting the opponent in a position where, if he doesn’t surrender, or at least withdraw, his army will run out of food or water. One of the two people the book is dedicated to is the author of a fascinating book on the logistics of the army of Alexander the Great.

Salamander started out as an exploration, in a fantasy context, of the central planning fallacy, the idea that if only all of a society’s resources were under someone’s intelligent control, wonderful things could be done with them. The equivalent, in a society where magic exists but is weak, is a procedure that lets one wizard take control of the magical power of many others. The inventor is a brilliant but naive academic type who intends only good. Like his equivalents in the real world, he misses both the fact that those resources are already being used by their owners for their own ends and the risk that the power will be used for other than benevolent purposes. My other protagonist points out the first problem to him early on, and his colleague and collaborator forcibly demonstrates the second by seizing control of the process during the first full scale experiment with it. As the book developed, it turned out to have at least one other theme — in what sense the ends do or do not justify the means. Interested readers can find Harald online at http://www.webscription.net/p-196-harald.aspx and Salamander on my personal Web site.

RF: Which economists have influenced you the most?

Friedman: Other than my father, probably Alfred Marshall, perhaps David Ricardo. I got one idea that has been important to me from Thomas Schelling, another from Earl Thompson, and also some interesting ideas from Robert Frank. I am in some sense a follower of Gary Becker in economic imperialism, Buchanan and Tullock in public choice theory, Posner in economic analysis of law, but I’m reluctant to call that an influence, since it is more a matter of doing the same sorts of things that those people did first. RF

BOOKREVIEW
The Birth of the Modern Fed

A HISTORY OF THE FEDERAL RESERVE: VOLUME 2, BOOK 1, 1951-1969
CHICAGO: UNIVERSITY OF CHICAGO PRESS, 2010, 682 PAGES
A HISTORY OF THE FEDERAL RESERVE: VOLUME 2, BOOK 2, 1970-1986
CHICAGO: UNIVERSITY OF CHICAGO PRESS, 2010, 628 PAGES
BY ALLAN H. MELTZER
REVIEWED BY STEPHEN SLIVINSKI

The first volume of Allan Meltzer's comprehensive
two-volume history of the Federal Reserve, published in 2003, focuses on the years between 1913
and 1951 when the Fed was mainly a vehicle for the federal
government to finance wartime federal debt. The Fed
would buy Treasury bonds at the command of the government with little attention paid to the effects that this
manipulation of the money supply would have on the
economy. Part of that was due to a lack of knowledge or a
coherent theoretical model to guide monetary policy.
A large part, however, was due to the immense political
pressure applied to the new institution. Such political pressure is something that the Fed has had to endure since the
beginning, and how it deals with those pressures is a major
theme in Meltzer’s first book.
The second volume being reviewed has been published as
two books. Book one begins in 1951, the year in which the
Fed as we know it today began to take shape. The big turning point was the March 1951 accord with the Treasury
Department that, as described by Meltzer, “changed the
Fed’s formal status from subservient to co-equal partner
with the Treasury.” Yet, as both books of Meltzer’s second
volume make plain, de jure independence isn’t always an
indicator of de facto independence.
The chairmanship of William McChesney Martin is the subject of most of book one. It's a period in which the Fed
was seen as largely independent — dramatically so when
compared to previous wartime years — and stands as the
first test of the newly independent Fed’s ability to maintain
price stability. Between 1951 and 1965, inflation dropped
from more than 8 percent to being constrained within a
range of zero percent to 4 percent.
This accompanied a shift in Fed policy to the “bills-only”
doctrine, which meant that the Fed would conduct open-market operations only through the purchase of short-term
Treasury bills. This left the long-term Treasury rates to be
set by market forces and was a departure from the years
when the Fed was used as a tool to cheaply finance wartime
spending.
Yet Meltzer also highlights a contrary and important
element of Martin’s tenure. Martin described his view of Fed
independence as qualified. It assumed the Fed’s independence within the government, not from the government.
There were times when Martin was willing to coordinate
policy with the executive branch.
In fact, Martin saw an important function for the Fed’s
open-market operations as a way to pursue an “even keel”
policy wherein the Fed would stand ready to make sure that
auctions of Treasury bills would not fall flat. This meant that
the Fed would implicitly commit to buying enough T-bills to
satisfy a specified interest rate target desired by the Treasury.
Between 1951 and 1965, this didn’t influence monetary policy
very much. The federal government under President Dwight
Eisenhower was balancing its books, obviating the need for
the Treasury to issue debt that the Fed could buy. Meltzer
argues, however, that Martin’s willingness to collaborate
with the executive branch — usually implicitly — opened
the door to policy mistakes that precipitated the Great
Inflation of the 1970s.
President Lyndon Johnson was never shy about applying
pressure to Martin. To his credit, Martin was generally
impervious to the attempts to directly influence Fed actions
before 1966. Yet the monetary loosening began when his
even keel approach dominated and the federal government
began to run deficits to finance the Vietnam War and social
spending through the Great Society transfer programs.
It wasn’t obvious initially that this could lead to inflation.
The lack of a coherent or accurate model to predict how
monetary policy might influence the economy, Meltzer
argues, is a reason why Martin, normally vigilant about inflation, allowed the Federal Open Market Committee (FOMC)
to embark on an easy money policy. Fed policymakers “did
not distinguish between real and nominal [interest] rates
until much later." Higher nominal rates led Fed economists to overestimate the degree of restraint that FOMC actions
were creating.
What was missing from the analysis, Meltzer points out,
was an understanding that higher anticipated inflation
would drive up nominal interest rates. Once monetary
policymakers fell behind the curve by misinterpreting the
signals that interest rates were sending, their control over
inflation slowly began to ebb.
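Meltzer's point can be restated with the textbook Fisher relation; the notation below is mine, not a formula quoted from the book:

\[ i \;\approx\; r + \pi^{e} \]

where $i$ is the nominal interest rate, $r$ the real rate, and $\pi^{e}$ anticipated inflation. If $\pi^{e}$ is rising, a given nominal rate $i$ corresponds to an ever-lower real rate $r$, so nominally "high" rates can coexist with easy policy. That is precisely the misreading Meltzer describes.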
An additional strain came in the form of the
Employment Act of 1946. The law set up a new dual mandate
for the Fed: the pursuit of price stability and maximum
employment. Over time, the political emphasis on keeping
unemployment down would become a strong pressure on
the Fed. And, as an empirical matter, Fed policymakers in
the 1960s began to sense that they could lower the unemployment rate if they followed a looser monetary path.
At the time it wasn’t clear to many Fed economists that such
a policy could become unsustainable. But, as Meltzer notes,
the employment mandate would gradually become the main
concern of the Fed in the next decade.
The period during which the Fed made the pursuit of
full employment its main guiding principle occurred once
Martin was replaced by Arthur Burns as Fed chairman in
1970. The Nixon administration was keen on reducing
unemployment. (Indeed, Nixon believed a too-tight monetary policy under Martin destroyed his chances to win the
presidential election against John F. Kennedy in 1960, even though he was fresh off a second term as Eisenhower's vice
president and was effectively running as an incumbent.)
Nixon and his advisers wanted a loose monetary policy to
achieve higher short-term employment. Burns was quite
willing to provide such support and did tremendous damage
to the Fed’s credibility as an independent institution.
Meltzer flavors his chapters in this period with transcripts of
White House meetings, recorded by Nixon’s infamous Oval
Office microphones, in which it is clear that Burns was
actively seeking input from the president and his advisers.
Burns did believe the Fed could do little to stem the
inflationary tide. His view was that a rising cost of labor,
bolstered by rigid contracts written under pressure by
powerful labor unions, was the main force driving prices
higher. This “cost-push” explanation of inflation led Burns to
support wage and price controls as a means to arrest price
increases while the Fed tried to drive down unemployment
with an expansionary monetary policy. As Meltzer points
out, Burns was more than just a willing accomplice — he was
a forceful advocate behind the scenes. Burns, Meltzer writes,
had “little opposition” on the FOMC. Most of the members
voted with the chairman out of deference or mainly because
they believed that monetary policy did indeed need to be
eased to further spur employment.
We know in retrospect that these expansionary policies were
misguided and contributed to high and persistent inflation.
This was not without its critics at the time. Adherents to the
monetarist school, led by economist Milton Friedman, were
developing the models that implied the unemployment-

inflation trade-off was unsustainable. Another set of
scholars who began to have substantial influence in
academic macroeconomics during this time were those from
the “rational expectations” school. Both they and the monetarists came to the same general conclusion: As soon as the
market built into its assumptions a new, higher price level,
the employment boost would abate but the inflation would
remain.
This point was understood and respected by Paul
Volcker, the Fed chairman who is most responsible for not
only restoring a hard-money approach to policy but also
restoring the badly damaged reputation of the Fed as an
independent institution. Appointed by President Jimmy
Carter in 1979, Volcker immediately changed the way the
Fed approached its task. The main shift was a stated focus
on restraining the money supply, an approach that was
endorsed heartily by the monetarists.
Both Carter and his successor, Ronald Reagan, tended to
let Volcker pursue the policy. Meltzer suggests that part of
the reason there wasn’t a broader political backlash against
the policy or the Fed was because there was a popular consensus by that point that inflation was a bigger evil than
unemployment. Since then, inflation control has been seen
as the primary role of the Fed. In fact, to underscore his
belief that the intellectual battle over the importance of
price stability had largely been won, Meltzer ends this
second volume of his history in 1986, just as Alan Greenspan
assumed the chairmanship of the Fed.
The second volume of Meltzer’s history suffers from
some of the difficulties of any history aimed at being truly
comprehensive. It can be redundant at times and has a
tendency to read a bit like an authoritative list of events and
policy actions with details that would appeal only to a select
set of readers. However, everyone can benefit from the introductory material in each chapter and the synthesis of the
most important elements at the end of each chapter.
Overall, it would require an uncharitable reading to deem
the books anything other than a success, a monument to a
lifelong study of monetary history.
Also, just because it ends in 1986 doesn’t mean that
Meltzer has nothing important to contribute to the current
policy debate. “Perhaps the most enduring lessons for
central bankers from the Great Inflation and subsequent
disinflation was that the responsibility for stopping inflation
fell on them,” he writes. Indeed, his history serves as an
important reminder that it took the Fed a long and painful
time to learn that lesson. As such, Meltzer ends the book
with a chapter outlining what he sees as some unsettling
trends, foremost among them the potential that recent Fed
actions have to undermine the institution’s independence.
It is even more important now, suggests Meltzer, to redouble
efforts to reclaim Fed independence in the wake of current
policies. The hard-won lessons of the past depicted in this
book are an important reminder of why the Fed must remain
steadfast in its independence of the political branches if it is
to effectively pursue a policy of price stability.
RF


DISTRICTDIGEST

Economic Trends Across the Region

Small business employment in the Fifth District
and the impact of recessions
BY ROBERT H. SCHNORBUS

Small business firms are widely regarded as a key source of job growth and as largely recession-proof, but the current recession brought severe job losses even to relatively high-growth regions like the Fifth District. Of course, small businesses are constantly being created and destroyed both in and out of recessions, with job growth perhaps slowing during recessions, but rarely have small businesses as a category suffered net job losses.

The current recession, which continues to generate employment declines into 2010, is a notable exception to the historical pattern of uninterrupted small business job growth, with significant job losses occurring even in the high-growth services sector. While a clear picture of the severity of the recession in the Fifth District can be seen in the available government data on employment, data on the performance of small businesses at a regional level are limited.

Fortunately, survey data on small business firms from the National Federation of Independent Business (NFIB) provide interesting insights into the problems small businesses faced during past recessions and, to some degree, how the surviving firms adjusted to such difficult times. Combined with the government data, this survey allows a close look at the performance of small business firms in the Fifth District during three recessionary periods over the last 20 years.

Defining a Small Business
Small businesses play a major role in job changes over any
business cycle, but their exact definition often varies among
studies. The Small Business Administration (SBA) defines a
small business as any establishment with fewer than 500
workers. That casts a pretty large net across the labor market.
By that definition, more than 99 percent of all establishments would be classified as a small business. It also means
that more than 80 percent of all jobs in both the nation and
the Fifth District are based in such establishments.
To match employment statistics by size of establishment
as reported by the Bureau of Labor Statistics with survey-based data from the NFIB, three "size categories" can be
defined for closer analysis. The first category, small business
firms, is defined as establishments having fewer than 50
employees, which corresponds to more than 95 percent
of all the firms in the NFIB survey. The second, medium
business firms, is defined as establishments having 50 to 499
employees. The third, large business firms, is defined as
establishments with 500 employees or more.
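To make the classification concrete, here is a minimal sketch in Python; the establishments and their employment counts are hypothetical, not drawn from the BLS data used in this article:

    # Bucket establishments into the article's three size categories and
    # compute each category's share of total employment (hypothetical data).

    def size_category(employees):
        """Classify an establishment by its employment level."""
        if employees < 50:
            return "small"    # fewer than 50 employees
        elif employees < 500:
            return "medium"   # 50 to 499 employees
        return "large"        # 500 employees or more

    # Hypothetical establishments: (name, number of employees)
    establishments = [("diner", 12), ("parts plant", 230), ("distribution hub", 1800)]

    jobs = {"small": 0, "medium": 0, "large": 0}
    for _, employees in establishments:
        jobs[size_category(employees)] += employees

    total = sum(jobs.values())
    for category, count in jobs.items():
        print(f"{category}: {count} jobs ({count / total:.1%} of employment)")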

[Figure 1: Employment Growth by Quarter. Year-over-year percentage change in employment, National vs. Fifth District, 1990-2009. NOTE: The shaded areas correspond to recessions. SOURCE: Bureau of Labor Statistics]

Under this classification system, about 95 percent of all
establishments in the District and the nation still fall into
the small business category. However, in terms of employment, small and medium firms each normally account for
roughly 40 percent of total employment, with large firms
accounting for the remaining 20 percent. Finally, it should
be noted that the government data used in this study
are based on individual establishments (or locations of
each plant or store in an area) rather than actual firms (or
complete business entities).
Many firms are composed of more than one establishment. For example, Lowe’s is a single firm made up of many
establishments (or store locations). However, the vast majority of small establishments are single-establishment firms
with very few employees — the more common image of a
“small business.” Thus, the terms establishment and firm are
used interchangeably here, although some discrepancies do
exist.
While the District composition of small firms is in broad
terms similar to the nation, differences in both employment
shares and their changes among firm-size categories over
time should be noted. For example, the total employment
share of small businesses in both the District and the nation
has been rising over time, from roughly 39 percent in 1990 to
45 percent in 2009. In addition, heading into the recent
recession the Fifth District had a slightly higher concentration of small businesses (45.3 percent) than the nation
(43.8 percent) and a slightly lower concentration of large
businesses (14.7 percent) than the nation (16.9 percent) —

the result of small businesses growing faster and large businesses tending to decline faster in the Fifth District than in
the nation on average.
Yet the share of small business employment in goods-producing industries (which include construction,
manufacturing, mining, and other natural resources industries) in 1990 was lower in the Fifth District (27.1 percent)
than in the nation (30.4 percent). In 2007, however, the Fifth
District’s share was virtually equal to the nation’s
(37.5 percent). In contrast, the Fifth District’s share of small
business employment in the services sector, which includes
such industries as health, education, financial, and other
professional services, has always been slightly higher than
the nation's.

The Sensitivity of District Small Business Firms
to Past Recessions


To evaluate the severity of the current recession on small business firms, two recessions since 1990 — both comparatively mild by historical standards — are examined. The first recession began in the early 1990s (that is, from the third quarter of 1990 to the first quarter of 1991), when real GDP fell by 1.4 percent over the course of the recession. The second recession began in the early 2000s (that is, from the first quarter of 2001 to the fourth quarter of 2001), when GDP remained virtually flat.

In both recessions, employment declines in the Fifth District (from peak to trough on a seasonally adjusted basis) lasted slightly longer than in the nation as a whole, even though only employment in the first recession experienced a significantly deeper decline in the Fifth District (-2.5 percent) than in the nation (-1.3 percent). Both experienced employment declines of about 2 percent during the second recession. Otherwise, Fifth District employment closely tracked the pattern of national employment growth on a year-over-year basis (see figure 1).

Since the available data on employment by size categories are limited to just the first quarter of every year since 1990, the two recessions described above can best be captured by measuring employment changes over the time periods from 1990 to 1992 and 2001 to 2003 (see table). While total employment declined during these recessionary periods, the pattern of job changes by firm size supports the claim that small firms were a key source of job growth even during recessions. For example, during both the 1990 to 1992 and 2001 to 2003 periods, small business employment increased, while both medium and large firms absorbed all the job losses, resulting in a net decline in total jobs. To be sure, some of these job losses, especially among large firms, represented structural changes as well as cyclical declines.

[Table: Employment Growth by Establishment Size During Recessionary Periods. Employment growth rates for the United States and the Fifth District, by establishment size (under 50 employees, 50-499 employees, over 500 employees, and total), for total nonfarm, goods-producing, services, education/health services, leisure/hospitality services, and professional/business services employment over the 1990-92, 2001-03, and 2008-09 periods. NOTE: Growth rates are based on first-quarter employment levels of each year indicated. SOURCE: Business Employment Dynamics, Bureau of Labor Statistics]

In contrast, the trends for small firms gained not only from new business startups but also from medium firms becoming small firms due to job losses and from displaced workers at large firms starting their own small businesses. Such factors may overstate the underlying strength of small businesses during recessions. However, since churning of jobs at the small business level occurs throughout the business cycle and is a normal part of the process of employment change, small business employment gains during recessions went far beyond these two limiting factors. Thus, it seems safe to conclude that small businesses in aggregate were still the center of substantial job gains during these earlier recessions.

Perhaps not too surprisingly, the experience of small firms was not uniform across the goods and services sectors; indeed, all of the job gains that occurred during the recessionary periods were concentrated among service-producing firms. For example, small business employment in the Fifth District's goods-producing sector declined 8.6 percent in the 1990-1992 period, which was almost as much as the total decline in employment over the period and substantially larger than the decline of small business employment for that sector nationally. Indeed, it was the decline in goods-producing small businesses that accounted for the fact that total employment in the Fifth District declined substantially more than in the nation. In the 2001 to 2003 recessionary period, small business employment in the Fifth District's goods-producing sector declined by only 1.7 percent, far less than in the previous recession and much more in line with the national experience.

In sharp contrast, small services businesses in the Fifth District had an unbroken chain of job gains through both recessionary periods. It should be noted that both medium and large services firms in the District suffered significant job losses — though far less severe than their counterparts among goods-producing firms nationally. The impact of the economic shift to an increasingly service-based economy in the Fifth District is clearly reflected in the strong employment growth during both recessions among some of that sector's major growth centers, such as education/health, leisure/hospitality, and professional/other business services industries. Indeed, employment tended to increase in all three size categories during these recessions, indicating the breadth of the job growth strength that these firms have experienced. That is, until the current recession.

Small Business Experience During the Current Recession

In comparison to the two earlier recessions, employment declines in the current recession have been deeper and more pervasive both nationally and in the Fifth District. Indeed, not only did GDP in this recession decline more than twice as much as in the earlier recessions, but job declines to date also have been far steeper than in the past (and that assumes that no further job losses occurred after the fourth quarter of 2009). For example, from the first quarter of 2008 to the first quarter of 2009, total employment in the District declined by at least a third more than in either the 1990 to 1992 or 2001 to 2003 recessionary periods, and that decline occurred in only half the time (four quarters). While medium and large firms still accounted for the bulk of the job losses, small firms in this recession lost more jobs than they created for the first time during a recessionary period (and probably for the first time in many decades). Employment declines were registered nearly everywhere in the Fifth District — only the District of Columbia registered a modest increase.

Perhaps what really distinguishes the current recession is the fact that service-producing firms in the Fifth District lost jobs even among small firms (with only Virginia and Maryland managing modest gains, most likely due to the influence of employment growth sustained by gains in the District of Columbia). The employment decline among small firms was relatively small, with medium firms accounting for the largest decline in jobs. However, even among some of the strongest services industries, such as professional services and leisure/hospitality industries, small firms in the Fifth District experienced net job losses for the first time ever. Only small firms in the education/health services industries were able to buck the trend and continued to add jobs in this downturn.

Viewing Recessions from a Small Business Perspective

A recession takes its toll on small businesses in many ways. As sales fall, survival is often a scramble to cut costs and gain access to needed credit to keep the business running until the recovery starts. The deeper the recession and the longer the delay in recovery, the fewer small businesses can be expected to survive.

A quarterly survey of more than 2,000 of NFIB's members nationally and more than 100 in the Fifth District provides an opportunity to gain insight into how small businesses viewed business conditions during past recessions and what actions they took to keep their businesses running. The questions are subjective in nature. Respondents were asked whether a particular variable, such as sales or employment, increased or decreased over the previous quarter and whether they expected increases or decreases over the next three to six months. The result is a "diffusion index" that measures the difference between the percentage of firms reporting increases in production or planned hiring and the percentage of firms reporting decreases in those variables. This allows comparison of small business behavior in the previous recessions and the current one (see figures 2 and 3).

[Figure 2: Small Business Sales (Last Three Months vs. Prior Three Months). Diffusion index, goods and services firms, 1990-2009. NOTE: The shaded areas correspond to recessions. SOURCES: National Federation of Independent Business; Richmond Fed]

[Figure 3: Job Hiring Plans by Quarter. Diffusion index, goods and services firms, 1990-2009. NOTE: The shaded areas correspond to recessions. SOURCES: National Federation of Independent Business; Richmond Fed]
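The arithmetic behind such an index is simple. Here is a minimal sketch in Python; the survey responses are invented for illustration and are not actual NFIB data:

    # Diffusion index: percent of firms reporting an increase minus the
    # percent reporting a decrease. Each response is +1 (increase),
    # 0 (no change), or -1 (decrease). The responses below are made up.

    def diffusion_index(responses):
        """Return the net percent of firms reporting increases."""
        n = len(responses)
        pct_up = 100.0 * sum(1 for r in responses if r > 0) / n
        pct_down = 100.0 * sum(1 for r in responses if r < 0) / n
        return pct_up - pct_down

    # Ten hypothetical firms asked whether sales rose over the past quarter:
    sales_responses = [1, -1, -1, 0, -1, 1, 0, -1, -1, 1]
    print(diffusion_index(sales_responses))  # -20.0: decliners outnumber gainers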

From the perspective of small businesses in the Fifth District, the recessions in the early 1990s and 2000s were fairly similar in both depth and duration, which is consistent with both recessions being relatively mild in terms of GDP declines. For example, in both recessionary periods the index measuring the net percent of firms experiencing sales declines over the past three months expanded, with the percent of firms reporting declines exceeding the percent of firms reporting increases by 15 to 20 percentage
points. The index did not return to positive territory — indicating more small businesses had expanding rather than
contracting sales, a net expansion — until long after these
recessions officially ended.
Other measures are not included in these figures. Credit
problems for small businesses, for instance, only seemed to
become a serious problem during the 1990 to 1992 recessionary period. Also, small business optimism dropped sharply as
both recessions were approaching and then slowly recovered
over the subsequent two years. Yet, consistent with the
employment declines discussed above, the number of small
businesses that planned to decrease their employment never
exceeded the number planning to increase employment during either of these two recessions. (Only during the
recessions in the mid-1970s and early 1980s did decreases
ever exceed increases and then rarely for more than two
consecutive quarters.)
In both earlier recessions, small businesses took several
years before they were back to planning net job increases at
a pace comparable to the normal expansionary phase of the
business cycle. In both recessions, however, the upward path
toward recovery was clearly evident in the percent of firms
that were planning to increase their hiring.
In sharp contrast to these two earlier recessions, the
experience of small businesses in the Fifth District during
the current recession, as reflected in survey responses, was
by far the worst in the survey’s history. For example, the
index for change in sales declined by nearly twice as much as
in the two earlier recessions. Moreover, that decline continued for at least 10 quarters, through the end of 2009 (the most recent survey available), and quite likely continued to decline into 2010.
Indeed, the sales index suffered its worst decline on
record by a significant margin, as did such other measures as earnings, capital outlays, compensation, inventories,
and prices — all reaching record lows. While credit problems never seemed to get as bad as during the “credit
crunch” in the early 1980s, the index measuring the degree of
problems obtaining credit followed a pattern similar to the
early 1990s recession — starting well before the official
recession began and by the end of 2009 falling farther than
during that earlier recession. Not surprisingly, given the
severity of general business conditions, small business optimism declined further than during any previous recession on
record and, despite three quarters of recovery in 2009, was
still little better than during the low point reached in the
early 1980s.
The response to the deterioration in general business
conditions was for the percent of small businesses planning
to reduce hiring to exceed the percent planning to increase
hiring for the first time since the early 1980s. Indeed, small businesses in the Fifth District registered a net cut in planned
job hiring in five of the last seven quarters for which data
are available, the worst seven-quarter experience in the
survey’s history.

Small Business Expectations for 2010
By any measure, the current recession has been one for the
record books — at least for small businesses. And interestingly, despite widespread concerns about credit availability
as the recession took on the characteristics of a global financial crisis, small businesses rated weak sales, not credit
availability, as their most important problem by a wide
margin. While credit availability may be tight, until sales
begin to improve the need for small businesses to borrow
may be limited. Yet, as the most recent quarter (fourth
quarter of 2009) of hiring plans suggests, small businesses in
the Fifth District seem to be gearing up to start hiring again.
Their expectations for the recovery in 2010 seem to be
guarded, however. The index for expected sales (adjusted for
inflation) barely turned positive at the end of 2009. So, while
more and more small businesses are beginning to expect
sales to increase, the number of firms that are still expecting
decreases has continued to be substantial. Similarly, while
small business optimism began to improve at the end of
2008, the level of optimism at the end of 2009 remained far
below the lowest levels reached in either the early 1990s or
2000s, suggesting that small businesses remain overwhelmingly pessimistic. Still, the fact that many indexes in the
survey, including sales and business conditions, have turned
positive is encouraging. Indeed, the most encouraging sign
may be that the index of hiring plans turned slightly positive
in the closing months of 2009, although again mostly in
services industries.
RF


State Data, Q3:09

                                           DC         MD         NC         SC         VA         WV
Nonfarm Employment (000s)               703.8    2,509.6    3,879.1    1,809.5    3,617.8      740.1
  Q/Q Percent Change                      0.2       -0.8       -1.0       -0.6       -0.8       -1.0
  Y/Y Percent Change                     -0.8       -3.4       -6.2       -5.9       -3.9       -3.0

Manufacturing Employment (000s)           1.4      116.5      437.7      209.2      235.1       49.3
  Q/Q Percent Change                      0.0       -2.0       -3.0       -2.4       -2.1       -3.4
  Y/Y Percent Change                    -10.6       -8.8      -14.8      -13.5      -10.8      -12.2

Professional/Business Services
Employment (000s)                       146.8      381.8      455.8      200.4      634.2       59.0
  Q/Q Percent Change                     -0.5       -0.6       -1.1        0.9       -0.5       -0.8
  Y/Y Percent Change                     -3.6       -4.0       -9.1       -9.2       -3.8       -3.3

Government Employment (000s)            246.5      493.2      711.1      349.8      695.1      150.3
  Q/Q Percent Change                      2.8       -0.2       -0.3        0.5       -1.0       -0.8
  Y/Y Percent Change                      4.2        0.8        0.4        1.2       -0.1        1.8

Civilian Labor Force (000s)             331.5    2,979.3    4,526.3    2,178.4    4,171.0      797.4
  Q/Q Percent Change                      0.1       -0.7       -0.6       -0.3       -0.5       -0.8
  Y/Y Percent Change                     -1.1       -1.6       -1.1        1.4        0.9       -1.1

Unemployment Rate (%)                    10.8        7.2       10.9       12.1        6.9        8.6
  Q2:09                                   9.7        7.0       10.9       11.7        6.8        7.8
  Q3:08                                   6.9        4.6        6.5        7.2        4.0        4.3

Real Personal Income ($Mil)          36,206.7  251,608.1  294,457.1  132,167.8  315,911.8   53,399.7
  Q/Q Percent Change                     -0.7       -0.5       -1.0       -0.9       -0.7       -1.4
  Y/Y Percent Change                      1.9        1.5       -0.7       -0.8        0.9        1.9

Building Permits                          163      2,412      9,360      4,365      5,406        735
  Q/Q Percent Change                    365.7       -5.6       -5.7        6.7       -6.6       73.3
  Y/Y Percent Change                      7.2      -36.8      -35.8      -34.9      -14.3      -15.8

House Price Index (1980=100)            564.2      450.2      332.1      336.3      424.1      226.8
  Q/Q Percent Change                     -0.4       -2.2       -1.6       -2.5       -1.8       -1.6
  Y/Y Percent Change                     -3.8       -7.3       -2.0       -2.1       -3.9       -1.3

Sales of Existing Housing Units (000s)    9.2       75.2      146.8       74.0      126.4       29.2
  Q/Q Percent Change                     21.1       12.6       18.4       10.1       14.5       19.7
  Y/Y Percent Change                     27.8       15.3       -4.4       -7.5        2.9       14.1

NOTES: Nonfarm Payroll Employment, thousands of jobs, seasonally adjusted (SA) except in MSAs; Bureau of Labor Statistics (BLS)/Haver Analytics. Manufacturing Employment, thousands of jobs, SA in all but DC and SC; BLS/Haver Analytics. Professional/Business Services Employment, thousands of jobs, SA in all but SC; BLS/Haver Analytics. Government Employment, thousands of jobs, SA; BLS/Haver Analytics. Civilian Labor Force, thousands of persons, SA; BLS/Haver Analytics. Unemployment Rate, percent, SA except in MSAs; BLS/Haver Analytics. Building Permits, number of permits, NSA; U.S. Census Bureau/Haver Analytics. Sales of Existing Housing Units, thousands of units, SA; National Association of Realtors®.
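The Q/Q and Y/Y rows above are ordinary percentage changes from the prior quarter and from the same quarter a year earlier. A minimal sketch in Python follows; the interior quarterly values are invented, and only the final level echoes the West Virginia nonfarm employment row:

    # Percent-change arithmetic behind the Q/Q and Y/Y rows.

    def pct_change(current, previous):
        """Percentage change from `previous` to `current`."""
        return 100.0 * (current - previous) / previous

    # Hypothetical quarterly series (thousands of jobs), Q3:08 through Q3:09.
    employment = [763.0, 758.4, 751.9, 747.6, 740.1]

    print(f"Q/Q: {pct_change(employment[-1], employment[-2]):.1f}%")  # -1.0%
    print(f"Y/Y: {pct_change(employment[-1], employment[0]):.1f}%")   # -3.0%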


[Charts: Economic trends, First Quarter 1999 - Third Quarter 2009
- Nonfarm Employment, change from prior year: United States vs. Fifth District
- Unemployment Rate: United States vs. Fifth District
- Real Personal Income, change from prior year: United States vs. Fifth District
- Nonfarm Employment, Metropolitan Areas, change from prior year: Charlotte, Baltimore, Washington
- Unemployment Rate, Metropolitan Areas: Charlotte, Baltimore, Washington
- Building Permits, change from prior year: Fifth District vs. United States
- FRB-Richmond Manufacturing Composite Index
- FRB-Richmond Services Revenues Index
- House Prices, change from prior year: Fifth District vs. United States]

NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building permits: U.S. Census Bureau, http://www.census.gov.
House prices: Federal Housing Finance Agency, http://www.fhfa.gov.
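Note 1 above defines the manufacturing composite as a weighted average of three diffusion indexes. The sketch below shows that arithmetic; the component readings and the equal weights are placeholders of mine, since the note does not state the actual weights:

    # Weighted composite of diffusion-index readings (hypothetical values).

    def composite_index(readings, weights):
        """Weighted average of diffusion indexes; weights must sum to 1."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9
        return sum(readings[name] * weights[name] for name in readings)

    readings = {"shipments": -12.0, "new_orders": -8.0, "employment": -20.0}
    weights = {"shipments": 1 / 3, "new_orders": 1 / 3, "employment": 1 / 3}

    print(round(composite_index(readings, weights), 1))  # -13.3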


Metropolitan Area Data, Q3:09

                                Washington, DC   Baltimore, MD   Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)              2,391.8         1,268.6          97.3
  Q/Q Percent Change                      -0.4            -1.0          -0.6
  Y/Y Percent Change                      -2.0            -3.6          -3.6
Unemployment Rate (%)                      6.1             7.7           9.3
  Q2:09                                    6.1             7.5           9.7
  Q3:08                                    4.0             4.9           5.2
Building Permits                         2,802           1,102           208
  Q/Q Percent Change                      -2.1             4.6          18.9
  Y/Y Percent Change                     -19.0           -32.0         -26.0

                                Asheville, NC    Charlotte, NC   Durham, NC
Nonfarm Employment (000s)                165.5           798.9         280.6
  Q/Q Percent Change                      -1.2            -1.7          -1.8
  Y/Y Percent Change                      -6.0            -6.7          -3.5
Unemployment Rate (%)                      8.7            12.0           8.0
  Q2:09                                    9.2            11.9           7.9
  Q3:08                                    5.2             6.8           5.2
Building Permits                           304           1,994           398
  Q/Q Percent Change                      -6.2            -4.5         -34.3
  Y/Y Percent Change                     -37.1           -24.5         -26.6

                                Greensboro-High Point, NC   Raleigh, NC   Wilmington, NC
Nonfarm Employment (000s)                339.2           495.8         138.4
  Q/Q Percent Change                      -1.5            -1.1          -2.1
  Y/Y Percent Change                      -7.1            -5.0          -5.9
Unemployment Rate (%)                     11.5             8.8           9.8
  Q2:09                                   11.6             8.8           9.9
  Q3:08                                    6.9             5.2           5.9
Building Permits                           550           1,332           584
  Q/Q Percent Change                     -17.8           -14.1         -25.5
  Y/Y Percent Change                     -19.0           -66.8         -39.6

                                Winston-Salem, NC   Charleston, SC   Columbia, SC
Nonfarm Employment (000s)                206.7           282.8         344.3
  Q/Q Percent Change                      -1.0            -1.5          -0.9
  Y/Y Percent Change                      -4.9            -5.2          -5.1
Unemployment Rate (%)                     10.0             9.7           9.3
  Q2:09                                   10.2             9.4           9.1
  Q3:08                                    6.3             6.2           6.5
Building Permits                           329             887           811
  Q/Q Percent Change                     -22.0            -3.1          -5.9
  Y/Y Percent Change                      -6.8           -18.6         -41.0

                                Greenville, SC   Richmond, VA   Roanoke, VA
Nonfarm Employment (000s)                292.0           598.1         153.4
  Q/Q Percent Change                      -1.4            -2.0          -1.9
  Y/Y Percent Change                      -7.3            -4.9          -5.3
Unemployment Rate (%)                     10.4             7.8           7.4
  Q2:09                                   10.2             7.9           7.5
  Q3:08                                    6.4             4.5           4.1
Building Permits                           397             974           117
  Q/Q Percent Change                       4.5            20.0          11.4
  Y/Y Percent Change                     -33.4           -13.7         -17.0

                                Virginia Beach-Norfolk, VA   Charleston, WV   Huntington, WV
Nonfarm Employment (000s)                739.8           147.7         114.8
  Q/Q Percent Change                      -1.0            -1.2          -1.5
  Y/Y Percent Change                      -4.3            -3.5          -3.3
Unemployment Rate (%)                      6.8             7.4           8.3
  Q2:09                                    7.0             7.4           8.1
  Q3:08                                    4.4             3.3           5.0
Building Permits                         1,188              47             7
  Q/Q Percent Change                     -14.3            23.7         -22.2
  Y/Y Percent Change                      -8.7           -68.9         -12.5

For more information, contact Sonya Ravindranath Waddell at (804) 697-2694 or e-mail Sonya.Waddell@rich.frb.org

OPINION
Deregulation Should Not Be Blamed for the Financial Crisis
BY JOHN A. WEINBERG

The financial crisis, quite understandably, has motivated a broad re-examination of our approach to
financial regulation. Ideas for regulatory improvements have come from academics, the financial industry,
and Congress. Many commentators have argued that
regulatory shortcomings leading up to the crisis were
the direct result of deregulation implemented over the
preceding decades. One particular step in this process of
deregulation was the repeal of the Glass-Steagall Act.
Glass-Steagall became law in June 1933 as part of the
legislative response to the Great Depression. This law
prohibited the investment banking activities of underwriting and dealing in securities from being conducted in
the same companies as the commercial banking activities of
taking deposits and making loans. The motivation for this
law was a widespread perception that the combination of
those activities had led to conflicts of interest which resulted, for instance, in questionable securities being sold to
investors so that banks’ borrowers could continue to service
their bank loans. While this separation became weaker over
time, many point to the Financial Services Modernization
Act of 1999, better known as the Gramm-Leach-Bliley Act,
as the action that “repealed” Glass-Steagall.
The crisis of 2007 and 2008 also involved the interaction
of securities and banking, although in a somewhat different
form. Securities created by pooling loans, particularly mortgage-backed securities, were at the heart of the crisis. This
process of securitization includes activities that resemble
both traditional commercial banking (making loans) and traditional investment banking (underwriting and dealing in
securities). While many of the riskiest mortgages in the
securitization market were originated by lenders outside the
commercial banking system, the largest commercial banks
still suffered significant losses on subprime loans. These
banks suffered losses because they had provided implicit or
explicit commitments of liquidity support to off-balance
sheet entities involved in subprime loan securitization.
In retrospect, it has become apparent that this process
led to an overexpansion of risks related to mortgages and
other lending. Would this expansion have been possible
before the weakening of the separation between investment
and commercial banking? At the time Gramm-Leach-Bliley
was passed, many of the securities activities that were tied
most closely to the current financial crisis were already
permissible for banks and bank-affiliated companies. So, in
this sense, the legislation did not significantly alter the
powers the banks had. Of course, one might respond that
before Gramm-Leach-Bliley the separation created by
Glass-Steagall had already been weakened considerably. This
weakening had occurred due to regulators' rulemakings and court decisions. As a result, during the 1980s and 1990s commercial banks were offering many investment banking
services and investment banks were offering many commercial banking services. But even so, it was not so much the
mixing of activities that led to the problems in the expansion and management of risk. Rather, it was the ways in
which some large financial firms, whether in commercial or
investment banking, approached their exposures to an event
that, at the time, looked relatively unlikely.
The securitization of assets like mortgages brings with it
some benefits of risk diversification by bundling the credit
extended to a large number of borrowers. The risk that remains
is aggregate, undiversifiable risk, like that associated with a
change in interest rates or a broad decline in the value of real
estate. The latter, in fact, turned out to be the risk that
imperiled our financial system.
Markets are usually able to allocate large aggregate risks
to those firms best equipped to hold those exposures. One
factor that might give an individual firm a comparative
advantage at holding such risks is access to a reliable source
of emergency liquidity. The financial safety net that includes
deposit insurance and access to Fed lending provided exactly that, enabling banks to offer implicit and explicit commitments of liquidity to issuers
of mortgage-backed financial instruments. When the markets for those instruments turned sour, the risks came back
onto the books of the banks. Investment banks, especially
large firms which may have benefited from a presumption
that they would receive official support in a crisis, may have
similarly been advantaged when holding exposures to seemingly unlikely bad events, such as a decline in home prices.
This dynamic of aggregate risks being concentrated in
the hands of large institutions is independent of which firms
are allowed to engage in which activities. It comes instead
from the way in which emergency financial support
is provided by the public sector. The existence of such
support creates the need for regulatory oversight and, in
particular, for regulatory attention to aggregate risks that
tend to be concentrated in large institutions, the failure of
which can produce financial panic.
The regulatory agencies are undertaking efforts to
improve this aspect of oversight, and some components of
reform proposals are aimed at aggregate or systemic risks.
This is an important direction for improvements to take.
Yet, improvements to the way we constrain the financial
safety net are needed too. These changes would make a
greater contribution to financial stability than rebuilding
the Glass-Steagall wall.
RF
John A. Weinberg is senior vice president and director of
research at the Federal Reserve Bank of Richmond.

NEXTISSUE

Does the Deficit Matter?
There are many reasons why policymakers are concerned about the national debt. Economists, however, have differing opinions over how important deficit spending is to macroeconomic outcomes.

Product Recalls
When a defective product is recalled, whether through government order or voluntary action, there is often reputational damage to the manufacturer. We'll explore the economics of recalls and ponder whether market incentives are generally robust enough to encourage companies to police themselves.

Charitable Giving
Business trends sometimes have little effect on how much money people give to charity. In some cases, the type of charity matters just as much as the level of discretionary income available, as the recent downturn demonstrates.

Broadband's Last Mile
Fast Internet access has become as ubiquitous as telephone service in most parts of the country. For those regions where access isn't as prevalent, however, it's an open question how best to serve those potential customers.

Interview
Justin Wolfers of the Brookings Institution and the University of Pennsylvania discusses how prediction markets may be able to help businesses and policymakers produce more accurate forecasts.

Federal Reserve
For much of its history, the United States operated on a gold standard. What did this mean for the economy and why was that system abandoned?

Economic History
The Commerce Clause of the U.S. Constitution has been used to justify numerous economic regulations over the last century. We'll tell the story of this often controversial provision and why some legal scholars and economists argue that a reinterpretation is due.

Jargon Alert
How can you tell when a sluggish economy is about to turn the corner? Look to the "leading indicators."

Visit us online: www.richmondfed.org
• To view each issue's articles and Web-exclusive content
• To add your name to our mailing list
• To request an e-mail alert of our online issue posting


Winter 2009 (vol. 13, no. 1)
Cover Story: Know When to Fold 'Em: How the corporate bankruptcy system benefits and hinders the economy
Federal Reserve: Last Stop Lending
Interview: George Selgin, University of Georgia

Spring 2009 (vol. 13, no. 2)
Cover Story: Reforming the Raters: Can regulatory reforms adequately realign the incentives of credit rating agencies?
Federal Reserve: Capital Cushions
Interview: Allan Meltzer, Carnegie Mellon University

Summer 2009 (vol. 13, no. 3)
Cover Story: Measuring Quality of Life: How should we assess improvements to our standard of living?
Federal Reserve: Role of the Beige Book
Interview: Timur Kuran, Duke University

Fall 2009 (vol. 13, no. 4)
Cover Story: The Price is Right? Has the financial crisis provided a fatal blow to the efficient market hypothesis?
Federal Reserve: The Evolution of Fed Independence
Interview: George Kaufman, Loyola University Chicago