
THIRD QUARTER 2014

FEDERAL RESERVE BANK OF RICHMOND

Birds of a Feather
Does the HAWK-DOVE distinction still matter in the modern Fed?

High School
Dropout Dilemma

Raise the
Minimum Wage?

The Geography of
Unemployment

VOLUME 18, NUMBER 3
THIRD QUARTER 2014

FEATURED ARTICLE

 3  Birds of a Feather
    Does the hawk-dove distinction still matter in the modern Fed?

12  Raise the Wage?
    Some argue that there’s no downside to a higher minimum wage, but others say the poor would be hit hardest

17  How the Geography of Jobs Affects Unemployment
    Why job accessibility is limited for some groups and what it means for antipoverty policies

20  The Dropout Dilemma
    Why do kids drop out of high school, and how can we help them stay?

DIRECTOR OF RESEARCH
John A. Weinberg

EDITORIAL ADVISER
Kartik Athreya

EDITOR
Aaron Steelman

SENIOR EDITOR
David A. Price

MANAGING EDITOR/DESIGN LEAD
Kathy Constant

STAFF WRITERS
Renee Haltom
Jessie Romero
Tim Sablik

EDITORIAL ASSOCIATE
Lisa Kenney

CONTRIBUTORS
Jamie Feik
Ann Macheras
Wendy Morrison
Frank Muraca
Karl Rhodes

DESIGN
Janin/Cliff Design, Inc.

Econ Focus is the economics magazine of the Federal Reserve Bank of Richmond. It covers economic issues affecting the Fifth Federal Reserve District and the nation and is published on a quarterly basis by the Bank’s Research Department. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia.

DEPARTMENTS

 1  President’s Message/Financial Learning is a Lifelong Process
 2  Upfront/Regional News at a Glance
 7  Policy Update/Cracking Down on Fraud?
 8  Jargon Alert/Disinflation
 9  Research Spotlight/The Value of High School Employment
10  The Profession/When Economists Make Mistakes
11  Around the Fed/Limiting Bank Size is not the Answer
24  Economic History/Free to Speculate
27  Book Review/Secrets of Economics Editors
28  District Digest/The Rising Tide of Large Ships
36  Opinion/The Long View of the Labor Market

WEB EXCLUSIVE: Interview with Dani Rodrik

Published quarterly by
the Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org
www.twitter.com/
RichFedResearch
Subscriptions and additional
copies: Available free of
charge through our website at
www.richmondfed.org/publications or by calling Research
Publications at (800) 322-0565.
Reprints: Text may be reprinted
with the disclaimer in italics
below. Permission from the editor
is required before reprinting
photos, charts, and tables. Credit
Econ Focus and send the editor a
copy of the publication in which
the reprinted material appears.
The views expressed in Econ Focus
are those of the contributors and not
necessarily those of the Federal Reserve Bank
of Richmond or the Federal Reserve System.
ISSN 2327-0241 (Print)
ISSN 2327-025X (Online)

PRESIDENT’S MESSAGE

Financial Learning is a Lifelong Process

Individuals today face a broad array of difficult financial
choices, such as deciding how to pay for college or a
home or calculating how much to save for retirement.
Yet surveys reveal that many consumers lack the confidence
and knowledge to make these financial decisions.
Efforts to provide economic and financial education
have expanded in recent years. Nearly all states have made
economics and personal finance part of their K-12 education standards. Here in the Fifth District, North Carolina,
Virginia, and West Virginia require high school students
to take a personal finance class before graduating. National
organizations like the Council for Economic Education also
provide tools for financial educators and students.
Research suggests that knowledge of core financial
concepts, such as how to calculate compound interest, is
associated with an individual’s ability to navigate tough financial choices. For example, those who are able to make interest
rate calculations are much more likely to save for retirement
and are less likely to have difficulty paying off debt.
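For reference, the compound-interest calculation mentioned above is the standard formula (not spelled out in the original article):

```latex
A = P\,(1 + r)^n
% A: final balance
% P: principal
% r: interest rate per compounding period
% n: number of periods
```

For example, $1,000 saved at 5 percent annual interest grows to roughly $1,629 after 10 years, since 1.05 raised to the 10th power is about 1.63.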
Providing this information to students from a young age
helps build a foundation for the decisions they will make
later in life. But it is important to also recognize that financial education is a lifelong process that requires ongoing
attention and updating. The knowledge students receive
in school may be far removed from the time when they are
faced with key financial decisions, and their circumstances
likely will have changed during this period.
A sound understanding of fundamental economic concepts is critically important for making informed financial
choices. At their core, many important financial decisions
are about economic principles. Being able to calculate interest may help an individual understand the cost of a home
mortgage, but without an understanding of opportunity
cost — the value of an alternative choice that a person has
forgone — it is difficult to fully consider the merits of buying
a home versus renting. Here at the Richmond Fed, we’ve
developed resources to help individuals learn core economic
and financial concepts, which you can find by visiting the
Education page of our website, richmondfed.org/education.
Another reason to focus on core skills such as these is
that each person is different. Financial education designed
only to guide students toward “correct” choices presupposes
that some decisions, like taking out a high-interest loan, are
a mistake. But it is difficult for an outside observer, such as
an educator or policymaker, to know enough to determine
when another individual is making an unwise choice. For
example, someone with little savings may find that a short-term high-interest loan is the best option for fixing a car if
the car is that person’s only means of reaching work.
Recognizing that financial knowledge decays over time
and that people are different can inform how we approach

financial education for working adults. In addition to
educating Americans during
their school years, we should
focus on providing information to individuals about
major financial decisions as
they are preparing to make
those decisions. When consumers buy goods like a
microwave or television, they
have easy access to all the
information needed to make
a decision. Also, the consequences of making what later
appears to be a poor choice are not necessarily very large.
In contrast, major financial transactions, like purchasing a
home or going to college, require more specialized knowledge that is not so easily obtained. And the consequences
of those choices can be much more severe and long-lasting.
When people are making such important decisions they
are especially motivated to learn about the choices they face.
Research has found that providing even brief training during
these “teachable moments” can be as effective at improving
decision-making as more extensive training undertaken in
the months prior.
Regulators can also help by requiring clear and explicit
disclosure of significant information in financial contracts.
Here, simplicity and concision are key. Consumer testing
conducted by the Fed after the financial crisis revealed that
contracts like home mortgages often could be written in a
way that was more easily understood. Presenting the most
significant terms of a contract explicitly and at the beginning, for instance, would help individuals to make more
informed decisions.
In short, financial education efforts should avoid a narrow
prescriptive approach based on the idea that policymakers
know what’s best for everyone. Instead, we should focus on
providing the tools that assist individuals in choosing the
best options for themselves. In addition, timely information
about complex and consequential transactions can help
households better understand their choices when faced with
major decisions.	
EF

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND

ECON FOCUS | THIRD QUARTER | 2014

UPFRONT

Regional News at a Glance

BY TIM SABLIK

MARYLAND — Maryland oysters are making a comeback. In 2009, the state
lifted barriers to oyster farming. Since then, Maryland has issued leases covering
thousands of acres of water. But some watermen complain that the farms disrupt
fishing and have called on the state to limit licenses. Others have decided to go
into farming themselves, taking advantage of state grants and loans that support
such transitions.

NORTH CAROLINA — On Oct. 14, the U.S. Supreme Court heard arguments
in the case of North Carolina State Board of Dental Examiners v. Federal Trade
Commission (FTC). The dental board issued cease-and-desist orders to teeth
whitening services operated by non-dentists. The FTC argues that the board’s
actions violate antitrust laws, while the board contends that it is immune from
those laws. Because six of the eight board members must be practicing dentists,
critics argue that the board has an incentive to restrict competition.

SOUTH CAROLINA — Boeing Co. secured a lease for a new research and
development center in North Charleston, S.C., in September. The center will
employ between 300 and 400 workers. Separately, the company announced a new
agreement with Japan’s Toray Industries, which will supply carbon fiber for two of
Boeing’s passenger jet models. Toray will spend $865 million on a new carbon fiber
plant in South Carolina.

VIRGINIA — Worldwide construction firm Bechtel Corp. plans to relocate as
many as 1,100 employees from Frederick, Md., to its office in Reston, Va., in 2015.
The move is part of the $39.4 billion company’s global restructuring effort. Bechtel
previously moved 625 jobs from Frederick to Reston in 2011.

WASHINGTON, D.C. — On Nov. 4, 70 percent of District voters approved
a ballot initiative to legalize marijuana. The measure would allow residents and
visitors to possess and grow small quantities of marijuana. But Congress’ omnibus
spending bill approved in December includes a rider blocking the use of federal
funds to enact marijuana legalization, placing the future of the ballot initiative in
question.

WEST VIRGINIA — In September, a federal bankruptcy judge approved a
$2.9 million settlement against chemical producer Freedom Industries. The
settlement benefits residents whose water was contaminated by a chemical spill in
January. Freedom declared bankruptcy shortly after the spill. Other creditors have
argued that the settlement will prevent them from recovering anything on their
bankruptcy claims.


FEDERAL RESERVE

Birds of a Feather

BY TIM SABLIK

Fed watching can seem a lot like
bird watching. “Behind the Fed’s
Dovish Turn on Rates,” reads a
recent Wall Street Journal headline; “Fed
Hawk Down,” reads the Washington Post
announcement of the retirement of a
Fed bank president. “Hawk” and “dove”
have commonly been used by the financial press to describe Fed policymakers
since the 1980s, and the term “inflation
hawk” can be found as far back as the
late 1960s. Both birds have even longer
traditions as wartime metaphors. The
dove has been a symbol of peace going
back to biblical times, and leading up
to the War of 1812, American politicians who advocated confrontation
with Great Britain were labeled “War
Hawks.” But what do these terms have
to do with monetary policy?
Hawk and dove are often used to
describe a divide over the Fed’s dual
mandate of promoting maximum
employment and price stability. Hawks
are said to worry more about price
stability and favor relatively tighter
monetary policy to keep inflation in
check. Doves are viewed as more open
to the possibility that monetary policy
can keep unemployment low and more
inclined to use accommodative policy
to attempt to do so.
The reason for the perceived divide
is that the Fed cannot always achieve
both objectives at the same time, at least
in the short run. Expanding the money
supply to boost aggregate demand during
a recession can help lower unemployment, but it also can create inflationary
pressure. By the same token, tightening can reduce inflation but it can also
raise unemployment, as it did during the
recession of 1981-1982.
In the past, Fed officials disagreed
about the proper focus and targets for
monetary policy. But has that debate
changed today? In 2012, the Fed
adopted an explicit long-run inflation
goal of 2 percent, suggesting a consensus on the goal of price stability. In the
wake of that decision, then-president of
the Cleveland Fed Sandra Pianalto commented that the bird labels had become
obsolete. “We now have agreement” on
inflation, she said. “So I don’t think the
titles of hawks and doves are useful.”
Have Fed officials all become birds
of a feather now? Dissents at Federal
Open Market Committee (FOMC)
meetings in recent years would suggest otherwise. Indeed, while “hawks
versus doves” is a simplification of the
disagreements at the Fed, the terms
do serve to highlight important differences in policymakers’ economic forecasts and their confidence in the Fed’s
ability to influence the future path of
the economy with monetary policy.

Inflation and Unemployment:
A Tradeoff?
Economists have long understood that
inflation and unemployment tend to
move in opposite directions. But the
idea that policymakers could exploit this tradeoff to target
specific levels of unemployment came to prominence in
the late 1950s following a paper by New Zealand economist
A.W.H. Phillips. Phillips traced the history of wages and
unemployment in the United Kingdom over the previous
century and found an inverse relationship — later dubbed
the Phillips curve.
Massachusetts Institute of Technology economists Paul
Samuelson and Robert Solow found a similar pattern for
prices and unemployment in the United States. In a 1960
paper, Samuelson and Solow produced a Phillips curve that
they presented as a “menu of choice between different
degrees of unemployment and price stability.” Although
Samuelson and Solow cautioned that attempting to exploit this
tradeoff could very likely shift the curve in the long run, policymakers in the 1960s latched onto the idea of manipulating the
tradeoff to achieve maximum employment.
President Kennedy’s economic team articulated
this belief in the 1962 Economic Report of the President:
“Stabilization policy — policy to influence the level of aggregate demand — can strike a balance between [price stability
and maximum employment] which largely avoids the consequences of a failure in either direction.” These economists
recognized that policies designed to stimulate aggregate

demand to lower unemployment would generate inflationary
pressures, but they were optimistic that they could respond
before inflation climbed too high.
Some economists at the time also went as far as to argue
that policymakers should seek the lowest unemployment
rate possible, even if it meant higher inflation. They viewed
the costs of inflation as small and confined to the wealthy,
compared with unemployment, which had a widespread
effect. Leon Keyserling, an economist who served as chairman of President Truman’s Council of Economic Advisers
and as an economic consultant to members of Congress
from 1953 to 1987, wrote in a 1967 journal article: “It is utterly
unconscionable that we should ask millions of unemployed
and their families to be the insurers of the affluent against
somewhat higher prices.”
But by the 1970s, steadily rising prices had become a concern for more than just the wealthy. A 1974 Gallup poll reported
that 81 percent of Americans cited the high cost of living due
to inflation as the country’s biggest problem. Moreover, episodes of “stagflation” — simultaneously rising unemployment
and inflation — further called into question the ability of
policymakers to reliably exploit the Phillips curve tradeoff.
Economists and Fed officials largely agreed that double-digit
inflation was proving costly, but they disagreed over how

Richmond’s Hawkish Tradition
In the Fed’s flock, Richmond Fed President Jeffrey Lacker is
often counted among the hawks by outside observers. While
he joked in 2013 that he wouldn’t mind being a different bird,
such as one of the great blue herons he sees flying outside his
office window, it’s not hard to see why the hawk label has
stuck. In 2012, he dissented at every FOMC meeting against
the Fed’s accommodative actions. In those dissents, he
expressed concern that the Fed might fall behind on its price
stability mandate and also voiced opposition to the purchase
of instruments like mortgage-backed securities, which he
argues constitutes fiscal rather than monetary policy since it
directs credit to specific sectors of the economy. Ultimately,
he argued, that could jeopardize the Fed’s monetary policy
independence and thus its ability to keep inflation low — a
hawkish argument indeed.
Lacker is certainly not the first Richmond Fed president
to object to the Fed’s conduct of monetary policy. He currently ranks third in dissents by bank presidents, immediately
followed by Robert Black at number four. Black was the
first Ph.D. economist to serve as Richmond Fed president,
starting in 1973. That decade was marked by vigorous debate
among monetary policymakers about the cause of mounting
inflation. Black drew from his own understanding of economics as well as the work of Richmond’s growing staff of research
economists (many of whom had a monetarist background) to
argue that the main cause of inflation was the growth of the
money supply. It was a view that was not widely held at the
time, and Black’s calls for substantial monetary tightening to
rein in double-digit inflation put him at odds with members of
the FOMC who favored a lighter touch. His stance was given
credence by the disinflation that occurred through monetary
tightening under Chairman Paul Volcker, and today the idea
that inflation is largely a monetary phenomenon is part of the
Fed’s statement of principles.
Richmond’s focus on price stability continued under
Black’s successor. Alfred Broaddus became president in 1993,
having served as a key economic adviser to Black. Although
inflation had fallen substantially by that time, Broaddus
was concerned that the Fed might become complacent and
lose the credibility on inflation that it had fought so hard to
obtain. He maintained Richmond’s hawkish tradition and
was a vocal proponent of a singular inflation target, or at the
very least a numeric inflation goal, as a way to anchor the
public’s expectations that the Fed would keep inflation low.
While the Fed has not adopted the former, it did announce
a long-run inflation goal of 2 percent in 2012.
In a 2012 interview, Lacker noted that the record left
by Black and Broaddus was “a real inspiration” for him.
Through speeches and dissents, he has often returned to the
theme of price stability and the “hawkishness” with which
the Richmond Fed has come to be associated. Indeed,
Lacker recalled that when he dissented for the first time in
2006, then-Chairman Alan Greenspan told him: “I would’ve
been disappointed if you hadn’t.” 	
— Tim Sablik

much the Fed could or should do to bring it down. Some, like
Chairman Arthur Burns, argued that inflation was driven by
other factors in the economy and that using monetary policy
to combat it would result in even higher unemployment.
After the experience of the 1970s, as well as advancements in theory suggesting that expectations are an important determinant of inflation, economists now generally
agree that there is no long-run tradeoff between inflation
and unemployment. But there is still disagreement on how
much the Fed can do to bring unemployment down in the
short run. “It’s a debate that has continued over time and
still exists today,” says David Wheelock, vice president and
deputy director of research at the St. Louis Fed.

Monetary Policy Goals	
At the crux of that debate is the Fed’s ability to predictably
affect unemployment in the short run. Economic theory
suggests that when the economy is operating below its
potential, monetary policy can stimulate growth without
generating inflationary pressure. But economists are generally skeptical that we can accurately predict the economy’s
potential, and they differ on the cost of guessing wrong.
Stanford University economist John Taylor reframed
this debate in 1993 when he proposed a mathematical
formulation for how central bankers set nominal interest
rates. Under this “Taylor rule,” monetary policymakers
respond to gaps in both inflation and employment targets.
Policymakers assign weights to each of these responses, and
while Taylor proposed that the weights be equal, it is clear
that not everyone at the Fed agrees.
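In Taylor’s original 1993 formulation, the rule can be written as follows (with equal weights of 0.5 on the two gaps, and both the equilibrium real rate and the inflation target set at 2 percent):

```latex
i_t = \pi_t + 0.5\,(\pi_t - 2) + 0.5\,y_t + 2
% i_t:   nominal federal funds rate (percent)
% \pi_t: inflation over the previous four quarters (percent)
% y_t:   output gap, the percent deviation of real GDP from potential
% the two 2s: Taylor's assumed equilibrium real rate and inflation target
```

In these terms, a hawk behaves as if the weight on the inflation gap were larger than 0.5 and the weight on the output gap smaller; a dove, the reverse.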
“Hawks argue that monetary policy can affect the unemployment rate but not as reliably as we would like,” says
Wheelock. “So the best that you can expect from monetary
policy is price stability.”
This suggests that hawks assign a larger weight to monetary policy responses to inflationary gaps, but it doesn’t mean
that they assign no weight to employment gaps. Instead,
hawks argue that the Fed can best achieve maximum employment by focusing on price stability. William Poole, who
served as president of the St. Louis Fed from 1998 to 2008
and was labeled a hawk, captured this idea in the title of a
1999 speech: “Inflation Hawk = Employment Dove.”
“I put inflation as the Fed’s primary objective, but by no
means did I put employment as a nonobjective,” says Poole.
“The reason is that once you lose on the inflation front, then
you lose the possibility of success on the growth objective. I
think the 1970s demonstrated that.”
These views have been echoed by other hawks, such as
Philadelphia Fed President Charles Plosser. In an Oct. 16
speech, Plosser noted that economists do not know how
to “confidently determine whether the labor market is fully
healed or when we have reached full employment.” Waiting
to raise interest rates until it is clear the labor market has
fully recovered risks falling behind on inflation, he said.
Doves, on the other hand, tend to be more willing to risk
temporarily falling behind on inflation. “If you’re uncertain

about the natural rate of unemployment but you have a very
high weight on policy responses to unemployment, that
means you’re more willing to test the waters,” says Frederic
Mishkin, a professor of economics at Columbia University
Business School who served on the Board of Governors from
2006 to 2008 and was often labeled a dove. “If you overshoot
a little bit and a little inflation occurs but you lowered unemployment, then doves see that as a good thing.”
Recently, some Bank presidents have argued that the
Fed should be willing to tolerate overshooting the 2 percent inflation goal because inflation has been consistently
below that target in the last few years. In an Oct. 13 speech,
Chicago Fed President Charles Evans remarked, “One could
imagine moderately above-target inflation for a limited time
as simply the flip side of our recent inflation experience …
hardly an event that would impose great costs on the economy.” He proposed in 2012 that the Fed should commit to
keeping interest rates near zero even after inflation reaches
2.5 percent or 3 percent. This was dubbed the “Evans rule” —
a “dovish” alternative to the Taylor rule.
But hawk and dove are used to describe more than just
policymaker preferences and risk tolerances. They are also
used to describe how FOMC members vote on changes
to the federal funds rate, the Fed’s primary policy tool.
Committee members who favor higher rates or raising
rates sooner are labeled hawks, and vice versa for doves. In
this context, the boundaries between hawks and doves are
much more nebulous, as such decisions depend heavily on
ever-changing forecasts of economic growth.

Looking Ahead
It is tempting to think of hawks as always favoring higher
interest rates and doves always favoring lower. But Fed
officials base their recommendations in large part on their
expectations of future economic growth, and those expectations change as new information becomes available. This
is particularly the case in times of economic uncertainty,
such as the run-up to the financial crisis of 2007-2008. In
the Aug. 7, 2007, FOMC meeting, Poole noted that markets
were “very skittish,” but he and others recommended keeping the federal funds rate “steady.” Two days later, however,
Poole had reassessed the need for action. At his request,
St. Louis proposed lowering the discount rate — the rate it
charges on loans to individual banks.
“I was a hawk, but I was a hawk who was ready to respond
to changing conditions,” says Poole.
Indeed, there are many instances of policymakers alternating between dovish and hawkish recommendations based
on their forecasts of economic conditions, making it difficult
to pin just one label to any Fed official. For example, when
Janet Yellen first came to the Board in 1994, unemployment
was falling, and by 1996 it had fallen below what many economists considered to be the natural rate. Yellen warned that
the Fed needed to be concerned about inflationary pressure and should consider raising rates — a hawkish move.
But during the recession of 2007-2009, Yellen faced very
different economic conditions. Unemployment was elevated and inflation was low, and Yellen supported the Fed’s low
rates and quantitative easing. This prompted the financial
press to label her a dove when she was nominated to succeed
Ben Bernanke as chair.
More recently, Minneapolis Fed President Narayana
Kocherlakota has been perceived as switching sides. In
September 2011, he dissented against the Fed’s efforts to
lower long-term interest rates by purchasing bonds with
long maturities (a procedure dubbed “Operation Twist”).
Kocherlakota explained that inflation was approaching the
Committee’s stated goal of 2 percent and that the Fed
should not risk diminishing its credibility to keep inflation
on target by pursuing further expansionary policies. But in
2013, Kocherlakota noted that employment and inflation
had both grown more slowly than he had previously expected. Given this new information, he began advocating more
accommodative monetary policy to return inflation to the
Fed’s goal of 2 percent, along the lines of the Evans rule.
Poole says that differences in forecasts, rather than disagreements about the Fed’s long-run objectives, are what
account for much of the debate at the Fed today. “I think
there has been a substantial convergence of views on what
the objectives of monetary policy ought to be,” he says. “The
disagreement between hawks and doves today is more a matter of the judgment you bring to the table about the state of
the economy and what risks you want to run.”
Still, forecasts and preferences for the focus of monetary
policy often go hand in hand. “Your forecasts are tinted by
the glasses through which you view the world,” says Mishkin.

A Broader Debate
The perception of the Fed as a feuding flock may also arise
from the fact that debate among monetary policymakers has
become much more public in the last 20 years. Prior to 1994,
FOMC decisions were not made public until years after
the fact. Over the same period, bank presidents have also
become more vocal participants in the policy debate.
“Until relatively recently, it was rare for a Reserve Bank

president to be a Ph.D. economist,” says Wheelock. “This
has led to the presidents having a stronger and more independent voice on monetary policy than they once did.”
But the impulse to group policymakers on one of two
sides can obscure more subtle disagreements. In a recent
St. Louis Fed paper, Wheelock and former St. Louis Fed
vice president and economic adviser Daniel Thornton catalogued dissents at the FOMC from 1936 through 2013.
They grouped dissents as favoring either tighter or easier
monetary policy, but Wheelock notes that not all of them
fit neatly into one of those two buckets. For example, in the
1960s, the United States was still on a version of the gold
standard and some Fed governors dissented because they
were worried about a balance of payments deficit that might
jeopardize gold reserves. During the recession of 2007-2009,
Richmond Fed President Jeffrey Lacker supported the Fed’s
expansion of the monetary base, a dovish move, but he dissented over the decision to implement that policy through
the purchase of assets like mortgage-backed securities rather
than U.S. Treasuries.
There are also important points of agreement among Fed
officials that the labels can gloss over. Doves are sometimes
portrayed as being unconcerned with inflation, but all members of the FOMC seek to keep inflation expectations low
and stable over the long run. “On that, there’s no difference
between hawks and doves,” says Mishkin. “I’m certainly not
as hawkish as Jeff Lacker is, but both of us were very strong
advocates of inflation targeting. And both of us are equally
concerned about unhinging inflation expectations.”
Nevertheless, the idea of a split between two camps is
likely to persist if for no other reason than the Fed’s primary
policy tool — the fed funds rate — moves in only two directions. And for the most part, monetary policymakers don’t
have the option of not taking a stand.
“When you get to an FOMC meeting, you have to make
a decision given the best information you have,” says Poole.
“You need to be ready to change your mind, but you can’t just
say ‘I’m going to wait until we do more studies.’ That may work
for an academic, but it won’t work for a policymaker.”	 EF

Readings

Evans, Charles L. “Monetary Policy Normalization: If Not Now, When?” Speech at the National Council on Teacher Retirement 92nd Annual Conference, Indianapolis, Ind., Oct. 13, 2014.

Keyserling, Leon H. “Employment and the ‘New Economics.’” Annals of the American Academy of Political and Social Science, September 1967, vol. 373, pp. 102-119.

Nash, Betty Joyce. “The Changing Face of Monetary Policy: The Evolution of the Federal Open Market Committee.” Federal Reserve Bank of Richmond Region Focus, Third Quarter 2010, vol. 14, no. 3, pp. 7-11.

Poole, William. “Inflation Hawk = Employment Dove.” Speech at the Center for the Study of American Business at Washington University in St. Louis, July 29, 1999.

Samuelson, Paul A., and Robert M. Solow. “Problem of Achieving and Maintaining a Stable Price Level: Analytical Aspects of Anti-Inflation Policy.” American Economic Review, May 1960, vol. 50, no. 2, pp. 177-194.

Thornton, Daniel L., and David C. Wheelock. “Making Sense of Dissents: A History of FOMC Dissents.” Federal Reserve Bank of St. Louis Review, Third Quarter 2014, vol. 96, no. 3, pp. 213-227.

POLICY UPDATE

Cracking Down on Fraud?

BY TIM SABLIK

In the wake of the financial crisis, President Obama
established the Financial Fraud Enforcement Task
Force. Led by the Department of Justice, the task
force brought together financial regulators like the Federal
Deposit Insurance Corporation (FDIC) and the Federal
Reserve as well as law enforcement agencies like the
Federal Bureau of Investigation in an effort to increase
detection and prosecution of financial fraud. In a March
2013 speech, Michael Bresnick — then executive director
of the task force — outlined a strategy that has since been
dubbed “Operation Choke Point.” Regulators would press
banks to closely review their merchant accounts and weed
out accounts held by fraudulent payment processors and
other businesses in “high-risk” sectors.
The FDIC issued guidance on its website identifying
categories of businesses that might pose “legal, reputational,
and compliance risks” to banks. The list included illegal
operations, such as Ponzi schemes and cable box descramblers, as well as businesses that are legal in many states, such
as ammunition and firearm merchants and payday lenders.
The FDIC stated that while many of these firms are reputable, as a whole they operate in sectors that have been
increasingly associated with illegal or deceptive practices.
According to the FDIC, these businesses often gain access
to the payment system through nonbank payment processors and then charge consumers for “questionable or fraudulent goods and services.” Banks are required to conduct
due diligence of their customers under the Bank Secrecy
Act (BSA), but nonbank payment processors are not subject
to such laws and therefore may indirectly expose banks to
greater risk.
In January, the Department of Justice filed suit against
Four Oaks Fincorp and Four Oaks Bank & Trust Company in
North Carolina for granting a payment processor that served
several fraudulent online payday lenders direct access to bank
customer accounts. Many of Four Oaks’ customers complained that their accounts were subject to activity they did
not authorize, and prosecutors argued that Four Oaks did not
respond to these and other signs of fraudulent activity. Four
Oaks agreed to pay $1.2 million to settle the charges.
Operation Choke Point has largely focused on such online
payday lenders, which have increasingly been the subject of
consumer complaints. In October, Pew Charitable Trusts
released a report noting that those who borrowed online
suffered much higher rates of fraud than storefront payday
borrowers. Online lenders were also more likely than storefront lenders to issue threats to borrowers and engage in
other illegal activity. A third of online borrowers reported
unauthorized withdrawals from their bank accounts and two
in five had their personal or financial information stolen. Pew noted that such practices were not universal, however. The
largest online payday lenders were the subjects of very few
complaints, and the majority of offenses were concentrated
among lenders that were not licensed by all the states in which
they operated.
In addition to licensing, several states regulate lending
through usury laws limiting the maximum annual interest
rate that lenders can charge. Some customers of online
lenders reported interest rates far in excess of these limits
— more than 1,000 percent in some cases. A few states,
including North Carolina, ban payday lending entirely. But
states have had difficulty enforcing the rules on unlicensed
online payday lenders, which often operate out of other
countries or through Indian tribes and claim not to be
subject to state laws.
While regulators say that their efforts have been directed
at these illegal lenders, some lawmakers argue that Operation
Choke Point may go too far and unfairly punish legal lenders
and merchants as well. In May and December, Rep. Darrell
Issa, the chairman of the House Committee on Oversight
and Government Reform, issued reports arguing that the
Department of Justice and FDIC used Operation Choke
Point to target legal but disfavored businesses like payday
lenders. Citing emails among FDIC officials that suggested
“personal animus towards payday lending,” the reports argued
that the FDIC acted inappropriately by injecting those beliefs
into the bank examination process. At a July hearing, House
Judiciary Committee Chairman Bob Goodlatte said that he
had “received numerous reports of banks severing relationships with law-abiding customers from legitimate industries”
that were designated high risk.
Studies have shown that payday lenders can fill an important niche for some consumers. Even consumers who have
access to checking accounts or credit cards may choose to use
payday loans if the fees are cheaper than the alternatives, such
as overdrawing an account or failing to make credit card payments on time. Indeed, research by the New York and Kansas
City Feds in 2008 and 2011 found that after North Carolina
and Georgia banned payday loans, households experienced
higher rates of bounced checks and bankruptcy relative to
those in states that allowed payday lending.
In June, a major trade group representing payday lenders
filed a lawsuit accusing financial regulators of attempting to
drive payday lenders out of business. In the same month,
Rep. Blaine Luetkemeyer, a member of the House Financial Services Committee, introduced legislation to end Operation Choke
Point. In response, the Department of Justice and FDIC
agreed to launch a preliminary investigation of the program.
The FDIC also removed the list of specific high-risk business
categories from its guidance to depository institutions.	 EF

JARGON ALERT
Disinflation

Many people know that “inflation” is a rise in the
overall price level. Many people also know that
“deflation” is a fall in the overall price level —
that is, the rate of inflation is negative. But fewer people are
familiar with the path from inflation to deflation: disinflation, a situation in which the inflation rate is falling. Like a runner who slows down but still moves forward, an economy in disinflation may still see prices rising, just at a slower rate than before.
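The distinction can be made concrete with a little arithmetic. The sketch below uses made-up price-index readings (illustrative numbers, not actual data) to classify each year as disinflation, deflation, or neither:

```python
# Hypothetical annual price-level readings (an index, not real data).
prices = [100.0, 104.0, 106.0, 107.0, 106.5]

# Year-over-year inflation rate for each year, in percent.
inflation = [(p1 / p0 - 1) * 100 for p0, p1 in zip(prices, prices[1:])]
# roughly [4.0, 1.9, 0.9, -0.5]

for prev, curr in zip(inflation, inflation[1:]):
    if curr < 0:
        # The price level itself is now falling.
        label = "deflation"
    elif curr < prev:
        # Prices are still rising, just more slowly than before.
        label = "disinflation"
    else:
        label = "inflation steady or rising"
    print(f"{curr:+.1f}% -> {label}")
```

Note that the middle years count as disinflation even though the index is still climbing; only the final year, when the price level itself declines, crosses into deflation.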
Disinflation can be good news or bad news. It is a good
thing if it comes from increases in productivity and technology, like those that helped keep inflation low in the late 1990s.
More commonly, disinflation is brought about by
contractionary Fed policy. In such episodes, disinflation
is intentional and welcome. This was especially the case
during the most favorable disinflation episode in the Fed’s
history: the early 1980s, when inflation (as measured by
the year-over-year change in the personal consumption expenditures
price index) declined from more than
11 percent in early 1980 to an average
of roughly 3.5 percent in 1985. Though
high inflation hasn’t been a problem
since, the goal of tighter monetary
policy is generally to produce modest
disinflation to get inflation closer to
the central bank’s target.
There is even such a thing as
“opportunistic disinflation,” the
name that former Fed Governor Laurence Meyer gave the
monetary policy strategy of allowing the economy’s inevitable recessions to ratchet down inflation over time. Under
this strategy, the central bank would sustain boom periods
with low rates but jump the gun slightly on raising rates
after recessions to preserve the lower rate of inflation — and
the reduced inflation expectations that help keep inflation
down. The result would be gradual disinflation, with perhaps
some short-term cost in terms of unemployment but long-term gains in reducing the distortions of inflation. Though
some have suggested this was the Fed’s strategy during the
1980s disinflation, the Fed is less commonly believed to
have been following this strategy in the last 20 years, when
inflation has generally been low. At those low inflation rates,
opportunistic disinflation could risk reversing a recovery and
tipping the economy into deflation — and large declines in
the overall price level can be a self-perpetuating trap.
Policymakers tend to get particularly concerned about disinflation when inflation falls below 2 percent, the Fed’s official
goal since early 2012. In recent history, there have been two
notable episodes of disinflation sparking deflationary fears.
In 2003, when the economy was still limping out of the 2001
recession, core inflation — total inflation minus the volatile
categories of food and energy, often a better measure of current inflation for policymaking purposes — fell steadily to 1.3
percent. And more recently, core inflation fell from 2.3 percent in mid-2008 to roughly 1 percent a year later; after climbing back up, it plunged to 1 percent again 18 months later.
In both disinflation episodes, many economists talked
seriously about the risk of deflation. In 2002, former Fed
Chairman Ben Bernanke, then a governor, made a famous
speech outlining how to prevent something like Japan’s
decades-long bout with deflation. He said the chances of
deflation were quite low but also that deflation is notoriously hard to predict. That’s partly because monetary
policy works with a lag and partly because it is not always
obvious why inflation has fallen. For example, 2004 research
by Richmond Fed economist Andrew Bauer and then-colleagues at the Atlanta Fed showed
that the U.S. disinflation of the early
2000s was driven primarily by falling
housing and car prices, for reasons
unique to those sectors and unrelated
to the economy’s overall weakness.
Therefore, Bernanke said, the very
best medicine for deflation is to never
get into it in the first place. Put differently: Be vigilant in disinflation
episodes for threats of deflation. The
Fed lowered interest rates in 2003,
with some Fed policymakers urging even larger cuts.
When the Great Recession hit in 2007, by contrast, there
was little doubt that monetary policy should ease aggressively.
After pushing interest rates to near zero in late 2008, the
Fed added some unprecedented policies, such as massive
asset purchases (often called “quantitative easing,” or QE)
to pump the banking system full of reserves and, hopefully,
stimulate growth and mitigate any risk of deflation. When
the disinflation trend resumed throughout 2010, the Fed
initiated a second round of QE starting that November, and
a third round began in September 2012.
Today the economy has recovered considerably; unemployment has finally fallen below 6 percent, and QE officially
ended in October 2014. But inflation remains below the Fed’s
goal of 2 percent. The economy seems to have improved
enough that most Fed policymakers have not expressed
concern about continued disinflation or outright deflation.
Instead, today’s policy debate has centered on how quickly
inflation is likely to return to the 2 percent goal — and accordingly, whether it is worth holding rates low for a lot longer,
risking a little inflation to boost employment further.	 EF

ILLUSTRATION: TIMOTHY COOK

BY RENEE HALTOM

RESEARCH SPOTLIGHT

The Value of High School Employment
BY DAVID A. PRICE

Millions of American teens have gone through the coming-of-age ritual of running the fry station at a fast-food place, ringing up clothes at a store, or watching for trouble from the lifeguard’s chair at the local pool. In return, they’ve gotten a modest wage, a chance for socializing, and work experience.

But the measured long-term economic benefits of that work experience have been going down sharply, according to a recent paper by Charles Baum of Middle Tennessee State University and Christopher Ruhm of the University of Virginia. While they don’t come to firm conclusions about the cause of the trend, it may reflect broader changes in the labor market.

“The Changing Benefits of Early Work Experience.” Charles L. Baum and Christopher J. Ruhm. National Bureau of Economic Research Working Paper No. 20413, August 2014.

Most previous research has found that high school employment has a positive effect on future employment and wages. But in theory, it could equally have the effect of hurting a student’s prospects by interfering with academic achievement. Baum and Ruhm seek to determine whether the net benefits of high school jobs changed over time by looking at data from the National Longitudinal Surveys of Youth, a set of long-term surveys taken by the Bureau of Labor Statistics. They analyze data on two sets of high school students approximately two decades apart: the survey’s 1979 cohort, a sample of Americans who were high school students in the late 1970s and early 1980s, and its 1997 cohort, a sample of high school students in the late 1990s and early 2000s.

The researchers look at the wages of respondents at ages 23 to 29, or roughly five to 11 years out of high school, along with other measures of employment success. They take into account various family characteristics (such as parents’ education) and characteristics of the individual’s high school program. They control for individual ability using scores from armed-services tests administered to many high school students; for the 1997 cohort, they also use grades from eighth grade.

Their main finding is that over the period of the study, the predicted financial benefit from working 20 hours or more per week in the senior academic year of high school dropped almost by half. (They find no statistically significant effect from sophomore- or junior-year jobs.) For those who were students in the late 1970s and early 1980s, working in the senior year yielded an average long-term wage increase of 8.3 percent; for the students of the late 1990s and early 2000s, it was 4.4 percent. The change was felt most by women. “Work experience during the high school senior year continues to predict positive effects on labor market outcomes 5-11 years after the expected date of high school graduation,” they conclude, “but these beneficial consequences have attenuated fairly dramatically over time.”

Why the drop? Baum and Ruhm note that for the first group, high school work reduced the likelihood of later holding service jobs, which are typically lower-paid — while for the second group, it increased the likelihood of a service job. In addition, high school work was associated with less of an increase in adult work experience for the second group than for the first group. They estimate that at least five-eighths of the decline in the benefit of high school work is from these two effects, in roughly equal proportions.

They find that the returns to high school work were higher for the non-college-bound and declined the most for those students. For the non-college-bound, the students in the late 1970s and early 1980s who worked 20 hours or more per week in the senior year of high school saw an average increase in their future wages of around 13 percent, compared to 7 percent for those in the late 1990s and early 2000s. For the college bound, those in the first cohort saw an increase from high school work of 3 percent, while those in the second saw an increase of 2.2 percent.

The decline in the long-term payoff of high school employment has coincided with a decline in high school employment itself. Overall, the labor force participation rate of teens aged 16 to 19 dropped from 57.7 percent in January 1980 to 52.2 percent in January 2000. Beyond the study period, the rate continued to fall, reaching 34.6 percent in December 2014. The authors posit a number of possible reasons for the drop during the 2000s, including “increased competition for jobs from immigrants, former welfare recipients and other adults, as well as an increased emphasis on education and in the availability of financial aid for college.”

With regard to the share of the declining benefit of high school work that their findings do not explain, the explanation could lie in the dramatic labor-market changes of the past several decades. These changes may have created forces influencing the benefits of high school work differently in the more recent period. Among the changes are the hollowing out of the labor market (that is, the decline that many believe has taken place in demand for middle-skilled labor) and the rising value of college degrees in the labor market. The authors suggest that the causes behind their findings, especially the concentrated effects on women, deserve closer study. EF

THE PROFESSION

When Economists Make Mistakes
BY J E S S I E RO M E RO

In 2010, Harvard University economists Carmen
Reinhart (then at the University of Maryland) and
Kenneth Rogoff published a paper concluding that economic growth stagnated when a country had very high public
debt. “Growth in a Time of Debt” has been cited more than
250 times and was widely referenced by U.S. and European
policymakers advocating austerity measures following the
Great Recession.
So it made headlines in 2013 when Thomas Herndon,
a graduate student at the University of Massachusetts
Amherst, discovered a spreadsheet error in Reinhart and
Rogoff’s work — an error that Herndon, in a paper with his
professors Michael Ash and Robert Pollin, said disproved
the negative relationship between large debts and growth.
(Reinhart and Rogoff have acknowledged the error but don’t
believe it alters the substance of their conclusions.)
“Growth in a Time of Debt” was published as part of the
proceedings of the American Economic Association (AEA)
2010 annual meeting. While conference papers are reviewed
by editors, they aren’t subject to a formal peer review,
the traditional imprimatur of academic publishing. But
peer review isn’t necessarily intended to catch simple data
errors, and sometimes economics articles — including peer-reviewed ones — are later found to contain mistakes of one
kind or another. Such incidents have raised the question:
Who’s checking?
To conduct a peer review, the editor of a journal asks
other experts, known as referees, to read a paper to ensure
that it’s an important contribution to the field and that the
conclusions are credible. Referees remain anonymous to the
authors, so they feel free to offer their honest opinions.
Traditionally, social science journals also have maintained the authors’ anonymity during the review process
to prevent a referee from being swayed by an author’s
reputation (or lack thereof). But the wide dissemination of
working papers online has made it easy to learn an author’s
name by entering the paper’s title into a search engine.
That led the AEA to drop such “double-blind” reviewing
for all of its journals. An added benefit is that knowing
who the authors are could help referees identify potential
conflicts of interest.
Sometimes the system can be gamed. In July, SAGE
Publications announced it was retracting 60 papers from
the Journal of Vibration and Control, a well-regarded acoustics
journal, after the discovery of a “peer-review ring.” A scientist in Taiwan had created more than 100 fake identities in
an online reviewing system, which authors then used to write
favorable reviews of each other’s — and sometimes their
own — papers.
Nothing so nefarious is known to have happened in
economics, but sometimes a discipline becomes clubby, says
Penny Goldberg, an economist at Yale University and the
editor of the American Economic Review. “Then you can end
up in a bad equilibrium where people support each other
and recommend acceptance even if the papers aren’t very
strong.”
Authors aren’t perfect, either. In a survey conducted by
Sarah Necker of the Walter Eucken Institute in Germany,
2 percent of economists admitted to plagiarism, 3 percent
admitted fabricating some data, and 7 percent admitted
using tricks to increase the statistical validity of their work.
Between one-fifth and one-third acknowledged practices
such as selectively presenting findings in order to confirm
their hypothesis or not citing works that refuted their argument. Referees and editors aren’t necessarily on the lookout
for such practices. “As an editor, my role is not to be the
police,” says Liran Einav of Stanford University. Einav is a
co-editor of the journal Econometrica and an associate editor
of several other journals. “If someone is doing something
bad, hopefully the market is efficient enough that eventually
they will get caught.”
Editors also are under pressure to meet publication
deadlines, as William Dewald, emeritus professor at Ohio
State University and a former editor of the Journal of Money,
Credit, and Banking, wrote in a chapter for the 2014 book
Secrets of Economic Editors. As a result, “Some weaker papers
slip through.” And authors are under their own pressure to
publish as many papers as possible, which may lead them to
make mistakes.
When mistakes are found, it’s often because someone
tries to replicate the original study. Some top journals,
including those published by the AEA, require authors to
share their data and programs (with some exceptions for
proprietary data), and many economists post data on their
personal websites as well.
Some economists have argued that replications should be
much more common in order to keep the profession honest.
But there’s the potential to overdo it. “Replication is very
important and should be strongly encouraged, but we should
realize if we spend too much time replicating other people’s
studies, it could generate a lot of noise,” says Einav. “You
could easily see how it gets into a spiral and researchers just
spend all their time responding to the replicators. That’s
not very productive.” Also, Einav adds, the mere possibility
of replication might be enough to make economists extra
careful.
Ultimately, the market is the final test. “If a question is
interesting and policy relevant, then people will try to replicate the results,” says Goldberg. “This is constructive — this
is how science progresses.”	
EF

AROUND THE FED

Limiting Bank Size is not the Answer
BY L I S A K E N N E Y

“Too Correlated to Fail.” V.V. Chari and Christopher
Phelan, Federal Reserve Bank of Minneapolis Economic
Policy Paper No. 14-3, July 2014.

What’s the best way to ensure that banks don’t engage
in the kinds of risky behavior that led to the bailouts
of the 2007-2008 financial crisis? If you ask V.V. Chari and
Christopher Phelan of the Minneapolis Fed, the answer is
definitely not the conventional wisdom of limiting the size
of individual banks.
Chari and Phelan argue, in a July 2014 policy paper titled
“Too Correlated to Fail,” that it is the risk profile of the
entire banking system that matters, not the actions of a single large bank. They believe policies that focus on bank size
are misguided and that bank regulation should be focused
on “whether that particular bank’s behavior is mitigating or
aggravating the risk exposure of the entire system.”
Two reasons that banks feel comfortable engaging in
risky behavior are deposit insurance and government bailouts. These explicit protections (deposit insurance) and
implicit protections (bailouts) are sources of moral hazard.
Since someone else bears the cost of failure, creditors of
banks are less concerned about the risk.
One of the most significant kinds of risk that banks
engage in when they feel protected by bailouts is what the
authors call “herding” — which in itself increases the likelihood of a crisis. Herding is when banks invest similarly,
correlating their risks. When bailouts exist, it makes the
most sense for banks to mimic one another, because if they
fail together, they will all be bailed out together. If just one
bank fails, there will not be a big enough crisis to warrant
a bailout.
One example of this herding behavior is securitization,
where banks sell claims to a pool of loans. The catch is that
they sell these claims to other banks. So even though this
action diversifies the portfolio of that one individual bank, it
ensures that all banks hold very similar portfolios.
Given the propensity of banks to herd when bailouts
exist, Chari and Phelan conclude that limits on bank size
cannot effectively solve the moral hazard problem — a highly correlated system of small banks would fail as easily as a
highly correlated system of large banks. Instead, regulators
“need to understand what kinds of events are likely to threaten a significant fraction of the aggregate assets of the entire
banking system.”
“The Wage Growth Gap for Recent College Grads.”
Bart Hobijn and Leila Bengali, Federal Reserve
Bank of San Francisco Economic Letter No. 2014-22,
July 21, 2014.

“Information Heterogeneity and Intended College
Enrollment.” Zachary Bleemer and Basit Zafar, Federal
Reserve Bank of New York Staff Report No. 685,
August 2014.

In the wake of the Great Recession, wages for recent college graduates have remained flat while earnings for all
full-time workers have increased at a steady pace. According
to a recent San Francisco Fed Economic Letter, this data
reveals a wage gap that is significantly larger and longer-lasting than wage gaps in previous recessions.
Why? Occupational distributions have remained stable
between 2007 and 2014, so the gap can’t be blamed on recent
grads shifting to lower-wage fields. Instead, the authors find
that the current wage gap is caused by limited wage growth
across all occupations.
With this kind of widespread wage slowdown, the authors
note that “potential graduates, seeing the difficulties faced
by current graduates in finding any job…might interpret this
as a signal that it is not worth going to college.”
The false perception that there is now a low return on
investment for a college education can hurt enrollment rates
— which have remained stagnant in the United States over
the last 20 years.
The possibility of such a misperception is explored in an
August 2014 New York Fed Staff Report, which found that
households generally underestimate the benefits and overestimate the costs of obtaining a college degree. The authors find
that these beliefs directly influence whether a child in a household will attend college. This is particularly true in lower-income households as well as ones where the parents have not
attended college themselves; the heads of these households
tended to believe costs were much higher and benefits much
lower than did higher-income, higher-educated households.
The paper finds that “this is consistent with individuals’ own
experiences shaping their perceptions.”
The authors believe information gaps may explain these
misperceptions, as people tend to gather information from
their local networks, which may be unreliable. In order to
close these gaps, they suggest information campaigns that
provide relevant information on the costs and benefits of a
college education — particularly for disadvantaged households that have more skewed perceptions.
Expectations play a large role in decisionmaking, so
ensuring individuals have the most accurate information
about college is important for increasing enrollment rates.
The benefits of a college degree still exist — despite the
slow growth of wages for recent grads — as wage rates are
still significantly higher for college graduates than for high
school graduates. 	
EF

The Dropout
Dilemma

Why do kids drop out of high school,
and how can we help them stay?
BY J E S S I E RO M E RO

Orangeburg Consolidated School District 5 serves about 7,000 students in
rural South Carolina. More than one-quarter of its high school students
fail to graduate within four years. Predominantly African-American,
Orangeburg is not a wealthy area; median household income in the county is
about $33,000, compared with $53,000 nationally, and the unemployment rate
is 10.4 percent, nearly double the national average. Nearly 85 percent of the
district’s students qualify for free or reduced-price lunches, and many of their
parents did not graduate from high school.


“Poverty is our biggest challenge,” says Cynthia Wilson,
the district superintendent. “We have students growing up
in homes where no one is working, and it becomes a cycle
we absolutely need to break by graduating more students.”
Every September, teachers and volunteers visit the homes
of students who haven’t returned to school to find out why
and to help them return; lots of kids in Orangeburg drop
out because they don’t have transportation, or they get
pregnant, or they need to get a job. The district has started
offering night classes for students who have children or have
to work, and students also have the option of completing
their coursework online — on laptops they’ve borrowed
from the school, if necessary. Wilson and her staff are taking
other steps to improve the district’s academics, but they’ve
learned that sometimes helping a kid to graduate takes place
outside the traditional confines of school.

Is There a Dropout Crisis?
High school graduation rates in the United States rose
rapidly throughout much of the 20th century. During the
“high school movement,” about 1910 to 1940, the share of
the population with a diploma rose from just 9 percent to 51
percent. But around 1970, the averaged freshman graduation
rate (AFGR), which measures the share of students who
graduate within four years, began to decline, falling from 79
percent during the 1969-1970 school year to 71 percent by
the 1995-1996 school year, where it remained until the early
2000s. This stagnation in graduation rates led to widespread
concern about a “dropout crisis.”
But the AFGR has improved over the past decade, reaching 81 percent during the 2011-2012 school year, the most
recent year for which the Department of Education has published data. Another measure of high school graduation, the
adjusted cohort graduation rate (ACGR), was 80 percent.
(The Department of Education required states to report
the ACGR beginning in 2010 to create more uniformity in
state statistics and to better account for transfer students.
Historical comparisons for this measure are not available.)
The improvement in the overall graduation rate obscures
significant disparities by race and income. The ACGR
for white students is 86 percent, compared with just 69
percent for black students and 73 percent for Hispanic
students. Minority students also are disproportionately
likely to attend a “dropout factory,” which researchers have
defined as schools where fewer than 60 percent of freshmen
make it to senior year: 23 percent of black students attend
such a school, while only 5 percent of white students do. The
dropout rate for students from families in the lowest income
quintile is four times higher than for those in the highest
income quintile.
There is also significant regional variation; states with
low graduation rates tend to be in the South and the West.
In the Fifth District, Maryland and Virginia have the highest
graduation rates, with ACGRs of about 85 percent. Behind
them are North Carolina, with an 83 percent graduation rate;
West Virginia, with 81 percent; and South Carolina, with 78

percent. Washington, D.C., has the lowest graduation rate
in the nation: Just 62 percent of D.C. high school students
earn a diploma within four years.
Despite the improvement in the national graduation
rate, “crisis” is still the term many people use to describe the
dropout situation. “People are severely disadvantaged in our
society if they don’t have a high school diploma,” says Russell
Rumberger, a professor of education at the University of
California, Santa Barbara. “One out of every five kids isn’t
graduating. You could argue that any number of kids dropping out of school is still a crisis.”

Why Graduating Matters
Several decades ago, the disadvantage wasn’t as severe. “If
this were the 1968 economy, we wouldn’t worry nearly so
much,” says Richard Murnane, an economist and professor
of education at Harvard University. “There were a lot of
jobs in manufacturing then. They were hard work and you
got dirty, but with the right union, they paid a good wage.”
But as changes in the economy have increased the demand
for workers with more education, differences in outcomes
have become stark. The wage gap between workers with and
without a high school diploma has increased substantially
since 1970; over a lifetime, terminal high school graduates
(that is, those who don’t go on to earn college degrees) earn
as much as $322,000 more than dropouts, according to a
2006 study by Henry Levin and Peter Muennig of Columbia
University, Clive Belfield of Queens College (part of the
City University of New York), and Cecilia Rouse of
Princeton University. Dropouts also are less likely to be
employed. The peak unemployment rate for people without
a high school diploma following the Great Recession was
15.8 percent, compared with 11 percent for those with only a
high school diploma. (Unemployment for college graduates
peaked at just 5 percent.) Today, the rate for dropouts is still
about 2 percentage points higher.
The differences between graduates and dropouts spill
far beyond the labor market. Not surprisingly, high school
dropouts are much more likely to live in poverty, and they
also have much worse health outcomes. High school dropouts are more likely to suffer from cancer, lung disease,
diabetes, and cardiovascular disease, and on average their life
expectancy is nine years shorter than that of high school graduates.
High school dropouts also have a much higher probability of ending up in prison or jail. Nearly 80 percent of
all prisoners are high school dropouts or recipients of the
General Educational Development (GED) credential. (More
than half of inmates with a GED earned it while incarcerated.) About 41 percent of all inmates have no high school
credential at all.
The high costs to the individual of dropping out translate
into high costs for society as a whole. Research by Lance
Lochner of the University of Western Ontario and Enrico
Moretti of the University of California, Berkeley, found that a
1 percent increase in the high school graduation rate for males
could save $1.4 billion in criminal justice costs, or $2,100
per additional male high school graduate. Other research
estimates savings as high as $26,600 per additional graduate.
High school dropouts also generate significantly less tax
revenue than high school graduates, while at the same time
they are more likely to receive taxpayer-funded benefits such
as cash welfare, food stamps, and Medicaid. While the costs
vary by race and gender, Levin and his co-authors found that
across all demographic categories the public health costs of
a high school dropout are more than twice the cost of a graduate. In total, the researchers estimated that each additional
high school graduate could result in public savings of more
than $200,000, although they noted that their calculations
do not include the costs of educational interventions to
increase the number of graduates.
Raising the high school graduation rate could have economic benefits beyond saving the public money. In many
models of economic growth, the human capital of the workforce is a key variable. That’s because a better-educated
workforce generates new ideas and can make more productive
use of new technologies; more education thus equals more
growth. Although this connection has been difficult to prove
empirically, many researchers have concluded that the rapid
growth in educational achievement in the United States
during the 20th century, particularly the dramatic increase
in high school education in the first half of the century, was a
major contributor to the country’s economic advances.

Is Dropping Out Irrational?
Economic models generally assume that people are rational,
carefully weighing the costs and benefits of an action before
making a decision. So given the large returns to education
and the poor outcomes for workers without a high school
diploma, why would anyone drop out?
Part of the answer might simply be that teenagers aren’t
rational. A growing body of neurological research has found
that adolescents have less mature brains than adults, which
contributes to more sensation-seeking and risky behavior.
But while teenagers might be more impulsive than adults,
they don’t generally wake up one morning and suddenly
decide to quit school; instead, there are a multitude of factors that over time could lead a student to decide the costs
of staying in school outweigh the benefits.
One factor could be that teenagers place less value on
the future benefits of an education. Research has found that
“time preference,” or the value a person places on rewards
today versus rewards tomorrow, varies with age. Teenagers
are more likely to prefer gratification today.
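The pull of the present can be made concrete with a small discounting sketch. This is an illustrative calculation, not from the article: it assumes, hypothetically, that the $322,000 lifetime earnings gap cited above accrues evenly over a 45-year career, and compares how a patient person and a heavily present-biased teenager would value that stream today.

```python
# Illustrative discounting sketch (hypothetical parameters).
# Assumes the $322,000 lifetime earnings gap accrues evenly
# over a 45-year career.

def present_value(annual_gain, years, rate):
    """Value today of receiving annual_gain at the end of each year."""
    return sum(annual_gain / (1 + rate) ** t for t in range(1, years + 1))

annual_gain = 322_000 / 45  # roughly $7,200 per year (an assumption)

patient = present_value(annual_gain, 45, 0.03)    # adult-like 3% rate
impatient = present_value(annual_gain, 45, 0.20)  # heavy present bias

print(f"Patient valuation of the diploma: about ${patient:,.0f}")
print(f"Present-biased valuation: about ${impatient:,.0f}")
```

Under these assumed numbers, the same future payoff is worth roughly a fifth as much to the present-biased teenager, one way to see why a large documented return may still fail to keep a student in school.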
Students also might not expect the benefits of staying
in school to be very large. Many low-achieving students
wind up being held back a grade; for these students, staying
enrolled in school doesn’t actually translate into greater educational attainment. In a study of students in Massachusetts,
Murnane found that only 35 percent of students who were
held back in ninth grade graduated within six years; the students who dropped out might have perceived that staying in
school was unlikely to result in a diploma. The same calculation likely applies to students who live in states where exit
exams are required for graduation, as is now the case in about
half the country. Students who don’t expect to pass the exam
have little incentive to remain in school. Multiple studies
suggest that exit exams reduce high school graduation rates,
particularly for low-income and minority students.
In April, South Carolina eliminated its exit exam requirement for future students and is allowing students who failed
the exam in the past to apply retroactively for a diploma.
That’s a benefit for those students, but it poses challenges
for educators. “If someone without a high school diploma
has the opportunity to make $10,000 more by getting a
diploma, you want them to have that opportunity,” Wilson
says. “But we have to find other ways to keep our students
motivated to do more than just get by. We can’t say anymore, ‘You really have to learn this because you have to
pass that test!’ ” In addition, exit exams were introduced to
ensure that high school graduates had achieved a certain
threshold of knowledge. Eliminating them poses the risk
that graduates won’t be adequately prepared for the workforce or for postsecondary education.
The increasing focus on college attendance at many high
schools might also encourage kids to drop out. Students who
aren’t academically prepared for college or who don’t want
to attend may see little value in finishing high school if they
perceive a diploma solely as a stepping stone to college. The
focus on college prep might also contribute to the fact that
many dropouts report feeling bored and disengaged from
school.
For some students, the opportunity cost of attending
school — the value of the other ways they could use their
time — may be quite high. In a survey of high school students conducted by the Department of Education, 28 percent of female students said they dropped out because they
were pregnant, 28 percent of all students quit school because
they got a job, and 20 percent needed to support their family. (See chart.)
Getting bad grades or getting pregnant might be the
most direct cause of a student’s decision to drop out, but
research suggests the reasons run deeper. Zvi Eckstein of
the Interdisciplinary Center Herzliya (Israel) and Kenneth
Wolpin of the University of Pennsylvania and Rice University
estimated a model of high school attendance based on data
from a national longitudinal survey and concluded that students who drop out of high school are different even before
starting high school. In particular, dropouts are less prepared and less motivated for school and have lower expectations about the eventual rewards of graduation.

[Chart: Most Common Reasons for Dropping Out — percentage of dropouts citing each reason, from 0 to 50 percent: missed too many school days, easier to get GED, getting bad grades, didn’t like school, couldn’t keep up with schoolwork, pregnant, got a job, had to support family, didn’t expect to complete requirements, didn’t get along with teachers. NOTE: Percentages do not sum to 100 because students could list more than one response. The percent of students citing pregnancy refers to female students only. SOURCE: Dalton, Glennie, Ingels, and Wirt (2009); Department of Education’s Educational Longitudinal Study of 2002]

Eckstein and Wolpin’s conclusions are supported by a large body of research on the GED. Introduced in 1942 for returning World War II veterans, by 2008 the GED accounted for 12 percent of all the high school credentials issued in the United States. Although GED earners have demonstrated the same knowledge as high school graduates, they don’t do much better than dropouts in the labor market, and they’re about as likely to end up in poverty or in prison. Research by James Heckman of the University of Chicago and other economists suggests this is because they lack the noncognitive skills, such as perseverance and motivation, that would have enabled them to graduate from high school. These are the same skills that contribute to success in the workplace.

The finding that students who drop out of high school have different initial traits than those who graduate raises an important question: Why are these students different? The answer may have its roots very early in life.

A large body of research has found that the early mastery of basic emotional, social, and other noncognitive skills lays the foundation for learning more complex cognitive skills later in life. Once kids fall behind, it’s very hard to catch up; cognitive and behavioral tests as early as age 5 can predict the likelihood that a child will graduate from high school. Research also shows that poor and minority children (groups that tend to overlap) are much more likely to fall behind. In part, their parents might not have the time or money to invest in early childhood education. And the community as a whole might not offer the same resources as higher-income communities, such as parks and playgrounds, after-school programs, and positive role models.

The result is kids arriving at school without the academic or social skills they need to make progress toward graduation. About one-third of the students in the Department of Education survey said they dropped out because they couldn’t keep up with the schoolwork, and nearly half of the students in a survey commissioned by the Bill and Melinda Gates Foundation said they were unprepared when they entered high school. That lack of preparation begins early. “Low-income kids start kindergarten way behind. That’s a huge handicap that needs to be addressed,” says Murnane.

Changing the Calculation
What can educators do to tip the cost-benefit calculation in favor of staying in school? Evidence on what actually works is thin, in part because it’s difficult to make school reforms that lend themselves to rigorous impact evaluations. But there are some strategies that appear to be effective.

Sometimes, all a student needs is to attend a better high school. Several studies have shown that black students’ graduation rates increased as a result of court-ordered desegregation in the 1960s, 1970s, and 1980s, which sent black students to higher-quality schools. Conversely, graduation rates decreased with the end of court-mandated desegregation in Northern school districts. A study of the Charlotte-Mecklenburg, N.C., schools found that graduation rates increased by 9 percentage points for low-income and minority students who won a lottery to attend a higher-performing high school.

Of course, it’s not mathematically possible for every student to move to a better high school. One approach to reforming existing schools is the “Talent Development” model, which groups incoming ninth graders into small “learning communities” taught by the same four or five teachers. The students take extra English and math classes and participate in a seminar focused on study skills and personal habits. After freshman year, the students study in career academies that are intended to combine academics with the students’ interests. An impact evaluation of the first two schools to implement the program, both in Philadelphia, found that on-time graduation increased by 8 percentage points.

As the inclusion of career academies in the Talent Development model suggests, more career and technical education could help make school more relevant for some students and teach them about post-high school options other than college. “Career and technical education can provide a new way of teaching core academic skills using a pedagogy that is much more project-oriented and hands-on and is of interest to kids who don’t pay attention to traditional college preparatory approaches,” says Murnane.

Orangeburg recently opened up its career certificate programs to students attending “alternative school,” a separate school for kids with disciplinary problems. Previously,
students were required to earn re-admittance to their home
schools before they could apply for the programs. “For a
large number of our dropouts, alternative school was their
last stop,” says Wilson. “But working toward a certificate is
a great motivator” to stay in school.
Research also suggests that students are more engaged
and have higher achievement when they attend small
schools, generally defined as fewer than 400 students. The
average high school in the United States has about 850
students; in many states the average is more than 1,000 students. Beginning in 2002, New York City closed about 20
large low-performing schools and replaced them with more
than 200 small schools. A study of 105 of these schools found
that the four-year graduation rate increased from 59 percent
to 68 percent; the effect on graduation rates was especially
strong for disadvantaged students.
Many states also are experimenting with charter schools.
At least 42 states and Washington, D.C., now allow charter schools, and the number of students enrolled in them
increased 80 percent between 2009 and 2013, although they
still serve only about 4 percent of the country’s schoolchildren. Overall, according to research directed by Margaret
Raymond at Stanford University, students in charter schools
show more improvement in reading than students in traditional public schools and do at least as well in math. While
Raymond’s research doesn’t study the effect on graduation
rates specifically, to the extent that students drop out
because they are not academically prepared, charter schools
might help. Of course, not all charter schools are high quality. “There are some terrific ones, for sure,” says Murnane.
“But there are many that are not so good.”
But even the most effective programs have relatively
modest results. More than half of the students who participated in a Talent Development program in Philadelphia still
failed to graduate, and the graduation rate after New York
City’s reform was still well below the national average. The
lesson to take away from high school reforms may be that
high school reform isn’t enough.

A Lifetime Approach
Given the importance of early educational experiences,
sending children to preschool might be one of the best ways
to increase the likelihood they eventually graduate from high
school. Multiple studies of high-quality early education programs, such as the Perry Preschool study in Ypsilanti, Mich.,
and the Abecedarian project in North Carolina, have shown
that they have substantial long-term effects for low-income
children — not only higher academic achievement and
graduation rates, but also higher earnings as adults, reduced
criminality, and lower rates of teen pregnancy. (See “Babies,
Brains, and Abilities,” Region Focus, Fourth Quarter 2011.)
Early childhood education isn’t a cure-all, however. “It’s not
a substitute for high school reform,” says Murnane. “But it
would sure make high school reform easier.”
One risk is that the academic gains from preschool are
erased if a child subsequently attends a low-quality elementary school. That points to the need for interventions at
every level of schooling. And as Rumberger notes, “There are
some populations where we need to increase the graduation
rate by 20 or 30 percentage points. We have evidence of successful interventions in preschool, in elementary school, in
middle school, and in high school. If we really want to tackle
this problem, we have to compound these interventions.”
Tackling the problem may also mean addressing the
challenges children face outside of school. While students
drop out of school for many reasons, poverty is the common
denominator for many of them; not only are poor children
more likely to be academically unprepared, but they’re
also more likely to get sick, to lack parental support, or to
have children themselves. “Those are incredible burdens
to overcome,” says Rumberger. “To the extent we don’t
improve economic conditions among certain populations in
our country, we’re unlikely to improve the graduation rate
sufficiently.”
Not everyone agrees that better economic conditions are
a prerequisite for increasing academic achievement. There
is considerable debate, for example, about the efficacy of
the Harlem Children’s Zone in New York, which combines charter schools with community services in an effort
to address the panoply of problems facing poor children
and their families. Students who attend the Zone’s charter
schools show significant academic improvement, but it’s
unclear if that’s a result of the schools alone or if the other
services are an essential component. Still, there is evidence
that assistance for low-income families, such as food stamps
or the Earned Income Tax Credit, has positive long-term
effects on their children. “We have to do a better job of
supporting low-income families,” says Murnane. “It’s a
necessary condition for giving kids in these families a better
shot at a good life.”	
EF

Readings
Goldin, Claudia, and Lawrence F. Katz. “Human Capital and Social Capital: The Rise of Secondary Schooling in America, 1910-1940.” Journal of Interdisciplinary History, Spring 1999, vol. 29, no. 4, pp. 683-723.
Heckman, James J., and Paul A. LaFontaine. “The American
High School Graduation Rate: Trends and Levels.” The Review of
Economics and Statistics, May 2010, vol. 92, no. 2, pp. 244-262.

Levin, Henry, Clive Belfield, Peter Muennig, and Cecilia Rouse. “The
Costs and Benefits of an Excellent Education for All of America’s
Children.” Alliance for Excellent Education, October 2006.
Murnane, Richard J. “U.S. High School Graduation Rates:
Patterns and Explanations.” Journal of Economic Literature,
June 2013, vol. 51, no. 2, pp. 370-422.
Rumberger, Russell. Dropping Out: Why Students Drop Out of
High School and What Can Be Done About It. Cambridge, Mass.:
Harvard University Press, 2011.

How the Geography of Jobs
Affects Unemployment
Why job accessibility is limited for some groups and
what it means for anti-poverty policies BY FRANK MURACA

In postwar America, many families moved away from urban centers into the rapidly developing suburbs. Culturally, these new communities were associated with economic opportunity, signifying middle-class values and upward mobility.

The path to economic mobility is no longer a highway leading from downtown to the suburbs. For example, the number of suburban residents in poverty may now exceed the number of urban-dwellers in poverty. According to the Brookings Institution, suburban poverty rose from 10 million in 2000 to 16.5 million in 2012, compared to an increase in urban poverty from 10.4 million to 13.5 million over the same period (see chart).

[Chart: Poor Populations in Cities and Suburbs, 1970-2012 — millions of poor residents in cities and in suburbs for 1970, 1980, 1990, 2000, and 2012. NOTE: Covers 95 large metro areas. SOURCE: Brookings Institution analysis of Decennial Census and American Community Survey data]

This geographic picture of opportunity and wealth adds complexity to questions about whether unfortunate circumstances, such as poverty, might be determined in part by where someone lives. To be sure, where one chooses to live is about more than job opportunities, which are weighed against housing options, commuting costs, lifestyle choice, social networks, and more. In equilibrium, housing prices and wages should make households indifferent among locations. In other words, some people might choose to live far away from jobs, possibly accepting a costlier commute, because they are “compensated” by factors such as lower housing costs.

But the places where people are distributed by market forces seem to lead, in some cases, to worse labor market outcomes. An explanation of those outcomes was first identified in 1968 as an account of how black unemployment rates were elevated by discriminatory housing policies. That explanation, commonly known as the “spatial mismatch hypothesis,” posits constraints on where people are able to live.

The scope of spatial mismatch research has broadened beyond discrimination. Researchers seek to understand the constraints that certain households face when deciding where to live, helping to explain phenomena like prolonged unemployment, lower wages, longer commutes, and geographically concentrated poverty. This research may shed some light on how anti-poverty programs could take geography into account to be more effective.

Why Geography Matters
During the 2000s and 2010s, jobs have been moving out of the city center. A Brookings Institution report in 2009 by Elizabeth Kneebone found that, between 1998 and 2006, 95 out of 98 metro areas saw a decrease in the share of jobs located within three miles of downtown. As of 2006, 45.1 percent of employees in the largest 98 metro areas worked more than 10 miles away from the urban center, compared with only 21.3 percent who worked within three miles of downtown. Kneebone concluded that there has been a trend of job decentralization regardless of whether a community has seen economic growth or stagnation.

“Job decentralization trends do not move in lock-step with the economic cycle; jobs continued to shift towards the fringe in almost every major metro area, regardless of overarching economic circumstances between 1998 and 2006,” wrote Kneebone. “Therefore, though the current downturn [in 2009] may slow the long-term trend, it is unlikely on its own to reverse the patterns documented here.”

A separate 2010 Brookings report by Steven Raphael, a public policy professor at the University of California, Berkeley, and Michael Stoll, a public policy professor at the University of California, Los Angeles, concluded that population and employment decentralization go hand in hand: People and jobs tend to follow each other. The degree to which this relationship holds is different for each demographic group, however. The tie between population and employment decentralization appears to be weakest for minority groups, with poor blacks being the least likely to follow jobs out into the suburbs. Additionally, poor minorities who do move out to the suburbs are more likely to live away from job-rich areas.

Raphael and Stoll found that 72 percent of suburban whites lived in job-rich communities, while only 63 percent of blacks and 54 percent of Hispanics lived in such areas.

The magnitude of these correlations across demographic groups is far from certain, especially when considering the effect that the 2007-2009 recession has had on residential choices.

But they are curious in light of the fact that unemployment
rates also tend to be higher for these seemingly less mobile
groups. Some minority groups have long had higher unemployment rates than whites — a pattern that continues
today — with 4.8 percent of white workers unemployed as
of December 2014 compared with roughly 6.5 percent of
Hispanics and 10.4 percent of blacks.

The Spatial Mismatch Hypothesis
In 1968, John Kain, then an economist at Harvard University,
was one of the first economists to draw a relationship between
the geography of jobs and unemployment. Prior to his research,
some economists had tried to measure the effect of discrimination on unemployment for blacks while others wanted
to know the extent of racial discrimination in the housing
market. Kain published an article in the Quarterly Journal of
Economics titled “Housing Segregation, Negro Employment,
and Metropolitan Decentralization,” in which he was one of
the first to suggest that there could be a relationship between
the two issues. “Possible interactions between housing segregation and nonwhite employment and unemployment have
been all but ignored,” he wrote. Kain hypothesized that black
unemployment may be affected by the high cost of reaching
jobs outside residential areas, lower quality information networks, housing discrimination, and possible discrimination by
employers outside black neighborhoods.
Spatial mismatch received more attention through the
latter half of the 20th century with the rising social and
economic problems of urban cores, and it has received
renewed emphasis recently as jobs have migrated across the
urban-suburban spectrum.
Measuring spatial mismatch, however, is not an easy
task. Kain and other economists who have looked into this
question have pointed to a number of challenges in trying
to measure how geography plays into unemployment when
there are many other non-geographical factors that go into
hiring an employee. For example, residents of a community
that is distant from a job-rich area might also have less education or fewer job skills. Where does the impact of education
stop and distance begin?
Moreover, we don’t necessarily know where a given person’s potential jobs are located. “One challenge in teasing out
the relationship between job geography and individual labor
market outcomes is that it is intrinsically difficult to characterize the relevant spatial distribution of job opportunities
of an individual,” says Fredrik Andersson, an economist in
the U.S. Treasury Department’s Office of the Comptroller
of the Currency who has studied the issue.
Andersson is a co-author of a 2014 paper that attempts
to control for many of these underlying variables by using
recently released data on mass layoffs in certain communities. The conclusion of that research was that even while
controlling for several different characteristics, including
job search characteristics, residential choice, and commute
times, prospective employees who live far from flourishing
job markets have a much harder time finding work when
compared to those who are in close proximity to those job
markets. Black workers were found to be 71 percent more
sensitive to the distance of jobs than whites, and 35 percent
more sensitive in finding a job that paid 90 percent of the
earnings from their previous job. Though the extent to
which certain groups are unable to follow jobs out into the
suburbs is uncertain, enough evidence exists to ask why
minorities, on average, do not relocate to job-rich communities to the extent whites have.

What Drives Spatial Mismatch?
While economists like Andersson have provided more
evidence for the existence of spatial mismatch, other
researchers have tried to understand the specific barriers
that restrict certain groups, particularly ethnic minorities,
from relocating to job-rich communities.
One barrier is information. Economists have considered
whether living in certain geographic locations can reduce
one’s information about possible job opportunities. Yves
Zenou, an economist at Stockholm University, studied social
networks in black communities that were geographically distant from job centers. Building on earlier research that studied networks and employment outcomes, Zenou concluded
that minority communities have far less access to the kinds
of relationships that lead to employment. Zenou found that
ethnic communities relied more heavily on strong ties with
those who are also more likely to be unemployed, and “it is
therefore the separation in both the social and physical space
that prevents ethnic minorities from finding a job.”
Another barrier is access to credit. Richmond Fed economist Santiago Pinto, in a 2002 article, studied how financial
constraints might limit mobility for those who wish to move
to job-rich areas. His research showed that restrictions on
borrowing were an important factor in how households
decided to move, and that those barriers were blocking labor
from following jobs into suburban communities.
“It is commonly thought that individuals have only limited opportunities to borrow against future labor income,”
Pinto says. “These constraints have consequences for moving decisions. This means that people who cannot borrow
will be restricted in terms of their capability of changing
residence location.”
Not all economists are convinced that geography is central to the story of minority unemployment. In 2008, David
Neumark, an economics professor at the University of
California, Irvine, University of Maryland economics professor
Judith Hellerstein, and Melissa McInerney, now a professor
at the College of William & Mary, offered an alternative to
the spatial mismatch hypothesis: “The problem is not a lack
of jobs, per se, where blacks live, but a lack of jobs into which
blacks are hired.” They tested this hypothesis using data that
compared the education levels needed for surrounding jobs
and the education levels of workers in the local labor market.
They found that black male employment was much more
strongly associated with the density of jobs in which minorities had traditionally been employed than it was for whites.

The spatial mismatch hypothesis would attribute the lower
employment of blacks to the distance between
available jobs and where blacks live. These researchers
found that lower employment could be better explained by
a lack of jobs that had historically been open to them. “Pure
spatial mismatch is not an important component of lower
black employment rates,” they wrote. “Instead the spatial
distribution of jobs available to blacks — or racial mismatch
— appears to be much more important.”
Of course, jobs are not the only drivers of residential
choice. Andersson notes that the benefits of living closer to
job-rich communities are dependent on the characteristics
of the communities and individuals. “For instance, the boost
in income from a low-paying job may not be sufficiently large
for an unemployed worker if the job requires relocation to a
more expensive community,” Andersson says.
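Andersson’s point lends itself to a back-of-the-envelope sketch. All numbers below are hypothetical, chosen only to show how higher housing costs can swamp the wage gain from relocating:

```python
# Hypothetical relocation arithmetic: does moving closer to a
# job-rich community pay off over a given horizon?

def net_gain_from_relocating(wage_gain, rent_increase, moving_cost, years):
    """Cumulative net benefit in dollars (no discounting, for simplicity)."""
    return years * (wage_gain - rent_increase) - moving_cost

# $6,000/yr higher pay, $4,800/yr higher rent, $3,000 one-time move,
# evaluated over five years: the move comes out ahead.
print(net_gain_from_relocating(6_000, 4_800, 3_000, 5))   # 3000

# Same job, but the job-rich community costs $7,200/yr more in rent:
print(net_gain_from_relocating(6_000, 7_200, 3_000, 5))   # -9000
```

The sign flips on housing costs alone, which is the mechanism Andersson describes: for a low-paying job, the wage boost may simply not cover the price of living near it.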

Policy Implications
In the years after the spatial mismatch hypothesis was
proposed, most economists studying the issue have found
a robust relationship between job location and labor market outcomes and economic well-being. The magnitude of
that relationship is still widely contested, but economists
generally agree that spatial mismatch exists and is driven by
a number of factors, including differences in job prospect
networks, access to loans, and transaction costs.
Researchers who have studied spatial mismatch have
prescribed a number of policy solutions to improve
employment outcomes for disadvantaged communities. The
question for policymakers is how to weigh such programs
against their costs.
For example, it may be desirable to attack some sources
of spatial mismatch for social reasons. In part, that’s what
anti-discrimination laws do. Other causes of spatial mismatch
may be cheap to reduce. For example, it may be relatively easy
to improve the flow of information between communities,
strengthening the network for employment knowledge.
Other proposed policies are more costly, making their
net benefit less clear. Kneebone and Alan Berube of the
Brookings Institution, in their 2013 book Confronting
Suburban Poverty in America, argued that regional communities, both urban and suburban, should collaborate to develop
ways of connecting high unemployment areas with high
job density areas. Again, improving information could be a
cheap way of doing so. More costly measures might include

expanding public transportation networks to connect certain populations with jobs.
Since credit constraints have been a concern, some states
have piloted voucher programs to help people relocate
to neighborhoods with greater job prospects. Maryland’s
Live Near Your Work Program, launched in 1998, offered
workers $3,000, funded equally by the state, the city, and
the employer, toward the purchase of a house located within five miles of the person’s workplace and within one of
Maryland’s targeted residential development zones. Surveys
from participants showed shorter commute times and a
switch to less costly commuting habits, such as walking to
work. The program’s funding ran out in 2002, but it survives
in Baltimore, where as many as 200 people per year receive
grants across nearly 85 participating employers. While the
program benefited recipients, and may even have improved
their labor prospects, it did so at a cost.
The tension in many programs countering spatial mismatch is that the costs of the effort are borne broadly
while the benefits accrue only to recipients. In that sense,
spatial mismatch is largely a distributional consideration
that policymakers have to evaluate like any other.
No matter the cost, what works for one area may not
work for another. “From a theoretical standpoint, some local
policies may serve as a coordination device that induces firms
and individuals to locate in a specific area,” Pinto says. “The
literature is, however, inconclusive about which specific policies are effective and can achieve the desired objectives. The
literature on downtown revitalization programs has faced
similar issues. While some policies seem to work in attracting
households back to downtown areas in some specific locations, the same policies have been unsuccessful elsewhere.”
While it has been 46 years since Kain first highlighted the relationship between the geography of jobs and
unemployment, many economists continue to debate the
degree to which they are related, especially for specific
demographic groups. Understanding the impact that residential location has on
job availability may help policymakers find ways of lowering
the barriers between affected communities and employment opportunities.
EF

Readings
Andersson, Fredrik, John C. Haltiwanger, Mark J. Kutzbach,
Henry O. Pollakowski, and Daniel H. Weinberg. “Job
Displacement and the Duration of Joblessness: The Role of
Spatial Mismatch.” National Bureau of Economic Research
Working Paper No. 20066, April 2014.
Holzer, Harry J., John M. Quigley, and Steven Raphael. “Public
Transit and the Spatial Distribution of Minority Employment:
Evidence from a Natural Experiment.” Journal of Policy Analysis
and Management, Summer 2003, vol. 22, no. 3, pp. 415–441.

Kneebone, Elizabeth. “Job Sprawl Revisited: The Changing
Geography of Metropolitan Employment.” The Brookings
Institution, April 2009.
Pinto, Santiago M. "Residential Choice, Mobility, and the Labor
Market." Journal of Urban Economics, May 2002, vol. 51, no. 3,
pp. 469–496.
Raphael, Steven, and Michael A. Stoll. “Job Sprawl and the
Suburbanization of Poverty.” Brookings Institution, March 2010.

E C O N F O C U S | T H I R D Q U A RT E R | 2 0 1 4

19

RAISE THE WAGE?

Some argue that there's no downside to a higher minimum wage,
but others say the poor would be hit hardest
BY WENDY MORRISON

Calls to raise the minimum wage can be found
anywhere from political speeches to the lyrics of
popular rap artist Kanye West. In the past few years,
many efforts to raise the minimum wage have been made on
the national, state, and even local level, including a drastic
local minimum wage hike to $15 in Seattle, a bill to increase
the minimum wage to $10.10 in Maryland, and an indexed
$10.10 minimum wage proposal endorsed by the White
House. Critics of the policy claim that economic theory
clearly supports their position, while supporters claim that
the empirical evidence is all over the map and point to
numerous examples of research that seem to fly in the face
of past theoretical conclusions about the minimum wage.
If there are seemingly compelling theoretical and empirical
justifications both for and against the minimum wage, who
should policymakers listen to? Does a minimum wage make
low-income workers, the group its proponents desire to
help, better off, worse off, or some of each?

The Decline of the Historical Consensus
Until around 20 years ago, there was a substantial divide
between public opinion and opinion within the economics
profession on the minimum wage. While minimum wage
laws have historically enjoyed a good degree of support
among the public, dating back to the first minimum wage
legislation following the Great Depression, there had been
a longstanding consensus among economists that minimum
wages have adverse effects on low-skilled employment.
A 1979 American Economic Review study reported that 90
percent of academic economists believed that minimum
wage policies generally cause higher unemployment among
low-skilled workers. By 2000, however, only 73.5 percent

of respondents to an update of the survey agreed wholly or
partially with that claim.
The historical consensus that minimum wages cause
unemployment stemmed from the conclusions of the
textbook competitive labor market model, in which the
minimum wage acts as a price floor. The price floor is
set above the wage employers would be willing to pay
to low-productivity workers like teenagers and the less
educated, so the quantity demanded of these workers
decreases. This view was famously articulated in Nobel
Prize-winning economist George Stigler's seminal 1946 article, "The Economics of Minimum Wage Legislation." Responding
to the federal minimum wage proposal of 1938, Stigler
argued that the legislation could reduce employment by as
much as several hundred thousand workers.
Today, however, Stigler’s view does not command the
near-unanimous assent that it once did. In a 2006 survey
of 102 studies on the minimum wage, economists David
Neumark of the University of California, Irvine and William
Wascher of the Federal Reserve Board of Governors noted
that past estimates of employment elasticities — the percentage
change in employment corresponding to a 1 percent change in the
minimum wage — range from significantly negative to slightly
positive. Neumark is quick to note, however, that “most of
the evidence says there are disemployment effects” and that
claiming the evidence is all over the map is misleading.
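As a rough guide to reading such estimates, an employment elasticity converts a percentage change in the minimum wage into a percentage change in employment. The elasticity values in this sketch are hypothetical, chosen only to illustrate the arithmetic:

```python
def employment_change(elasticity, pct_minwage_change):
    """Implied percent change in employment for a given elasticity
    and a given percent change in the minimum wage."""
    return elasticity * pct_minwage_change

# A 10 percent minimum wage increase under three hypothetical elasticities:
for e in (-0.3, -0.1, 0.05):
    print(f"elasticity {e:+.2f}: employment changes "
          f"{employment_change(e, 10.0):+.1f}%")
```

So an elasticity of -0.1 implies that a 10 percent hike reduces employment by about 1 percent, which is why the sign and size of these estimates matter so much for the policy debate.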
In their recent book What Does the Minimum Wage Do?,
however, Dale Belman of Michigan State University and Paul
Wolfson of Dartmouth College argued in a meta-analysis
on the subject (that is, a study of studies) that “employment
effects [of minimum-wage increases] are too modest to have
meaningful consequences for public policy in the dynami-

cally changing U.S. labor market,” according to the book’s
website. Why have study results been so varied? Several
explanations, both theoretical and empirical, have been
offered. The current state of opinion among economists is
unclear, and uncovering the root of the decline of the consensus is difficult. One likely factor, however, is the recent
variation in state-level minimums and the opportunity such
variation provides for new methods of comparative study.

Monopsony in the Labor Market
The work often considered as the beginning of the modern
minimum wage debate is an oft-cited 1994 American Economic
Review article, in which David Card of the University
of California, Berkeley and Alan Krueger of Princeton
University looked at the effects of minimum wage increases
on fast-food workers in mid-Atlantic states and controversially found that the minimum wage seemed to increase,
rather than decrease, employment. In publishing their
paper, Card and Krueger also alleged publication bias in
the economics profession, suggesting that some of the
historical consensus about the minimum wage could be
attributed to a predisposition among scholars and editors toward favoring research that found significant negative
effects over work that showed neutral or positive effects.
To explain the unconventionally positive employment
effects they detected, Card and Krueger suggested that
the labor market may not be as competitive as economists
had previously thought and that one explanation might be
a degree of “monopsony” in the market, a classic type of
market failure. Just as firms may have monopoly power in
markets where they are the sole seller of a good, firms may
also have monopsony power in the labor market if they are
effectively the sole buyer or employer.
In a competitive market, wages are determined by supply
and demand, all firms pay the given competitive wage, and
the cost at the margin of one extra worker is simply that
wage. When firms are the sole buyer of labor, however, they
have the ability and the motive both to pay wages that are
too low given the productivity of their workforce and to
restrict employment. The key is that a monopsonist’s labor
demand affects the market wage in a way that an individual competitive firm’s demand doesn’t — that is, they are
price-setters, not price-takers.
In other words, if a monopsonist demands just one extra
worker, she ends up increasing the market wage for all workers, as her demand is the market demand. In this way, the
added cost of an extra worker rises with every worker hired,
a phenomenon known as increasing marginal cost. The cost
of one extra worker, her cost at the margin, will not just be
that worker's wage but also the wage increase across her entire
workforce. She will therefore under-employ as well as underpay.
By setting a minimum wage above what the monopsonist
is paying, the government essentially makes the extra cost
per worker the same for all workers, meaning the monopsonist’s costs at the margin are constant instead of increasing
with each added employee. Facing these constant marginal
costs, the employer will increase her workforce in order to
maximize profit. It follows, then, that a well-placed minimum wage could induce the monopsonist to both raise
wages and hire more.
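The monopsony logic can be sketched with a toy numerical model. Every number below is hypothetical, chosen only to make the mechanism concrete; none comes from the research discussed here:

```python
# Toy monopsony labor market (hypothetical numbers, illustration only).
# Labor supply: attracting L workers requires paying a wage w(L) = 4 + 0.5*L.
# Each worker generates MRP = $10 of revenue per hour.

MRP = 10.0

def supply_wage(L):
    """Wage needed to attract L workers."""
    return 4.0 + 0.5 * L

def marginal_labor_cost(L):
    """Derivative of total labor cost w(L)*L = 4L + 0.5L**2, i.e. 4 + L.
    One extra hire raises the wage paid to everyone already employed."""
    return 4.0 + 1.0 * L

# Monopsonist: hire until marginal labor cost equals MRP (4 + L = 10).
L_monopsony = MRP - 4.0                  # 6 workers
w_monopsony = supply_wage(L_monopsony)   # $7 wage, below the $10 MRP

# Competitive benchmark: wage equals MRP, supply determines employment.
L_competitive = (MRP - 4.0) / 0.5        # 12 workers at a $10 wage

# A $9 minimum wage flattens the marginal cost of labor at $9, so the
# firm keeps hiring as long as MRP ($10) exceeds $9 -- up to the number
# of workers willing to work at that wage.
w_min = 9.0
L_with_min = (w_min - 4.0) / 0.5         # 10 workers: more jobs AND higher pay
```

In this stylized market, the minimum wage moves both employment (6 to 10) and the wage ($7 to $9) toward the competitive outcome, which is the textbook monopsony result described above.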
But how plausible is it that monopsony actually exists
in the low-wage labor market? On this question, economists disagree. In a 2010 Princeton working paper, Orley
Ashenfelter and Henry Farber of Princeton University
and Michael Ransom of Brigham Young University argued
that monopsony power is likely pervasive in labor markets.
According to their paper, an example of a labor market monopsonist in practice would be a “ ‘company town,’ where a
single employer dominates.” Citing evidence that labor supply is inelastic — in other words, that workers are not highly
responsive to changes in their wages — they argued that
monopsonistic employers are able to use this inelasticity to
their advantage and that the “allocative problems associated
with monopsonistic exploitation are far from trivial.”
Daniel Aaronson of the Federal Reserve Bank of Chicago,
Eric French of the Federal Reserve Bank of Chicago and
University College London, and James McDonald of the
U.S. Department of Agriculture disagree. In a 2007 article
in the Journal of Human Resources, they found that the evidence surrounding price changes after minimum wage hikes
is inconsistent with the monopsony model. They reasoned
that, if the minimum wage increases employment under
the monopsony model, it should straightforwardly lead to
increased production. This increase in supply should lead
to lower prices for the good produced. “Because [monopsonists] will hire on more workers, they’ll sell more hamburgers; because they sell more hamburgers, we thought the price
should actually fall after a minimum wage hike,” says French.
After examining the response of restaurant prices to
increases in the minimum wage, however, they found that
the opposite was true. Instead of falling, prices rose, a result
consistent with the competitive model in which firms pass
the extra labor costs on to consumers.

The Hungry Teenager Theory
Another explanation mentioned frequently in the media
is the theory that increased wages for some workers stimulate demand for goods produced by low-income workers
and offset or even reverse negative employment effects. In
the academic literature, this theory has been referred to as
the “hungry teenager” effect. The theory first appeared in
a 1995 Journal of Economic Literature article by economist
John Kennan of the University of Wisconsin-Madison, who
argued that if a typical minimum-wage worker, such as a

teenager, spent his extra wages on minimum-wage-produced
goods, then the extra demand could offset any disemployment effects. That increased demand for minimum wage
goods would raise their prices. French explains that according to this model, “Firms receive an increase in demand at
the exact same time that they have to pay higher wages,
which could seriously blunt the effect of higher wages in
terms of how many workers a firm might have to shed.”
One issue with the theory is that the income effect, the
extra consumption spurred by an individual’s rise in income,
may be dominated by a substitution or price effect, in which
a consumer substitutes a good in favor of others when the
relative price of that good rises. Many low-income workers
(in this example, teenagers) will have higher incomes as a
result of the minimum wage, but goods produced by those
workers (in this example, hamburgers) are also now more
expensive, causing all consumers to buy fewer of them.
“Minimum wage advocates always say those price effects
won’t have any effect on demand,” Neumark observes, “but
that raises the question of why companies wouldn’t raise the
price before the minimum wage goes up?”
Another issue, according to French, is that although
“household spending actually does go up a lot amongst
households with minimum wage workers after a minimum
wage hike,” goods like hamburgers that are produced by
minimum wage workers just aren’t that big a share of their
budgets. “For that reason, an explanation that claims the
minimum wage truly causes that big of an income effect,”
says French, “just doesn’t really work.”
In order for the theory to work, the benefit to low-income workers would need to be enormous, and low-income
workers would need to spend all or almost all of those earnings exclusively on goods produced by other low-income
workers. “It’s certainly possible to write down a theoretical
model in which the additional wages paid to low-wage workers increases consumption by so much that employment
doesn’t fall,” says economist Jonathan Meer of Texas A&M.
But “it seems extraordinarily unlikely — the assumptions
necessary are practically laughable.”
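French's budget-share point can be made concrete with back-of-the-envelope arithmetic. Every figure below is invented purely to illustrate the logic, not drawn from any study:

```python
# Back-of-the-envelope check on the "hungry teenager" channel.
# All figures are hypothetical, chosen only to illustrate the logic.

raise_per_worker = 2000.0   # extra annual earnings per minimum wage worker
n_workers = 100             # minimum wage workers in a local market
budget_share = 0.05         # share of their spending that goes to goods
                            # produced by other minimum wage workers

extra_payroll = raise_per_worker * n_workers   # firms' added labor cost
extra_demand = extra_payroll * budget_share    # demand flowing back to firms

offset_ratio = extra_demand / extra_payroll    # 0.05
# Only 5 cents of each extra payroll dollar returns as demand for the
# firms' output; the offset approaches 1 only if budget_share does.
```

Unless low-wage workers spend nearly all of the raise on goods made by other low-wage workers, the demand boost covers only a sliver of the higher wage bill, which is the thrust of Meer's objection.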

Substituting High-Skilled Workers for Low-Skilled
Another major theoretical explanation for the modest disemployment effects is known as labor-labor substitution.
This theory speculates that firms respond to minimum wage
hikes by adjusting the make-up of higher- and lower-skilled
workers in their workforces. While readjusting a production
process may be difficult in the short run, firms may be able
to swap out low-skilled workers for higher-skilled workers
more quickly as the former become comparatively more
expensive.
If this were the case, we would expect to see decreased
demand for lower-skilled workers and increased demand
for higher-skilled workers in response to a minimum wage
increase, meaning the effect on the overall level of employment would be muted — but the changes would hurt the
low-skilled. Citing evidence from a 1995 NBER working

paper he co-authored with William Wascher of the Federal
Reserve Board of Governors, Neumark explains that it can
be hard to tease out the effects on the low-skilled workers
from the net effects on general employment; despite modest changes to net employment levels, “what happens to
those you’re most trying to help can still be pretty severe.”

Data Problems?
In addition to competing theories about the nature of the
labor market, questions have arisen about the means of measuring and interpreting the data surrounding minimum wage
and its effects. Several major empirical challenges may affect
the ability of economists on both sides of the issue to get an
accurate picture of the policy’s consequences. One potential
issue concerns inflation. Currently, almost four-fifths of all
states and the federal government do not index their minimum wages to changing price levels, meaning that the real
values of the minimums are eroded over time (see chart),
until another one-time nominal increase changes their value.
Meer says that even though the United States has not had
significant inflation in recent years, the real effects of nominal minimum wage increases are washed away over time.
“Over the course of the data we examine, we show that
minimum wage increases are eroded fairly quickly relative to
comparison states,” he says.
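The erosion mechanism is simple compounding. A sketch with assumed numbers (the 2 percent inflation rate is hypothetical, and $7.25 is used only as a familiar nominal figure):

```python
# How inflation erodes an unindexed nominal minimum wage.
# The inflation rate here is assumed, purely for illustration.

nominal = 7.25      # nominal minimum, fixed between legislated increases
inflation = 0.02    # assumed annual inflation rate
years = 5

real_value = nominal / (1 + inflation) ** years   # about $6.57
erosion = 1 - real_value / nominal                # about 9.4 percent lost
```

Even modest inflation quietly claws back nearly a tenth of the minimum's purchasing power over five years, which is why unindexed increases fade relative to comparison states.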
As a result, economists turned to measuring the short-run
effects of the policy on employment levels. If the effects of
minimum wage policies were not immediate, however, then
measuring only short-run adjustments of employment levels
would not capture the true effect of the policy. Isaac Sorkin
of the University of Michigan argued that production processes and labor demand may be slow to adjust to changing
conditions in the labor market. He noted that while some
argue that turnover among low-skilled workers is high, these
frequent changes reflect changes in the identity of workers,
not in total labor demand, which is likely much less flexible.
Similarly, Meer and his Texas A&M colleague Jeremy West
argued that because labor demand may be slow to adjust, we
should expect to see the minimum wage affect job growth
trends rather than change the absolute level of employment.
In other words, the minimum wage may affect the number
of future jobs created, rather than the current number of
people employed.
Furthermore, many also control for the growth trends
themselves in an attempt to account for any differences
in employment levels that could be the result of trends
before the policy was enacted. This means that much of
the literature may be missing the true effects of the policy.
To demonstrate this point, Meer and West simulated data
in which the minimum wage has serious negative effects on
employment growth but no immediate effect on employment levels and showed how measuring levels and controlling for trends in employment growth can mask serious
long-run negative effects.
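The levels-versus-growth distinction can be illustrated with a toy simulation. The growth rates below are invented; this is a sketch of the mechanism, not Meer and West's actual exercise:

```python
# Toy simulation: a policy that slows employment *growth* produces almost
# no *level* gap right away, so short-run level comparisons understate
# the long-run effect. All numbers are hypothetical.

T = 40                            # quarters; policy takes effect at t = 20
g_before, g_after = 0.01, 0.005   # quarterly growth halves after the policy

emp = [100.0]                     # employment path with the policy
for t in range(1, T):
    g = g_before if t < 20 else g_after
    emp.append(emp[-1] * (1 + g))

# Counterfactual: growth never slows.
counterfactual = [100.0 * (1 + g_before) ** t for t in range(T)]

gap_1yr = 1 - emp[23] / counterfactual[23]   # one year after: under 2 percent
gap_5yr = 1 - emp[39] / counterfactual[39]   # five years after: over 9 percent
```

One year out, the level gap is small enough to vanish in noisy data; five years out it is several times larger, which is why measuring levels, or controlling away growth trends, can mask a serious long-run effect.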
Why not simply control for trends in the period leading
up to a minimum wage hike and then examine the differences

in trends instead of discrete levels afterward? "The issue
is that there are so many changes and they are so frequent
that there is no distinct 'pre' period," says Meer, making it
very difficult to find an appropriate counterfactual. Indeed,
the challenge of finding a counterfactual or control group
for states that change their minimum wage policies appears
to be a more general empirical problem. "We never really
observe what would have happened otherwise," cautions
Neumark, "and therefore we somehow have to proxy with
the data, which is always a challenge."
Without an appropriate counterfactual or control group
established, data can't give researchers reliable information
about cause and effect. Earlier this year, for example, news
sources across the country reported on data from the Bureau
of Labor Statistics (BLS) that found a correlation between
states that had raised their minimum wages and relatively
faster job growth. In order to establish any causality, however, an economist would need a counterfactual comparison,
and while the BLS data may be suggestive, it tells us nothing
definitive about the effect of the minimum wage and could
potentially be very misleading.

Is the Minimum Wage an Effective Policy Tool?
Finally, to the extent that the minimum wage is intended to
act as an antipoverty policy, it is also important to consider
who actually makes the minimum wage and how it affects
the lowest-skilled, most vulnerable workers. There has been
a good deal of controversy over competing demographic
claims, as some commentators paint a picture of minimum
wage workers as predominantly middle-class teenagers, while
others respond that there are many minimum wage workers
struggling to support families as the primary breadwinner.
Both arguments have a degree of truth to them. According
to a report released by the BLS in March, about 2.5 percent
of all workers in the United States make at or below the
minimum wage. Many of them are full-time adult workers,
the group most likely to be breadwinners. At the same time,
however, minimum wage workers are disproportionately
young and part time. Despite being only 19.9 percent of
total wage workers, young workers (those under 25) are 50.4
percent of minimum wage workers. Despite being only 26.9
percent of total wage workers, 64.4 percent of minimum
wage workers are part time.

[Chart: Federal Minimum Wage, 1938-2013 (in 2013 dollars), in dollars per hour.
NOTE: Inflation adjustment based on personal consumption expenditure (PCE) index.
SOURCE: Bureau of Economic Analysis; Pew Research Center; U.S. Department of Labor Wage and Hour Division]

Additionally, the minimum wage does not target the
poor specifically, as almost a third of minimum wage workers — 29 percent — live in households making more than
three times the poverty income threshold, while less than
one-fifth of them live in households whose incomes fall
at or below it, according to a 2014 Congressional Budget
Office report. In this way, even without causing large unemployment effects, the minimum wage would remain a blunt
and ineffective tool for fighting poverty. "The fundamental
problem with using minimum wages to increase the incomes
of poor and low-income families is that the policy targets
low-wage workers, not low-income families, which are not
necessarily the same," wrote Neumark.
When compared with more targeted policies like the
Earned Income Tax Credit and other transfer programs aimed
at aiding only the poor, the minimum wage becomes harder
to justify as an antipoverty measure. Furthermore, if firms
respond to minimum wage hikes by swapping out low-skilled
workers for higher-skilled workers, minimum wage policies
could be boosting the wages of slightly higher-skilled workers
while hurting the very group the policy was designed
to support. In short, though the evidence on the effect of
the minimum wage on overall employment levels is varied,
it is still likely a problematic tool for improving the living
standards of the poor.
EF

Readings
Card, David, and Alan Krueger. “Minimum Wages and
Employment: A Case Study of the Fast Food Industry in New
Jersey and Pennsylvania.” American Economic Review, September
1994, vol. 84, no. 4, pp. 772–793.
“Characteristics of Minimum Wage Workers, 2013.” U.S. Bureau
of Labor Statistics, Report No. 1048, March 2014.

“The Effects of a Minimum Wage Increase on Employment and
Family Income.” Congressional Budget Office, Pub. No. 4856,
February 2014.
Neumark, David, J.M. Ian Salas, and William Wascher. “More
on Recent Evidence on the Effects of Minimum Wages in the
United States.” National Bureau of Economic Research Working
Paper No. 20619, October 2014.


ECONOMIC HISTORY
Free to Speculate
BY KARL RHODES

British frontier policy threatened Colonial land speculation
on the eve of the American Revolution

As Britain's secretary of state
for the Colonies, Wills Hill,
the Earl of Hillsborough, vehemently opposed American settlement
west of the Appalachian Mountains. As
the Pennsylvania Provincial Assembly’s
agent in London, Benjamin Franklin
enthusiastically advocated trans-Appalachian expansion. The two bitter enemies disagreed about many things, and
British land policy in the Colonies was
at or near the top of the list.
In the late 1760s, Franklin joined
forces with Colonial land speculators
who were asking King George’s Privy
Council to validate their claim on more
than 2 million acres along the Ohio
River. It was a large western land grab — even by Colonial
American standards — and the speculators fully expected
Hillsborough to object. But instead
of opposing the deal, Hillsborough
encouraged the speculators to “ask for
more land,” Franklin reported, “enough
to make a Province.”
It was a trick. Hillsborough knew
the Privy Council frowned upon large

[Map: The Proposed Province. The proposed Vandalia colony, showing Fort Pitt, Boonesborough, the Proclamation Line, and present-day PA, OH, WV, VA, KY, TN, and NC.
SOURCES: The boundaries of Vandalia are from Voyagers to the West by Bernard Bailyn. The position of the Proclamation Line is from the Historical Atlas of West Virginia by Frank Riddel.]


land grants, so he reasoned that a much
bigger proposal would have a much
smaller chance of winning approval.
But Franklin and his partners turned
Hillsborough’s tactic against him. They
increased their request to 20 million
acres only after expanding their partnership to include well-connected British
bankers and aristocrats, many of them
Hillsborough's enemies. This Anglo-American alliance proposed a new colony called Vandalia, a name that Franklin
recommended to honor the queen’s purported Vandal ancestry. The new colony
would have included nearly all of what
is now West Virginia, most of eastern
Kentucky, and a portion of southwest
Virginia, according to a map in Voyagers
to the West by Harvard historian Bernard
Bailyn.
Vandalia is perhaps the most important and intriguing tale of Colonial land
speculation in the years leading up to
the American Revolution. It dramatically highlights the growing tension
between expanding American ambition
and constricting British control.
“Hillsborough on the one side and
the Vandalia speculators on the other
correctly understood the immensity
of the stakes involved,” Bailyn wrote.
“The problems of emigration (population drain from the British Isles) and
expansion into the American west had
become dangerously inflamed, and the
connection between them was beginning to be widely understood.”
Among the motivating factors for
the American Revolution, the conflict between Colonial land speculation
and British frontier policy typically is
overshadowed by the “taxation without representation” mantra. But in the
1760s and 1770s, Britain’s attempts to
curb settlement in the trans-Appalachian region became a major threat to
the political rights and economic interests of colonists, including most of
the men who would become America’s
Founding Fathers.

Vandalia also illustrates the chaotic struggle to obtain and
retain land in the trans-Appalachian region. Everyone from
wealthy speculators and royal governors to poor settlers and
squatters — not to mention the indigenous people who lived
there — prized the fertile fields and navigable rivers between
the Appalachian Mountains and the Mississippi River.
For the white settlers, there also was a fundamental connection between land and independence, explains François
Furstenberg, associate professor of history at Johns Hopkins
University. "One of the requirements for independence — as
it was understood in the 18th century — was to be a landholder. If you did not hold land, you were not independent."

Colonial Land Claims
It may seem unpatriotic to portray the Founding Fathers as
land speculators. The term carries a negative connotation
because land transactions ­ particularly large, sight-unseen
—
deals ­ are hotbeds for fraud. But land speculation is not
—
necessarily a bad thing.
“Land speculators are basically taking risks that other
people don’t want to take,” explains Farley Grubb, professor
of economics at the University of Delaware and an expert
on early American land policy. “People may hate speculators
when they appear to make a lot of money without working
for it. But they are taking risks that allow people to liquidate
land claims into something of more immediate value.”
In Colonial America, this risk-reward trade-off was highly
favorable to land speculators who had the right political
connections. “Their basic business model was to acquire
land from a public entity (initially the crown) at low cost
and gradually sell the land to smaller investors,” wrote
Harvard economist Edward Glaeser in a National Bureau of
Economic Research working paper, “A Nation of Gamblers:
Real Estate Speculation and American History.” The profit
potential could be staggering.
“America has always been a nation of real estate speculators,” Glaeser noted. “Real estate is a particularly democratic
asset that attracts the mighty, like George Washington and
Benjamin Franklin, and the modest, like the small farmers in
Kent, Connecticut, who were buying and selling land parcels
rapidly in 1755.”
The French and Indian War — the North American
theater of the Seven Years' War — dammed up land speculation at the Appalachian Mountains, but the eviction of the
French from the trans-Appalachian frontier in 1760 opened
the floodgates. Many Indian nations continued to resist
Colonial incursion, but the population of the 13 Colonies was
increasing rapidly, and westward expansion quickly escalated
into major land rushes across the mountains.
“The population movement into uncultivated and legally
unclaimed land excited feverish ambitions in land speculators in every corner of the Anglo-American world,” Bailyn
wrote. “Among them were most of the officials of colonial
America, a large phalanx of British politicians and merchants, and planters and merchants everywhere in America,
who were determined to get a substantial piece of the pie.”

The British government, however, was reluctant to condone settlement of the land it gained from the French and
Indian War. Heavily in debt from many years of global conflict, the British had no desire to continue fighting Indian
nations on America’s western frontier. So the Privy Council
issued the Royal Proclamation of 1763, which prohibited
settlers from moving beyond the Appalachian Mountains.
Colonial land speculators, including Washington, viewed
the proclamation as a temporary measure to appease the
Indians. Speculators continued to acquire western land
rights from Colonial governments and Indian representatives. And in some prominent cases — such as Vandalia —
they continued to lobby the British imperial government to
validate those claims.
“People assume that the colonists woke up every morning
in the 1760s and 1770s and asked, ‘When are we going to be
free?’ And that wasn’t the case at all,” says Alan Taylor, professor of history at the University of Virginia. “The leaders
of the Colonies, in particular, were deeply enmeshed in the
institutions of the empire, and they were doing their best to
exploit those institutions for their own benefit. They rather
belatedly discovered in 1774 and 1775 that those institutions
were no longer working in their favor.”

The Vandalia Deal
The origins of the proposed Vandalia colony go back to
a group of merchants called “the suffering traders,” who
demanded restitution for supplies lost to indigenous combatants during the French and Indian War. A group of
Philadelphia speculators, led by Samuel Wharton, bought
out most of the suffering traders’ claims and swapped them
for a claim on more than 2 million acres along the Ohio River
southwest of Fort Pitt (present day Pittsburgh).
Representatives of the Six Nations of the Iroquois ceded
this land to Wharton’s group via the Treaty of Fort Stanwix,
which was negotiated by Sir William Johnson, one of two
superintendents of Indian affairs in the Colonies. Johnson,
who was in cahoots with Wharton’s group, exceeded his
authority by extending the Royal Proclamation Line farther than his instructions from Hillsborough allowed. The
Iroquois representatives also exceeded their authority by
selling land that did not belong to them. They lived mostly
in what is now New York, while the land they were ceding
was in the Ohio Valley, which was populated primarily by
the Shawnee and the Delaware.
In addition to questions of authorization and outright
fraud, many such treaties were indeterminate in other
ways, Furstenberg says. “When Native Americans sell land,
they might be selling certain rights to the land, the right to hunt or farm on the land, but they don't fundamentally
sell the land. It’s a nonsensical concept to them, but a land
speculator might bribe somebody to sign a piece of paper
and then go back to his Colonial government and say, ‘I
now have the rights.’ ”
That approach was not an option for Wharton’s group,
however, because Hillsborough quickly challenged the Treaty
of Fort Stanwix. So Wharton set sail for London, where he
and Franklin asked the Privy Council to confirm the deal.
Hillsborough fended them off for three years, but then
his “ask-for-more-land” trick backfired. The Privy Council
approved Vandalia, and Hillsborough resigned rather than
implement the council’s decision. “He had serv’d us by the
very means to destroy us, and tript up his own Heels in the
Bargain,” Franklin wrote. An ecstatic Wharton, expecting to
be named the royal governor of Vandalia, told his associate in
the Colonies to start making plans to build a suitable seat of
power for him in the Ohio Valley.
Just when it looked like Vandalia was on the brink of success, the Boston Tea Party poisoned the pond. And in June
1774, passage of the Quebec Act, which extended the boundary of Quebec to the Ohio River, made it clear that Vandalia
would never win final approval from the British government.
By promoting a gigantic Anglo-American land-speculating company, Franklin tried to realign the economic
interests of British and American leaders, Taylor concluded
in his forthcoming book, American Revolutions. “Instead,
the frustration of that model widened the gap between the
elites on the two sides of the Atlantic, hastening the rupture
of the empire.”

Economics or Politics?
Was the American Revolution about economic interests
or political rights? After a long debate, economic historians
generally have concluded that it was mostly political, but the
two categories of motivation are often intertwined, Grubb
notes, especially in the long run.
“While the world’s attention was drawn to the question of
the political and constitutional relations between Britain and
America, these other problems were developing quickly and
dangerously,” Bailyn wrote. “First was the question of controlling settlement in the great new western land acquisitions.”
The Royal Proclamation of 1763 was Britain’s initial
attempt to do that, but the proclamation does not receive
much attention in history books as a motivating factor of the
American Revolution. Most economists and historians focus
more on the Quebec Act, the Quartering Act, and the various tax acts, Grubb says. “You could look at the Quebec Act
as a pure taking of assets. Did that spark enough indignation
to make people go to war? It was certainly one of the things.”
In October 1774, Richard Henry Lee, a prominent
Virginia statesman, told the Continental Congress that
the Quebec Act was “the worst grievance” suffered by the
colonists, Taylor wrote. “As an avid land speculator, Lee
understood that the imperial crisis pivoted on issues of
land as well as taxes.”

Taylor views taxes and frontier policy as “two faces of the
same problem, which was Parliament trying to exert itself as
the sovereign legislature for the entire empire. The colonists
were thinking to themselves, ‘We don’t want to be taxed by
Parliament, and we don’t want this money coming out of our
pockets to pay soldiers who are going to restrain our efforts
to expand into Indian country.’ ”
As a motivating factor of the Revolution, Furstenberg
sees British frontier policy as “just as important, if not more
important, than the things we normally hear about: trade policy, the Navigation Acts, coercive acts, etc.”
Grubb notes that pre-Revolutionary rioting generally was
sparked by taxation issues, not frontier policy, but Taylor
says revolutionaries may have emphasized taxation because
it was a unifying issue, while land policy was potentially divisive among Colonial leaders. Wharton and Franklin’s plans
for Vandalia, for example, conflicted with land claims held
by prominent land speculators from Virginia.

Land of the Free
Land policy was almost as divisive during the Revolution as
it had been before the Revolution. The Vandalia group “was
very likely behind the attempt in the summer of 1776 to create
a fourteenth commonwealth to be known as Westsylvania,”
wrote historian Otis Rice in The Allegheny Frontier. “Powerful
forces, however, opposed the creation of a new commonwealth. With the Declaration of Independence at hand and
a need for unity among the thirteen states, Congress had no
intention of antagonizing two of its most important commonwealths (Virginia and Pennsylvania) by depriving them
of western lands to which they held claim.”
In 1779, some of the Vandalia partners asked Congress to
recognize their claims in the Ohio Valley, but Congress had
plans of its own for the trans-Appalachian region. Early in
the Revolution, the delegates had started discussing the sale
of western land as their best option for financing the war.
But for this strategy to work, states with huge western land
claims, most notably Virginia, would have to cede much of that territory to the federal government.
“The Articles of Confederation, which they were operating under, didn’t get ratified until 1780, because Maryland,
which had no claims to western land, said, ‘We won’t ratify
this until you solve this problem,’” Grubb says. “I think
Virginia was persuaded by its neighbor.”
From 1781 through 1802, Virginia and six other states
ceded 222 million acres of potentially salable land extending
west to the Mississippi River, north to the Great Lakes, and
south to Florida. “The U.S. Federal Government,” Grubb
says, “was born land rich.”
EF

Readings
Anderson, James Donald. “Vandalia: the First West Virginia?”
West Virginia History, Summer 1979, vol. 40, no. 4, pp. 375-392.
Furstenberg, François. “The Significance of the Trans-Appalachian
Frontier in Atlantic History.” American Historical Review,
June 2008, vol. 113, no. 3, pp. 647-677.
Marshall, Peter. “Lord Hillsborough, Samuel Wharton and the
Ohio Grant, 1769-1775.” English Historical Review, October 1965,
vol. 80, no. 317, pp. 717-739.
Taylor, Alan. American Revolutions. New York: W.W. Norton
& Co., forthcoming.

BOOKREVIEW

Publish or Perish
SECRETS OF ECONOMICS EDITORS
EDITED BY MICHAEL SZENBERG AND
LALL RAMRATTAN
CAMBRIDGE, MASS.: THE MIT PRESS,
2014, 408 PAGES
REVIEWED BY RENEE HALTOM

Though economics blogs may be gaining readers,
journals remain at the center of the profession.
Publication in a top journal is a seal of approval that
tells consumers of economics research where to direct their
attention. It can bring visibility to a rising star and signal a
veteran economist’s continued relevance. Publications and
citation counts are still the dominant way of measuring an
economist’s productivity for purposes of establishing tenure
or promotions. And for future researchers, the profession’s
more than 1,000 journals catalog what the profession knew
at a point in time.
Therefore, the editors of economics journals wield considerable power. They assign referees and make the final
judgment on whether a paper is accepted. They keep referees on schedule and oversee the revision process. In doing all
this, they set the tone for the journal and, article by article,
help adjudicate scientific advancement itself.
Secrets of Economics Editors explores this vital function.
The book features two dozen essays from current and past
journal editors, ranging from top general-interest journals to
regional and subfield publications. The contributors cover
everything from how journals deal with plagiarism and errors, both reasonably rare problems, to competition within
the publication industry and the persistent dominance of the
highest-ranked journals.
Arguably the most important question about academic
publishing is whether journals truly encourage and publish
the best research. Opinions on this question differ, but the
essays provide some of their most enlightening insights into
the value and role of economics journals via anecdotes of
the article review process itself, the topic to which the book
devotes most of its pages. These stories convey both the
subjectivity of the process and how seriously editors treat it.
For example, one of an editor’s first and most important
tasks is selecting referees, typically one to three per paper. The
choice weighs depth and breadth, both of which are important but in different measures based on the aim of the journal.
For a paper in a narrow subfield, such as neuroeconomics, it
can actually be an asset to select a referee in a different field
entirely since, if they are unconvinced by a paper’s argument
or importance, the median reader is likely to be too. In that
sense, “the referee is always right,” notes John Pencavel, who
edited the generalist Journal of Economic Literature.

At the same time, referees are not immune to bias, and personalities matter a great deal. Campbell Harvey, former editor
of the Journal of Finance, recalls keeping detailed records on
past referees’ timeliness, quality, and even specializations (asset pricing and corporate finance theorists are tougher reviewers, he reports) to aid both his referee-selection process and his interpretation of their reports.
Some editors have experimented with ways of speeding up
the profession’s notoriously slow response times — authors
must sometimes wait a year or longer for a decision — often
by finding faster ways of plucking unpromising papers out
of the process. Some allow authors to forgo the “revise and
resubmit” option in favor of a binary “accept” or “reject,”
which both promises a faster review and encourages authors
to submit a more complete draft. Other editors issue “desk
rejections,” the practice of flatly denying a paper without consulting referees. Though authors in such instances
bemoan the loss of a referee’s feedback and are more likely
to protest the decision, several of the book’s editors maintain that their responsibility is to the journal and not to its
aspiring contributors.
Given the diversity across journals in focus and practice,
perhaps the only universal fact about editing is that it is
not for the faint of heart. The sheer volume of submissions, more than 1,000 per year at some top journals, is daunting in itself and ensures a very low acceptance rate. R. Preston
McAfee, formerly of American Economic Review and Economic
Inquiry, estimates having rejected 2,500 papers in his career
while accepting only 200. “Fortunately,” he writes, “there is
some duplication across authors, so I have made only around
1,800 enemies.”
Also daunting is the responsibility of balancing decisiveness with an open mind. Authors and editors alike worry
that journals “play it safe” by bypassing innovative but risky
work in favor of marginal technical accomplishments on an
established topic, which can ingrain mainstream thinking.
At the same time, some infamous rejections, such as the 1970 “The Market for Lemons” paper that largely won George Akerlof the Nobel Prize, continue to haunt editors. (Perhaps worse, notes McAfee, Akerlof received three
confidently smug rejection letters, providing an additional
lesson in the wrong way to write them.) Fortunately, most
editors report relatively few regrets.
Overall, the book provides outsiders with a rare glimpse
into what is arguably still the primary venue of progress in
the economics profession. The audience for such seeming
minutiae may not be immediately obvious, but as Nobel
Prize winner Robert Solow points out in his foreword, it
includes anyone who has ever submitted to, been published
in, or read an economics journal. In other words, just about
the entire profession.
	
EF

DISTRICTDIGEST

Economic Trends Across the Region

The Rising Tide of Large Ships
BY JAMIE FEIK AND ANN MACHERAS

In 1988, a new class of container ships, the American President Lines (APL) C-10, came on the market: the
first class of ships that was too large to pass through the
Panama Canal. Eighteen years later, in 2006, the Panama
Canal Authority began a multiyear project to expand the
canal so that these and other large ships would be able to make
the passage between the Atlantic and Pacific oceans.
The expansion of the Panama Canal, expected to reach
completion in late 2015, heralds much-anticipated shifts in
the routes that goods take to arrive at their final destinations in the United States. This is because larger ships, up to
double the size of those that can transit the Panama Canal
today, will be able to navigate the canal once its new locks
are opened. From the growing markets of Northeast Asia,
Southeast Asia, and the Indian subcontinent, the first leg
of the journey for most traded goods is the long maritime
trip from overseas ports to U.S. ports on the East Coast, the
West Coast, and the Gulf of Mexico. Container shipments to East Coast ports, which include the ports of the Fifth District, may increase, particularly for goods arriving from Northeast Asia (China and Japan).
The opportunity for East Coast ports to gain from the
expansion of the Panama Canal depends on many factors,
not the least of which is the depth of their channels. Several
East Coast ports, including Norfolk, Baltimore, and New
York, have channels that are deep enough to accommodate
the larger ships today; Charleston, S.C., can also handle
them, though only at high tide. But the Panama Canal project will not be the only source of growth for these ports.
Larger ships making their passages through the Suez Canal,
the other primary route for Asian trade, are already calling at
East Coast ports that can accommodate them. The expansion of the Panama Canal may accelerate this trend, but the
use of big ships is already well under way.

Waterborne Trade is Growing
Merchandise trade between the United States and the rest
of the world is expected to more than double between 2012
and 2040, according to estimates from the Federal Highway
Administration’s Freight Analysis Framework. Over this
period, imports are expected to grow at a compound average
annual growth rate of 2.9 percent, while exports will grow
even faster, by 3.9 percent.
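Those compound rates are consistent with the more-than-doubling projection; a minimal sketch of the arithmetic (the 2.9 and 3.9 percent rates come from the Freight Analysis Framework estimates above, and the 28-year horizon simply spans 2012 to 2040):

```python
# Sanity check of the projected trade growth: at compound annual rates of
# 2.9 percent (imports) and 3.9 percent (exports), does merchandise trade
# more than double between 2012 and 2040?
years = 2040 - 2012  # 28-year projection horizon

import_multiple = 1.029 ** years  # imports at 2.9 percent per year
export_multiple = 1.039 ** years  # exports at 3.9 percent per year

print(f"Imports grow by a factor of {import_multiple:.2f}")  # roughly 2.2x
print(f"Exports grow by a factor of {export_multiple:.2f}")  # roughly 2.9x
```

Both multiples exceed 2, so even the slower-growing import side more than doubles over the period.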
With this growth will come growth in oceangoing freight.
Measured by volume, the majority of U.S. trade is carried
on oceangoing vessels, with the exception of trade with
Canada and Mexico, which is transported mostly by truck
or rail, or by water via the Great Lakes. Among major U.S. trading partners, imports from China are expected to grow faster than those from any other region of the world; nearly all
trade with China is transported by water. Indeed, the push
for shipping lines to use larger ships has been motivated by
China’s growing trade with the United States, Europe, and
other regions of the world. Because waterborne shipping
is so critical to the movement of goods from China to the
United States, the Panama Canal expansion will have its
greatest potential effect on this aspect of U.S. international
trade, primarily by increasing volume in the trade route from
Northeast Asia to the East Coast.

Ships are Getting Bigger
In 2011, nearly 84 percent of oceangoing commodity trade
between Northeast Asia and the United States was containerized. This has not always been the case, though. Since the
inception of containerized cargo transport in the mid-1950s,
the use of containers and dedicated container-carrying ships
has grown dramatically, with clear cost advantages for many
types of cargo that had previously been shipped by breakbulk methods, requiring each item to be loaded individually.
In addition to the reduced cost of handling and avoidance
of potential vandalism or waste, the use of intermodal containers allows for delivery of smaller shipments directly to
customers via transfer to truck or rail. (See “The Voyage
to Containerization,” Region Focus, Second/Third Quarter
2012.) Initially, containers were used primarily for manufactured goods, but starting in the 1980s, certain agricultural
products also switched to the containerized mode of shipment. From 2002 to 2012, the number of container vessel
calls at U.S. ports rose by 16.6 percent. During this time,
Fifth District ports saw an increase in container vessel calls
of 11.7 percent. (See chart.)
As the number of container vessel calls has risen, so has
the average size of container ships. Vessel size is typically
measured in TEUs, 20-foot equivalent units, a reference to the standard length of a container. In 2006, container
ships of size 5,000 TEU or greater accounted for just 17
percent of container ship calls at U.S. ports, but by 2011,
this share had grown to 27 percent. (See table.) Because of
the importance of the Panama Canal as a transit between
the Pacific and Atlantic oceans, size categories for vessels
have used the maximum size of ships that can fit through the
Panama Canal as a reference point, defining the Panamax
size as a vessel that can carry 4,000 to 5,000 TEU. Similarly,
when the Panama Canal expansion project is complete,
the new size limit will be 13,000 TEU, establishing a new
size category called Post-Panamax or New Panamax (5,001
to 13,000 TEU). Finally, beyond the limits of the newly
expanded Panama Canal, there are ships that will push the
limits of the Suez Canal, called Suezmax (13,001 to 18,000+ TEU).
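These cutoffs amount to a simple classification rule by capacity; a sketch in Python (the function name and labels are illustrative only, based on the TEU ranges given in this article, not an industry API):

```python
def vessel_class(teu: int) -> str:
    """Classify a container ship by capacity in 20-foot equivalent units
    (TEUs), using the size cutoffs described in this article."""
    if teu <= 5_000:
        # Up to roughly 4,000-5,000 TEU: fits the canal's original locks.
        return "Panamax or smaller"
    elif teu <= 13_000:
        # 5,001-13,000 TEU: fits only the expanded locks.
        return "Post-Panamax (New Panamax)"
    else:
        # 13,001 TEU and up: too large even for the expanded Panama Canal,
        # though able to transit the Suez Canal.
        return "Suezmax"

print(vessel_class(4_500))   # Panamax or smaller
print(vessel_class(12_000))  # Post-Panamax (New Panamax)
print(vessel_class(19_000))  # Suezmax
```

Under this rule, a 19,000 TEU ship falls in the Suezmax bucket: able to transit the Suez Canal but not the expanded Panama Canal.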

One such extreme ship is the CSCL Globe. When delivered to China Shipping Container Lines in late 2014, it became the largest container ship in the world — and the company has four more of the 19,000 TEU ships on order. The CSCL Globe is as large as four soccer fields. Like other Suezmax vessels, it will be able to transit the Suez Canal, but not the expanded Panama Canal.

[Chart: Vessel Calls (TEUs) on U.S. Ports by Region, in millions of TEUs for 2002, 2007, and 2012, by region: California & Hawaii; Fifth District; GA, East Coast of FL, PR; Atlantic NE; Pacific NW; Gulf Coast. SOURCE: U.S. Maritime Administration 2002-2012 Total Vessel Calls in U.S. Ports, Terminals and Lightering Areas Report: Commercial Vessels over 10,000 deadweight tons (DWT)]

The Panama Canal is Expanding
The Panama Canal expansion project will add a third traffic lane and set of locks, at an estimated cost of $5.25 billion, to allow for the passage of ships more than twice as large as it can handle now. In addition, the expanded locks and channels will allow a greater number of ships to pass through the canal, thereby doubling capacity. The ambitious project involves deepening and widening the canal entrances, constructing two new complexes (one on each end of the canal), excavating a new north access channel for the Pacific locks, and elevating Gatun Lake's maximum operational level. In addition, the navigational channels through Gatun Lake and the connecting waterway, Culebra Cut, will be deepened and widened to allow for two-way passage of vessels.

Construction for the expansion project began in 2008, with completion planned for 2014, but the project has experienced delays due to labor disputes and technical problems with the locks. Completion is now expected by the end of 2015, with the first ships making passage in January 2016.

Containership Calls at U.S. Ports by Size

Vessel Size (TEUs)    2006     2011    Percent Change 2006-11   2006 Share of Total   2011 Share of Total
<2,000                4,143    4,547     9.8                    21.2                  20.6
2,000-2,999           3,985    2,856   -28.3                    20.3                  12.9
3,000-3,999           3,333    2,327   -30.2                    17.0                  10.5
4,000-4,999           4,782    6,400    33.8                    24.4                  29.0
>4,999                3,344    5,959    78.2                    17.1                  27.0
Total                19,587   22,089    12.8                   100.0                 100.0

NOTE: TEU = 20-foot equivalent unit
SOURCE: Vessel Calls Snapshot, 2011, U.S. Department of Transportation Maritime Administration, November 2013

What Determines the Route?
The arrival of cargo at a port is only the beginning of the sophisticated multimodal freight transportation system that serves producers and consumers all over the United States, regardless of distance to a coast. Some container ships from Northeast Asia enter directly through West Coast ports to final destinations across the United States, using a network of port terminals, railways, and highways to reach points as far as the East Coast. Alternatively, shipments may enter ports on the East Coast for intermodal transport to destinations there and further inland.

Container shipments from Northeast Asia headed for the East Coast and eastern inland destinations have shifted away from West Coast ports and toward East Coast ports. From 2000 to 2011, the movement of containers by rail from the West Coast rose by 25 percent. But while the volume from West Coast ports to the Midwest and South Central regions increased by 64 percent, the volume to the East Coast declined by 49 percent. Ports on the East Coast and Gulf Coast received 31 percent of total container shipments from Northeast Asia in 2011.

Three factors determine how goods are moved: reliability, transit time, and transportation cost. For goods moving by container ship, reliability may be more a factor of trust and experience with a particular shipper and is therefore somewhat subjective. Transit time and transportation cost, however, are directly measurable and easy to compare across different routes. The Panama Canal expansion will generate lower shipping costs per container to East Coast ports because of the economies of scale accompanying larger ships; this may lead to a shift in routing away from West Coast ports and intermodal transit and in favor of routing to the East Coast. Although larger ships also serve West Coast ports, the longer waterborne portion of the trip through the Panama Canal to the East Coast offers relatively more savings.

On the other hand, total transit times may be as much as nine days longer to reach the East Coast via the Panama Canal relative to routing through the West Coast ports. For example, it could take 16 days to route goods from Northeast Asia to Chicago by way of the port of Seattle, compared to 25 days for shipment through the Panama Canal and then to Norfolk. The significance of the time difference for the routing decision depends very much on the product being shipped. For goods of relatively low value, the transit time for delivery may not matter as much, and this would favor the East Coast route via the Panama Canal; but for relatively high-value goods, a faster transit time is more essential: Time is money. Higher-value imports from Northeast Asia for which a waterborne route makes sense will likely continue to arrive via West Coast ports even after the Panama Canal expansion is complete. (See table.)

Value of U.S. All Waterborne Imports from Northeast Asia – 2010

Cargo Segment    U.S. Value (Millions 2010 $)   U.S. Tons (Thousands)   $/kg   Percent Arriving Through a West Coast Port
Containerized    345,150                        54,790                  6.30   70.9
  Low Value      100,762                        30,103                  3.35   66.2
  High Value     244,388                        24,687                  9.90   76.5
Bulk/Other        26,410                        32,524                  0.81   49.9

SOURCE: Panama Canal Expansion Study, Phase I Report, U.S. Department of Transportation Maritime Administration, November 2013, p. 108

So, where does East meet West? Consultants and researchers have estimated a dividing line called the "transportation cost equivalence line" where it is equally cost effective to ship through West Coast ports combined with intermodal transportation as it is to ship through East Coast or Gulf ports. By recent estimates, this line runs about 300 miles from the East Coast, which means most regions of the United States are served more cheaply through West Coast ports. (See map.) With the opening of the Panama Canal, this line may shift westward as the larger capacity of post-Panamax ships lowers the cost per TEU, although efficiency gains at West Coast ports and along the intermodal routes could offset this movement.

[Map: Transportation Cost Equivalence Line. The transportation cost equivalence line defines where it is equally cost effective to ship through West Coast ports combined with intermodal transportation as it is to ship through East Coast or Gulf ports. SOURCE: CBRE Port Logistics Group, "Transportation Cost Equivalence Line: East Coast vs. West Coast Ports" (July 2014)]

The region near the transportation equivalence line is considered to be the most competitive for service through ports on either coast. Large metropolitan areas in this region that generate the most intense interest include Atlanta and areas up through Detroit and Ohio, where East Coast ports stand to gain share, and Chicago, where it is more likely that the West Coast route will win out. These are large metropolitan areas that drive significant demand on the part of consumers and industry.

The Rail Factor
West Coast ports benefit from cost efficiencies for rail on the cross-country eastbound routes. Generally, goods are transported to inland destinations such as Chicago, Memphis, and Dallas by double-stack trains, providing an obvious cost advantage over single-stack cars and trains. In addition, the large container volumes arriving in West Coast ports allow for transfer to larger unit trains, carrying a single type of commodity all bound for the same destination. Unit trains provide cost savings and faster shipping times because they can make nonstop runs between two terminals, avoiding the need to switch cars at intermediate junctions.

These efficiencies are not as easily obtained in the more congested East Coast region, which leads to a heavier reliance on trucking as the primary mode of inland container transport for East Coast ports. For a number of eastern metropolitan markets that are 300 to 500 miles inland, rail enjoys cost advantages, and these areas would be precisely the target markets for liner operators that want to leverage the capacity of post-Panamax vessels. The drive to provide this lower-cost rail alternative, in addition to environmental objectives and other factors, has already led to improvements in rail infrastructure.

Railroads on the East Coast, specifically Norfolk Southern and CSX, have projects underway to increase rail capacity and efficiency in anticipation of increased intermodal traffic from East Coast ports. From the railroad's perspective, it doesn't matter if the increase in traffic is organic or stems from growth in world trade or the Panama Canal expansion. In order to move more freight more quickly, railroads will need to be able to carry the shipping containers double-stacked — an endeavor complicated by the many tunnels and bridges that obstruct passage. Through private and public partnerships, projects to upgrade the railroad infrastructure are reducing these possible bottlenecks and better linking the ports on the East Coast with inland markets.

One such project, the Heartland Corridor, completed in 2010, was an investment project undertaken by Norfolk Southern with state government support. The Heartland Corridor connects the Port of Virginia to the Midwest states — clearing overhead obstacles from Norfolk to Lynchburg, through West Virginia and on to Columbus, Cincinnati, and Chicago. Another corridor investment project involving Norfolk Southern, the Crescent Corridor, runs from the Port of New York through Lynchburg, Charlotte, Atlanta, and Memphis to New Orleans. CSX's National Gateway is another multistate project that parallels the I-95 corridor between North Carolina and Baltimore, then along the I-70 corridor between Washington, D.C., and Pittsburgh and on to Northwest Ohio. These projects represent significant opportunities for cost savings and stand to benefit all parties involved, from the railroad companies to the port authorities, shippers, and finally consumers.

Are East Coast Ports Ready?
The effect of larger vessels passing through the Panama Canal from Northeast Asia to the East Coast will depend not only on the cost savings of an all-water route and efficiencies on the intermodal segment but also on the capacity of East Coast ports to accommodate the increased volume of cargo. Factors such as channel depth, terminal capacity and infrastructure, access to intermodal operations, and productivity will determine whether the East Coast ports can fully utilize the efficiency offered by post-Panamax vessels.

Many ports on the East Coast are constrained by channel depth, as post-Panamax vessels require a channel of around 50 feet. Norfolk, Baltimore, and New York are currently the only ports with 50-foot channels, although the 45-foot channel depth at the port of Charleston can service some post-Panamax vessels at high tide. While the channel of the Port of New York/New Jersey is deep enough for the larger vessels, the ships will not be able to call at the port until the height of the Bayonne Bridge is raised to allow the high stacks of containers to pass under it — a project currently underway and expected to be open to post-Panamax vessels by the end of 2015. Other East Coast ports, such as Savannah, Miami, and Charleston, have projects underway to deepen channels and expand terminal capacity for post-Panamax vessels.

Meanwhile, the terminals at Norfolk and Baltimore are already serving post-Panamax vessels coming through the Suez Canal. Both terminals are equipped with giant super post-Panamax cranes — taller than a 14-story building and able to reach 22 containers across a container ship and lift more than 185,000 pounds of cargo. Efficiency of port operations benefits the port itself by generating higher revenues but also provides savings to shippers that want to minimize transit time for their cargo. Other efforts include expanded container storage to allow for the discharge and temporary storage of containers as well as improved gate processing to move trucks in and out more quickly. All of these improvements are essential to provide service to larger ships and increased volumes of cargo.

Clearly, policymakers believe increased port activity will generate economic benefits for the regional, and even statewide, economy. It is difficult, however, to quantify the potential regional benefit due to the uncertainties regarding the ultimate volume of increased container traffic to the ports resulting from the Panama Canal expansion.

A study of the likely economic and fiscal effect on the Greater Baltimore region considered two possible scenarios for increased container volume at the Port of Baltimore — on the lower end, volume rises by 10 percent over current levels, while on the higher end, it rises by 25 percent. According to the study, prepared for the Economic Alliance of Greater Baltimore by Towson University's Regional Economic Studies Institute (RESI), the increase in containerized volume could, in the low-end scenario, add an estimated 107 jobs and $5.5 million in wages; in the high-end scenario, the growth would bring 266 jobs with an additional $13.9 million in wages. Employment growth stems from the jobs created directly at the port as additional workers are hired to handle cargo, plus other jobs created by associated businesses in warehousing and distribution and other business services.

The RESI study was motivated by a proposed public-private partnership investment in a rail intermodal facility in southwest Baltimore that would have improved rail access given the local tunnel obstructions that limit the use of double-stacked containers. Proponents believe that the facility is critical to the ability of the Port of Baltimore to capture increased container volume resulting from the expansion of the Panama Canal. In fact, the RESI study predicted a loss of 50 percent of the Port of Baltimore's containerized cargo traffic, and an associated contraction in employment, wages, and tax revenues, if the project does not proceed. In late August, the state of Maryland withdrew its funding for the project due to concerns of citizens living in the vicinity of the proposed intermodal facility.

Conclusion
Significant investments are taking place in East Coast ports and the railways that serve them to accommodate the increase in large ships that will arrive when the Panama Canal expansion is complete. It is important to bear in mind that large ships are already coming through the Suez Canal to those ports on the East Coast that can handle them. Growing trade with Southeast Asia and the Indian subcontinent will only accelerate the trend toward larger ships calling on the East Coast. How ready ports are in terms of channel depth may not matter as much as where our growing trade is originating from, where goods are destined for in the United States, and what types of goods are being shipped. Cost savings will affect shipping routes on the margin, but trade volumes are expected to increase over the next 25 years, so the East Coast ports will benefit even if they don't steal market share from the West Coast.
EF
ECON FOCUS | THIRD QUARTER | 2014

31

State Data, Q1:14

	DC	MD	NC	SC	VA	WV
Nonfarm Employment (000s)	746.0	2,605.0	4,090.6	1,918.9	3,765.8	765.6
Q/Q Percent Change	-0.2	0.0	-0.2	0.1	0.0	0.1
Y/Y Percent Change	0.3	0.5	1.4	1.9	0.0	0.2

Manufacturing Employment (000s)	0.8	104.0	441.9	228.2	229.2	47.7
Q/Q Percent Change	-7.7	-1.2	-0.2	-0.1	0.0	-1.7
Y/Y Percent Change	-20.0	-2.9	-0.2	2.5	-1.0	-1.7

Professional/Business Services Employment (000s)	156.1	420.9	563.6	240.2	663.4	66.5
Q/Q Percent Change	0.3	0.6	0.1	0.2	-0.7	1.8
Y/Y Percent Change	0.4	0.7	4.2	2.4	-3.0	2.8

Government Employment (000s)	237.2	506.2	715.8	352.0	706.3	154.1
Q/Q Percent Change	-0.8	0.6	-0.5	-0.1	-0.5	-0.5
Y/Y Percent Change	-2.2	0.3	-0.3	0.0	-0.5	0.3

Civilian Labor Force (000s)	370.3	3,108.9	4,665.0	2,166.0	4,274.3	795.2
Q/Q Percent Change	0.7	0.0	0.0	-0.2	0.9	0.6
Y/Y Percent Change	-1.0	-1.0	-1.2	-1.1	0.8	-0.9

Unemployment Rate (%)	7.4	5.7	6.5	5.9	4.9	6.0
Q4:13	7.8	6.2	7.2	6.8	5.3	6.2
Q1:13	8.6	6.8	8.6	8.1	5.6	6.8

Real Personal Income ($Bil)	46.0	299.3	359.1	161.6	378.3	61.6
Q/Q Percent Change	1.0	0.6	1.0	0.7	0.6	0.3
Y/Y Percent Change	2.2	1.0	1.4	2.4	1.1	0.7

Building Permits	1,217	3,604	11,022	6,937	6,286	377
Q/Q Percent Change	26.4	-16.3	-10.6	22.7	10.0	-11.3
Y/Y Percent Change	263.3	-5.6	-2.2	33.2	-9.4	-26.7

House Price Index (1980=100)	676.7	417.1	306.7	307.5	403.7	220.1
Q/Q Percent Change	3.1	0.7	0.6	0.3	0.3	0.0
Y/Y Percent Change	11.3	2.6	2.0	1.2	2.2	1.8
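The Q/Q and Y/Y rows above are ordinary percent changes of a series' level against its level one quarter or one year earlier. A minimal sketch of that arithmetic (the levels in the example are illustrative, not taken from the table):

```python
def pct_change(current, prior):
    """Percent change of the current period's level over a prior period's level."""
    return 100.0 * (current - prior) / prior

# Illustrative: employment of 102.0 (000s) this quarter against 100.0 a year
# earlier is a Y/Y change of 2.0 percent.
yoy = pct_change(102.0, 100.0)  # 2.0
```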


[Charts: Nonfarm Employment, Unemployment Rate, and Real Personal Income, change from prior year, First Quarter 2003 - First Quarter 2014, Fifth District vs. United States; Nonfarm Employment and Unemployment Rate for major metro areas (Charlotte, Baltimore, Washington); Building Permits and House Prices, change from prior year, Fifth District vs. United States; FRB-Richmond Services Revenues Index and Manufacturing Composite Index.]
NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting an increase minus the percentage reporting a decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment Rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building Permits: U.S. Census Bureau, http://www.census.gov.
House Prices: Federal Housing Finance Agency, http://www.fhfa.gov.
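The diffusion index arithmetic described in note 1 can be sketched as follows. The component weights in this sketch are illustrative placeholders, not the Bank's published weights:

```python
def diffusion_index(n_increase, n_decrease, n_total):
    """Share of firms reporting an increase minus share reporting a decrease,
    expressed in percentage points."""
    return 100.0 * (n_increase - n_decrease) / n_total

def manufacturing_composite(shipments, new_orders, employment,
                            weights=(0.33, 0.40, 0.27)):
    """Weighted average of the shipments, new orders, and employment indexes.
    These weights are illustrative only."""
    w_s, w_n, w_e = weights
    return w_s * shipments + w_n * new_orders + w_e * employment

# 50 respondents: 20 report rising shipments, 10 falling, 20 unchanged.
shipments_idx = diffusion_index(20, 10, 50)  # 20.0
```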

For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org


Metropolitan Area Data, Q1:14

	Washington, DC	Baltimore, MD	Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)	2,488.4	1,325.3	102.3
Q/Q Percent Change	-1.5	-2.0	-2.6
Y/Y Percent Change	0.3	1.0	-0.5
Unemployment Rate (%)	4.8	6.0	6.4
Q4:13	4.8	6.0	6.4
Q1:13	5.6	7.2	7.5
Building Permits	6,605	1,209	155
Q/Q Percent Change	18.5	-10.4	-33.8
Y/Y Percent Change	54.8	-33.2	-10.4

	Asheville, NC	Charlotte, NC	Durham, NC
Nonfarm Employment (000s)	174.6	881.1	286.6
Q/Q Percent Change	-1.9	-1.2	-1.1
Y/Y Percent Change	2.6	2.4	2.0
Unemployment Rate (%)	4.8	6.5	5.0
Q4:13	4.8	6.5	5.0
Q1:13	7.0	8.8	6.7
Building Permits	291	3,658	632
Q/Q Percent Change	-21.6	-12.0	-8.3
Y/Y Percent Change	5.4	2.4	22.0

	Greensboro-High Point, NC	Raleigh, NC	Wilmington, NC
Nonfarm Employment (000s)	343.7	549.3	141.8
Q/Q Percent Change	-2.1	-1.1	-1.2
Y/Y Percent Change	0.3	3.7	3.9
Unemployment Rate (%)	6.8	5.2	6.6
Q4:13	6.8	5.2	6.6
Q1:13	9.1	7.0	9.0
Building Permits	436	2,559	574
Q/Q Percent Change	-11.6	-17.2	-44.2
Y/Y Percent Change	16.6	-2.4	-17.2

	Winston-Salem, NC	Charleston, SC	Columbia, SC
Nonfarm Employment (000s)	206.3	308.0	361.1
Q/Q Percent Change	-2.5	-1.2	-1.4
Y/Y Percent Change	0.0	1.0	1.2
Unemployment Rate (%)	6.1	5.1	5.4
Q4:13	6.6	5.7	6.1
Q1:13	8.2	6.7	7.2
Building Permits	297	2,142	942
Q/Q Percent Change	47.8	100.4	5.6
Y/Y Percent Change	9.6	43.7	0.7

	Greenville, SC	Richmond, VA	Roanoke, VA
Nonfarm Employment (000s)	315.2	635.0	156.8
Q/Q Percent Change	-0.7	-0.6	-2.0
Y/Y Percent Change	2.7	1.8	-0.1
Unemployment Rate (%)	4.8	5.3	5.2
Q4:13	5.7	5.5	5.4
Q1:13	6.7	6.2	6.0
Building Permits	922	785	64
Q/Q Percent Change	39.7	-19.1	-61.4
Y/Y Percent Change	31.7	-17.6	-41.3

	Virginia Beach-Norfolk, VA	Charleston, WV	Huntington, WV
Nonfarm Employment (000s)	739.6	142.9	113.1
Q/Q Percent Change	-1.8	-1.9	-2.1
Y/Y Percent Change	-0.3	-1.6	0.7
Unemployment Rate (%)	5.5	5.7	6.5
Q4:13	5.7	5.6	6.7
Q1:13	6.3	6.5	7.2
Building Permits	993	2	43
Q/Q Percent Change	22.4	-91.7	-14.0
Y/Y Percent Change	-49.9	-95.6	330.0
For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org

OPINION

The Long View of the Labor Market
BY JOHN WEINBERG

Sometimes a single dramatic event — a natural disaster,
say, or a financial crisis — affects our economy in a
highly visible way. It’s only natural to then focus on
that event when trying to make sense of current conditions.
But focusing too closely on extreme events can draw our
attention away from slower-moving, more persistent forces
in the economy.
The Great Recession is a case in point. Although the
unemployment rate recently dipped below 6 percent for the
first time since 2008, many people have questioned whether
this represents a genuine improvement in the health of the
labor market. They note that labor force participation has
declined dramatically since 2009 and is now at its lowest
rate in more than three decades. Certainly, some of the people who quit the labor force in recent years did so because
they were discouraged about the likelihood of finding a job.
But the decline in labor force participation actually started
around 2000, well before the most recent recession, and
research by Richmond Fed economists and others suggests
it is driven in large part by long-term demographic changes,
such as the aging of the baby boomer generation.
Many people also have been concerned about the slowdown in GDP and wage growth compared with pre-recession
levels. They view the Great Recession as a wrench in the
works of the economy that spurred major deviations from
historical trends and has pushed us far below our potential.
But it’s possible that what we are experiencing now is not an
anomaly but rather the result of longer-term changes in how
the labor market functions.
The aggregate numbers we use to describe the labor
market, such as the number of jobs created or the unemployment rate, mask a tremendous amount of activity
beneath the surface. Jobs are constantly being both created
and destroyed as firms expand and contract, and workers
are constantly moving between jobs and companies. This
reallocation of jobs and workers tends to be good for the
economy, helping to move resources from less-productive
to more-productive businesses and helping workers make
better (and higher-paying) matches with employers.
But beginning around 1990, according to research
by Steven Davis at the University of Chicago and John
Haltiwanger at the University of Maryland, the rates of
job and worker reallocation in the United States started to
decline significantly. The causes of this decline are varied
and in some cases benign. For example, one factor is the
shift from “mom-and-pop” stores to big-box retailers, which
tend to have much less movement of jobs and employees.
That shift has also been accompanied by huge increases
in supply-chain efficiency and lower prices for consumers,
developments that many would argue are positive.

But there may also be less favorable aspects to these
changes. Since 2000, there has been a large decrease in
the number of high-tech startups and young firms. Such
firms contribute disproportionately to job creation and
destruction rates, and they were also an important source of
innovation and productivity growth during the 1980s and
1990s. To the extent that the declines in reallocation are
driven by changes in the high-tech sector, they may be a factor in the slower productivity growth we are experiencing.
Government policies, such as stricter employment-protection laws or licensing requirements, may also play a
role. In the 1950s, about 5 percent of employees had jobs
that required a government license; by 2008 the share had
increased to 29 percent. And during the 1970s and 1980s,
courts made a series of decisions providing exceptions to
the employment-at-will doctrine. While these measures
may have other beneficial effects, they’ve likely had a negative effect on the fluidity of the labor market.
As Davis and Haltiwanger note, less job and worker reallocation equals fewer new job opportunities. That means
unemployed workers will tend to remain unemployed for
longer, and employed workers will probably have a harder
time moving up the ladder or changing careers. In aggregate,
the result is likely to be lower employment and slower wage
growth — exactly what we are seeing today.
The Great Recession was a cataclysmic event in our
country’s economic history. But not all of our present economic conditions can be attributed to that event. Changes
in the labor market appear to have begun well before the
recession and have likely played a large role in the disappointing nature of the recovery. That means we may need
to reconsider what’s “normal” when assessing the economy’s
current performance.
But it doesn’t mean we must remain gloomy. The
American economy has demonstrated tremendous resilience in the past, and our workforce has a strong track
record of discovering new sources of innovation. And there
are signs the economy is gaining momentum: GDP growth
was strong in the second and third quarters of 2014, and
job growth has averaged nearly 250,000 per month over
the past year. These data contributed to the Federal Open
Market Committee’s decision to end quantitative easing
and to move toward more traditional monetary policy.
Taking the long view of the labor market suggests that
we may need to temper our expectations in the present,
but it also suggests we should remain optimistic about the
future. 	
EF
John A. Weinberg is senior vice president and director
of research at the Federal Reserve Bank of Richmond.

NEXT ISSUE
The Sharing Economy

A slew of Web startups have launched the “sharing” economy, allowing individuals to act as hoteliers by renting out their homes to travelers or to provide car services similar to taxis. These new markets may increase social welfare by putting underutilized assets to use. But critics argue that these businesses have gained an unfair advantage over incumbents by ignoring regulations designed to protect consumers.

Money Talks

Legal changes in campaign finance have made it possible for
people, corporations, and unions to increase their support of
political activity. But are they actually spending more — and are
they getting anything for their money?

Craft Brewing

Craft breweries — small, independent beer producers — have
multiplied sharply over the last three decades. And craft beers
are growing in popularity. Research has shed light on the unique
structure of the craft brewing industry, including its emphasis on
cooperative competition, and the reasons for its success.

Federal Reserve
The 19th century classical economists
Walter Bagehot and Henry Thornton are
often credited with having written the
guides on crisis management for central
banks. Some historians argue that they have
been misinterpreted, however — including
by the Fed during the 2007-2008 financial
crisis. How can the teachings of the classical
economists be applied to 21st century
financial markets?

Economic History
Early in the 20th century, automobiles
powered by internal combustion surpassed
electric vehicles (EVs) to win one of the most
dramatic technological competitions of the
industrial age. Runner-up technologies often
don’t last long, but EVs found niche markets
and continued to improve. Could they make
a significant comeback, or are motorists
locked in to internal-combustion cars?

Interview
Claudia Goldin of Harvard University
discusses the gender pay gap, the changing
landscape of U.S. education, and how
economics is like detective work.

Visit us online:
www.richmondfed.org
•	To view each issue’s articles and Web-exclusive content
•	To view related Web links of additional readings and references
•	To subscribe to our magazine
•	To request an email alert of our online issue postings

Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261

Change Service Requested

To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.


Follow @RichFedResearch
on Twitter for links to …

Updated national and regional economic data,
including the Richmond Fed manufacturing survey
Research papers and articles on economics
and policy issues
Background information during policy speeches
by Richmond Fed President Jeff Lacker