
How Green Are Green Jobs?
Caring for the Mentally Ill
Interview with John Haltiwanger

VOLUME 17, NUMBER 2, SECOND QUARTER 2013

COVER STORY

Sizing Up Currency Manipulation: The Chinese government may be holding down its currency to increase exports. But it’s not clear what — if anything — the United States should do about it

FEATURES

Maryland’s Full House: Maryland joins the ranks of states that hope casinos mean easy money

Health in Mind: Why mental illness is one of the hardest social welfare problems to solve

Green Jobs: Not a Black and White Issue: Is saving the environment the best way to boost employment?

DEPARTMENTS

President’s Message/Financial System Fragility — Inherent or Induced?
Upfront/Regional News at a Glance
Federal Reserve/Playing by the Rules
Policy Update/New Disclosures for Climate Change
Jargon Alert/Full Employment
Research Spotlight/Industrialization and the Workforce
The Profession/Where Are the Women?
Around the Fed/Measuring Economic Security
Interview/John Haltiwanger
Economic History/Mother of the Domestic Slave Trade
Book Review/Antifragile: Things That Gain from Disorder
District Digest/Economic Trends Across the Region
Opinion/Watching Labor Force Participation

Econ Focus is the economics magazine of the Federal Reserve Bank of Richmond. It covers economic issues affecting the Fifth Federal Reserve District and the nation and is published on a quarterly basis by the Bank’s Research Department. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia.

DIRECTOR OF RESEARCH
John A. Weinberg

EDITORIAL ADVISER
Kartik Athreya

EDITOR
Aaron Steelman

SENIOR EDITOR
David A. Price

MANAGING EDITOR / DESIGN LEAD
Kathy Constant

STAFF WRITERS
Renee Haltom
Jessie Romero

EDITORIAL ASSOCIATE
Tim Sablik

CONTRIBUTORS
Jamie Feik
Charles Gerena
Santiago Pinto
Karl Rhodes
Caroline Tan

DESIGN
ShazDesign

Published quarterly by the Federal Reserve Bank of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org
www.twitter.com/RichFedResearch

Subscriptions and additional copies: Available free of charge through our website at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565.

Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Econ Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Econ Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 2327-0241 (Print)
ISSN 2327-025X (Online)

PRESIDENT’S MESSAGE

Financial System Fragility — Inherent or Induced?
When I was a college student some years ago,
I took a seminar class in the government
department on international relations. One of
the readings was Graham Allison’s Essence of Decision, his
account of decisionmaking within the Kennedy administration during the Cuban missile crisis. The book taught
me a lesson that I hadn’t received in my economics classes:
In a crisis situation, when the available facts are evolving
quickly and seem to point in more than one direction,
policymakers tend to rely heavily on theory to help them
make sense of those facts.
Flash forward to 2007-2008 and the financial turmoil
that struck the United States during that period. I found
myself among the regulators and policymakers who needed
to interpret what was happening quickly and contribute to
decisions about how to respond. Among the unknowns in
mid-2007, before disaster struck: What firm conclusions,
if any, should be drawn from the fact that mortgage
delinquency rates had been rising steadily for over a year?
And once the existence of a crisis becomes clear, what
weight should be given to the benefits of minimizing
financial distress today (for example, through bailouts of
institutions) versus the costs of moral hazard that could
promote further risk-taking tomorrow?
The transcripts of Federal Open Market Committee
(FOMC) meetings during this time show participants looking closely at the facts that were available and seeking
to resolve them into a coherent interpretation of what was
happening. The same, surely, was taking place within the
Treasury Department and elsewhere. As in other crises,
moreover, the lens through which policymakers turned
scattered and contradictory facts into interpretations, and
ultimately into policy conclusions, was theory.
Without trying to survey the entire landscape of theories
about financial markets, I would like to highlight two broad
alternative views that influenced policymakers during and
after the crisis. One sees financial markets as inherently
prone to fragility. In this view, the inherent fragility of
markets makes it necessary, in turn, for policymakers to
create an expectation of a financial safety net to maintain
the trust of market participants in institutions and to minimize destabilizing behavior. This view has led to a number of
instances starting in the 1970s in which government has
extended the financial safety net beyond the scope of
deposit insurance. Among these is the private bailout of the
hedge fund Long-Term Capital Management (LTCM) organized by the New York Fed in 1998; even though the LTCM
rescue was privately financed, the Fed’s involvement may
well have changed expectations about the Fed’s willingness
to sit out a failure of a major financial firm.
The other broad view sees fragility in financial markets as something induced in
large measure by government
policies themselves. This view
recognizes that financial
distress is always a possibility
(because some losses and
failures are inevitable), but it
emphasizes the incentives
of market participants to
manage risk through their
selection of institutions and
contractual arrangements.
For example, bondholders can insist that an institution
maintain an agreed level of equity to create a buffer against
losses. In this view, expanding the financial safety net,
either explicitly or implicitly, lessens the incentives of
participants to adopt stability-enhancing arrangements —
thereby rendering the system more fragile.
During the financial crisis, the model of inherent
fragility predominated in shaping policy responses. Most
notably, the Fed increasingly used emergency lending
and, later, purchases of assets to encourage lending and to
establish a safety net beneath large institutions.
It is not clear how much these measures contributed to
stabilizing the U.S. financial system even in the short run.
For those of us who see merit in the model of induced
fragility, however, a greater concern is the longer-run effects
of such programs. The actions taken by the Fed likely had
the effect of telling the market to expect actions in support
of large institutions if they fell into distress. The institutional leaders hearing this message would naturally feel less
urgency in safeguarding their firms by, for example, raising
capital or selling assets. There are indications that the leaders of Bear Stearns and Lehman Brothers had this point of
view in the weeks leading up to their firms’ failures in 2008.
Parts of the Dodd-Frank Act enacted after the crisis,
including its rules for “living wills” to enable distressed
financial firms to be wound down without government support, reflect the induced-fragility view. Yet the financial
safety net is still large; it included as much as 57 percent of all
financial firm liabilities at the end of 2011, up from 45 percent in 1999, according to research by my Richmond Fed
colleagues. The persistence of the safety net and the moral hazard that goes with it mean that the work of responding to the crisis is not yet finished.
EF

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND

UPFRONT

Regional News at a Glance

Moving from Hoover

The Competition for the FBI’s New Headquarters
The Federal Bureau of Investigation is moving from
its downtown Washington, D.C., headquarters.
Three dozen sites in Maryland, Virginia, and D.C. are
competing to host the bureau’s new home, a project
that could cost nearly $3 billion to build and will bring
11,000 jobs to the new location.
The FBI has occupied the famous J. Edgar Hoover
Building for nearly 40 years, but it has outgrown the
facility after experiencing a 25 percent growth in
personnel since 9/11. Now only half the staff is stationed
there, with the rest scattered across 20 buildings in
D.C., Quantico, Va., and Clarksburg, W.Va. According
to the General Services Administration (GSA), the
government’s procurement arm, the building is
aging, expensive, and inadequately secured. The split
locations also impede operations. Rather than spend $850 million to renovate the headquarters, the GSA decided to canvass the greater capital region for possible new homes.

[Photo: The J. Edgar Hoover Building, known for its striking appearance, in downtown Washington, D.C., may soon be up for grabs.]
The administration received submissions from 35
potential sites by the March 2013 deadline. The FBI
requires 2.1 million square feet of office space on at least
40 acres of land, as well as access to the Beltway and the
Metro subway system. D.C. Mayor Vincent Gray proposed building the headquarters on Poplar Point, a 110-acre piece of land owned by the National Park Service,
but 70 of its acres are required by federal law to remain
parkland, which doesn’t leave much space for the FBI
and private developments that would provide needed
tax revenue. Another contender is the Greenbelt Metro
station site in Prince George’s County, Md. The
Washington Metropolitan Area Transit Authority,
which operates the Metro, set up a development deal
back in 2011 in anticipation of the FBI’s search.
A 70-acre warehouse site in Fairfax County, Va.,
boasts proximity to FBI training facilities on the
Quantico Marine Corps base as well as to other intelligence agencies. It’s on federally owned land that would
save taxpayers money on rent. There’s just one problem: The
Fairfax site is rumored to house a classified Central
Intelligence Agency facility that would have to be
evicted to make room for the FBI.
The next step is for the GSA to request formal bids,
but the timing depends on additional studies and the
input of stakeholders, says GSA spokesman Dan Cruz.
There’s no word yet on what will become of the downtown building, known for its striking Brutalist architecture. The GSA has said that it might give the building to
a private developer in exchange for a new development
to house the FBI.
— RENEE HALTOM

Virtual Mugging


Cyberattacks Have Lasting Effects


Online banking and other services have been a boon
for consumers and generate vast amounts of data
that can yield useful insights into users’ behavior —
data that can help further improve offerings from financial services firms. But this aggregated information also
has created attractive targets for criminals looking to make money and “hacktivists” looking to make a
political statement.
Case in point: The cyberattack on the South Carolina
Department of Revenue in September 2012, which
resulted in the theft of electronic information from
nearly 4 million individual tax returns and about 700,000

business returns. It cost South Carolina about $12 million
to offer the victims a year’s worth of credit monitoring. But
the repercussions haven’t ended there.
Catherine Mann, an economist at Brandeis University
who has studied the effect of information technology on
the economy, has calculated the costs of data breaches in
various ways. First, there is the effort and expense devoted
to discovering a breach and plugging it. South Carolina
spent $500,000 to hire Alexandria, Va.-based Mandiant to
block hackers’ access to its computer network and
$800,000 on additional security measures. The state’s
2013-2014 budget sets aside another $10 million for cyber
security, including an extension of the credit-monitoring
program.
For private firms, there is also the revenue loss associated with changes in consumer behavior and the possibility of
punishment by the stock market. In both cases, the effects
are relatively small and short-lived, according to Mann.
As a result, it is consumers who usually bear the brunt of
a data breach. For example, victims of data breaches are disproportionately victims of identity theft, according to a
June 2013 report from Javelin Strategy & Research. Javelin
estimates that South Carolina taxpayers could face
$5.2 billion in losses resulting from identity theft, or
$776 in out-of-pocket expenses per affected consumer.
To mitigate these expenses, the state is offering
$1 million of identity theft insurance for those who sign up
for the credit-monitoring program. The insurance will
cover certain costs for a year, including lost wages and unauthorized electronic fund transfers.
—CHARLES GERENA

Fuel Finances

MD and VA Change Their Gas Taxes for the First Time in Decades
If there’s one thing drivers can count on, it’s changing
gas prices. Prices fluctuate for a number of reasons —
from shifts in oil supply and demand to natural disasters
that interfere with production and delivery. But one
component of the price at the pump has long remained
constant in many states: the gas tax. About half of the
states haven’t increased their gas tax in a decade or more.
Falling revenue and a growing need for transportation
funding, however, have prompted two Fifth District states
to make a change.
Prior to this year, Virginia last revised its gas tax in 1986
when it fixed its rate at 17.5 cents per gallon, while Maryland
established its 23.5-cents-per-gallon rate in 1992. Since then,
inflation has reduced the effective value of the taxes in
constant dollars to about 8 cents and 14 cents, respectively.
Additionally, higher fuel efficiency standards are reducing
how often drivers fill their tanks, another hit to gas-tax
revenue.
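Those constant-dollar figures are easy to sanity-check with simple deflation arithmetic. The sketch below uses approximate CPI-U annual-average index levels (roughly 109.6 for 1986, 140.3 for 1992, and 233 for 2013); the index values are our own rough assumptions for illustration, not numbers from the article.

```python
def real_cents(nominal_cents, cpi_base, cpi_now):
    """Express a fixed per-gallon tax in the base year's cents."""
    return nominal_cents * cpi_base / cpi_now

# Approximate CPI-U annual averages, assumed here for illustration
CPI_1986, CPI_1992, CPI_2013 = 109.6, 140.3, 233.0

print(round(real_cents(17.5, CPI_1986, CPI_2013), 1))  # ~8.2 cents (Virginia's 1986 rate)
print(round(real_cents(23.5, CPI_1992, CPI_2013), 1))  # ~14.2 cents (Maryland's 1992 rate)
```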
In response, Govs. Bob McDonnell of Virginia and
Martin O’Malley of Maryland proposed revisions to their
states’ gas taxes that were approved by their legislatures in
April. Virginia’s plan replaces the fixed cents-per-gallon tax
in part with a 0.3 percentage point increase in the general
sales tax, the proceeds of which will go to transportation
projects. (The sales tax will increase by 0.7 percentage point in Northern Virginia and Hampton Roads, which have
much higher traffic congestion than the rest of the state.)
The plan also includes a 3.5 percent tax on wholesale
gasoline and a $64 annual titling tax on hybrid cars, which
use less gasoline.
“While gasoline prices may be very volatile, the price
level in general has a ratchet. It doesn’t go down,” says
George Hoffer, a transportation economist at the
University of Richmond. “By relying on the general sales
tax, you avoid the problem of volatility in the transportation trust fund.”
Although using proceeds from the general sales tax
to fund transportation is not entirely new — Virginia
previously earmarked 0.5 percent of the tax revenue for
that purpose — it does raise a concern for some. The gas tax
represents a kind of user fee: People who drive the most
generally buy the most gas, and therefore pay the most to
maintain the roads. Increasing the funding that comes from
the general sales tax reduces the tie to roadway usage.
In addition, the tax on hybrids penalizes a technology
aimed at reducing the negative externalities of air pollution,
something that lawmakers in Virginia have previously
encouraged via tax credit. Hoffer also notes that the fixed
nature of the fee means that it applies equally to all models
of hybrids regardless of how fuel efficient they are, and so punishes buyers of hybrids that are not much different from their gas equivalents.
Maryland’s plan keeps its fixed per-gallon tax, but
indexes it to inflation. It also imposes a 3 percent sales
tax on gasoline that will be phased in over the next
three years. As a result, Maryland drivers could pay up
to an extra 21.1 cents per gallon by 2018. Critics are
concerned this will hurt the state’s competitiveness
with Virginia, where Hoffer estimates prices will fall
by a few cents, and with Washington, D.C., which
replaced its 23.5-cents-per-gallon tax with an 8.3 percent wholesale tax in May. If half of the Marylanders

who commute to D.C. and Virginia choose to fill
up across the border because of price differences,
the state could lose out on as much as $22 million
annually, according to Wendell Cox and Ronald Utt of
the Maryland Public Policy Institute.
All told, the Virginia plan is forecast to raise
$406 million in tax revenue for 2014, and the
Maryland plan is expected to raise $116 million. But
the changes might be short-term fixes at best. As cars
become more fuel efficient, Hoffer says, it will
become increasingly difficult to draw adequate
transportation funding from the pump. — TIM SABLIK

Medicaid Malpractice

U.S. Supreme Court Strikes Down NC Medicaid Rule
Thirteen-year-old Emily Armstrong is blind, deaf,
and mentally disabled. Her condition is the result
of injuries she sustained during birth, via a Caesarian
section at a hospital in Hickory, N.C. The doctor who
delivered her had a history of drug abuse; her parents
sued him, the hospital, and several other medical
staff, and received a settlement of $2.8 million. As the
result of a U.S. Supreme Court ruling earlier this year,
Emily and her family won’t have to give nearly
$1 million of that settlement to the state of North
Carolina.
Emily is a Medicaid recipient, and North Carolina
has paid about $1.9 million toward her medical
expenses. Under the federal Medicaid statute,
states are required to recover some portion of these
expenses from Medicaid recipients who win tort
settlements. In North Carolina’s case, it has done so
by fixing its share at one-third of a plaintiff’s settlement. But the Medicaid statute also prohibits states
from placing a lien on any portion of a settlement that
is not related to medical expenses. The Armstrongs’
settlement did not specify how the money was allocated, and they and their lawyers argued that much less
than one-third of the total settlement was actually
earmarked for medical expenses.
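The family’s exposure follows from simple arithmetic: the state’s fixed one-third share, applied to the full settlement, is where the “nearly $1 million” figure comes from. A quick check:

```python
settlement = 2_800_000            # the Armstrongs' settlement
state_share = settlement / 3      # North Carolina's fixed one-third recovery
print(f"${state_share:,.0f}")     # $933,333 -> the "nearly $1 million" at stake
```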
“Given the nature of the plaintiff’s injuries and the
long-term care required, and the amount of pain and
suffering [damages] likely awarded, it was hard for the
state to say ‘One-third is fair,’” says Richard Saver, a professor in the University of North Carolina School of Law and the School of Medicine.
North Carolina argued that trying to divide every
settlement between medical and nonmedical expenses
would be “wasteful, time-consuming, and costly.” But
writing for a 6-3 majority, Justice Anthony Kennedy
noted that “even if that were true, it would not relieve
the State of its obligation to comply with the terms of
the Medicaid anti-lien provision.”
While the Court ruled that North Carolina could
not establish an “unrebuttable” uniform percentage, it
stopped short of specifying the precise process states
must follow. Currently, 16 states and Washington,
D.C., hold administrative hearings for each case
to determine the state’s recovery amount. Another
possibility, Saver says, is that North Carolina could
establish a uniform percentage but give plaintiffs the
opportunity to contest it. “What won’t fly is fixing one
number across the board with no justification and no
opportunity for plaintiffs to rebut why it doesn’t make
sense in that case,” he says.
In addition to the administrative burden, states
also are concerned that Medicaid recipients might try
to shield their settlements by claiming that the full
amount was for pain and suffering. Given many states’
fiscal difficulties and the future expansion of
Medicaid via the Affordable Care Act, states are under
pressure to recover every dollar they can.
—JESSIE ROMERO

FEDERAL RESERVE

Playing by the Rules
BY TIM SABLIK

The Fed has followed a number
of monetary principles over the
years — with mixed results
Milton Friedman often said that, given the choice,
he would replace the Federal Reserve with a
computer. This computer would have one task:
Print out the same amount of money, month after month,
year after year. There wouldn’t be much work for central
bankers, except perhaps as IT personnel.
Friedman’s proposal preceded and ultimately complemented work he and several other economists did in the
1960s through 1980s to develop “rational expectations”
models. Under rational expectations, market participants
make their decisions based not only on past monetary
actions but also on their expectations of future actions.
If the central bank can commit to a rule for future behavior,
it can help set market expectations, making the job of
achieving its goals easier. A noncontingent rule — one that
doesn’t change based on conditions in the economy —
makes it even easier for the market to predict future monetary policy; an example is a fixed rate of money growth.
Even before the adoption of the rational expectations
assumption, economists understood the importance of
having some guide for managing the money presses.
Without something to limit the growth of money, a government might be tempted to create more whenever it needed
to finance extra spending. That would lead to inflation, the
result of too much money chasing too few goods. More
important, simply the fear that the government would give
in to this temptation could be enough to generate an expectation of inflation, which could then become a self-fulfilling
prophecy as new contracts came to reflect that expectation.
The Federal Reserve has never adopted an official
monetary rule, but its decisions have been guided by several
implicit rules over the course of its 100-year history —
frameworks that guide its decisions, even if not as mechanically as Friedman’s imagined computer. The most recent was estimated by Stanford University economist John Taylor in
1993. The “Taylor Rule” is an empirical summary of how the
Fed has actually behaved. It is a mathematical formula for
calculating the Fed’s interest rate based on the sizes of two
gaps: the gap between current inflation and the Fed’s target
and the gap between current GDP and the economy’s
potential. When the economy is running above potential
and inflationary pressures are high, the Fed raises interest
rates to tighten the supply of money in the economy and
return inflation and growth to target. When the economy is below potential and inflation is weak, the Fed lowers rates to
loosen money supply and spur growth.
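The mechanics can be written down directly. Here is a minimal sketch of a Taylor-style reaction function using the coefficients from Taylor’s 1993 paper (equal weights of 0.5 on the two gaps, a 2 percent inflation target, and a 2 percent neutral real rate); treat it as an illustration of the idea, not an official Fed formula.

```python
def taylor_rule(inflation, output_gap, target=2.0, neutral_real_rate=2.0):
    """Illustrative Taylor (1993) policy rate, in percent.

    inflation: current inflation rate (percent)
    output_gap: percent deviation of GDP from its potential
    """
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - target)  # inflation gap
            + 0.5 * output_gap)           # output gap

# Economy running hot: inflation 4 percent, output 2 percent above potential
print(taylor_rule(4.0, 2.0))   # 8.0 -> raise rates
# Weak economy: inflation 1 percent, output 3 percent below potential
print(taylor_rule(1.0, -3.0))  # 1.0 -> cut rates
```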
The Fed’s Taylor-Rule-like behavior has been credited by
some for contributing to the period of low inflation and high
growth the United States enjoyed in the mid-1980s through
the early 2000s, known as the Great Moderation. But the
Fed did not arrive at this behavior overnight. Indeed, the
history of the central bank is in many ways a search for the
best monetary rule.

The Early Years
The Fed was established in 1913 to serve as a lender of last
resort and to meet public demand for exchanging deposits
for currency during financial panics. To accomplish this, the
Federal Reserve note was envisioned as an “elastic currency”
— meaning that the central bank could expand its supply
rapidly if needed. At this time, the Fed was not seen as the
steward of inflation that it is today; its only overarching
objective was financial stability, especially in the short term.
Its job was to issue money, and the amount of money to be
issued was dictated largely by the gold standard.
The Fed inherited its first guide in the gold standard
system. Under this regime, dollars were convertible into gold
at a fixed rate of $20.67 per ounce. The Fed’s role in this
process was to meet money demand, but the amount of
money it could supply was capped by the amount of gold it
held in reserve. On the surface, the gold standard was a type
of noncontingent monetary rule: The supply of money was
restricted by the quantity of gold, which was determined by
factors outside the central bank’s control. Changes in the
price level can occur under a gold standard, but instead of
being caused by monetary policy, they are caused by fluctuations in gold supply or by the need for relative prices to
change across countries.
“The gold standard, if left alone, in a sense drives the
money supply, and there isn’t much room for monetary
policy,” says Michael Bordo, an economist and monetary
historian at Rutgers University.
As Bordo noted in a 1997 paper with the late economist Anna
Schwartz, however, the gold standard was not exactly noncontingent because it was understood that countries would
suspend it when convenient. In wartime, governments often
went off the gold standard in order to print extra money to
finance the fighting.
“The rule was contingent in the sense that the public
understood that the suspension would last only for the duration of the wartime emergency plus some period of
adjustment, and that afterward the government would adopt
the deflationary policies necessary to resume payments at
the original parity,” Bordo and Schwartz wrote.
ECON FOCUS | SECOND QUARTER | 2013

5

Such a suspension occurred in Europe shortly after the
Fed was established, when World War I broke out and
European countries left the gold standard. The United
States remained on the gold standard, but purchases of
American weapons and supplies by the Allied powers in
Europe resulted in large gold inflows, which led to inflation.
“The Fed was relatively powerless in the face of these gold
movements,” says Bordo. The episode illustrates one of the
downsides of the gold standard as a monetary rule — the
money supply was at the mercy of gold movements, which
did not always match the needs of the economy. Although
gold levels would eventually return to equilibrium, this
process could be slow, and the price fluctuations that
occurred in the meantime could be painful.
Central banks could speed the adjustment process by
increasing or decreasing the amount of credit in the system
in the same direction as gold flows. They could also move in
the opposite direction to “sterilize” gold flows, for example
increasing the amount of credit in the system by an amount
equal to the decline in gold. This would keep the money base
constant and prices stable. After the experience of inflation
in World War I, this was the practice the Fed adopted
(see chart). There was a growing belief that the Fed should
play a role in maintaining stable prices, and the Fed also
argued that until Europe returned to the gold standard, the
gold adjustment mechanism would not function properly.
The other monetary principle that guided the Fed at its
founding was the “real bills doctrine.” This rule stated that
money growth would not be excessive as long as the central
bank only made loans backed by real bills (short-term debt
from businesses) as collateral. This way, the money supply
would expand to meet real growth in the economy rather
than speculative investments, which would keep inflation in
check. The rule was flawed, however, as it did not account
for the fact that rising prices would lead borrowers to
demand more money for real bills. As soon as inflation
expectations set in, there was nothing in the rule to stop inflation from spiraling out of control, as the Fed would supply greater amounts of money to feed the rising prices of real bills.

[Chart: Gold Sterilization, 1924-1933. Plotted series: Federal Reserve Credit, Gold Currency, and Bank Reserves, in $ millions. NOTE: “Gold currency” is the difference between gold stock and money in circulation. The Fed extended credit in the opposite direction as currency movements to keep total reserves in the system relatively constant. SOURCE: “Banking and Monetary Statistics 1914-1941,” Federal Reserve Board of Governors]
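The chart’s note amounts to a one-line accounting rule: total bank reserves equal the gold-based component plus Federal Reserve credit, so holding reserves steady means moving credit one-for-one against gold flows. A minimal sketch, with hypothetical numbers rather than the chart’s data:

```python
# Sterilization: reserves = gold currency + Fed credit, held constant
target_reserves = 2_000              # hypothetical total, $ millions
gold_currency = [800, 1_200, 600]    # hypothetical gold positions over three years

fed_credit = [target_reserves - g for g in gold_currency]
for g, c in zip(gold_currency, fed_credit):
    # credit moves opposite to gold, so g + c stays at 2,000
    print(f"gold {g:>5}  credit {c:>5}  reserves {g + c:>5}")
```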
Throughout the 1920s, the Fed and other central banks
continued to sterilize gold flows, even after Europe returned
to the gold standard. This prevented the natural adjustment
mechanisms from working and shifted world gold supplies
to the sterilizing countries. Although the full causes of the
Great Depression are debated, many economists agree this
was a major contributing factor. When the Depression hit,
the Fed’s two rules guided it in the wrong direction. The Fed
was required to hold gold reserves equal to 40 percent of its
issued notes, and Fed leaders feared that expanding reserves
would lead to gold outflows that would jeopardize the convertibility of its notes. Additionally, the real bills doctrine
made them reluctant to extend credit that might fuel stock
market speculation, and they argued that the deflation was a
necessary response to the stock market boom of the late
1920s.
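That 40 percent requirement works as a hard ceiling on note issue, which is why gold outflows were so alarming to Fed leaders. A minimal sketch of the constraint, with a hypothetical gold stock:

```python
def max_notes(gold_reserves, cover_ratio=0.40):
    """Gold-cover rule: gold >= cover_ratio * notes, so notes <= gold / cover_ratio."""
    return gold_reserves / cover_ratio

# With a hypothetical $4 billion in gold, note issue is capped at $10 billion;
# any gold outflow lowers the ceiling and shrinks the room to expand.
print(max_notes(4.0))  # 10.0 ($ billions)
```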
By the mid-1930s, the Fed had demonstrated that it was
either unwilling or unable to increase the money supply in
response to the Great Depression. The monetary guides it
had relied upon failed to provide adequate guidance or, in
the case of the gold standard, were distorted by a failure to
play by the rules.
“The gold standard tended to prevent the Fed from doing
what it should have done to offset the deflation, the fall in
output, and the bank failures,” says Bordo. “But it didn’t
have to do this.” In research with Schwartz and Ehsan
Choudhri, Bordo found that the Fed had enough gold
reserves to follow an expansionary policy during the Great
Depression if it had chosen to do so.

New Economics, Old Rules
From the mid-1930s and into World War II, the Fed
followed a policy of keeping the interest rates of government
bonds low to help finance the war effort (a role it had first
played in World War I). During this time, Bordo says, development of monetary policy in the United States essentially
ceased, as the Fed was effectively a branch of the Treasury
Department. By the late 1940s, however, it became clear
that holding interest rates artificially low was contributing
to inflation, and the Fed began agitating for greater independence. That independence was established with the
Treasury-Fed Accord of 1951.
The post-war period also brought changes to currency
policies. Toward the end of World War II, economic leaders
from the Allied countries met in Bretton Woods, N.H., to
discuss the formation of a post-war international monetary
system. They were concerned about a return to the Great
Depression once wartime spending ceased, and they wanted
to create a system that would protect against the deflationary spiral that had occurred in the 1930s. The Bretton
Woods system combined the fixed money discipline of the
gold standard with the flexibility of floating exchange rates.
Countries agreed to peg their currencies to the dollar at

an exchange rate that could be adjusted, and the United States agreed to convert dollars to gold at the fixed rate of $35 per ounce. It was thought that this would provide the best of both worlds: It would keep prices stable through a commitment to gold convertibility, and it would allow nations other than the United States flexibility to set their own monetary policy.

It would take until the end of 1958 before full international convertibility began, but from the start, economists noted flaws in the system. In 1947, Belgian economist Robert Triffin observed that if the gold base did not expand to meet the growth of the world economy, countries would demand a greater number of dollars as a substitute for gold. At some point, the number of outstanding dollars would be so large that the United States would not be able to credibly promise convertibility to gold at the fixed price of $35 per ounce, and the system would collapse. Another problem would also hasten the demise of Bretton Woods: the desire by the United States to set monetary policy that ran counter to the rules of the gold standard.

In the 1960s, many Keynesian-oriented economists proposed that fiscal and monetary authorities should play a bigger role in managing the real economy. A key component of this movement was the Phillips Curve, which appeared to show an inverse relationship between nominal wages and unemployment. It suggested that policymakers could obtain lower unemployment in exchange for higher inflation. Like the prewar gold standard, the Bretton Woods system left little room for monetary policy other than keeping the gold price fixed and the number of outstanding dollars low enough to credibly commit to conversion. But policymakers now envisioned a more active role for the Fed. Indeed, Congress established the Fed’s “dual mandate” with the Employment Act of 1946, which stipulated that the Fed should set monetary policy to maintain maximum employment and stable prices.

At the time, it was believed that maximum employment meant unemployment of 4 percent, and economists on the Council of Economic Advisers and the Federal Reserve Board believed that as long as unemployment was above that level, expansionary monetary policy could help close the gap without risking inflation. Fed Chairman William McChesney Martin disagreed, arguing that it was difficult to know for certain what the optimal level of employment was, and that expansionary monetary policy past that point would result in inflation. In the mid-1960s, as unemployment approached 4 percent, he argued in favor of tightening, but he was increasingly opposed by others on the Federal Open Market Committee.

“They thought that maintaining full employment was much more important than price stability and that the constraints of the Bretton Woods system were something that had to be jettisoned,” says Bordo.

By 1970, inflation had risen above 5 percent and outstanding dollars at foreign central banks outnumbered gold reserves two to one at the official exchange rate. Spending on domestic projects under the Great Society and on the Vietnam War put added pressure on the Bretton Woods system, and attempts to limit gold outflows culminated with the United States ending convertibility in 1971. The monetary rules guiding the Fed toward expansionary policy to achieve lower unemployment took precedence over the restraint dictated by the gold standard, and the Bretton Woods system collapsed.

Recognizing Limitations
The Fed continued to implicitly target unemployment of 4 percent in the 1970s, following a “stop-go” monetary policy that loosened money supply to target lower unemployment and then tightened when inflation expectations started rising. The problem, as economist and Fed historian Allan Meltzer and others have noted, was that the Fed did not tighten enough during the “stop” periods. In fact, the Fed failed to distinguish between real and nominal rates when setting the federal funds rate target. While it thought it was tightening money supply by raising nominal rates, real interest rates were in fact quite low or even negative (see table).

End-of-Year Effective Federal Funds Rate, 1965-1972

Year   Nominal Rate   CPI Inflation   Real Rate
1965       4.32            1.54           2.78
1966       5.40            3.33           2.07
1967       4.51            3.81           0.70
1968       6.02            5.08           0.94
1969       8.97            5.91           3.06
1970       4.90            6.60          -1.70
1971       4.14            3.10           1.04
1972       5.33            3.00           2.33

NOTE: Consumer Price Index inflation excludes food and energy.
SOURCE: Federal Reserve Economic Data (FRED), Federal Reserve Bank of St. Louis

Over the course of the 1970s, the market came to believe that the Fed was following a rule that placed greater importance on unemployment than inflation, and inflation rose dramatically. The greater weight on labor market conditions was problematic because policymakers miscalculated which unemployment rate to target. In a 2011 working paper, Athanasios Orphanides of the MIT Sloan School of Management and John Williams, president of the San Francisco Fed, proposed a model for a monetary rule consistent with the Fed’s behavior during this time. They

found that the Fed’s underestimation of the natural rate of
unemployment at 4 percent, combined with the emphasis it
placed on targeting the natural rate of unemployment, led to
the Great Inflation of the 1970s.
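The table’s real-rate column is just the simple Fisher relation: the real rate is the nominal rate less inflation. Reproducing the column from the table’s own figures makes the 1970 sign flip easy to see:

```python
# (nominal fed funds rate, core CPI inflation) by year, from the table above
data = {
    1965: (4.32, 1.54), 1966: (5.40, 3.33), 1967: (4.51, 3.81),
    1968: (6.02, 5.08), 1969: (8.97, 5.91), 1970: (4.90, 6.60),
    1971: (4.14, 3.10), 1972: (5.33, 3.00),
}
for year, (nominal, inflation) in sorted(data.items()):
    real = nominal - inflation  # simple Fisher approximation
    print(f"{year}: real rate {real:+.2f}%")  # 1970 turns negative: -1.70%
```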
“We can only tell if the economy is not operating at its
natural rate long after the fact by subsequent developments
in inflation,” says Orphanides. “In the late 1960s and 1970s,
the structure of the economy changed considerably,
including changes in demographics and a slowdown in
productivity. These factors led to a significant increase in the
natural rate of unemployment that was only recognized with
a significant lag.”
It became apparent that the trade-off implied by the
Phillips Curve was not as stable as policymakers had
thought. Although unexpected inflation was capable of
reducing unemployment in the short term, once markets
came to expect higher inflation, raising inflation no longer
reduced unemployment. In response to loose monetary
policy and the lack of a rule to restrain money growth like
the gold standard, inflation expectations soared.
In 1979, Paul Volcker was appointed chairman of the
Fed to stop the inflation spiral. Volcker favored a change in
procedure to regain the Fed’s credibility for keeping prices
stable. In a special meeting on Oct. 6, 1979, Volcker proposed
focusing on price stability rather than unemployment by targeting money growth more aggressively and allowing
interest rates to “float” to whatever level needed to bring
down inflation. Although the Fed had incorporated monetary targets into its policy in the 1970s, it had frequently
overshot these targets. Volcker believed that targeting price
stability would ultimately lead to both low inflation and full
employment.
This new focus represented a new monetary rule of sorts
— one that placed less emphasis on the gap between current
employment and full employment and more emphasis on
inflation. Volcker’s first attempt to convince the market that
the Fed was dedicated to price stability failed. Tight monetary policy triggered a recession in early 1980, and as the
recession worsened, the Fed felt compelled to ease up.
To the market, the episode resembled the same sort of
“stop-go” policies it had come to expect, and inflation
continued to climb. The Fed tightened again, allowing the
fed funds rate to reach about 20 percent, and the economy
plunged into a deep recession in 1981-1982. This time,
however, Volcker held course, despite unemployment near
11 percent and charges from some in Congress that the Fed

was neglecting its mandate to maintain full employment.
But as Volcker, and later his successor Alan Greenspan,
interpreted the mandate, full employment would follow
naturally from a pursuit of stable prices.
The Fed’s actions were a concession to the difficulty in
measuring the natural employment level, and to the lesson
learned in the 1970s that the Phillips Curve trade-off did not
exist in the long run. The Fed’s commitment to its new
rule in the face of the deep 1981-1982 recession convinced
markets, and inflation expectations declined. During the
early 1980s, changes in the composition of the money
supply made targeting money growth more difficult.
The Fed continued targeting the money supply until disinflation set in because of concern that a change in policy
would undermine the credibility it was trying to establish.
Once inflation had subsided and real interest rates were
easier to estimate, the Fed returned to using interest rates as
the primary tool for maintaining price stability.
The Great Moderation that followed was a period of low
inflation and impressive economic growth, leading many
observers in the 1990s to proclaim the monetary policy
equation “solved.” Some economists attributed that performance to the Fed’s adherence to the Taylor Rule, but
Orphanides argued in a 2002 paper that Taylor’s original
equation also fits Fed policy during the Great Inflation when
accounting for the data available at the time.
“Policymakers at the Fed always thought that they were
following systematic policy in the 1960s and 1970s. The
question is: What are your guides? The guides you use may
fool you because they may be based on a presumption of
too much knowledge, or too precise knowledge,” says
Orphanides. He argues that simple rules — Friedman’s
constant money growth being the simplest — are preferable
for this reason.
In the wake of the 2007-2009 recession, the Fed turned
to a number of discretionary measures to bolster the
economy. Interest rates hit zero but the economy remained
weak, so the Fed used some unconventional tools that by
definition reached beyond the rule it seemed to have been
following for decades. Many economists agree that monetary policy should return to rules-based guides sooner rather
than later. Although there is no consensus as to which guides
the Fed should adopt, history has made one thing clear:
Expectations of future monetary policy play a big role in the
economy, and those expectations will be driven by the rule
the Fed is perceived to be following.
EF

READINGS


Bordo, Michael D., and Anna J. Schwartz. “Monetary Policy
Regimes and Economic Performance: The Historical Record.”
National Bureau of Economic Research Working Paper No. 6201,
September 1997.

Orphanides, Athanasios, and John Williams. “Monetary Policy
Mistakes and the Evolution of Inflation Expectations.” National
Bureau of Economic Research Working Paper No. 17080,
May 2011.

Meltzer, Allan H. “U.S. Policy in the Bretton Woods Era.” Federal
Reserve Bank of St. Louis Review, May/June 1991, vol. 73, no. 3,
pp. 53-83.

Taylor, John B. “A Historical Analysis of Monetary Policy Rules.”
In John B. Taylor (ed.), Monetary Policy Rules. Chicago: University
of Chicago Press, 1999.


POLICY UPDATE

New Disclosures for Climate Change
BY CAROLINE TAN

The environmental consequences of energy extraction have drawn attention not only to the energy
industry, but also to the financial firms that help
fund it. Soon, these firms may feel more pressure to disclose their exposure to financial risks from greenhouse
gas emissions, thanks in part to a recent decision by the
Securities and Exchange Commission (SEC).
In a significant shift from previous rulings, the SEC
decided in February that the shareholders of PNC Financial
Services Group had the right to vote on whether the bank
must report its risk exposure to climate change. For the first
time, a bank was not allowed to exclude a climate change
disclosure resolution questioning its lending practices from
its proxy ballot. Though manufacturing and electrical
companies had already been held to this standard, the SEC
had previously reasoned that issues related to the way a
bank maintains its lending portfolio fell within its “ordinary
business” and did not need a shareholder vote.
As the nation’s sixth-largest commercial bank in terms of
assets, Pittsburgh-based PNC claims about $300 billion in
total assets and is the only major bank headquartered in
Appalachia, where coal extraction is a key business.
Environmental group Rainforest Action Network estimated
that PNC’s lending practices accounted for 43 percent of
Appalachian coal extracted in 2011 through the controversial “mountaintop removal” mining (MTR) technique.
In light of PNC’s role in financing MTR mining, activist
shareholders submitted a resolution in November 2012
requesting that the bank report its exposure to climate
change risk through its lending, investing, and financing
activities. PNC has marketed itself as a “leader in eco-friendly development,” and the shareholders expressed
concern that mismanagement of climate change issues
could pose significant risks to the bank’s brand, business
operations, and performance. Boston Common Asset
Management, which drafted the resolution, told PNC that it
is important for investors to “understand in what ways these
concerns are being addressed by PNC’s lending policies.”
PNC’s board members unanimously opposed the resolution, requesting an SEC no-action letter that would permit
the bank to keep the proposal from a shareholder vote. PNC
argued that such an assessment would be costly, unnecessary,
and micromanaging. Because PNC was not directly involved
in coal mining, the board argued there was no sufficient
“nexus” between the bank and the proposal.
The SEC replied that it was “unable to concur” with
PNC’s request, calling climate change a “significant policy
issue.” The Commission’s decision effectively transferred
authority on climate change disclosure from corporate
managers to shareholders, for the first time requiring that a bank bring the issue to a shareholder vote. Though shareholders did not pass the resolution at their April 23 annual
meeting, more than 22 percent voted in favor — a strong
statement, shareholder activists say.
“This decision means that even companies a few steps
removed from having a direct climate impact must pay
attention to [climate] issues,” says Michael Gerrard, director
of the Center for Climate Change Law at Columbia
University. In effect, some experts argue that the SEC
broadened the range of companies for which climate change
disclosure resolutions could apply, bringing the banking
industry into the fold.
Climate change disclosure may mark a new and challenging phase for the banking industry — one that Chicago
environmental lawyer E. Lynn Grayson of the law firm
Jenner & Block says may be “darn near impossible” for some
firms to accommodate, due in part to limited information
and the difficulty of quantifying environmental risks. In
2010, the SEC issued an interpretive guidance to help public
companies navigate existing climate change disclosure rules.
The Commission emphasized that it was “not opining on
whether the world’s climate is changing,” but rather trying
to ensure that disclosure rules were consistently applied.
The SEC noted different scenarios — legislation, international accords, changes in demand for a good or in a company’s
reputation, and the physical impact of climate change —
that may require a company to disclose its carbon footprint.
The PNC ruling does not necessarily mean that the SEC
is “going green.” It simply represents an attempt to inform
shareholders about risk, Grayson says. To the extent that
climate change portends regulatory changes or damage to
major capital assets and infrastructure, it counts as risk.
Gerrard expects the SEC’s decision to apply to a broad
range of financial firms, as “large swaths of the economy are
seriously affected by climate change,” and he predicts it will
inspire many similar resolutions. Indeed, even before the
decision, more than 40 percent of shareholder resolutions
from 2011 were related to environmental and social issues, a
10 percentage point increase from 2010, according to an
Ernst & Young study. (Other resolutions related to political
spending and lobbying, human rights issues, and governance
matters, including executive compensation.) Still, the SEC
told Bloomberg that its decision applied only to PNC and
did not create a new duty for the entire financial sector.
It is unclear if a wave of climate change disclosures is in
the forecast. That depends on whether the SEC’s decision
inspires activist shareholders to present similar resolutions
— and how shareholders vote if the resolutions make the
proxy ballot. At the very least, Grayson says, the ruling
marks a “changing tide” for banks and climate change. EF

JARGON ALERT

Full Employment
BY TIM SABLIK

Since the 2007-2009 recession ended, unemployment
has slowly declined, but most would agree that today’s
level of 7.6 percent unemployment does not represent the economy’s full potential. Full employment is often
described as the level of employment at which virtually
anyone who wants to work can find employment at the
prevailing wage. One might assume that if everyone who
wants a job has one, then the unemployment level would
be zero. Yet in the last half century, the unemployment
rate in the United States has ranged from 2.5 percent to
10.8 percent; it has never been zero. Does that mean we
have never had full employment?
Not according to economists. Full employment is not
the same as zero unemployment because there are different
types of unemployment, and some are unavoidable or even
necessary for a functioning labor market. At any given time,
jobs are being created and destroyed as
industries evolve, and the transition from
old jobs to new is not seamless. For example,
frictional unemployment occurs because
workers who lose their jobs or quit typically
do not accept the first new job for which
they qualify. Unless they are facing extreme
pressure to replace lost income, most
people take the time to find a job that fits
their skills well. Because of this lag, some
percentage of the workforce is between
jobs at any given time and classified as
unemployed.
Persistent unemployment also arises
from mismatch between the supply of workers
and the demand for labor at a given wage, which is known as
structural unemployment. In a fully flexible market, wages
would adjust to the point where the number of people seeking work equaled the number of positions employers were
willing to provide at that wage. Wages can be set above this
level for a variety of reasons, however, such as minimum
wage requirements or because employers choose to set
higher wages in order to get better productivity from their
workers. As a result, the supply of labor can exceed the
demand for it, and structural unemployment arises.
Since some degree of frictional and structural unemployment exists at any given time, economists define full
employment as the unemployment level resulting from a
combination of these two components, which is always
greater than zero. Unemployment can rise above this level
due to shocks in the economy, such as the housing market
collapse that occurred in 2007-2008. It can also temporarily
fall below this level if the economy is operating above its
efficient capacity, resulting in rising prices and wages.


In the 1950s, many economists argued that fiscal and
monetary policy could steer the economy toward the full
employment level. By the end of the decade, policymakers
came to believe they could permanently increase full
employment in exchange for some inflation. This idea was
embodied in the Phillips Curve, which depicted a trade-off
between unemployment and inflation. Indeed, in the 1962
Economic Report of the President, the Kennedy administration
opined, “If we move firmly to reduce the impact
of structural unemployment, we will be able to move the
unemployment target … to successively lower rates.” While
policymakers succeeded initially, inflation and unemployment both rose in the mid-1970s.
Around this time, Milton Friedman and Edmund Phelps
modified the ideas behind the Phillips Curve by including a natural rate of unemployment for the economy. Policy
actions to reduce unemployment below
that level could succeed in the short run,
but in the long run, unemployment would
return to the natural rate and inflation
would be higher as a result of expansionary policy. This idea was largely a return to
the pre-Phillips Curve understanding of
full employment as a generally fixed level.
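The Friedman-Phelps point can be captured in a toy accelerationist model: inflation equals expected inflation plus a term in the gap between unemployment and its natural rate, and expectations catch up to realized inflation each period. The parameters below are illustrative assumptions, not estimates:

```python
slope, natural_rate = 0.5, 5.0  # illustrative parameters
expected_inflation = 2.0

for period in range(1, 6):
    unemployment = 4.0  # policy holds unemployment 1 point below the natural rate
    inflation = expected_inflation + slope * (natural_rate - unemployment)
    expected_inflation = inflation  # expectations adapt each period
    print(f"period {period}: inflation {inflation:.1f}%")  # 2.5, 3.0, 3.5, ...
```

Holding unemployment below the natural rate buys nothing permanent: keeping the same one-point gap requires ever-higher inflation, period after period.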
Although the natural full employment
level is relatively stable, it can change over
time. Changes in the composition of the
labor market or structural changes in
industries can shift the full employment
level. Some economists have argued that
changes during the 2007-2009 recession may
have increased the natural rate of unemployment. They
point to the fact that job vacancies have increased without
the expected decline in unemployment, suggesting a potential mismatch between industry demands and worker skills.
The shifting nature of the natural rate of unemployment makes it difficult to estimate. Since the 1970s, the Fed
has steered clear of targeting a specific level of unemployment, choosing instead to target low and stable inflation.
In its December 2012 action, the Federal Open Market
Committee (FOMC) indicated that it planned to maintain
accommodative monetary policy at least until unemployment fell below 6.5 percent, but Chairman Ben Bernanke
explained that this rate was not the Fed’s estimate of the
natural rate of unemployment. Additionally, the FOMC
conditioned its accommodative policy on inflation remaining near 2 percent. Reflecting the lessons of the 1970s,
Bernanke noted that attempting to target a precise level of
full employment risked missing the mark and could “compromise the FOMC’s longer-term inflation objective.” EF


RESEARCH SPOTLIGHT

Industrialization and the Workforce
BY DAVID A. PRICE

Employment in the United States since the 1980s has increased the most at the high end and the low end of the skill spectrum. That is, employment in high-wage, high-skilled occupations and in low-skilled, low-wage occupations has been growing relative to middle-tier occupations — such as white-collar clerical work and blue-collar factory work. This trend is known by the nontechnical, but descriptive, term “hollowing out.”

Labor economists generally attribute hollowing out in large part to the technology revolution that has brought computers and computer-controlled machines to offices and factories, reducing demand for middle-tier workers, and to developments in international trade — partially enabled by technology — that have moved much middle-tier work elsewhere. But the past 30 years do not, of course, mark the first time that technological change has come to the labor market. In a recent working paper, Lawrence Katz of Harvard University and Robert Margo of Boston University seek to assess whether hollowing out also took place during the manufacturing revolution of the 19th century.

“Technical Change and the Relative Demand for Skilled Labor: The United States in Historical Perspective.” Lawrence F. Katz and Robert A. Margo. National Bureau of Economic Research Working Paper No. 18752, February 2013.

The researchers consider the question in three parts: how the distribution of occupations across different skill levels changed within the manufacturing sector, how it changed within the U.S. economy as a whole, and whether those changes in the broad economy reflected mainly a shift in demand or a shift in supply.

With regard to manufacturing, Katz and Margo divide workers into three categories: high-skilled white-collar workers, middle-tier artisans, and low-skilled operators and laborers. They note that factory owners increased efficiency by simplifying production tasks so as to enable them to replace artisanal labor with unskilled labor plus specialized machines. The adoption of steam-powered machines — and, later, electric ones — gave rise to greater economies of scale, further favoring large factories over artisanal shops. At the same time, the growth in factory size led to growing employment of managers.

Using 19th-century census records from the University of Minnesota’s Integrated Public Use Microdata Series (IPUMS) and elsewhere, Katz and Margo conclude that the skill distribution in manufacturing did hollow out. The proportion of artisans declined from 39 percent in 1850 to 23 percent in 1910. Conversely, the proportions increased at the low and high ends: The unskilled share grew from 58 percent to 65 percent during that period, while white-collar employment grew from 3 percent to about 12 percent.
of hollowing out, the pattern
pations across different skill
was one of general upgrading in
levels changed within the
skill levels: The share of high-skill jobs expanded, that of
manufacturing sector, how it changed within the U.S.
middle-tier jobs remained around the same as before, and
economy as a whole, and whether those changes in the
that of low-skill jobs went down.
broad economy reflected mainly a shift in demand or a shift
Katz and Margo use data on wages to determine whether
in supply.
the increase in high-skill employment in the overall econoWith regard to manufacturing, Katz and Margo divide
my reflected mainly a shift in demand (as the needs of
workers into three categories: high-skilled white-collar
employers changed) or a shift in supply (as educational
workers, middle-tier artisans, and low-skilled operators and
attainment increased). They rely on data about wages at
laborers. They note that factory owners increased efficiency
army forts; past work by Margo indicated that “wages at
by simplifying production tasks so as to enable them to
the forts were very similar to those in the purely civilian
replace artisanal labor with unskilled labor plus specialized
economy in the local labor market.” They find that for
machines. The adoption of steam-powered machines — and,
white-collar workers, wages rose relative to those of other
later, electric ones — gave rise to greater economies of scale,
workers from 1820 to 1880, which was also a period in
further favoring large factories over artisanal shops. At the
which the share of white-collar employment increased. “It
same time, the growth in factory size led to growing employfollows,” they write, “that the relative demand for white-colment of managers.
lar workers increased with relative supply over this period.”
Using 19th-century census records from the University of
Katz and Margo conclude that the rise in the relative
Minnesota’s Integrated Public Use Microdata Series
earning power of white-collar workers began as long ago as
(IPUMS) and elsewhere, Katz and Margo conclude that the
the early years of industrialization and that this trend hit a
skill distribution in manufacturing did hollow out. The
lull by 1915 — a lull that would continue until 1980, when the
proportion of artisans declined from 39 percent in 1850 to
skill premium began another ascent that has continued to
23 percent in 1910. Conversely, the proportions increased at
the present day.
EF
the low and high ends: The unskilled share grew from

E


THE PROFESSION

Where Are the Women?

Editor’s Note: This article was revised after publication to clarify a quotation from Claudia Goldin.

BY JESSIE ROMERO

Women earned 34 percent of economics Ph.D.s
in 2011, according to the National Science
Foundation’s Survey of Earned Doctorates.
That might sound like a lot, but it’s much lower than the
46 percent of all doctorate degrees earned by women, and
the smallest share of any of the social sciences. Women
earned 72 percent of all psychology Ph.D.s, for example,
and 61 percent of sociology Ph.D.s.
The gender gap in economics gets larger at each stage
of the profession, a phenomenon described as the “leaky
pipeline.” In 2012, women were 28 percent of assistant
professors, the first rung on the academic ladder; 22 percent
of associate professors with tenure; and less than 12 percent
of full professors, according to the 2012 annual report of the
Committee on the Status of Women in the Economics
Profession (CSWEP), a committee of the American
Economic Association.
In part, this might reflect the long lag between earning a
Ph.D. and attaining the rank of full professor; if more
women are entering the field today than 20 years ago, more
women might be full professors in the future. But the share
of new female Ph.D. students is actually lower than it was in
1997, when CSWEP first began collecting data — which
means women’s share of economics faculty could actually
shrink.
Donna Ginther of the University of Kansas and Shulamit
Kahn of Boston University also found leaks in the pipeline.
In several studies, they have shown that women are less
likely than men to progress at every stage of an academic
career, beginning with landing a tenure-track job and culminating in promotion to full professor. Furthermore, women
are less likely to be promoted in economics than in other
social sciences, and even than in more traditionally male
fields such as engineering and the physical sciences.
In part, the disparity between men and women could be
due to different choices, such as having children or focusing
more on teaching than on research. Women also tend to
publish fewer articles, which can affect the likelihood of
getting tenure. To the extent that such factors are the cause,
mentoring programs or more family-friendly policies could
help to close the gender gap.
But even after controlling for education, ability, productivity, and family choices, Ginther and Kahn found that a
gap of about 16 percentage points persists in the likelihood
of promotion to full professor in economics — a much
larger gap than in other disciplines.
The problem begins at the undergraduate level: Women
are less likely than men to major in economics, or even to
take an introductory economics course. Proposed explanations have included a lack of female role models in the
classroom or the emphasis on math, but empirical studies
have not supported them.
“It’s something systemic to the field,” says economist
Claudia Goldin of Harvard University. That something
might be the way economics is taught. “It’s like we’re the
marketing department at Kimberly-Clark, and we suddenly
discovered that we haven’t translated the diapers package
into Spanish, but Hispanics have the highest birthrate.
We’re teaching economics the same way we did when
women didn’t matter. But now women do matter. So how do
we translate economics into ‘girlish’?”
More research is needed to answer that question, but
some have suggested using discussion groups in class or
making textbooks less abstract. For example, Susan
Feigenbaum, Sharon Levin, and Anne Winkler of the
University of Missouri at St. Louis developed an introductory microeconomics class that used stories about
real-world decisions, such as having a child or going to
college, to illustrate economic concepts like opportunity
cost and human capital investment. They found that women
and minorities were less likely to drop the class and more
likely to major in economics than students in a more
traditional course.
These efforts raise an important question: Does it
actually matter how many female economists there are?
Yes, says Susan Athey of Stanford University. “You just don’t
get the best allocation of human capital” when one category
of people is excluded. “Losing out on a chunk of the population is wasteful.” (In 2007, Athey was the first woman to
receive the John Bates Clark medal, given to the American
economist under 40 who has made the greatest contribution
to the field.) In addition, a survey by Ann Mari May and
Mary McGarvey of the University of Nebraska-Lincoln and
Robert Whaples of Wake Forest University found that male
and female economists have significantly different opinions
on public policy questions such as the minimum wage, labor
regulations, and health insurance. As the authors concluded,
“Gender diversity in policymaking circles may be an
important aspect in broadening the menu of public policy
choices.”
Although gender parity is some distance off, women do
reach the top echelon of the profession. Goldin is president
of the American Economic Association this year, and two
more women have received the John Bates Clark medal since
Athey in 2007. “My view is a young woman going into
economics … will face some bumps along the road having
to do with being a woman, but they’re not going to be career
defining,” Athey says. “They’ll be obstacles that can be
overcome.”
EF

AROUND THE FED

Measuring Economic Security
BY CHARLES GERENA

“The Economic Security Index: A New Measure for Research
and Policy Analysis.” Jacob S. Hacker, et al., Federal Reserve
Bank of San Francisco Working Paper 2012-21, October 2012.

Fear is a powerful force. When a family is afraid of losing
what they have, they may decide to cut back on
nonessentials and save more. But finding the right way to
“get over the hump” is challenging, and an unexpected
economic loss can still lead to hardship.
Having an accurate measure of the nation’s economic
security — the degree to which individuals are protected
against hardship-causing economic losses — could be useful
for policymakers trying to determine the best ways to intervene when people get into financial trouble. A group of
researchers from Yale University, Ohio State University, the
Urban Institute, and the San Francisco Fed are developing
an economic security index (ESI) that goes beyond measuring income volatility or resource adequacy.
Their ESI incorporates data from multiple panel surveys
into a single measure that represents the share of individuals
who experience at least a 25 percent hit to their annual
household income and who lack liquid financial wealth to
replace this loss. Household income is adjusted for inflation,
out-of-pocket medical expenses, and the estimated cost of
debt-service for those with negative financial holdings.
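To make the index construction concrete, here is a minimal sketch of an ESI-style calculation in Python. The record layout and field names are hypothetical stand-ins for the panel data described above, not the authors' actual code, and incomes are assumed to be expressed in constant (inflation-adjusted) dollars already.

def adjusted_income(year):
    # Income net of out-of-pocket medical spending and, for those with
    # negative financial holdings, estimated debt-service costs.
    income = year["income"] - year["medical_oop"]
    if year["financial_holdings"] < 0:
        income -= year["debt_service"]
    return income

def esi(records):
    # Share of individuals whose adjusted household income fell by at
    # least 25 percent from the prior year and who lack enough liquid
    # financial wealth to replace the loss.
    insecure = 0
    for person in records:
        prior = adjusted_income(person["prior_year"])
        current = adjusted_income(person["current_year"])
        loss = prior - current
        if prior > 0 and loss >= 0.25 * prior and person["liquid_wealth"] < loss:
            insecure += 1
    return insecure / len(records)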
Despite some limitations and the need for further
research, “the ESI shows that Americans are not only facing
greater insecurity than at any time within a generation,
but also that they were at heightened risk even before
the recent downturn,” note the researchers in their paper.
“It also provides a new means of examining the sources of
insecurity and the degree to which Americans with different
characteristics are vulnerable to it.”


“Why Doesn’t Technology Flow From Rich to Poor Countries?”
Harold L. Cole, Jeremy Greenwood, and Juan M. Sánchez,
Federal Reserve Bank of St. Louis Working Paper 2012-040A,
October 2012.

In an ideal world, the technologies that helped richer
countries get rich would eventually find their way to
poorer countries. But that transfer doesn’t always happen.
A variety of factors influence a country’s adoption of
technology, from the labor or natural resources it has
available to government policies that either promote or discourage certain industries. A recent paper published by the
Federal Reserve Bank of St. Louis finds that the efficiency of
a country’s financial system could play a significant role in
technology adoption.
Why? Implementing a new technology requires a significant investment with an uncertain payoff, and investors may
not have the necessary information to properly assess risks
or monitor how their funds are used. “Financial institutions
play an important role in constructing mechanisms that
ensure investments are used wisely,” note the paper’s
authors. “They do this by both monitoring firms and implementing reward structures that encourage firms to
truthfully reveal their profits so that investors can be fairly
compensated.”
Monitoring firms cannot be done cost effectively in some
countries, however, given the state of their financial systems.
In these cases, financial intermediaries must use reward
structures in place of monitoring; funding is delayed until a
new technology is fully implemented and the firm’s performance can be properly assessed. Even with such “backloading”
of funds, cash flows generated from technology adoption
may not be adequately disclosed.
The paper’s authors model the relationship between the
level of technology adoption and the state of a country’s
financial system and find that it helps explain differences
in income and total factor productivity between India,
Mexico, and the United States. The efficiency of the
American financial system seems to position it to adopt
advanced technology, while the inefficiency of monitoring in
Mexico limits that country to implementing intermediate
technology that can be funded using a backloading strategy.
“The Agglomeration of R&D Labs.” Gerald A. Carlino, Robert
M. Hunt, Jake K. Carr, and Tony E. Smith, Federal Reserve
Bank of Philadelphia Working Paper 12-22, September 2012.

Companies engaged in similar work may benefit from
agglomerating, or operating in close proximity to each
other, even in today’s age of instant communication. That
holds true for research and development firms, according
to a recent paper published by the Philadelphia Fed.
Economists from the Philadelphia Fed, Ohio State
University, and the University of Pennsylvania analyzed the
geographic concentration of about 1,000 private R&D labs
in 10 northeastern states. “First, the clustering of labs is by
far most significant … at very small spatial scales, such as
distances of about one-quarter of a mile, with significance
attenuating rapidly during the first half-mile,” report the
authors. “The rapid attenuation of significant clustering at
small spatial scales is consistent with the view that knowledge spillovers are highly localized.”
In addition, they found evidence of significant agglomeration of R&D firms at the metropolitan level. This is
consistent with one of the perceived benefits of agglomeration: the pooling and matching of skilled workers.
EF


COVER STORY

THE CHINESE GOVERNMENT MAY BE HOLDING DOWN ITS CURRENCY TO INCREASE EXPORTS. BUT IT’S NOT CLEAR WHAT — IF ANYTHING — THE UNITED STATES SHOULD DO ABOUT IT

BY RENEE HALTOM

For every dollar of goods and services that U.S.
producers sell to other countries, Americans
import nearly $1.50. Given the size of the U.S.
economy, that creates a trade deficit that, as
of 2011, was nearly five times the size of any other country’s.
More than 40 percent of that trade deficit comes from
trade with the People’s Republic of China. To many
observers, if it weren’t for cheap goods from China,
Americans would be spending more of their jobs-sustaining
cash at home. And cheap Chinese goods are no accident.
Most economists think that China’s currency — known
interchangeably as the yuan and the renminbi (RMB), or the
“people’s currency” — has been held artificially cheap by the
Chinese government for much of the last 20 years. Currency
manipulation, as such a policy is often called, gives a country
an artificial edge in world trade, siphoning demand from the
rest of the world and preventing production from flowing to
the most efficient places. That’s one reason it is prohibited
under international law.
The backdrop to the debate, of course, is that American
unemployment is high, the Fed has already employed
extraordinary measures to stimulate the economy, and

the national debt is climbing. Pressuring China to allow its
currency to appreciate might seem to be an easy way to
address America’s jobs shortfall without costing our government a penny in domestic programs. China’s success in
manufacturing — accounting for more than 90 percent of
the country’s rapid export growth since it joined the
World Trade Organization (WTO) in 2001 — has been a particular sore spot among people concerned about the United
States’ steady decline in manufacturing as a share of total
employment.
But there is more to the story. It is an open question how
much demand we have lost to China’s currency policy rather
than, for example, China’s rise as an efficient producer of
consumer goods. There are even benefits to a cheap RMB,
such as the discount it provides on the billions of dollars in
Chinese goods that U.S. consumers buy each year. And the
RMB has appreciated considerably since 2005. Economists
are no longer so sure that it is undervalued to a worrisome
degree.
Still, there are reasons to worry about China’s exchange
rate policy. It is part of a broader growth strategy that
creates some potentially dangerous global imbalances,
which economists say can’t be maintained forever. That
suggests China will eventually be forced to subject its
currency to stronger market forces.

[Chart: U.S.-China Exchange Rate, in dollars per RMB, 1990-2013. The RMB has appreciated by a third since its peg against the dollar was released in 2005. The amount it might be undervalued now is unclear. SOURCE: Federal Reserve Board. Data through June 2013.]

When Is It Manipulation?


China’s official currency policy for most of the last 20 years
has been some version of an exchange rate fixed to the
dollar. That’s the exchange rate in nominal terms, or the
price at which one currency trades with another, as opposed
to real exchange rates, which measure goods exchanged for
goods. Many economists don’t take issue with fixed
exchange rates, which can create stability for developing countries like China. The charge of “manipulation” comes into
play when a currency is held substantially below its equilibrium value, or what it would trade at over the long run with
no intervention from governments.
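The nominal-versus-real distinction can be written compactly. In textbook notation (a standard identity, not a construct of any source cited here), the real exchange rate adjusts the nominal rate for relative price levels:

\[
\varepsilon_{\text{real}} = e_{\text{nominal}} \times \frac{P_{\text{China}}}{P_{\text{U.S.}}}
\]

where e is measured in dollars per RMB and P denotes each country's price level. A rise in Chinese prices appreciates the RMB in real terms even when the nominal peg never moves.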
Currency manipulation is hard to spot, however.
Exchange rates are meant to reflect the attractiveness of
holding one currency versus another. If investment opportunities, purchasing power, or stability become greater in one
country relative to another, its currency will probably rise.
Since those are subjective concepts, there are different ways
to define the equilibrium rate of exchange between two
currencies. According to some theories, equilibrium is the
exchange rate that balances trade or maintains a stable trade
surplus. In others, it’s the rate that equalizes prices or labor
costs across countries. And those are just the conceptual
issues; picking the appropriate measures for costs is another
challenge in quantifying manipulation.
“What makes China unusual is that, as of five years ago,
pretty much all the criteria gave the same answer,” that the
RMB was undervalued, says Jeffrey Frankel, an economist at
Harvard University. That explains today’s widespread belief
that China is intentionally holding its currency low. But
things have changed in just the last few years. After fixing
the RMB against the dollar from 1994 through
2005, China announced a new policy in 2005 that
included an initial revaluation (a one-time
strengthening of the RMB) of 2.1 percent. The
RMB was then pegged to an unnamed basket of
currencies — among which the dollar was
still the most heavily weighted,
Frankel’s research with Columbia
University economist Shang-Jin
Wei has shown — and was allowed
to fluctuate by up to 0.3 percent a
day. After gradually appreciating, the
RMB was re-fixed to the dollar in
2008 during the tumult of the financial crisis, but in 2010 was again
allowed to float slowly — very slowly —
in the direction of strength. In all, the
RMB has risen 35 percent in the last eight
years (see chart).
It is no longer clear whether the RMB is
substantially undervalued. Conventional
measures indicate a wide range: from a 49 percent undervaluation to even being overvalued, according to a recent
paper by economist Yin-Wong Cheung at the University of
California, Santa Cruz.
Since one cannot necessarily determine whether a
currency has been manipulated just by looking at its price —
as a Depression-era central banker put it, “only God could
tell” what a currency should trade at — many people point to
the Chinese government’s interventions in foreign exchange
markets as proof of currency manipulation. Hoarding
foreign assets is the traditional way in which a country holds
down its exchange rate. Most governments hold foreign
assets, but buying them in extremely large quantities can
shift exchange rates because a government must buy foreign
currency to do it, which decreases the relative global
demand for its own currency. China’s central bank, the
People’s Bank of China (PBOC), holds almost $3.5 trillion
in foreign exchange reserves. China doesn’t reveal how
much of that is denominated in dollars, but based on past
snippets of information, many economists estimate the
number is around two-thirds, held mostly in U.S. Treasury
securities. That would be more Treasuries than even the
Federal Reserve owns. China holds nearly three times
as many total foreign reserves as Japan, the world’s second-largest holder, and almost as many as all advanced
economies combined.

Effects on America
An undervalued RMB would provide American consumers
with a lot of artificially cheap goods. Our top consumption
imports from China include small electronics — like telephones, monitors, and automatic data processing (ADP) equipment — as well as
clothing, furniture, and toys, all effectively subsidized by
the Chinese government.
The size of that subsidy depends on how much the RMB
is undervalued, says Mark Perry, an economist at the
University of Michigan-Flint and the American Enterprise
Institute. The United States imported $425 billion in
goods from China in 2012. If the RMB was undervalued by
5 percent — and it has most likely been undervalued by
much more than that in the last decade — U.S. consumers
and businesses saved $21.3 billion, or about $68 per person.
But Perry emphasizes that’s a rough estimate because fewer
Chinese goods would be purchased if the subsidy were no
longer present.
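The arithmetic behind those figures is simple; the per-person number assumes a 2012 U.S. population of roughly 313 million:

\[
0.05 \times \$425\ \text{billion} = \$21.25\ \text{billion} \approx \$21.3\ \text{billion}, \qquad \frac{\$21.3\ \text{billion}}{313\ \text{million}} \approx \$68\ \text{per person}
\]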
In fact, that is the key question in the debate over China’s
currency policy: How much has the policy actually boosted
China’s trade balance? That’s a hard question to answer
because many things affect bilateral trade between nations.
Economist Joseph Gagnon at the Peterson Institute for
International Economics recently took a novel approach.
He looked directly at the dollars that governments spend on
exchange rate intervention — purchases of foreign currency,
which most governments share openly — and estimated
how much showed up in each country’s total trade balance.
He found that the effect of currency interventions on trade
is large: On average, between 60 and 100 cents of every dollar of currency intervention shows up in the trade balance.
But that doesn’t necessarily mean ending China’s currency policy would send that demand back to American
firms. For many goods, particularly the labor-intensive ones
that China excels at producing, “if we don’t buy them from
China, they’re still not going to be produced here,” Frankel
says. And by removing China’s effective subsidy, U.S. consumers could end up paying higher prices by importing more
expensive manufactured goods from other major players like
South Korea. Moreover, some economists argue that the
United States and European governments are also plenty
protectionist, hurting foreign producers via ample subsidies
to the domestic agriculture industry, among others.
The effects on U.S. jobs might start to look bigger if
China’s currency policy encourages its competitors to follow
suit. “By keeping its own currency undervalued, China has
also deterred a number of other Asian countries from letting
their currencies rise very much (if at all) against the dollar
for fear of losing competitive position against China,” economist C. Fred Bergsten of the Peterson Institute said in
congressional testimony. Bergsten and Gagnon estimated
that currency manipulation by more than 20 countries —
with South Korea, Hong Kong, Malaysia, Singapore, and
Taiwan among the most active — has added between $200
billion and $500 billion to the U.S. trade deficit each year.
As for China alone, our bilateral trade imbalance — at
almost a third of a trillion dollars last year — is not entirely
what it seems. China has become a global platform for the
“processing trade,” the name for taking raw materials
imported from the rest of the world and assembling them
into final manufactured goods. The share of Chinese exports
that are produced by foreign-invested enterprises (FIEs) —
operations owned by companies outside of China, like factories that assemble Apple’s iPhone — is now almost 60
percent, up from just 2 percent in the mid-1980s. Estimates
of the amount in wages, land, and returns to capital that
China contributes to that production — its value added —
reach 55 percent at most. In other words, foreigners benefit
from production in China sometimes more than the
Chinese do. According to a study by Robert Johnson at
Dartmouth College and Guillermo Noguera at Columbia
University, our bilateral trade imbalance with China would
look 40 percent smaller if adjusted for value added.

Pressuring China
In sum, the offsetting effects of China’s currency policy
make its net effect on American jobs difficult to assess.
Historically, more immediate — some might say political —
concerns have tended to drive U.S. action against currency
manipulators. Research by Frankel and Wei found that the
United States has been more likely to apply political pressure when its bilateral trade deficit with another country
grows larger, and in election years when unemployment
is high.
Nowadays, action from the United States would probably
start with a declaration that China is a currency manipulator,
a phrase that gained more notoriety during the 2012 presidential campaign. Since the 1988 Omnibus Trade and
Competitiveness Act, the Treasury Department has been
required to publish semiannual reports on suspected currency manipulators. The last report, issued in April 2013,
declined to apply that label to China, citing the RMB’s gradual appreciation over the last decade. The declaration would
open the door to tariffs, capital controls (that could, for
example, prohibit China from buying Treasuries), or trade
sanctions. At a minimum, the label requires the United
States to hold talks with offending governments.
Eight countries have been mentioned in the report since
1988, all of them Asian with the exception of Russia. The
reports received less attention during the East Asian financial crisis of the late 1990s, and didn’t even name any
offenders in some years of the early 2000s. The United
States has ramped up rhetorical pressure on China since
2003, however. Members of Congress proposed tariffs on
Chinese goods, and the Treasury reports have once again
recommended discussions with the Chinese government.
Still, China has not been labeled a manipulator since the
early 1990s.
The international path of recourse is less clear. Since the
end of Bretton Woods, there have been very few cases of
countries successfully pushed into revaluation, and no economically significant ones, Frankel and Wei argued.
Membership rules for the International Monetary Fund
(IMF) indicate that countries should “avoid manipulating
exchange rates … to gain an unfair competitive advantage
over other members,” but the words “manipulating” and
“unfair” are left undefined, and the rules include no enforcement mechanisms. Some people argue that the IMF could
nonetheless publicly criticize manipulators, suspend voting
privileges, or even threaten expulsion. The WTO’s rules do
include a resolution process, but its list of prohibited activities is limited to trade measures like tariffs and quotas, not
so much exchange rates. The WTO process has never been
invoked for currency manipulation, and would require the

cooperation of the IMF, which has thus far declined to act.
Enacting countervailing protectionist measures is tricky
since tariffs on Chinese goods are equivalent to a tax on
U.S. consumers, with skewed distributional effects. Few
Americans work in the manufacturing plants that might
compete with Chinese sellers, but hundreds of millions of
Americans buy Chinese goods. In other words, the benefits
of ending currency manipulation would be concentrated on
a relatively small set of Americans at a cost to millions of
others. For example, a 2009 tariff on Chinese tires saved at
most 1,200 jobs but cost U.S. consumers $1.1 billion in
higher import prices — almost a million dollars per job —
according to Gary Clyde Hufbauer and Sean Lowry at the
Peterson Institute. Slightly higher prices might be considered worthwhile to reduce the severe outcome of job loss,
even if for a relatively small set of people. But protectionism
could have larger unintended consequences. Another recent
IMF study found that the protectionist measures by all
countries enacted during the 2007-2009 recession reduced
global trade in affected product markets by as much as
8 percent.
“Studies repeatedly show that the consumer cost of trade
protection typically exceeds, by a wide margin, any reasonable estimate of what a normal jobs program might cost,”
according to Hufbauer and Lowry.

For China’s Sake
Though the effects on the United States are up for debate,
experts mostly agree the currency policy should eventually
end — for China’s own sake.
The currency policy creates one direct problem for
China: The central bank must accumulate ever more foreign
assets to stop the RMB from appreciating. That creates a
dangerous mismatch on its balance sheet. The PBOC’s
dollar holdings are a significant share of its total assets, and
they are mostly invested in low-yielding U.S. Treasury securities. That leaves the PBOC paying out more on its liabilities
(denominated in RMB) than it earns on its assets (denominated in dollars). How long this can continue depends on its
ability to sustain that funding imbalance. Moreover, it
stands to incur huge capital losses if, or when, the RMB
eventually does appreciate. With the PBOC now holding
possibly more than $2 trillion in dollar assets, those losses
could be extraordinary. The mismatch gets larger each day
that the currency policy continues.
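A stylized calculation conveys the scale of that revaluation risk; the 10 percent appreciation here is purely illustrative, not a forecast:

\[
\$2\ \text{trillion in dollar assets} \times 10\% \approx \$200\ \text{billion}
\]

in capital losses, measured in RMB, if the RMB strengthened by 10 percent against the dollar.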

Is the Fed Pushing Down the Dollar?
After the Fed undertook quantitative easing (QE), the name
for massive asset purchases to reduce U.S. interest rates and
speed the economic recovery, developing countries complained. (The policies began in 2008 and continue today.)
The developing countries argued that QE would weaken the
dollar. That would not only siphon their exports, but also
drive capital to their already inflated asset markets. Brazil’s
finance minister, Guido Mantega, warned in September
2010 that the Fed’s policies had sparked an international
“currency war” that would lead emerging markets to depreciate their own currencies to neutralize the effects. Japan
and Britain have been the target of similar criticisms.
Currency manipulation and large-scale asset purchases
are similar in that they are both strategies to stimulate a
domestic economy. They could even have some similar
byproducts, like a weaker currency and greater exports.
But there’s one critical difference, says economist Joseph
Gagnon at the Peterson Institute for International
Economics: With asset purchases, “you’re trying to increase
total spending, not just grab someone else’s spending.”
Still, negative spillovers from monetary easing are possible, and they happened during the Great Depression. Many
countries supported their economies by expanding the
money supply, which required exiting the gold standard.
Countries that did so saw weaker currencies, which
improved their trade balances at the expense of those that
stayed tied to gold, according to a famous 1985 study
by economists Barry Eichengreen at the University of

California, Berkeley and Jeffrey Sachs at Columbia
University. In that situation, Eichengreen noted in a recent
paper, there may have been an argument for matching currency depreciation with currency depreciation — what some
now call a currency war, but what is more favorably known as policy coordination. That would have given all countries a
domestic boost while avoiding “beggar-thy-neighbor”
effects on trade.
But it works only if countries have experienced the same
shocks. The difference between the 1930s and now is that
economic weakness is not global: Many emerging markets
are booming. Those booms could be overstimulated by
easier U.S. monetary policy, even as it restores global financial markets and developed-country demand for goods from
emerging markets. Currency wars might avoid unwanted
trade and capital flows, but they could add to economic
overheating.
Eichengreen argued that emerging economies should
pursue other options, like fiscal tightening, to offset excessive capital inflows — because in the long run, QE has the
potential to benefit everyone by restoring developed-country health. A 2011 study by the International Monetary Fund
suggested that the net spillover effects of QE on the output
of U.S. trading partners were initially positive, especially due
to improvements in financial markets that added to growth.
In that way, Fed Chairman Ben Bernanke said in a March
2013 speech, QE is not a policy of “beggar thy neighbor” but
rather one of “enrich thy neighbor.”
— RENEE HALTOM


[Chart: Current Account Imbalances: Another Measure of Trade. Current account balances of the United States and China, 1990-2012 (2012 estimated), in billions of dollars (bars) and as a percent of GDP (lines). SOURCE: International Monetary Fund, World Economic Outlook Database, April 2013.]

More broadly, China’s currency policy is just one component of a development strategy that some economists argue
is widely out of balance. At more than 50 percent of GDP,
China’s national saving rate is the highest in the world.
There are several reasons for it, according to Dennis Yang at
the University of Virginia. As growth took off after 2001,
government tax revenue nearly quadrupled; what the
government does not spend adds to national savings. Also,
the effects of China’s movement away from its communist
heritage amplified corporate profitability; those profits were held as retained earnings, another component of
national savings. Meanwhile, household savings are high
both due to demographic factors (especially the one-child
policy and aging population) and less-developed social
safety nets and financial markets. And while investment is
very high in China, as in most fast-growing economies,
saving is even higher. If more of China’s extraordinary
economic growth — averaging 10 percent annually since the
late 1970s — had been used for domestic consumption or investment, its exports would have been substantially lower.
China ensures that exports stay high through more than
just the currency strategy. Regulations require that China’s
substantial foreign direct investment — second only to that
of the United States — be oriented toward export production. For goods that are exported, producers are refunded
both any tariffs paid on intermediate goods and the value-added taxes paid at each stage of production. The effect of
these incentives, many of which have been in place since the
late 1970s, was amplified considerably by China’s entry into

the WTO, after which its exports soared by 25 percent
each year. China’s currency policy also plays a role, as
does its historically cheap labor.
But these patterns can’t go on indefinitely, because
growth by exports doesn’t create enduring wealth.
“Long-term growth will ultimately depend on capital
accumulation, size and quality of the labor force, technological advances, and other institutions,” Yang says.
Several factors could trigger a rebalancing. China is
likely to experience pressure from the international
community while large trade imbalances persist. In
addition, the risk on the PBOC’s balance sheet could
prove too great.
Or, the imbalances could be naturally unwound by a
maturing economy. For example, wages are rising as
China is reaching the limits of how much cheap labor can be
drawn out from its rural provinces. That will reduce China’s
competitive edge in the processing trade. It will also eat
away at corporate profitability and put more income in the
hands of households, who could use it for consumption or
imports, both of which would reduce national saving and
China’s trade imbalance.
Finally, Chinese inflation is rising, a result of the PBOC
creating money to accumulate foreign assets. “If you don’t
allow the currency to appreciate in nominal terms, your
prices go up,” Frankel says. “You end up getting a real appreciation either way.”
The Chinese government is aware of the problems. Its
most recent Five-Year Plans — its national development
initiatives that are a relic from its central planning days —
have indicated a goal of balanced trade, and policymakers
have said they want the RMB to be governed more by market forces. Those goals haven’t yet come to fruition, but the
recession temporarily alleviated the issue. Global demand
for Chinese goods fell, trade imbalances eased (see chart),
foreign money stopped pouring into China, and the PBOC
had less appreciation pressure to offset. Thus, China has
recently had the best of both worlds: a stable exchange rate
without having to pile up additional reserves. But as China’s
trading partners regain strength, Chinese goods will once
again be in demand. Chinese policymakers will have to make a decision, Gagnon
says: “Do we let the renminbi go, or do we go back to the bad
old days?”
EF

READINGS

Bergsten, C. Fred, and Joseph E. Gagnon. “Currency Manipulation, the U.S. Economy, and the Global Economic Order.” Peterson Institute for International Economics Policy Brief No. PB12-25, December 2012.

Eichengreen, Barry. “Currency War or International Policy Coordination?” Manuscript, January 2013.

Frankel, Jeffrey A., and Shang-Jin Wei. “Assessing China’s Exchange Rate Regime.” Economic Policy, July 2007, vol. 22, no. 51, pp. 575-627.

Gagnon, Joseph E. “The Elephant Hiding in the Room: Currency Intervention and Trade Imbalances.” Peterson Institute for International Economics Working Paper No. 13-2, March 2013.

Hufbauer, Gary Clyde, and Sean Lowry. “U.S. Tire Tariffs: Saving Few Jobs at High Cost.” Peterson Institute for International Economics Policy Brief No. PB12-9, April 2012.

Yang, Dennis Tao. “Aggregate Savings and External Imbalances in China.” The Journal of Economic Perspectives, Fall 2012, vol. 26, no. 4, pp. 125-146.




More than 1,200 slot machines
welcome guests at the
Hollywood Casino Perryville
in Perryville, Md.

BY JESSIE ROMERO


Tired shoppers at
the Arundel Mills
outlet mall in
Hanover, Md., between
Washington, D.C., and
Baltimore, can take a
break at one of the
country’s largest commercial casinos. Just feet away
from Bass Pro Shops and
Burlington Coat Factory, in a building that
could hold more than five football fields,
Maryland Live! is home to more than 4,300
slot machines and 122 live table games, such
as blackjack and craps, with a two-story
poker room opening in August. If a gambler
isn’t feeling lucky in Hanover, he can drive
an hour north to the Hollywood Casino
Perryville, two and a half hours east to the
Casino at Ocean Downs, near Ocean City, or
two hours west to the Rocky Gap Casino Resort in
Cumberland. Within the next three years, additional
casinos are scheduled to open in downtown Baltimore and
in Prince George’s County, on the border with Washington, D.C. — making the Free State one of the most concentrated gambling markets outside of Las Vegas.
Gambling has been a heated topic in Maryland politics
since the early 2000s, when gubernatorial candidate Robert
Ehrlich campaigned on bringing slot machines to Maryland.
Ehrlich was elected, but failed to persuade the state legislature to pass his bill. In 2008, however, voters approved a
referendum, backed by new governor Martin O’Malley, to
allow up to five slots-only casinos to open in the state.
Just four years later, gambling was once again the subject of
special legislative sessions and a fierce political campaign,
which eventually resulted in a major expansion of the state’s
casino industry.
Supporters view casinos as a surefire way to generate
tax revenues for the state and jobs for the surrounding
communities. Opponents argue that these benefits are
greatly overstated, not to mention outweighed by significant
social costs. The reality is probably somewhere in the middle, but legislators in Maryland and many other states are
hoping that their bets pay off.

Going All In
The most expensive
political campaign in
Maryland history wasn’t
about a person — it was
about a business. In
November 2012, voters
approved legislation,
passed by the General
Assembly that August,
authorizing the construction of a sixth
casino in Prince George’s County and
expanding casino gambling to include
live table games. Between August and
November, supporters and opponents
spent more than $90 million — as much as
was spent on the past four governors’ races
combined — to convince voters of their
position on Question 7, as the ballot initiative was known. On the “pro” side was
MGM Resorts, which wants to build an $800
million casino at the National Harbor resort on the Potomac
River. On the “con” side was Penn National Gaming, which
wants to add a casino to its Rosecroft Raceway, and has
argued that the political process is tilted in favor of giving
the sixth casino license to MGM and National Harbor. Penn
National also owns the Hollywood Casino at Charles Town
Races in West Virginia and the Hollywood Casino Perryville.
Maryland’s battle was fierce, but it wasn’t unique, says
James Karmel, a gaming consultant and economic historian
at Harford Community College in Bel Air, Md. “It’s very
rarely the casino interest versus people who just don’t
like casinos. Almost always it’s one casino interest versus
another casino interest, because the money is so big.”
Consumers spent $37.3 billion at commercial casinos in
2012, nearly as much as the prerecession peak of $37.5 billion,
according to the American Gaming Association (AGA), a
trade association. Commercial casinos include riverboat
and dockside casinos, racetrack casinos, and stand-alone
casinos. Tribal casinos, which operate on Indian reservations
under federal regulation, generated $27 billion in revenue in
2011, the most recent year for which data are available.
Of course, commercial casino owners don’t get to keep a
large portion of their revenue. In Maryland, the state keeps

[Chart: Revenue Shares from Maryland Casino Gambling, April 2013. Gross revenue: $68,943,512. Education Trust Fund, 35%; Casino Expenses, 37.9%; Casino Profit, 14.5%; Race Track Purses, 5.0%; Local Impact Grants, 3.9%; Maryland Lottery and Gaming Control Agency, 1.4%; Race Track Facilities, 1.2%; small, minority-owned, and women-owned businesses, 1.1%. NOTE: April 2013 was the first month casinos operated table games. SOURCE: Maryland Lottery and Gaming Control Agency.]

67 percent of the slot machine revenue and 20 percent of the
table game revenue, one of the highest rates in the nation.
The 2012 law allows Maryland Live! and the forthcoming
Baltimore casino — the ones closest to a sixth casino in
Prince George’s County — to keep about 8 percent more
of their slots revenue as compensation for the added competition. The money that casinos give to the state does not
include property taxes or corporate income taxes, which
must be paid separately. (Tribal casinos are not subject to
state or federal taxes, although the state compacts that
govern tribal casinos generally include a revenue-sharing
agreement.)
“People think, oh, casinos take in so much money,” says
Jennifer Miglionico, director of marketing at Hollywood
Casino Perryville. “But they don’t realize how much we give
back out.”
The 2008 legislation established an education trust fund,
which initially received 49 percent of the total gambling
revenue. With the passage of Question 7, the trust fund will
receive 39 percent of total gambling revenue, according to
projections by Maryland’s Department of Legislative
Services (DLS). The remainder of the state’s money goes
toward supporting the horse racing industry, local impact
grants, and small, minority-owned, and women-owned
businesses. Through May 2013, total slot machine revenue
was more than $620 million. (See charts.)

Winning Big…
Casinos have proliferated rapidly in the United States.
Before 1989, gamblers had to travel to Nevada or Atlantic
City, N.J. But that year, a casino opened in Deadwood, S.D.,
in a bid to revitalize the struggling town. Today, more than
500 commercial casinos operate in 22 states, with
Massachusetts slated to become the twenty-third. Tribal
casinos operate on Indian reservations in 28 states, including
the Fifth District state of North Carolina.
Maryland and West Virginia are the only Fifth District
states with commercial casinos, but all five states and the
District of Columbia operate state lotteries, which combined have generated $33 billion for their states’ budgets.
Maryland’s lottery is the longest-standing, at 40 years, while
North Carolina only started its games in 2005.
Whether it’s scratch-off tickets or a glitzy casino, state
lawmakers legalize gambling for a combination of three
reasons: reducing fiscal stress, keeping gambling revenues
and taxes in state, and attracting tourism, according to an
analysis of states’ decisions by Peter Calcagno and Douglas
Walker of the College of Charleston and John Jackson of
Auburn University. And once one state allows gambling, its
neighbors tend to follow suit. “[Legislators] realize that
people are still gambling, and figure, well, if we can get an
extra $500 million for the budget, let’s let people gamble
here as opposed to some other state,” says David Schwartz,
director of the Center for Gaming Research at the
University of Nevada, Las Vegas.
That was the case in Maryland, where Governors Ehrlich
and O’Malley both supported slots as a way to close large
budget deficits. Maryland also found itself in the midst of a
casino arms race: West Virginia and Delaware began allowing slots gambling in 1995, and Pennsylvania followed suit in
2004. No sooner did Maryland legalize slots than its neighbors responded by allowing table games; recapturing the
revenue lost to other states thus became a major impetus
for the 2012 expansion. Marylanders spent more than
$1 billion at Charles Town Races and Slots between 2003 and
2012, in effect generating tax revenue for West Virginia
instead of Maryland, according to a study by Sage Policy
Group, a Baltimore-based economic consulting firm. Sage
estimated that if Question 7 did not pass, Maryland residents could spend an additional $1.5 billion at Charles Town
over the next 10 years. (Sage received funding from a pro-Question 7 group.)
The DLS estimated that the addition of table games
and the sixth casino would increase gambling revenue to
$1.9 billion by 2017, about $700 million more than slots
alone. The education trust fund would receive $750 million
in 2017 — $170 million more than would be generated by the
five existing casinos.
Gaming industry supporters also point to casino gambling as an effective way to create jobs. The casino industry
supported about 820,000 jobs in 2010, according to a study
prepared for the AGA by The Brattle Group, a consulting
firm. About 350,000 people were employed directly by the
casinos, with the remainder employed by suppliers and
other support industries. The study also noted that casinos
are more labor-intensive — they employ more people per
dollar of revenue — than many other industries. A report
prepared for Massachusetts by the Spectrum Gaming Group
found that casinos have a multiplier effect of about 1.5,
meaning that for every job created at a casino, additional
spending in the economy generated another 0.5 jobs. The
Brattle Group puts the multiplier at 1.92. In Maryland’s case,
supporters of Question 7 claimed that the expansion would
generate 2,000 construction jobs and 10,000 permanent
jobs across the state. So far, Maryland Live! has hired about
2,400 permanent employees, half of whom were a result of
Question 7. Hollywood Casino Perryville has added about
140 employees to its original 300, according to Miglionico.
There are no estimates yet of how many indirect jobs might
have been created.
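As a back-of-the-envelope illustration of what such multipliers imply (applying the published multipliers to Maryland Live!'s head count; these are not estimates from either study):

\[
2{,}400\ \text{direct jobs} \times 1.5 = 3{,}600\ \text{total jobs}, \qquad 2{,}400 \times 1.92 \approx 4{,}600
\]

so each multiplier adds, respectively, about 1,200 or 2,200 indirect jobs to the direct count.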

[Chart: Maryland Slot Machines Revenue, in millions of dollars, October 2011-May 2013, shown statewide and for Maryland Live!, Hollywood Casino Perryville, and Ocean Downs. NOTE: Data are not included for the Rocky Gap Casino Resort, which opened in May 2013. SOURCE: Maryland Lottery and Gaming Control Agency; UNLV Center for Gaming Research.]

… Or Going Bust?
Economic-impact studies on the effects of casino gambling
depend, however, on a number of assumptions about how
consumers and businesses will respond — and those assumptions might or might not prove to be true. For example,
most studies assume that the new casinos will attract out-of-town visitors, but as more states legalize casinos, there is less
reason for people to travel out of state to gamble. In addition, research suggests that many of the people who visit
casinos are day-trippers who rarely venture beyond the
casino, making it unlikely that they generate a large
multiplier effect.
Many impact studies also fail to account for the net
effect on jobs and tax revenues. As Karmel says, gaming has
become “an everyday thing, just part of your routine entertainment options, like going to a restaurant or a movie or a
ballgame.” While that has been good for the casino industry,
it’s possible that consumers are shifting their spending
from other forms of entertainment to casinos, rather than
increasing their total amount of entertainment spending.
If that’s the case, any job gains or increased tax revenue
from casinos could be offset by job losses or decreased
taxes from other businesses, according to Earl Grinols, an
economist at Baylor University. “When one sector of the
economy expands at the expense of another sector of the
economy, you’ve merely shifted the location of jobs,” Grinols
says. “Casinos don’t create people. They’re merely hiring
people who would already be working someplace else.”
Of course, that is less likely to be true when unemployment is high, as at present. In that case, casinos could
provide a benefit if they put some people back to work more
quickly than would otherwise have been the case. But in the
long run, casinos might not have a large effect on economic
growth. In a separate study, Walker and Jackson found that
casinos do not have any measurable effect on state per
capita income. While there might be an initial boost in
employment or tax revenue, they conclude that “the average
state should not expect any long-term growth effects from
legalizing casino gambling.”
Whatever the financial benefits of casinos, it’s possible
that they are outweighed by the social costs of pathological
gambling and higher crime. Compulsive gamblers tend to
commit more crimes and are more likely to commit suicide;
they also file for bankruptcy and get divorced at higher rates
than the rest of the population. Areas that have a casino
within 50 miles have double the rate of problem and pathological gambling, according to the National Gambling
Impact Study Commission, which was convened by
Congress in 1996. It’s not certain that these links are causal;
people with gambling problems often have other psychological problems, and an already-compulsive gambler might
move to an area with a new casino, rather than the casino
creating the compulsion. Still, the commission concluded in
its 1999 report, “As the opportunities for gambling become
more commonplace, it appears likely that the number of
people who will develop gambling problems also will
increase.”
Casinos also could lead to more crime. Crime rates
increase in counties where casinos open, and overall
between 5 percent and 30 percent of crime in a county can be
attributed to the casino, according to research by Grinols
and David Mustard of the University of Georgia. In another
study, William Evans of the University of Notre Dame and
Julie Topoleski of the Congressional Budget Office found
that while employment in the surrounding county increased
by 26 percent four years after an Indian casino opened,
bankruptcy rates, violent crime, auto theft, and larceny all
increased by 10 percent.
Other studies, however, have found that counties with
and without casinos have the same crime rates, or that crime
increases in some counties while remaining the same in
others. Part of the discrepancy stems from the fact that it is
difficult to distinguish the effects of casinos specifically
from the effects of more visitors generally. Also, differences
in population, law enforcement, or casino regulation might
affect how a community responds to the introduction of
a casino.
Despite the potential costs of gambling, arguments based
on the economic impact tend to prevail. “You can easily
quantify the benefits; you can say, here’s the gaming revenue,
here’s the tax, here’s the employment,” says Schwartz.
“When you look at the social costs, that’s a much more
nebulous area.”

Breaking Even
So far, Maryland Live! is drawing huge crowds. In March, the
casino generated more than $44 million in slots revenue —
more than any other casino in Delaware, Maryland,
New Jersey, or Pennsylvania. And anecdotal evidence
suggests that Delaware and West Virginia already are losing
continued on page 29

BY RENEE HALTOM

The nation’s first mental hospital, the Public
Hospital for Persons of Insane and Disordered
Minds, part prison and part infirmary, received its
first patient in 1773 in Williamsburg, Va. A century
later, 110 mental hospitals around the country were up and
running.
The system kept growing. By 1950, well over half a million
Americans lived in state mental hospitals — about a quarter as many as the number of inmates in the entire federal, state, and local jail system today. Unfortunately, those hospitals were no place to
get well. They were often filthy. Psychotropic drugs and
tranquilizers hadn’t yet hit the market, so the halls were
filled with people dazed and rambling from their psychoses.
The science of the time offered electroshock therapy, lobotomies, and little else by way of treatment. Most of the staff
were unskilled custodians, and many patients were locked
away and never expected to reenter society.
That’s about when the downsizing of state mental
hospitals began. As of 2010, just 46,000 people resided in
roughly 240 state and county psychiatric hospitals, according to the Substance Abuse and Mental Health Services
Administration (SAMHSA). That’s a small number compared to the 45.6 million adults who have some form of
mental illness, such as anxiety, mood, or impulse control disorders, and even compared to the 11.5 million adults with
serious mental illness, such as schizophrenia, bipolar disorder, psychotic depression, or other debilitating diseases.
Thanks to better science on the treatment of mental illness,
the vast majority of people with even serious mental
illnesses can live full, productive lives, a virtual impossibility
50 years ago.
But it’s clear that too many people still lack adequate
mental health care. The mentally ill are overrepresented
in the bleakest walks of life. One-fifth of the population has a mental
illness, according to SAMHSA, but they make up more
than half the inmates in jails and prisons, and one-third of
the homeless population. Suicide claims more lives each
year than car accidents, and more than twice as many as
homicides. And there are unspeakable costs that the
people of Virginia Tech, Aurora, Colo., and Tucson, Ariz.,
won’t ever forget.
People with mental illness have a better chance than ever
at thriving. But do we know how to deliver care that maximizes quality of life for those who aren’t?

The Challenge
It’s not easy to say what an efficient mental health care
system would look like, according to Harvard Medical
School economist Richard Frank, co-author of the 2006
book Better But Not Well with Columbia University Mailman
School of Public Health economist Sherry Glied. One can
point to some good signs: “We’ve virtually doubled the rate
at which people who have an illness get treated,” Frank says.
“We’ve also increased the chances that people who get treatment get the treatment that is likely to make them better.”
Science is responsible for much of that; more treatments
are available, and the side effects of medication are more
tolerable. But we’ve also expanded and improved the
system’s ability to deliver care. Before the 1950s, treatment
was mostly limited to state mental hospitals and about 7,000
psychiatrists, many located in small private practices in
urban areas, Frank and Glied wrote. There were also 13,500
psychologists and 20,000 social workers, but most didn’t
provide mental health care. Today, there are more than half a
million licensed psychiatrists, psychologists, counselors,
family therapists, nurses, and social workers working across
4,800 public and private mental health organizations that
provide varying intensities of care. More than 31 million
people get mental health treatment each year. In addition,
society is more respectful of patients’ rights, and the stigma
of mental illness is gradually eroding.
An ideal system creates opportunities for as many people
as possible to live independently, Frank says. “On the other
hand, for both humane reasons and externality reasons, you
don’t want to let them fall too far.”
It would be prohibitively expensive, in all likelihood, to
reduce the number of people who fall through the cracks to
zero. Still, most people would agree that the sheer volume of
bad outcomes makes it clear that our system needs improvement. The number of mentally ill people in jails and prisons
is now orders of magnitude larger than the number in
mental hospitals. To some extent, that’s because the mentally ill are twice as likely to abuse drugs, which can lead to jail.
But they are unlikely to get better there. Only a third of
inmates with mental illness receive any treatment — hospitalization, medication, or therapy — once incarcerated.
Potentially even worse off are those who don’t enter any
system of care. Two out of five people with serious mental
illness receive no treatment. In 2010, more than 38,000
people committed suicide, according to the Centers for
Disease Control and Prevention. In 2011, more than 1 million people attempted it, and 8.5 million had thoughts of it.
A third of the homeless population is mentally ill, according
to the Treatment Advocacy Center, a nonprofit that advocates involuntary treatment for some severely ill people.
In economics terms, involuntary treatment is justified in
part by the externalities associated with mental illness — the
fact that people who fall through the cracks tend to become
society’s problem, whether through crime, homelessness, or
the drain on public resources. Externalities aside, it is also
thought to be justified by the fact that individuals with some
severe mental illnesses lack the capacity to make rational
decisions about treatment that could improve their lives.
In an attempt to safeguard against overuse of involuntary
treatment, many state laws require that a person have
already exhibited dangerous behavior in order to receive
treatment against their will.
Critics such as the Treatment Advocacy Center argue
that overly high standards for involuntary commitment
could cost the system resources down the line. A famous
1999 study out of Duke University found that programs like
court-ordered outpatient therapy and medication reduced
hospital admissions among people with schizophrenia and
other psychotic disorders by 72 percent. Critics also contend
that in the absence of involuntary commitment, some of the
mentally ill are, in effect, sentenced to life in the streets. But
to other advocates, patients’ liberties outweigh the public
and private benefits of commitment.
Even in a system that brought all mentally ill people into
treatment, it would be a challenge to treat all people effectively because the diseases are so complex. People with
identical diagnoses can have vastly different symptoms and
needs. “Even if you know the genes, the environment and
early life experiences all figure into it,” Frank says.
Health care markets in general have problems that prevent buyers and sellers from coming together to negotiate
services efficiently. The science on this goes back to economist Kenneth Arrow in 1963, who was the first to explain
why efficient health care systems are so scarce. Uncertainty
is a key component: No one knows his chances of getting
sick, so people pool their risk through insurance. But the
health insurance market is riddled with adverse selection
(sicker patients will tend to buy more insurance, and insurers
can’t immediately identify them) and moral hazard (once
insured, people overuse service).
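A toy calculation shows how adverse selection can unravel a risk pool. The sketch below uses invented costs, population shares, and willingness to pay, purely for illustration:

# Two risk types an insurer cannot tell apart (all numbers invented).
low_cost, high_cost = 1000, 5000   # expected annual claims per person
share_high = 0.2                   # fraction of buyers who are high-risk

# Break-even premium if everyone buys: the population-average cost.
pool_premium = share_high * high_cost + (1 - share_high) * low_cost  # 1800

# Low-risk buyers will pay at most their expected cost plus something
# extra for peace of mind; above that, they drop coverage.
low_risk_willingness = low_cost + 500   # 1500

if pool_premium > low_risk_willingness:
    # Low-risk buyers exit, only high-risk buyers remain, and the
    # break-even premium jumps to their cost: the pool unravels.
    pool_premium = high_cost

print(pool_premium)  # 5000: coverage ends up priced for the sick only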
Market failures for mental health care are the same, only
worse. Adverse selection is more pronounced, and studies
show that uptake of services when someone else pays is at least twice as high for mental health as for other health services. As many as 8 percent of people who seek mental
health treatment have no diagnosable condition at all.
Insurers counteract market failures by providing better
coverage for minor psychological conditions to attract
low-risk consumers. Many insurers ration care, but more
dramatically for mental health. Since the 1990s, that has
been done through caps on service facilitated by managed
care organizations — HMOs and other intermediaries
between patients and doctors. States, in turn, have counteracted rationing with parity laws. These laws force plans to
cover a certain level of mental health service, different in
every state. The result is that almost all private insurance
plans cover mental health services, but private insurers cover
just over one-quarter of the total expenditure on mental
health.
In most cases, private insurance doesn’t even enter the
picture. Many disorders can make it difficult to hold a job,
and life stressors can exacerbate genetic conditions. A recent
study in the Journal of Mental Health Policy and Economics
found that households with a severely mentally ill person are
three times as likely to be poor, and they fall further below
the poverty line. For a variety of reasons, “having a severe
mental illness makes you poor, and being poor also increases
your chance of having a mental illness,” Frank says.
That means Medicaid, Medicare, and Social Security
disability programs are more likely to be involved. Public
payers cover 58 percent of mental health spending,
compared to 46 percent for overall health (see chart).
The majority of public funds for mental health come
through Medicaid, a federal program run by the states. Most
mental health services are considered optional under
Medicaid rules, meaning states get to decide which services
are covered. The funds are matched dollar for dollar by the
federal government, or more in poor states. For people who
don’t qualify for Medicaid, states use general mental health
funds to pay for treatment. (For a quick look at how recent
health care reform is expected to affect mental health care,
please see the online supplement to this article.)
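That match works through a statutory formula, the Federal Medical Assistance Percentage (FMAP), which rises as a state's per capita income falls relative to the national average. A minimal sketch of the calculation follows; the formula and its 50 percent floor and 83 percent ceiling come from statute, while the income figures are invented:

# FMAP: the federal share of each dollar a state spends on Medicaid.
# Statute sets FMAP = 1 - 0.45 * (state income / U.S. income)^2,
# with a floor of 50 percent and a ceiling of 83 percent.
def fmap(state_pci, us_pci):
    share = 1 - 0.45 * (state_pci / us_pci) ** 2
    return min(max(share, 0.50), 0.83)

US_PCI = 40000  # hypothetical U.S. per capita income

print(fmap(40000, US_PCI))  # average-income state: 0.55 federal share
print(fmap(30000, US_PCI))  # poorer state: ~0.75, "more in poor states"
print(fmap(55000, US_PCI))  # richer state: hits the 50 percent floor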

Shifting Payers and Priorities
Entitlement programs were never intended to be a major
provider of mental health services — they just came along at
the right time. Medicare and Medicaid were created in 1965,
and Social Security’s Supplemental Security Income (SSI)
program was created in 1972. This was just as state mental
hospitals were in steep decline.
State mental hospitals downsized mostly because care
got better. The first antipsychotic drugs hit the market in
the mid-1950s, which for the first time allowed people to
be stabilized, rehabilitated, and discharged. The average
hospital stay of six months in 1954 dwindled to just 23 days
by 1980. That meant fewer beds and smaller hospitals. By 1980, the number of long-term residents in state mental institutions had dropped from a mid-1950s peak of 559,000 to just 154,000.
At the time, there were few alternatives for care. That
began to change in 1963, when President Kennedy launched
a system of community-based mental health care. Before
then, mental health care was almost entirely a state issue.
States were legally responsible for funding, so state legislators effectively set mental health policy by allocating their
budgets, mostly devoted to state mental hospitals. Kennedy
tripled federal funds to build a system of community mental health centers (CMHCs). Within 15 years, there were 650 CMHCs covering 43 percent of the population and serving 1.9 million people per year.
[Chart: Who Pays Health Care Dollars and How They're Spent. Two panels compare mental health spending ($112.8 billion in 2005) with total health spending ($1,850.4 billion in 2005). "Who Pays" breaks each total out by payer: out of pocket, private insurance, other private, Medicare, Medicaid, other federal, and other state and local. "Where the Money Goes" breaks each out by provider: hospitals, physicians and other professionals, nursing homes, home health, all other providers, retail prescription medication, insurance administration, and dentists and other medical products. SOURCE: Mental Health United States, 2010. Substance Abuse and Mental Health Services Administration.]
Though they were never intended to be used that way,
entitlement programs made the federal government a permanently bigger player in mental health. When hospitals
downsized, entitlement programs stepped in to fill the void.
Medicaid and Medicare funded medical needs, and SSI
provided income for food, housing, and other nonmedical
services. Entitlement programs were designed to make it
hard for states to shift the funding burden on to the federal
government. Most notably, Medicaid cannot be used for
treatment in state mental hospitals. States got creative; they
shifted patients to nursing homes and general hospitals. The latter doubled their psychiatric beds within barely more than a decade of Medicaid's launch. Within six years of its creation,
Medicaid accounted for 16 percent of all mental health
spending, and it was 28 percent in 2005 (see chart). The
share of state spending fell from more than half in the 1950s
to less than a quarter by the early 1970s, where it has stayed.
The federal government’s larger role has also helped
determine the focus of care, says Howard Goldman at the
University of Maryland School of Medicine. The old state
mental hospitals focused on the sickest and most disabled
people. Kennedy’s community system, by contrast, was
focused on overall “mental health,” which the World Health
Organization defines as mental wellness, not just the lack of
an active disorder. For many people, this was a great thing.
For the first time, even people with mild depression or anxiety had treatment options.
But it proved difficult for people with serious illnesses to
navigate the full range of services that they needed — both
medical services and nonmedical support like food, housing,
help finding and keeping a job, and social lifelines to aid the
reintroduction into society. Many resided in nursing
homes and slum apartments. In 1977, the Government
Accountability Office wrote a scathing report on the CMHCs' lack of support for people with chronic mental illness, and federal and state mental health services began to
focus again almost exclusively on people with the most
debilitating conditions. The gentrification of cities in the
late 1970s and early 1980s brought many of them onto the
streets, creating a visible problem that sapped the remaining
public support for the community system. Appropriations
to CMHCs were pulled under President Reagan’s deficit
reduction efforts, and replaced with smaller block grants.
Today, the vast majority of federal spending on mental
health services comes through Medicaid.
The pendulum has started to swing back, Goldman says.
“There has been a drive over the last 20 years to expand the
scope of who has a mental illness.”
The profession follows the American Psychiatric
Association’s Diagnostic and Statistical Manual of Mental
Disorders, known as the DSM, to diagnose patients and
assign treatments. It now includes behavioral conditions,
like attention deficit disorder. The fifth release, issued in
May 2013, is even more expansive, including conditions
like bereavement and caffeine withdrawal. Although
public mental health services are not directed at the same
expanded list of conditions, there is a greater interest in
early intervention and bringing people into the system,
Goldman says.

Dwindling Funds
As the definition of mental illness expands, funding is being drained by the revenue crisis that has afflicted state governments since the onset of the Great Recession.
At $37.4 billion, mental health expenditures were 2.3 percent of total state budgets on average in 2010 (see chart). But
those numbers are falling. States cut $4.35 billion from
mental health spending from 2009 to 2012, according to the
National Association of State Mental Health Program
Directors, which represents the states’ mental health agencies. Over the same four-year period, the state system saw a
nearly 10 percent increase in utilization in publicly financed
inpatient and outpatient behavioral health treatment
services. South Carolina cut funding more than any other
state; its general fund budget for mental health dropped
by 39 percent between 2009 and 2012, according to a separate study by the National Alliance on Mental Illness
(NAMI), an advocacy group. Washington, D.C., was among
the top 10 at 24 percent. Nine other states cut funding by
more than 10 percent.
“The states are devastated by the budget cuts. There’s
just no nice way to say it,” says Lisa Amaya-Jackson at
Duke University’s School of Medicine.

The poor are the most vulnerable, but budget problems
spill over. Non-Medicaid spending tends to be cut first, since
cuts to Medicaid lose the federal matching funds too. That
happened when stimulus funds ran out in June 2011 and
pulled $14 billion from state Medicaid programs. One hospital in Phoenix, Ariz., reported a 40 percent spike in
emergency room psychiatric episodes after services were
eliminated for 12,000 people with serious mental illnesses
who did not have Medicaid, NAMI reported.
Agencies are starting to use evidence-based treatment
(EBT) as a way to protect services from budget cuts. The
EBT philosophy emerged in the 1990s for overall health
care, but has amassed wide support in mental health in just
the last decade. The approach is based on rigorous follow-through of each step of a scientifically validated regimen.
As the name suggests, not only is EBT arguably more
effective, but it is perceived as such by state legislatures
deciding where limited state funds should go. Amaya-Jackson is the director of Duke's Evidence-Based Practice
Implementation Center (EPIC), which trains clinicians on
EBT. After following through on the training program,
EPIC puts the clinician’s name on a public roster. AmayaJackson says that agencies in North Carolina have become
more eager to partner with EPIC in lean times because
appearing on that roster signals accountability to legislatures. It has also created a network of clinicians within the
state that third-party payers — Medicaid and insurance
companies — want to work with, maximizing the clinicians’
reimbursement rates.

[Chart: Per Capita Spending by Fifth District States. State mental health agencies spent roughly $37 billion on care in 2010; the chart shows per capita spending and its sources (state general funds, federal Medicaid, state Medicaid match, and all other funds) for the Fifth District jurisdictions and the U.S. average, with each state's national rank for total mental health agency spending in parentheses: DC (1), NC (11), MD (12), VA (31), WV (40), SC (44). SOURCE: National Association of State Mental Health Program Directors Research Institute, and author's calculations.]

Searching for Welfare Gains
It is not clear what overall level of spending for mental health would be optimal. Among countries that spend about what we do on mental health as a share of GDP, some do better and some do worse, as measured by homelessness, incarceration rates, and the number of people who get EBT, Frank says. Australia matches our spending levels, and is known for its effective system.
With any social welfare problem in which resources are scarce, there are usually ways to squeeze blood from a turnip by distributing resources more efficiently. For example, effective treatment could prevent many of the socially destructive behaviors that land people in prison. North Carolina had 3,300 public and private inpatient psychiatric beds in 2007. But the previous year, more than 5,500 inmates in the state's prisons — 14 percent of the state's prisoners —
had a serious mental illness. Budget cuts have already
removed well over 3,000 of the nation’s psychiatric beds,
more than 6 percent of the total.
The combination of mental illness and substance abuse is a particularly dangerous area, Frank says, responsible for many of the mass shootings in recent history. "Someone with
schizophrenia is more likely to be a victim than a perpetrator, and they are no more likely to be perpetrators than the
rest of us.” But if you combine schizophrenia with substance
abuse, they are much more likely to inflict harm. “The issue
is that, unfortunately, people with schizophrenia are more
likely to abuse substances.”
In many cases, the difficulty is getting people into the
treatment system in the first place. That means treating illnesses before they snowball into bigger problems, especially
for children; the average onset of mental illness is at just
14 years old. There are also big gains from treating mothers
with depression, since children with depressed mothers do
worse in school and are more likely to become depressed
themselves. “That’s a cheap fix,” Frank says. “It’s maybe
$1,200 to get effective treatment for depression. It’s not
very expensive to get people decent treatment for your common mental illnesses.”
Though there is no obvious wholesale fix to the system,
here’s the good news: A lot of progress has been made in a
very short amount of time. We have good ideas of how to
treat mental illness, and how to enable people to live controlled, productive lives, and we have greatly improved the
rate at which people enter the system. By further improving
the ability of markets to allocate care, there is hope of further driving down the number of people with mental illness
who are imprisoned by their diseases.
EF

READINGS
Frank, Richard G., and Sherry A. Glied. Better But Not Well: Mental
Health Policy in the United States Since 1950. Baltimore, Md.: The Johns
Hopkins University Press, 2006.

Goldman, Howard H., and Gerald N. Grob. “Defining ‘Mental
Illness’ In Mental Health Policy.” Health Affairs, May 2006, vol. 25,
no. 3, pp. 737-749.

Frank, Richard G., and Thomas G. McGuire. “Economics and
Mental Health.” In A.J. Culyer and J.P. Newhouse (eds.), Handbook of
Health Economics, 1st Edition, Vol. 1. Amsterdam: Elsevier, 2000,
pp. 893-954.

Honberg, Ron, Angela Kimball, Sita Diehl, Laura Usher, and Mike
Fitzpatrick. “State Mental Health Cuts: The Continuing Crisis.”
National Alliance on Mental Illness, November 2011.


Is saving the environment the best way to boost employment?
BY JESSIE ROMERO

Green jobs aren’t just solar panels and windmills. In many cities,
garbage collectors who pick up recycling are considered green workers.
26

ECON FOCUS | SECOND QUARTER | 2013

attempt to achieve both economic and environmental
goals might not be the most effective way to achieve either
of them.

What Does “Green” Mean?
Before you can create a green jobs policy, you have to know
what a green job is. But that’s not a simple task. “If we think
about someone putting solar panels up on roofs, you could
say, ‘Well sure, for sure that’s a green job,’” says Robert
Pollin, an economist at the University of Massachusetts
Amherst. “On the other hand, that person is probably an
electrician who is spending 70 percent of his or her time
doing something other than putting up solar panels.”
Another question: If a job is not directly related to green
output — such as the accountant at a solar panel firm —
should it be counted as green?
The answer is “yes,” according to the Bureau of Labor
Statistics (BLS), which began collecting information on
green jobs in 2010. (The BLS ended the program in March
2013 in response to mandatory budget cuts.) The BLS has
defined two categories of green jobs: “green goods and services” jobs and “green technologies and practices” jobs. The
former are “jobs in businesses that produce goods or provide
services that benefit the environment or conserve natural
resources” — including that accountant. The latter are “jobs
in which workers’ duties involve making their establishment’s production processes more environmentally friendly
or use fewer natural resources.” About 3.4 million people,
amounting to 2.6 percent of total employment, had green
goods and services jobs in the United States in 2011 (the
most recent year for which data are available). About
850,000 people were employed in green technologies and
practices jobs. Because these data are gathered via different
surveys, there is some overlap between the groups; for example, someone working on a green process at a company that
produces a green good would be counted in both surveys.
Thus, green jobs aren’t all solar panels and windmills.
According to the green goods and services survey, the largest
green occupational category, with about 475,000 jobs, is
“transportation and warehousing.” The category includes
nearly 300,000 city and school bus drivers, who are considered green because they reduce the number of individual
drivers on the road. (See chart.) More than 350,000 people
are employed in “administrative and waste services,” a category that includes garbage collectors, who in many cities also
pick up recycling. Green workers also are found on organic
farms, at nuclear power plants, and at steel mills. Only about
5 percent of green jobs are in the renewable energy sector,
according to a 2011 analysis by Mark Muro, Jonathan
Rothwell, and Devashree Saha at the Brookings Institution,
which used a green jobs definition similar to the BLS's. The Brookings researchers counted 2.7 million green jobs overall.

[Chart: Green Jobs by Occupation, 2011. Green goods and services employment by industry, split into public and private sector jobs: manufacturing, construction, transportation/warehousing, professional/scientific services, administrative/waste services, public administration, trade, utilities, education/health services, leisure/hospitality, management of companies/enterprises, natural resources/mining, other services, and information. NOTE: Industries are based on the North American Industry Classification System (NAICS). SOURCE: Bureau of Labor Statistics "Green Goods and Services" survey.]
More than 350,000 green jobs are in “public administration,” including administering and enforcing environmental
regulations — which explains why Washington, D.C., has the
highest share of green jobs in the country, 5.1 percent.
(California has the highest absolute number of green jobs
with 360,000.) In Maryland, about 3.7 percent of jobs are
green, and in the rest of the Fifth District the share is close
to the national average of about 2.6 percent.
Nearly half of green jobs are held by workers with a high
school degree or less, compared to 37 percent in the United
States overall, according to the Brookings report. At the
same time, the Brookings researchers found that median
wages in the “clean economy” were 13 percent higher than
median U.S. wages overall. Green firms don’t necessarily pay
more for the same work, but green jobs tend to be in better-paying industries and better-paying occupations, according
to Brookings, which suggests that green jobs might offer
better opportunities for lower-skilled workers.

Green Policy
Interest in green jobs was especially high during the recession of 2007-2009, when numerous studies proposed green
jobs as the cure for a flagging labor market. In 2008, for
example, Pollin and several colleagues estimated that $100
billion allocated to energy efficiency and renewable energy,
split among tax credits, direct government spending, and
loan guarantees, would generate 2 million new jobs. A 2009
report published by the Peterson Institute for International
Economics, a nonpartisan research institution, projected
that every billion dollars invested in “green recovery” would
generate more than 30,000 jobs. Many other studies were
similarly optimistic.
The ARRA stimulus legislation did include about
$90 billion for a variety of environmental initiatives. The
chief components were $20 billion for energy efficiency
measures, such as weatherizing homes; $27 billion in tax
credits and loan guarantees for renewable energy; $18 billion
for transit improvements, including building high-speed rail
lines; and $11 billion for modernizing the electric grid.
The remainder of the money went toward advanced vehicle
development, green job training, and other programs.
It is difficult to determine precisely what effect the stimulus had on the economy, much less what effects can be
traced to specific provisions. According to a report by the
liberal Economic Policy Institute and the BlueGreen
Alliance, an association of labor unions and environmental
organizations, the green portions of the bill saved or created
1 million jobs, including indirect and induced jobs. One key
assumption of the report’s model, however, was that all
$90 billion of the authorized funds actually made their way
into the economy. But only about half of the money actually
has been spent; many agencies and potential recipients
found it difficult to comply with the application and reporting requirements. “The stimulus was coming into a situation
where the level of government investment was negligible. To
go from spending one or two billion to $90 billion is really
hard,” Pollin says. “The standards were pretty high; it took
six months just to figure out how to write a spreadsheet.”
Although direct government spending toward green
jobs was relatively low prior to the ARRA, the linking
of environmental and economic objectives predates the
Great Recession. In 2007, for example, Congress created the
Advanced Research Projects Agency within the Department
of Energy, known as ARPA-E. The agency was modeled after
DARPA, the research arm of the Department of Defense,
which supported the research that led to the development of
the Internet, among other technologies. ARPA-E provides
funding to energy projects that are not yet ready for private
sector investment. First among the agency’s listed goals is
not protecting the environment but rather enhancing the
United States’ economic prosperity.
Since 2007, the Department of Energy also has run a loan
guarantee program devoted to helping renewable energy
projects reach a scale “sufficient to contribute meaningfully
to the achievement of our national clean energy objectives,”
the first of which is job creation. The program also aims to
enhance national competitiveness by ensuring that the
United States is at the forefront of developing any new
energy technology. Many people point to China’s dominance
today in producing solar panels as an example of a
missed opportunity for the U.S. manufacturing sector.
(See “American Made,” Region Focus, Fourth Quarter 2011.)
In addition, many experts believe that innovation in one sector lays the groundwork for future innovation in other
sectors. “This is a tremendous opportunity for technical
innovation, and integrating innovation into the economy,”
Pollin says.
The federal government also supports renewable energy
industries via production and investment tax credits, which
help producers recoup their investment costs. (Oil and gas
companies also receive a variety of tax allowances.)
The 20-year-old production tax credit, which covers producers of geothermal, biomass, and wind energy, among other
technologies, has been renewed multiple times, most recently at the end of 2012. The fortunes of these industries appear
closely tied to the tax credit; Congress has let it expire four
times since 1992, and investment fell sharply during each
year before it was renewed. Prior to the credit’s anticipated
expiration in 2012, manufacturers had begun laying off workers; the credit’s renewal saved as many as 35,000 jobs,
according to Sanjay Mazumdar, an industry analyst and chief
executive officer of the consulting firm Lucintel.
An investment tax credit for solar power was established
in 2005 and renewed for eight years in 2008. Solar installations doubled the year after the investment tax credit went
into effect, and have doubled four more times since then,
according to the Solar Energy Industries Association, a trade
organization. Consumers also are eligible for a federal tax
credit for installing renewable energy products in their
homes, which may boost demand for these industries.
All 50 states and the District of Columbia have pursued
green business as an economic development strategy,
offering some mix of business and consumer tax credits,
renewable energy mandates, and green job training programs. In the Fifth District, for example, Maryland has four
different incentive programs for businesses toward its goal
of creating 100,000 green jobs by 2015. Even West Virginia,
one of the nation’s largest producers of coal, offers corporate
and residential tax credits for wind and solar power.

It’s Not Easy Being Green
The relationship between environmental policy and economic growth is far from straightforward, however. One
issue is that “green jobs” is a very broad category, as economists Jon Strand and Michael Toman of the World Bank
explained in a 2010 paper. For example, labor-intensive policies that can be implemented quickly, such as environmental
cleanup projects or energy efficiency retrofits, are effective
for short-term stimulus but might not have a large effect on
long-term growth. Projects with long-term environmental
and economic potential, such as investments in renewable
energy or transportation infrastructure, take a long time to
scale up and are unlikely to create a lot of jobs in the near
term.
The mix of projects in the ARRA illustrates this trade-off. Workers who were hired to weatherize homes or to clean
up hazardous waste sites were laid off when the funding for
the projects ended. On the other hand, $8 billion was
allocated to 22 states and Washington, D.C., to construct
high-speed rail lines, but construction on the nation’s first
high-speed line won’t begin until later this year.
In addition, job gains in the renewable energy sector
could be offset by job losses in the fossil fuel industry, as the
Congressional Budget Office noted in a 2010 report. It takes
time for the labor market to adjust to new conditions;
workers might have to move to new locations or acquire new
skills, and some workers might not be able to adapt at all.
“Whenever government encourages job creation in one
sector of the economy, there’s usually going to be job loss in
another sector,” says John Whitehead, an environmental
economist at Appalachian State University in Boone, N.C.
Some critics of green job creation policies agree that
green technologies yield relatively high rates of job creation
— but they don’t agree that this is good for the economy.
At present, for example, renewable energy is much
more labor-intensive than traditional energy. The jobs thus created are low-productivity jobs, and the high labor
content contributes to the energy’s high costs. Mandating
renewable energy, according to this argument, could
decrease productivity and raise costs throughout the
economy as a whole.
A large number of green jobs could even be a sign of poor
environmental policy. Many green jobs involve cleaning up
pollution, “and to a certain extent, this is a very unproductive activity,” Whitehead says. “Look at the Exxon Valdez oil
spill. It created a lot of green jobs. But if firms could get the
proper incentives to clean up the pollution before it gets in
the air or the water, then it’s not going to create many jobs
and that would be a good thing.”

“It’s An Externality, Stupid”
Job creation, long-term economic growth, and environmental protection are all important goals — but that doesn’t
mean they should necessarily be addressed with the same
policies. According to the “Tinbergen Rule,” named for the
late Jan Tinbergen, a Nobel laureate and economist at the
Netherlands School of Economics, for each policy goal there
must be at least one distinct policy instrument. Trying to
achieve multiple goals with a single tool might prevent policymakers from choosing the most effective tools for each
goal. For example, other sectors of the economy might be
better targets for short-term job creation, and the best
policy to help the environment might not be one that
creates a lot of jobs. “Employment policy should be employment policy and environmental policy should be
environmental policy,” Whitehead says.
Crafting effective environmental policy is complicated,
but at base the economics of the problem are simple. In the
words of Carlo Cottarelli, the director of the fiscal affairs
department at the International Monetary Fund: “It’s an
externality, stupid — so price it.”
An externality is a cost or a benefit that isn’t reflected in
the price of a good, and thus accrues to a party other than
the buyer or seller of the good. For example, the private
sector can’t fully monetize the benefits of a cleaner environment — a positive externality — so it is unlikely to invest in
a socially desirable amount of clean energy. Manufacturers
don’t bear the cost of the pollution they produce — a negative externality — and thus are likely to produce more of it.
Most economists agree that the government has a
role to play in correcting such “market failures.” But not
all interventions are created equal. With respect to the
environment, for example, the government could address
positive externalities by subsidizing new energy technologies. But subsidies often create inefficiencies and
unintended consequences in other markets, and are subject
to the criticism that the government is attempting to pick
winners and losers, a task better left to the market. One way
to address negative externalities is via regulation, such as
fuel efficiency standards or limits on the use of certain
hazardous products. While such regulations can be effective, they could also encourage firms to change their
behavior only to the regulated level.
Instead, economists of all stripes agree that the government could have the greatest effect on the environment by
putting a price on large negative externalities. A carbon tax,
for example, would raise the price of goods and services that
use fossil fuels to reflect the high costs of pollution. Demand
would shift to other sources of energy, and firms would have
an economic incentive to continue reducing their use of
fossil fuels. “Basic economics tells us that when you tax
something, you normally get less of it. So if we want to
reduce global emissions of carbon, we need a global carbon
tax,” Gregory Mankiw, an economist at Harvard University
and the chair of the Council of Economic Advisers under
George W. Bush, wrote in a New York Times editorial.
(A carbon tax is an example of a “Pigovian tax,” named after
British economist Arthur Pigou, who developed the idea of
using taxes to correct negative externalities.) Another way
to put a price on pollution is via a “cap-and-trade” system,
which limits the total amount of pollutants and issues
emissions permits to firms. A cap-and-trade bill passed
the U.S. House of Representatives in 2009, but failed in
the Senate.
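The arithmetic behind "price it" fits in a few lines. Here is a stylized sketch with linear demand and supply; every number is invented for illustration, not an estimate of actual carbon damages:

# Inverse demand: P = 100 - Q. Private marginal cost: MC = 20 + Q.
# Each unit also imposes an external damage of 30 on bystanders.
damage = 30

# The untaxed market ignores the damage: 100 - Q = 20 + Q.
q_market = (100 - 20) / 2               # 40 units

# The social optimum counts it: 100 - Q = 20 + Q + 30.
q_social = (100 - 20 - damage) / 2      # 25 units

# A Pigovian tax equal to the marginal damage makes producers face
# the full social cost, so the taxed market clears at the optimum.
tax = damage
q_taxed = (100 - 20 - tax) / 2
assert q_taxed == q_social
print(q_market, q_social, q_taxed)      # 40.0 25.0 25.0

A cap-and-trade system aims at the same quantity from the other direction: fix the total at the socially optimal level and let the permit price emerge from trading.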
It’s possible that policies to reduce greenhouse gas
emissions or conserve natural resources will create more
jobs or spur long-term economic growth. Certainly, people
and the planet will be healthier for them, and “to the extent
that improved air quality or improved water quality will
have a positive impact on human health, then that will have
a macroeconomic impact through labor productivity,”
Whitehead says. But measuring the success of environmental policy by the number of jobs created, rather than by the
effect on the environment, could make it more difficult to
achieve either goal.
EF

READINGS
Muro, Mark, Jonathan Rothwell, and Devashree Saha. “Sizing the
Clean Economy: A National and Regional Green Jobs Assessment.”
Brookings Institution, July 13, 2011.

Warren, Zack. "The Green Goods and Services Occupational
Survey: Initial Results.” Monthly Labor Review, January 2013,
vol. 136, no. 1, pp. 26-35.

Strand, Jon, and Michael Toman. “ ‘Green Stimulus,’ Economic
Recovery, and Long-Term Sustainable Development.” World Bank
Policy Research Working Paper No. 5163, January 2010.

FULL HOUSE

continued from page 21

customers to Maryland, primarily to Maryland Live!.
“It’s a phenomenon we’ve seen throughout the Northeast,”
says Schwartz. “As casinos have proliferated, new jurisdictions have done well but the old jurisdictions have definitely
suffered.”
It’s an open question, however, whether or not Maryland
can support all the casinos that are scheduled to open. When
Maryland Live! opened in June 2012, gambling revenues fell
noticeably at Hollywood Casino Perryville and Ocean
Downs. The Rocky Gap casino opened in May with about
300 fewer slot machines than originally planned, and
Hollywood Casino Perryville recently reduced its number of
slot machines by almost 350. “We have too much supply for
the demand we have,” general manager Bill Hayles said in a
statement. But even Maryland Live! might take a hit; a 2012
study by the DLS with PricewaterhouseCoopers calculated
that nearly half of the revenues from a new casino in Prince
George’s County would come at the expense of those in
Anne Arundel County and Baltimore.
While the proliferation of casinos poses challenges to the
casinos themselves, it could be good for consumers. “For
most people who like to gamble, and who view it as a recreation, there is a benefit to being able to gamble closer to
home rather than having to travel to Las Vegas,” says
Grinols.
Lawmakers tend to be less interested in consumer utility
than in creating lots of jobs and tax revenue. In this respect,
the evidence is mixed. Still, the fact that casinos might
fall short of expectations for economic development
doesn’t necessarily mean that they shouldn’t be legalized —
but policymakers must carefully weigh the costs and
benefits for their own communities.
EF

READINGS
Grinols, Earl L., and David B. Mustard. “Casinos, Crime, and
Community Costs.” Review of Economics and Statistics, February
2006, vol. 88, no. 1, pp. 28-45.
Kearney, Melissa S. "The Economic Winners and Losers of Legalized Gambling." National Bureau of Economic Research Working Paper No. 11234, March 2005.
Maryland Public Policy Institute. "An Economic Analysis of the Proposed Expansion of Gaming in Maryland." October 2012.
INTERVIEW
John Haltiwanger

Editor’s Note: This is an abbreviated version of EF’s conversation with
John Haltiwanger. For the full interview, go to our website:
www.richmondfed.org/publications

Policymakers, economists, and the
public pay close attention to statistics such as GDP, productivity
growth, and the unemployment rate.
But those aggregate statistics mask
the tremendous amount of churning
that takes place in the economy:
Every month, millions of jobs are
created and destroyed as firms
expand and contract, or as new
companies enter the market and
old companies go out of business.
Until the 1980s, few economists,
and especially few macroeconomists,
paid attention to these “microdata.”
John Haltiwanger was one of the first
to recognize their value for our
understanding of the labor market
and business cycles. Since the 1980s,
he has worked closely with the
Bureau of the Census and other
statistical agencies to build new
longitudinal business datasets and
develop new methodologies for analyzing such data.
Those efforts spurred a new line of research into how
firms create and destroy jobs, including Haltiwanger’s
seminal 1996 book Job Creation and Destruction,
co-authored with Steven Davis of the University of
Chicago and Scott Schuh of the Boston Fed. Economists
in a wide variety of fields have built on Haltiwanger’s
work to study topics ranging from tax policy to international trade.
Haltiwanger has continued to study the microdata
that underlie aggregate statistics to examine the importance of young and small businesses to job growth,
cross-country differences in productivity, and how
firms find workers. Recently, he has documented a
decline in the volatility and dynamism of the U.S. economy, which may help to explain the United States’
sluggish recovery from the 2007-2009 recession.
Haltiwanger joined the University of Maryland
faculty in 1987 and was named Distinguished University
Professor of Economics in 2010. In 2013, Haltiwanger
was awarded the Julius Shiskin Memorial Award
for Economic Statistics for his decades of work
with government statistical agencies to develop new
datasets and methodologies for measuring firm
dynamics. Jessie Romero interviewed him in his office
in College Park, Md., in July 2013.

EF: Your book Job Creation and Destruction has been
credited with “fundamentally altering” our view of the
labor market. What was the prevailing view prior to
its publication, and how did that view change?
Haltiwanger: We’ve known for a long time from household
data that workers move across jobs quite a bit. When you’re
a young worker, you’re trying to figure out what kind of job
you’d like, so you switch jobs a lot. And obviously people
have children or get more education or retire. So there’s long
been a sense that there were a lot of worker flows. But before
the work that Steve Davis and I did, and that Timothy
Dunne, Mark Roberts, and Larry Samuelson did, we didn’t
know that a large fraction of those flows are actually not
because the worker is moving, but because the jobs themselves are moving. My guess is if you actually talked to the
typical worker, they’d say, “Yes, of course, that’s exactly my
experience.” But we didn’t have a good metric for it, because
we didn’t have access to the data that allowed you to measure job creation and destruction. Once we got the data,
though, we found it was really huge. And second, we found
that not only was it large, but it varied over the cycle too. So
that was part of what changed things in macroeconomics.
One view is, sure, there’s a lot of this heterogeneity, but it’s
just in the background; it’s not so clear it’s important for
business cycles. Maybe it just cancels out. But actually, no, it
doesn’t cancel out.

EF: How did you get access to the
data that allowed you to measure
job creation and destruction?

You just don’t make much
progress in science unless other
people can replicate what
you’re doing and be creative
with the data themselves.

Haltiwanger: Steve Davis and I
met back in the mid-1980s, and we
had this idea that to understand
how the labor market works, it
would be critical to understand the ongoing process of what
we called job creation and job destruction. In the mid-1980s,
we got to know Dunne, Roberts, and Samuelson, who were
using lower-frequency Census data to study the entry and
exit of firms and firm dynamics. We asked them if they
thought it was possible to get access to the data to look at
higher frequencies, say monthly or quarterly. And they said,
“Well, we don’t know, but why don’t you call these guys up?”
So Steve and I called up the Census Bureau. Robert
McGuckin, the director of the Bureau’s Center for
Economic Studies (CES) at the time, invited us to come give
a seminar. We got two reactions. Bob McGuckin was incredibly enthusiastic. But some of the folks said, “You guys are
nuts!” They kept saying that the data were not intended for
this task, that we were pushing them in a way they weren’t
meant to be pushed. Steve and I were cognizant of that, but
we started working with the data and realized their potential, and that led to us developing these concepts of job
creation and destruction and how to measure them.
Over the years, one of our most satisfying accomplishments was to convince the statistical agencies that this was
important. The Census Bureau and the Bureau of Labor
Statistics (BLS) now have regular programs where they are
turning out the kind of statistics that we developed. Back in
the 1980s, there were only a handful of people working with
the firm-level data. We literally were in basement rooms
without windows. Now the CES has 100 staff members and
15 research data centers spread across the country — and
most of the staff work in rooms that have windows! Those of
us who worked with the CES in the early days recognized
that it was important to improve data access because you
just don’t make much progress in science unless other people
can replicate what you’re doing and be creative with the data
themselves. So when I was chief economist at the Census
Bureau in the 1990s, it was important to me and the research
director of CES at the time, Tim Dunne, to expand the
research data centers. Many people contributed to this
effort over the past 30 years, but I am proud to have been
part of it.
EF: You’ve studied aspects of the search-and-matching
process (which describes how workers and firms find
each other) that don’t typically get a lot of attention.
What do we learn from studying factors such as firms’
recruiting intensity?
Haltiwanger: We’ve learned an enormous amount from the
kinds of approaches that Dale Mortensen and Christopher
Pissarides developed. At the core of
their model is this black box called
the “matching function.” It’s a mathematical function that relates hires
to labor market tightness, vacancies,
and unemployment. There are lots
of stories about what that represents, but we just don’t know very
much about how firms and workers meet, how they match,
and what the process is.
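For readers who want the black box in symbols, the workhorse specification in this literature is a Cobb-Douglas matching function. A minimal sketch follows; the efficiency and elasticity parameters are illustrative values, not estimates:

# Matching function: hires H = A * U^alpha * V^(1 - alpha), where U is
# unemployed searchers, V is vacancies, and A is matching efficiency.
A, alpha = 0.4, 0.5              # illustrative parameters only

def hires(u, v):
    return A * u**alpha * v**(1 - alpha)

u, v = 12.0, 4.0                 # millions; made-up magnitudes
theta = v / u                    # labor market tightness V/U
finding_rate = hires(u, v) / u   # job-finding rate for a searcher
filling_rate = hires(u, v) / v   # job-filling rate for a vacancy

# Note filling_rate = A * theta**(-alpha): the fill rate depends only
# on aggregate tightness, so a firm that wants to grow faster can only
# post more vacancies. That assumption is what the firm-level
# job-filling patterns described below end up contradicting.
print(theta, finding_rate, filling_rate)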
In terms of data development, first we had data sets that
allowed you to track whether firms were growing or shrinking — that’s job creation and destruction. More recently,
we’ve been able to integrate what’s happening to the workers
at the firms that are expanding and contracting.
There’s a very nice new survey that BLS developed called
JOLTS [Job Openings and Labor Turnover Survey]. For each
establishment, JOLTS provides information on hires and
separations, and breaks the separations into quits and
layoffs. It also gives you vacancies, so you’ve got all the
ingredients to begin looking at how hiring occurs.
As usual, Steve Davis and I and our co-author on this
work, Jason Faberman, didn’t want to look just at the aggregate data. So we got access to the JOLTS microdata by
working directly at BLS and integrated it with the data BLS
has developed on job creation and destruction. Our main
focus in this work is to understand the hiring process.
We were struck by the fact that there was a pattern in the
job-filling rate that was not consistent with the standard
search-and-matching model: Businesses that were very
rapidly expanding filled their jobs much faster than other
kinds of businesses. In the standard search-and-matching
model, if you want to expand quickly, you just post more
vacancies. We found that was true — businesses that were
expanding rapidly did post more vacancies — but we also
found that they filled them much more quickly.
So what’s going on there? The model that we came up
with is that firms don’t just post vacancies, they also spend
time and resources on hiring people. So if you want to hire
more people, you can raise the wage you offer, or you can
change the way that you screen workers — these are just two
examples of the variety of margins that a firm can use. As
shorthand, we’ve called these margins “recruiting intensity.”
We also found that recruiting intensity dropped substantially in the Great Recession and has been slow to recover.
Firms posted many fewer vacancies in the collapse, and
that’s exactly what you’d expect from the theory. But then as
we went into the recovery, vacancies started to come back,
but hiring didn’t. People were puzzled by that. How can this
be? Why don’t the patterns of hires, vacancies, and unemployment fit the matching function? Why are we off the
Beveridge curve? [The Beveridge curve describes the relationship between vacancies and unemployment; for the past few
years, the unemployment rate has been higher relative to the
number of vacancies than would be predicted by historical
trends.] Our explanation is that our index of recruiting
intensity has dropped significantly and it hasn’t recovered.
We don’t explain all of the departures from the matching
function and the Beveridge curve during the Great
Recession and slow recovery, but it appears that recruiting
intensity explains a nontrivial fraction of it.
EF: Your research makes a distinction between aggregate shocks, such as the financial crisis in 2008, and
allocative shocks, such as technological change that
reduces the demand for manufacturing workers. What
role have these two types of shocks played during and
since the Great Recession?
Haltiwanger: We’re still struggling with how to disentangle
this. And there’s a third kind of shock: uncertainty. If you’re
in a sector where businesses are facing fundamentally different conditions — changing technology, changing costs,
changing international competition — it’s as though the rug
has been pulled out from underneath you. You can’t do
things the way you did them last decade. And I think that
we’re increasingly recognizing that those kinds of events
lead to greater uncertainty. Allocative shocks and uncertainty go hand in hand. In many early models, it was assumed
that the economy was hit by a certain pace of idiosyncratic
shocks every period. But then Nick Bloom and others came
along and said, well, there’s no reason that pace can’t change
over time. That research has helped make the case that
changes in the dispersion of idiosyncratic shocks, which
generate uncertainty, are important. So now we’re struggling
to disentangle the relative importance of aggregate shocks,
allocative shocks, and uncertainty.
We’re also struggling with causality. Something Steve and
I spent a lot of time on, and are still worrying about, is the
“cleansing effect” of recessions. One idea is that recessions
are times in which things slow down, which makes them a
good time to reorganize and reallocate. That suggests that
the causality arrow could run from aggregate shocks to
reallocation. But on the other hand, a period of very high
idiosyncratic shocks could be a period where we’ve got to do
lots of reorganization. Reorganization takes time and
resources, which could cause a recession itself, so the
causality might be running the other way. Then you add
uncertainty shocks to this mix. So, as usual, drawing causal
inferences is a continued challenge.
EF: What are the implications of your findings on reallocation for employment policy or industrial policy?
Haltiwanger: That’s a hard question. What we’ve found, in
the United States at least, is that all this reallocation is
largely productivity-enhancing. The job destruction we
observe usually occurs in the less productive businesses, and
all the job creation and entry of new firms usually occurs in
the more productive businesses. A large fraction of aggregate productivity growth is moving resources away from less
productive to more productive firms.
Eric Bartelsman, Stefano Scarpetta, and I started to look
at the connection between reallocation and productivity
around the world. We’ve found that in some countries, that
process doesn’t work so well. Allocative efficiency — the
ability to move resources from less-productive to more-productive businesses — differs pretty dramatically across
countries. We first hypothesized that things were so
distorted in emerging economies that they had very low
rates of reallocation. But actually, there is a lot of entry and
exit going on, it’s just not particularly healthy. For example,
there’s lots of entry of street vendors, but street vendors
don’t grow. They just churn: They come in, they go out, they
come in, they go out. But it’s not productivity enhancing.
How is this related to the policy question? What we’re
beginning to see in the cross-country evidence are things
that distort this reallocative process. But here’s the problem.
It’s not as though we can’t come up with policy recommendations — we can come up with too many. There are 101
(plus) possible distortions to the ongoing dynamics of reallocation. We are still trying to figure out which ones are the
most important. For example, is it distortions that primarily
impact the entry margin? If it is on the entry margin and it’s
young businesses, is it that they can’t get access to credit? I
think we’ve got lots of hypotheses out there, but I don’t
think we’ve got a good way yet of saying which are the most
important and how they vary across countries, and how they
vary within countries across time.
EF: How do countries find a balance between allowing
the necessary reallocation to occur and minimizing the
costs of that reallocation to workers and firms?
Haltiwanger: That’s one of the hardest questions countries
face. I think the evidence is overwhelming that countries
have tried to stifle the destruction process and this has
caused problems. I’m hardly a fan of job destruction per se,
but making it difficult for firms to contract, through
restricting shutdowns, bankruptcies, layoffs, etc., can have
adverse consequences. The reason is that there’s so much
heterogeneity in productivity across businesses. So if you
stifle that destruction margin, you’re going to keep lots of
low-productivity businesses in existence, and that could lead
to a sluggish economy. I just don’t think we have any choice
in a modern market economy but to allow for that reallocation to go on.
Of course, what you want is an environment where not
only is there a lot of job destruction, but also a lot of job
creation, so that when workers lose their jobs they either
immediately transit to another job or their unemployment
duration is low. In the United States in the 1990s, the last
time we experienced really robust growth, there was a high
pace of job creation and job destruction, and a lot of it was
workers moving directly from one job to another. That’s
what you want. But what are the ingredients for efficient,
productivity-enhancing reallocation? There are many factors. But I think it's clear that you have to make sure that there aren't barriers to entry or exit, and then that there aren't barriers to post-entry growth or contraction. Having said that, I think lots of countries hear this advice from economists or from organizations like the International Monetary Fund or the World Bank, so they open up their markets, they open up to trade, they liberalize their labor and product and credit markets. And what happens is, job destruction goes up immediately, but job creation doesn't. They realize that they've got a whole bunch of firms that can't compete internationally, and they're in trouble. So the question is, does the economic environment have the ingredients that allow businesses to come in and start up? On the one hand, there is lots of evidence that countries that distort the destruction margin find themselves highly misallocated, with low productivity and low job growth. On the other hand, it's difficult to just let things go without having well-functioning market institutions in place — it's not easy to snap your fingers and become the United States. And even here this process is at times distorted.

John Haltiwanger
➤ Present Position
Distinguished University Professor of Economics, University of Maryland, College Park
➤ Other Positions
Chief Economist, Bureau of the Census (1997-1999); Associate Professor, Johns Hopkins University (1986-87); Assistant Professor, University of California, Los Angeles (1981-1986)
➤ Education
Sc.B., Brown University (1977); Ph.D., Johns Hopkins University (1981)
➤ Selected Publications
Author of six books and numerous articles in journals such as the American Economic Review, Quarterly Journal of Economics, Review of Economics and Statistics, Journal of Monetary Economics, and Journal of Economic Perspectives

EF: You've written about a long-term decline in volatility in the United States; the job creation and destruction rates, for example, have been declining steadily since the 1990s. What's behind that decline?

Haltiwanger: There is now substantial evidence from a wide range of sources that volatility is declining in the United States. One of the markers is a decline in the pace of job reallocation. But there's also something going on with new businesses. We've always known that young businesses are the most volatile. They're the ones experimenting, trying to figure out if they have what it takes to be the next Microsoft or Google or Starbucks. But now we're seeing a decline in the entry rate and a pretty stark decline in the share of young businesses. The latter isn't surprising given the decline in the entry rate — it's equivalent to seeing that when the birth rate declines, you have fewer children. But it's also important to recognize that the decline in the share of young firms has occurred because the impact of entry is not just at the point of entry, it's also over the next five or 10 years. A wave of entrants comes in, and some of them grow very rapidly, and some of them fail. That dynamic has slowed down.

Should we care? The evidence is we probably should, because we've gotten a lot of productivity growth and job creation out of that dynamic. A cohort comes in, and amongst that group, a relatively small group of them takes off in terms of both jobs and productivity. So the concern is, have we become less entrepreneurial? If you're not rolling the dice as often, you're not going to get those big wins as often. Then the question is why? What happened? One idea that some have suggested recently is that culturally we've become more risk-averse. That's an interesting hypothesis but only one of many possible hypotheses.

In terms of what we know, we've clearly seen a shift in activity toward large national and multinational businesses. In retail trade, that shift is overwhelming. The big guys dominate. That's not necessarily a bad thing; maybe what we've done is move resources toward the big firms because they are the more productive and profitable. But there might be a trade-off between economies of scale and flexibility and innovation. A strong young business sector is flexible. If an economy needs to shift resources from one place to another, the young business is a good margin of adjustment.

Also, there's the question, what's the role of entrepreneurs in innovation? This question is much harder. But suppose the hypothesis is correct that you really do need all this experimentation to get new ideas out there, and that young businesses are where lots of this experimentation occurs. Then the decline in the share of young businesses is concerning. It's not that the big guys don't innovate, but it tends to be an adaptation of what they already do.

But coming back to the facts, there's no doubt that there has been a decline in volatility, and it's of some concern. The benign view is, maybe it's possible to get the same gains in productivity without all this churning of businesses and jobs. If so, that would be a good thing. But if it was really nothing but good news, I ask, why aren't we recovering more quickly? Why isn't productivity surging? Even the recovery from the 2001 recession was nowhere near the recoveries of previous years. We have been pretty sluggish at least since 2001. And this is the period of time when the decline in entrepreneurship accelerated.

EF: Do you view the decline in volatility as the cause of the slow recovery?

Haltiwanger: I think there is some causality, but I can't say for sure. I think we could make a case that this more-sclerotic, less-dynamic economy affects the pace of recoveries. The decline in volatility predates the Great Recession. It has been going on for decades, although it accelerated after 2000. In terms of job flows, we never recovered from the 2001 recession. Job reallocation never returned to the late-1990s levels. Something similar seems to be going on following the Great Recession. Job destruction is somewhat lower than during the 2004-06 period, and job creation is a lot lower. The decline in young businesses appears to be playing a role in these dynamics.

EF: Early research found that small businesses create
the most jobs. But more recent work, including your
own, has shown that’s not the whole story. What does
that mean for policy?
Haltiwanger: If you just cut the data by business size, there
is some evidence that net growth rates of small businesses
are higher than net growth rates of large businesses. There
are some statistical issues here, involving regression to the
mean, but there is some truth to this conventional wisdom
in an accounting sense. But our recent work has asked, wait
a second, who are those small businesses that are creating
jobs? It turns out it’s not old small businesses. It’s all coming
from the entry of new firms and young businesses, which
happen to be small businesses. We didn’t realize this until
recently because we didn't have good measures of firm entry
and young businesses. I started studying this with Steve
Davis and Scott Schuh, and recently I’ve been working with
Ron Jarmin and Javier Miranda from the Census Bureau to
develop data that allow us to study the role of startups and
young businesses.
So the grain of truth is that it’s all coming from entry and
young businesses. I do think it’s important to recognize that
if you’re trying to advocate a policy to help small businesses
to stimulate job creation, and it’s not especially relevant for
entry or young businesses, then it’s possible the policy is
completely mistargeted. But we don’t view our evidence as
saying we should inherently target young businesses.
Instead, it flips things around to ask, what are the barriers
that young small businesses face? That’s the right question
to ask, and we’re starting to ask it better.
Young small businesses got hit especially hard in the
Great Recession. The evidence suggests that financial markets and access to credit played some role in that. I’ve been
studying this with Teresa Fort, Ron, and Javier, and we’ve got
some interesting evidence that housing prices played a role
here. Young small businesses got hit especially hard in states
with very large declines in housing prices, even taking into
account the extent of the cyclical slump in the state.
California is the poster child for these patterns. California
had a bad recession but it had an especially bad decline in
housing prices, and young small businesses got hit especially
hard there. What's the connection between housing prices and young small businesses? One possible channel is that young small businesses use houses as collateral. An alternative but related channel — there's some very nice work on this by Atif Mian and Amir Sufi — is the housing price/aggregate demand channel. Housing makes up a large part of households' net worth, so when prices drop, households have less to spend. If there's a local spending shock, young small businesses might get hit especially hard if they're the most vulnerable businesses. Another possible mechanism is that local financial institutions were hit especially hard in the areas where housing prices fell the most. If local financial institutions are especially important for startups and young businesses, this might have been an important factor.

EF: That vulnerability is part of what you’ve described
as the “up or out” dynamic — most young small businesses fail, but those that survive are likely to grow very
rapidly.
Haltiwanger: We’ve been struck by how rare success is for
young businesses. When you look at normal times, the
fraction of young small businesses that are growing rapidly is
very small. But the high-growth firms are growing very
rapidly and contribute substantially to overall job creation.
If you look at young small businesses, or just young businesses period, the 90th percentile growth rate is incredibly high.
Young businesses not only are volatile, but their growth
rates also are tremendously skewed. It’s rare to have a young
business take off, but those that do add lots of jobs and contribute a lot to productivity growth. We have found that
startups together with high-growth firms, which are disproportionately young, account for roughly 70 percent of
overall job creation in the United States.
This is related to the policy challenge: It's a needle-in-a-haystack problem. Can you say the high-growth companies
are going to be in particular sectors? No. Sector is not a very
good predictor of where the next best thing is going to come
from, even though governments would love to be able to
predict it, and they want to be able to help the high-growth
businesses. Theory suggests that you can set up an environment that allows high-growth businesses to be successful,
but you can’t target them. It’s just too hard. Did we really
know that Apple would take off? And it’s not just tech companies; it’s in mundane industries too. It’s the Hair Cuttery.
It’s Starbucks.
Most entrants fail. Even conditional on survival, most
surviving young businesses don’t grow. But a small fraction
of surviving young businesses contribute enormously to job
growth. A challenge of modern economies is having an environment that allows such dynamic, high-growth businesses
to succeed. EF


ECONOMIC HISTORY

Mother of the Domestic Slave Trade
BY KARL RHODES

Virginia's human exports fueled the Deep South's expansion

Delia Garlic was born in
Powhatan County, Va., in the
1830s, the height of the
domestic slave trade. She was sold,
along with her mother and brother, to
a speculator who resold them to the
highest bidders in Richmond, Va. The
sheriff of a nearby county purchased
Delia and her mother, but they never
again saw Delia’s brother.
Delia worked in the sheriff's house,
suffering abuse at the hands of his wife
and daughter. One night the sheriff
came home drunk and flew into a rage
at the dinner table. He called an overseer and told him to take Delia outside
and beat her. Delia bolted out of the
house and into the darkness, but later
that night, she followed her mother’s
voice home.
“Right away they came for me,”
Delia recalled. “A horse was standing in
front of the house, and I was taken that
very night to Richmond and sold to a
speculator again. I never saw my
mammy anymore.”
On her second visit to the slave pens
of Richmond, Delia was sold to a hotelier from Georgia. After his business
failed, she was sold to a planter in
Louisiana, where she worked in the cotton fields until the Civil War set
her free.
Delia told her story to Margaret
Fowler as part of the Federal Writers’
Project in the late 1930s. Delia was
among approximately 1 million slaves
who were forced to migrate from
the upper South (mostly Virginia,
Maryland, and the Carolinas) to the
Deep South from 1810 to 1860.
“As many as two-thirds of these
1 million or so people were carried
south by slave traders, whose daily business resolved the diverging fortunes of
the declining upper South and the
expanding lower South into mutual
benefit,” wrote Harvard historian
Walter Johnson in Soul by Soul: Life
Inside the Antebellum Slave Market.

Slaves were worth substantially
more in states such as Georgia,
Alabama, Mississippi, and Louisiana
because labor was the limiting factor of
the Deep South’s highly profitable agricultural expansion. This dramatic price
differential and the declining supply of
slaves from the trans-Atlantic trade,
which was outlawed in 1808, produced
a thriving domestic slave trade in the
United States.
“By the 1830s, Virginia’s largest
export was human property,” says
Steven Deyle, associate professor of
history at the University of Houston
and author of Carry Me Back: The
Domestic Slave Trade in American Life.
Slaves were worth more than the land
and, unlike real estate, they were
highly portable and easily sold. Many
Virginia slaveholders, it seems, knew
roughly how much each of their slaves
was worth to the speculators who
scoured the Virginia countryside offering quick cash for human assets.

[Illustration: "The Coffle Gang," a print by Van Ingen & Snyder, appeared in The Suppressed Book about Slavery, an abolitionist book that was completed in 1857, but not published until 1864. The print depicts slave traders driving 40 men and 30 women through Kentucky toward New Orleans. Slave drivers commonly commanded members of coffles to play musical instruments and sing. Credit: The New York Public Library's Schomburg Center for Research in Black Culture]

“Master used to say that if we didn’t suit him he would put
us in his pocket quick — meaning he would sell us,” recalled
William Johnson, a Virginia slave who escaped to Canada.
More than the whip, slaves feared being sold south, forever
separated from their families, friends, and homes.

The Supply Side
Virginia outlawed the importation of slaves during the
American Revolution, but the state’s number of slaves
increased steadily from 165,000 in 1776 to 347,000 in 1800.
This rapid growth rate continued in the 19th century,
prompting abolitionists and others to call Virginia a
“breeder” state. Most modern historians find little credible
evidence of forced breeding operations, but they note that
slave owners in all states routinely encouraged — and sometimes participated in — procreation among their slaves.
By the 1830s, Virginia’s oversupply of forced labor was
obvious and widespread, but the issue had been evident in
the eastern part of the state for several decades. “George
Washington was typical in his frustrations at having ‘more
working Negros by a full moiety, than can be employed to
any advantage in the farming system,’” Deyle wrote. (In
other words, Washington believed he had twice as many
slaves as he needed.) Overplanting of tobacco had exhausted
Virginia’s soil, and the price of tobacco had fallen, partly due
to new competition in Kentucky, Tennessee, and the
Carolinas. As a result, many Virginia growers transitioned
from tobacco to grain crops, which required fewer slaves.
While the rewards of slave labor continued to decline in
Virginia, the risks suddenly became more apparent. In 1831,
Nat Turner led a slave revolt in Southampton County, Va.,
killing 59 white people before local militias quashed the
insurrection. A few months later, the General Assembly convened a special session to consider gradual emancipation,
colonization, and other ways to rid Virginia of its slaves.
James Gholson, a young delegate from Southside
Virginia, summed up the pro-slavery argument succinctly,
according to Deyle. “Gholson reminded white Virginians
that no matter how much they might fear another slave
revolt, they no longer had any real choice in the matter.
Their state had become too economically dependent upon
the institution of slavery to ever give it up, especially
through some form of emancipation. He noted that ‘our
slaves constitute the largest portion of our wealth, and by
their value, regulate the price of nearly all the property we
possess.’” Abolition was more palatable to delegates from
the state’s western counties, where there were far fewer
slaves, but the Virginia House of Delegates ultimately
rejected a proposal — by 15 votes — to phase out slavery.
While some Virginia planters debated emancipation and
colonization, others simply moved to what was then the
Southwest (Alabama, Mississippi, and Louisiana), taking
their slaves with them. This trend accounted for a significant
portion of the forced migration, especially in the early
years of southwestern expansion. But in later years, selling
slaves south became the more prevalent method of forced
migration. Quite often slave trading and planter migration
overlapped, notes Edward Baptist, a historian at Cornell
University. “If you were brought down south by one owner
who immediately sold you to raise cash to expand his operations,” then that sale was essentially part of the interstate
slave trade.
Some paternalistic planters refused to sell slaves under
any circumstance, while others claimed to sell slaves only
when they had little choice. “Slaveholders always had some
reason for selling a slave — an estate to divide, a debt to pay,
a transgression to punish, a threat to abate,” Walter Johnson
wrote. “What they rarely had when they sold a slave, it
seems from the accounts they gave of themselves, was any
direct responsibility for their own actions.” Instead, they
blamed the evils of the trade on their favorite scapegoats —
the lewd, crude slave traders. But the size and scope of the
domestic slave trade — in bad times and in good times —
explodes the myth of benevolent masters who sold slaves
only with great reluctance. University of Liverpool historian
Michael Tadman estimated that from the 1830s through the
1850s, slave owners’ sales to traders in the upper South
generated receipts equivalent to between 15 percent and
20 percent of receipts from the region’s staple crops.
As slave prices soared to all-time highs in the 1850s, slave
speculation became widely accepted in the upper South as a
necessary evil to protect the region’s economic interests.
Colorado State University historian Robert Gudmestad
observed in his 2003 book, A Troublesome Commerce: The
Transformation of the Interstate Slave Trade, that “the need for
the trade conquered most slaveholders’ qualms about the
negative consequences of the peculiar institution.”

The Demand Side
A number of factors drove up demand for slaves in the Deep
South in the late 1700s and early 1800s. Agricultural innovations, most notably the cotton gin, and surging international
demand for cotton greatly enhanced returns to investment
in slaves on cotton plantations. While this was happening,
the United States purchased the Louisiana territory in 1803
and outlawed the trans-Atlantic slave trade five years later.
The rapidly expanding nation then began to push American
Indians westward, making more land available for cotton
cultivation.
The soil and climate of the Deep South were ideally suited to growing cotton, especially in Alabama, Mississippi,
and Louisiana. Cotton production was labor intensive, so
the domestic slave traders began moving slaves there —
slowly at first, but quite rapidly as cotton prices recovered
following the Panic of 1819.
In addition to field hands for cotton, there was strong
demand for “sturdy adult males” to work the sugar plantations of Louisiana. Those plantations often were supplied via
markets in Baltimore, Alexandria, Va., Norfolk, Va., and
especially Richmond.
There also was lascivious demand for attractive young
female slaves with light brown skin. These “fancy maids” or

“fancy girls” often were raped by traders and sold in New
Orleans to work as sex slaves — either in brothels or for
exclusive exploitation by owners who paid up to $7,000 to
flaunt their wealth, power, and audacity.
Compared with the number of slaves purchased for
cotton production, the number of slaves purchased for sex
was very small, Baptist concedes, but it was “significant in
terms of the way it injected sexuality into all of the discussions of female slaves who were for sale,” he says. “Men for
sale were always being discussed in terms of their labor
capacity. Women were usually discussed with some reference
to their physical attractiveness.”
Deep South planters also viewed slaves as objects of
finance. They frequently secured loans with human
collateral, as did planters in the upper South. After the
demise of the Second Bank of the United States, which had
provided substantial funding for slave trading and cotton
expansion, upstart banks in the Southwest offered easy
credit to planters based on the number of slaves they could
mortgage. The banks packaged these loans into mortgage-backed securities that they sold to banks in London,
Amsterdam, and New York. London-based Baring Brothers,
the leading merchant bank of the day, even persuaded the
Louisiana legislature to guarantee the bonds.
“It really seemed that these bonds were risk-free for the
immediate lender and the immediate borrower,” Baptist
says. “They were virtually identical to mortgage bonds as we
know them in more recent times, and the outcome was very
similar.”
The confluence of easy credit, abundant land, and a
steady supply of slaves eventually led to overproduction of
cotton, and when cotton prices collapsed, “nobody was able
to make the interest payments on their mortgage loans,
which meant the banks couldn’t make the interest payments

on their bonds,” Baptist says. “And that was, as far as I can
tell, the cause of the Panic of 1837.”

The Middle Men
Planters from Mississippi often made buying trips to
Virginia, where they could purchase slaves one-third to one-half cheaper than in the Southwest, according to Joseph Ingraham, a self-described "Yankee" who wrote The South-West, a book about his experiences in Mississippi and
Louisiana in the 1830s.
Some planters in the upper South offered discounts to fellow planters, relative to the prices they charged slave traders,
so speculators sometimes posed as Deep South planters.
Upper South slave owners assumed they could easily distinguish between a genteel Southern planter and an uncouth
slave trader, but often there was not much difference.
A slave trader is “very much like other men. He is today a
plain farmer with 20 or 30 slaves endeavoring to earn a few
dollars from the worn-out land,” Ingraham wrote. “He is in
debt and hears he can sell his slaves in Mississippi for twice
their value in his own state.” So the farmer drives his slaves,
and perhaps a few of his neighbor’s slaves, to the Southwest.
“He finds it profitable; and if his inclinations prompt him,
he will return home, after selling his slaves, and buy, with
ready money, from his neighbors, a few here and a few there,
until he has a sufficient number to make another caravan.”
These caravans, or “coffles,” were common sights in
Virginia. They ranged from 10 to 300 slaves who traveled
about 20 miles per day. The male slaves typically were
shackled two abreast with a long chain or rope running down
the middle of their column tying all their shackles together.
The women were bound by ropes, if at all, and some traders
and planters allowed women, children, and sick or injured
men to ride in wagons.

The Devil’s Half Acre
In the 1850s, slave trading thrived in Richmond’s Shockoe
Bottom, just a few blocks from Capitol Square.
Most of the trading occurred not in public auctions but in
slave pens run by resident dealers. Historically, Robert
Lumpkin is the most infamous of these Richmond dealers.
In addition to trading slaves himself, he owned and operated
Lumpkin’s Jail — also known as “the devil’s half acre” — on
Richmond’s “Wall Street.”
At the end of the Civil War, as the Confederate government was fleeing Richmond, Lumpkin tried to put one last
shipment of 50 slaves on a train to Danville, Va., according
to an account given by abolitionist Charles Coffin in his
memoir, The Boys of ’61.
“This sad and weeping 50, in handcuffs and chains, was
the last slave coffle that shall tread the soil of America,”
Coffin wrote triumphantly. They were “trampling the bonds
of the Confederate States of America in the mire, as they

marched to the station.” The coffle made it to the depot, but
sentinels guarding the train turned them back because the
cars were reserved for Confederate officials and government
documents. What happened next is uncertain, but presumably Lumpkin’s last coffle went free after Union forces took
control of the city.
Lumpkin died in 1866 and willed his former slave-trading
complex to Mary, a light-skinned slave he had purchased and
eventually married. In 1867, Mary leased the complex to the
American Baptist Home Mission Society, which converted it
into a school for former slaves that evolved into Virginia
Union University. In an 1895 history of the school, James
Simmons, a leader of the Baptist society, recalled his visit to
the converted complex after the Civil War.
“The old slave pen,” he wrote, “was no longer ‘the devil’s
half acre’ but God’s half acre.”
—KARL RHODES


During the 1820s, Virginia’s domestic slave trade evolved
from a loosely organized network of itinerant traders into a
leading example of America’s market revolution. Major trading centers emerged in Richmond, Washington, Baltimore,
and Norfolk. From these centers, slaves could be shipped
around Florida on specially outfitted sailing vessels and later
on steamers that could reach New Orleans in 19 days. Large
slave-trading organizations also emerged. Alexandria-based
Franklin and Armfield, the biggest of these firms, kept its
ships moving from November to April, picking up slaves in
Richmond and Baltimore and taking them to depots in New
Orleans and Natchez, Miss. Traders continued to use overland routes because they were cheaper, but time was money,
and in the 1840s and 1850s, traders increasingly took slaves
south on trains. The quicker the traders could deliver one
coffle of slaves, the sooner they could pay their bankers,
borrow more money, and assemble another coffle. So
Richmond’s growing rail connections to the lower South
enhanced the city’s position as the upper South’s largest
slave market.
“The domestic slave trade was not simply a consequence
of the [market revolution] but a central component in propelling it,” Deyle wrote. In addition to employing the latest
modes of transportation, slave traders rapidly adopted new
business practices, such as newspaper advertising, standardized pricing, and international finance.
The most sophisticated traders even managed to elevate
themselves socially above the employees and agents who did
their bidding. Partners Isaac Franklin and John Armfield, for
example, used their enormous wealth to “distance themselves from the foul odor of speculation,” Gudmestad
concluded. Franklin married into a “respectable” Nashville
family, and Armfield “was instrumental in establishing the
University of the South at Sewanee.”
Most slave traders, however, remained the pariahs of
Southern society. They deserved their reputations for greed-fueled cruelty, but they created neither supply nor demand.
They simply facilitated the movement of slaves from willing
sellers in the upper South to eager buyers in the lower South.

“The rhetoric of the evil slave trader enabled Southerners
to explain a problematic aspect of their society: the cruel
treatment of slaves,” Gudmestad wrote. “Once speculators
were to blame for the worst abuses of slavery, Southerners
could remain committed to the institution as a whole.”

The Movie Version
In the aftermath of the Civil War, Southerners struggled
to make sense of the massive loss of life and property.
“Unfortunately, making sense of it meant recasting it,”
explains Christy Coleman, president of the American Civil
War Center in Richmond. “Wasn’t it romantic? Wasn’t it
wonderful? Wasn’t it cool? And the rest of America got
sucked up in it.”
Between 1875 and 1900, there was a concerted effort to
reunite the country, Deyle adds. “There was this decision —
conscious or unconscious — by white Americans to forget
about it all and let white Southerners write what they
wanted to believe and what they wanted the rest of the
country to believe. So they retold the story by sort of ignoring what the real cause of the war was, and slavery didn’t get
talked about.”
The cover-up was perpetuated in part by plantation
romance novels, which sold more copies in the North than
in the South, Deyle notes. While Margaret Fowler was interviewing former slave Delia Garlic, Margaret Mitchell was
receiving rave reviews for Gone with the Wind. Fowler’s true
story gathered dust in the Library of Congress, while
Mitchell’s fanciful fiction won a Pulitzer Prize and 10
Academy Awards.
Misleading images of Old South slavery persist today, but
in recent years, historians have replaced the myths of benevolent masters and happy slaves with candid accounts of how
the domestic slave trade callously connected the economic
interests of the upper South with those of the lower South.
Tearing apart families and selling people south clearly troubled some slave owners, but they did it anyway, Coleman
says. “We are talking about people’s pocketbooks, and I hate
to say it, but greed is greed is greed.”
EF

READINGS
Bancroft, Frederic. Slave Trading in the Old South. Baltimore: J.H. Furst, 1931. Reprinted with a new introduction by Michael Tadman. Columbia, S.C.: University of South Carolina Press, 1996.

Berlin, Ira, Marc Favreau, and Steven F. Miller (eds.). Remembering Slavery. New York: The New Press, 1998.

Collins, Winfield H. The Domestic Slave Trade of the Southern States. Port Washington, N.Y.: Kennikat Press, 1969. First published 1904 by Broadway Publishing.

Deyle, Steven. Carry Me Back: The Domestic Slave Trade in American Life. New York: Oxford University Press, 2005.

Gudmestad, Robert H. A Troublesome Commerce: The Transformation of the Interstate Slave Trade. Baton Rouge, La.: Louisiana State University Press, 2003.

Johnson, Walter. Soul by Soul: Life Inside the Antebellum Slave Market. Cambridge, Mass.: Harvard University Press, 1999.

Tadman, Michael. Speculators and Slaves: Masters, Traders, and Slaves in the Old South. Madison, Wis.: University of Wisconsin Press, 1989.

Troutman, Phillip Davis. Slave Trade and Sentiment in Antebellum Virginia. Dissertation, University of Virginia, August 2000.

BOOKREVIEW

Embracing Shocks
ANTIFRAGILE: THINGS THAT
GAIN FROM DISORDER
BY NASSIM NICHOLAS TALEB
NEW YORK: RANDOM HOUSE,
2012, 544 PAGES
REVIEWED BY CAROLINE TAN

It's easy to conjure up an image of "fragility" — a
teacup, eggshells, wine glasses. But what’s the
opposite of fragility? According to New York University’s Polytechnic Institute professor and former derivatives trader Nassim Nicholas Taleb, there isn’t quite a
word for that. Words like “resilient” and “robust” don’t
work. Since fragility describes things that break under
pressure, the opposite would refer to things that thrive
under pressure, not just resist it. To fill this gap in terminology, Taleb offers the term “antifragile”: things that
benefit and improve from volatility.
In a follow-up to his 2007 book The Black Swan,
Taleb’s Antifragile argues that despite our aversion to
unpredictability and shocks, they are often beneficial. Small
environmental disturbances may harm the individual, but
Taleb says they are good for the species, which grows as it is
forced to adapt. Random mutations in the genetic code, for
example, can lead to a healthier gene pool. Small mistakes
can provide useful information outside of nature as well:
Entrepreneurs learn from failed startups and small plane
crashes provide data that can help avoid larger accidents.
In addition to yielding information, unpredictable
variations also act as purges, Taleb writes. Small forest fires
remove the most flammable material from the system, and
occasional market setbacks prevent hidden risks from
accruing with impunity. “When a currency never varies, a
slight, very slight move makes people believe the world is
ending,” he continues. “Injecting some confusion stabilizes
the system.”
Taleb seems to have a problem with what he calls
“modernity,” which he defines as humans’ tendency to
smooth out the world’s natural jaggedness. Humans, he says,
have “fragilized” their environment by removing randomness from it. Doctors overtreat patients at the risk of
increased medical error, politicians support “rotten regimes”
in the name of stability, and overbearing parents eliminate
all elements of danger from their child’s life — classic examples of “naïve interventionism” that Taleb says has become a
core element of modernity. The problem, he contends, is
that this quest for stability inhibits the buildup of immunity
and makes humans more vulnerable to large shocks — or, as
he calls them, “black swan” events. The steps that we take to
avoid fragility may actually end up creating more of it.

One of Taleb’s biggest issues with modernity is the
“malignant transfer” of fragility from one party to the other
— in other words, the asymmetric exposure to risk that
benefits those who “steal a free option from society.”
To guard against this problem, Taleb argues for “skin in the
game,” a risk management principle that says people should
be exposed to any negative consequences that may result
from their actions. He notes, for example, that bankers
receive compensation for positive performance, but do not
have to pay reverse bonuses for poor performances, an
asymmetry that creates an incentive to hide risk.
Taleb makes a good point, but he runs into trouble when
trying to apply it across a broad range of industries.
In Antifragile, he presents a table that categorizes different
professions into three groups: “skin in the game for the sake
of others” for the most valorous, “skin in the game” for those
in the middle, and “no skin in the game” for the most selfish.
Soldiers and entrepreneurs are placed in the highest
category, while, predictably, bankers, politicians, and corporate executives are in the lowest. But it’s unclear whether
Taleb’s categorization always holds. For example, he puts
politicians in the lowest category, meaning he believes they
suffer no consequences for their risky actions. But to the
extent that politicians are held accountable to their constituents via election cycles and the media, one must wonder
whether Taleb’s categorization generalizes too much and
ignores important nuances.
While Taleb’s ideas are attractive in some respects,
Taleb himself is less appealing in these pages. He makes
ad hominem attacks on individuals, including many economists, “tie-wearing abhorrent” bankers, and the “fragilista
journalist” Thomas Friedman, who, Taleb claims, makes him
“nauseous” upon eye contact. In many instances, Taleb is
outright condescending. He writes that traders are “overeducated if they could spell their street address correctly,” and
wonders whether “people without practical sense usually
manage to get the energy and interest to acquire a Ph.D. in
the fictional world of equation economics.” At the same
time, he does not refrain from self-inflicted praise: “I just
get disturbed when I see wrong and do nothing about it,”
Taleb writes at one point. “It is biological.” While his
irreverent tone offers the occasional reading break and
has become a trademark style of Taleb’s writings, it mostly
detracts from his argument.
Taleb presents an interesting idea that will inspire
many readers to rethink the role of risk in their lives.
Though he overstretches his argument by several hundred
pages — violating his own “less is more” rule — his book is
ultimately worth the read, especially for those who can
overlook his grandiose and self-satisfied style.
EF

DISTRICTDIGEST

Economic Trends Across the Region

State Corporate Income Tax and Multistate Corporations
BY S A N T I AG O P I N TO

The relative importance of state corporate income
tax (SCIT) revenue has been declining over the last
few decades. State corporate taxes as a percentage
of total state tax revenues declined from 6.6 percent in
1992 to 5.3 percent in 2011. As a percentage of before-tax
corporate profits, state corporate taxes declined from
4.4 percent to 2.2 percent during the same period. (See
chart below.) As expected, these indicators show a cyclical
behavior, but the underlying trend is downward. These
trends have been taking place even as corporate profits as
a share of national GDP have been rising.
The SCIT plays different roles in different states of the
Fifth District. In Maryland, Virginia, and South Carolina,
the share of the SCIT in state tax revenue is
below the average for the country as a whole; in North
Carolina and the District of Columbia, it is about average;
and in West Virginia, it is generally above the average. The
long-run behavior also differs by state. The trend has been
toward a reduced role for the SCIT in North Carolina and
South Carolina, an increased role in Maryland, and an essentially constant one in Virginia and D.C. West Virginia also
exhibits a downward trend after controlling for the
exceptionally high values achieved during the period of
2005-2009. (See the second chart below.)
Why has the role of the SCIT been declining nationally
and in most Fifth District states? To understand the answers
to this question, it may be helpful to have some background
on this type of tax.

[Chart: State Corporate Income Tax Revenue and Corporate Profits, 1992-2011. Series: SCIT/Corporate Profits, SCIT/State Tax Revenue, and Corporate Profits/GDP, in percent. Sources: Bureau of Economic Analysis and U.S. Census Bureau]

Understanding the SCIT
Most large corporations consist of a group of related businesses. Typically, there is a parent corporation and a number
of subsidiaries owned by the parent. When these corporations operate in multiple states, measuring income earned
within each region raises a difficult conceptual problem:
How should states determine the appropriate amount of tax
to impose on the incomes of such businesses?
Federal court decisions have limited the power of states
to tax out-of-state corporations. A corporation is subject to
income tax in the state in which it is organized and in every
state where it conducts activities that are sufficient to create
what is called a “nexus.” Once nexus is established, the state
has the right to impose a tax obligation on the corporation.
The determination of nexus for a multistate corporation
can be a major challenge and is a highly contentious issue in
state taxation. The “physical presence” standard dictates
that a multistate corporation has nexus in the state where it
produces — that is, the state where the company has offices
and production facilities, in addition to local employees.
More recently, however, states have shifted toward the adoption of the “economic presence” standard in determining
whether in-state activities create nexus for tax purposes.
According to this principle, a company also has presence in
the states where it sells its products. The economic presence
standard has become the subject of widespread litigation in
state courts and the rulings on this matter have been far
from uniform.
Reporting methods for multicorporate groups vary
across states. While some states require corporations to file
separate or consolidated tax returns, a growing number of
states are moving toward combined (or unitary) filing. Under
the separate entity method, a company with nexus in the
state must file its own separate return, ignoring the existence of the corporate group. Each entity is treated as a
separate taxpayer. In principle, a company cannot offset
the profits of one subsidiary with the losses of another. Since
intercompany transactions (that is, transactions between
subsidiaries or sister corporations) are treated similarly to
transactions between the corporation and third parties for
tax purposes, the company has some control over its taxable
income. Typically, a separate entity state accepts the
company’s statement of its taxable profits derived from its
own books, but states have the right to make adjustments if
they believe intragroup sales are deliberately used to avoid
taxes (transfer pricing). In a few states, including Maryland,
separate reporting is the only filing option.
Some states allow corporations that belong to an affiliated business group to file one single consolidated tax return (consolidated filing), rather than having each separate entity file a separate return. Generally, companies can only choose this option if they satisfy certain conditions. For instance, the parent company must own at least 80 percent of each affiliate, and only the affiliated entities that have nexus with the state can be included in the consolidated return.

Combined or "unitary" filing focuses on the "unitary" economic unit and treats related corporations as one entity. The profits of the parent company and subsidiaries are added together, ignoring geographic boundaries, and the state then taxes a share of the combined income. Combined filing requires the determination of whether a group of corporations can be legally considered a unitary business. This area has also been highly contentious due to the lack of consistency across states.

Supporters of consolidated and combined reporting claim that these options alleviate some of the distortions created by separate-entity filing and reduce tax-avoidance opportunities. Opponents, however, claim that by aggregating the income of all the businesses with different economic profitability regardless of their geographic location, consolidated and combined reporting may not accurately attribute the corporation's income to the correct state.

Irrespective of the filing requirements, states allow a corporation that operates in multiple states to apportion its business income among the nexus states using a prescribed formula. This method, known as formula apportionment, assumes that the proportion of a multistate corporation's income earned in a given state is a weighted average of the firm's sales, property, and payroll shares in that state. Each state has the ability to choose the weights attached to these factors. The formula apportionment method is popular in other countries as well, such as Canada, mostly because it is relatively easy to administer.
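In stylized terms, formula apportionment computes the share of a corporation's total income that is taxable in a given state i from the firm's in-state shares of sales, property, and payroll. The notation and the weights shown are illustrative conventions, not any particular state's statute:

    share_i = w_S (S_i / S) + w_P (P_i / P) + w_L (L_i / L),  where w_S + w_P + w_L = 1
    taxable income in state i = share_i × total business income

Here S_i, P_i, and L_i are the firm's sales, property, and payroll in state i, and S, P, and L are its nationwide totals. The Multistate Tax Compact's equal weighting sets w_S = w_P = w_L = 1/3; a double-weighted sales formula, discussed below, sets w_S = 1/2 and w_P = w_L = 1/4.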
For tax purposes, a sale must be assigned to one single state. For tangible property, most states follow the "destination rule" principle, which imputes sales to the state where they take place. If the destination states lack the authority to tax the seller (either because there is no nexus or the formula does not weigh the sales portion), sales assigned to those states are not included in the state of origin's sales factor. When this occurs, a portion of that company's profits remains untaxed. The untaxed profit is referred to as "nowhere income." To address this issue, several states have implemented a "throwback rule," which uses an alternative approach to calculate the sales share of the apportionment formula. Suppose as before that a firm sells part of its production in a destination state and these sales are not subject to taxation in that location. If the company's host state has a throwback rule, then the sales in the destination state are added or "thrown back" to the sales share in the formula apportionment of the host, increasing the taxable income in the host state.

About half of the states with corporate income tax have legislated throwback rules. New Jersey and West Virginia use a variant of this rule, but with similar implications, known as the "throwout" rule. Instead of assigning all sales to the states in which the company operates, the throwout rule simply excludes from aggregate sales those sales that are not assigned to any state.

The economic rationale of the throwback rule is questionable, though. From a practical standpoint, it is unclear why the design of the state tax system should depend on whether other states appropriately tax business activities. Additionally, differences in the implementation of the throwback rule can create economic distortions and tax avoidance opportunities. To the extent that some states do not impose throwback rules, companies can reduce their state taxable income by locating their property and payroll in states with no throwback rule and then selling in states where the company does not have nexus.
As of December 2012, all states in the Fifth District had adopted formula apportionment methods that weigh the sales share heavily. Concerning filing options, some states still permit separate filing. However, at the present time these states are planning on shifting toward combined reporting. Finally, most states in the Fifth District do not have a throwback (or throwout) provision, with the exception of West Virginia. (See table below.) The case of North Carolina is atypical in the sense that there is no statutory throwback rule. Still, corporations with nexus in North Carolina that sell their products in states where they are not required to file a tax return must add those sales to the sales taking place in North Carolina; essentially, this provision works as a throwback rule for that specific situation.

[Chart: State Corporate Income Tax Revenue as a Percentage of State Tax Revenue, 1992-2011, for NC, SC, VA, WV, MD, DC, and the U.S. average. Source: U.S. Census Bureau]

State Corporate Income Taxation in the Fifth District (as of December 2012)

NC: Tax rate 6.90%. Apportionment formula: double-weight sales. Filing: separate (consolidated is allowed under certain conditions). Throwback rule: no statutory throwback rule, but a similar procedure is used for sales in states in which the corporation is not required to file a tax return.

SC: Tax rate 5.00%. Apportionment formula: single-sales factor. Filing: combined. Throwback rule: none.

VA: Tax rate 6.00%. Apportionment formula: double-weight sales. Filing: separate. Throwback rule: none. (Virginia also has a gross receipts tax in addition to the state corporate income tax.)

WV: Tax rate 7.75%. Apportionment formula: double-weight sales. Filing: combined. Throwback rule: throwout.

MD: Tax rate 8.25%. Apportionment formula: double-weight sales (single-sales factor for manufacturing). Filing: separate. Throwback rule: none.

DC: Tax rate 9.975%. Apportionment formula: double-weight sales. Filing: combined. Throwback rule: throwback.

SOURCES: Tax Foundation; state corporate income tax forms
Explaining the Drop in SCIT Revenue
The decline in the SCIT revenue is generally attributed to a variety of factors, including the use of the SCIT for economic development purposes, the development of more aggressive state tax planning methods, and changes in state and federal tax laws. Recent research lends some support to these explanations.

The widespread use of the SCIT as an instrument of economic development to attract businesses and jobs has negatively affected state tax revenue in the short run. Concessions offered through the SCIT system differ by state and include property tax reductions and investment and employment tax credits. Even though these are common practices, there is no conclusive evidence of their effectiveness in the long run. The tax competition literature offers one possible explanation for this outcome. John Douglas Wilson, an economics professor at Michigan State University, summarized the findings of this literature in an article published in 1999 in the National Tax Journal. The main argument is that state competition for businesses triggers a process that leads to a "race to the bottom," where all states end up imposing inefficiently low tax rates.

A more recent strand of literature focuses on other ways of attracting businesses, such as the manipulation of the apportionment formula. In 1967, the Multistate Tax Compact established that the three factors considered in the apportionment formula (property, sales, and payroll) are to be weighted equally. In spite of this recommendation, most states have been systematically deviating toward a formula that weighs the sales portion more heavily. Currently, most states use a formula that assigns a double weight to the sales portion. As more states pass such legislation, other states may feel compelled to do the same, initiating a "race to the bottom" in which all states end up imposing the same (lower) tax liability. Supporting this view, an empirical research study published in 2009 by economist Sanjay Gupta, also of Michigan State University, and several of his colleagues found that states with a double-weighted sales factor experience lower SCIT revenues than states with an equally weighted sales factor.

An additional issue with the formula apportionment method that may affect the SCIT revenue arises when states are allowed to choose their own formulas. If all states adopt the same formula, then exactly 100 percent of a corporation's income will be apportioned across states. Nonuniformity, however, can result in more or less than 100 percent of a corporation's income being subject to state income tax.

Two related studies — one published in 2005 by William Fox, an economics professor, and LeAnn Luna, an accounting professor, both at the University of Tennessee, and the other published in 2010 by Luna and Matthew Murray, an economics professor at the University of Tennessee — contended that corporations have recently been adopting more aggressive tax avoidance measures and engaging in what is known as "state tax planning." The decline in SCIT revenue as a proportion of corporate profits may be indicative of such behavior. Most multistate income tax planning involves various forms of income-shifting among state jurisdictions through intercompany transactions or relocation of production processes to avoid nexus in states with higher taxes. To a large extent, this kind of behavior is encouraged by separate-entity reporting requirements.

State tax planning also includes other, more sophisticated strategies. For instance, companies react to state policies by choosing legal arrangements that would reduce the corporation's tax exposure. The recent proliferation of S-corporations, partnerships, and LLCs is consistent with such practices. Unlike traditional corporations, these organizations are not taxed as separate business entities. Instead, profits and losses are "passed through" the business to each member, who eventually reports them on his or her own personal income tax return. A widespread shift toward legal arrangements of these types is expected to affect the SCIT base negatively.

Another common practice has been establishing holding companies in states with no corporate income tax. This strategy allows corporations to separate the revenues generated by their physical activities from the revenues obtained from intangible property (trademarks, trade names, or other intellectual property). Specifically, the parent company incorporates a wholly owned subsidiary as an "intangible holding company" in a tax-favored state. Then, the holding company enters into licensing arrangements under which the operating entity pays royalties to the holding company for the use of intangible assets. The operating entity deducts the royalty payments from its taxable income in the states where it files, and the holding company pays no income tax on the royalty income.
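A toy Python calculation of the royalty mechanism described above; the amounts and the tax rate are invented for illustration, and real structures are far more complex:

    # Hypothetical illustration of the intangible-holding-company strategy.
    operating_income = 100.0  # operating entity's in-state income ($ millions)
    royalty_paid = 30.0       # royalty paid to the affiliated holding company
    state_rate = 0.05         # illustrative state corporate income tax rate

    # Under separate-entity reporting, the royalty is deductible where the
    # operating entity files, while the holding company sits in a state
    # that does not tax the royalty income.
    tax_before = state_rate * operating_income                  # no shifting
    tax_after = state_rate * (operating_income - royalty_paid)  # with shifting
    print(f"tax before shifting: {tax_before:.2f}")             # 5.00
    print(f"tax after shifting:  {tax_after:.2f}")              # 3.50
    print(f"state revenue lost:  {tax_before - tax_after:.2f}") # 1.50

In this stylized example, shifting $30 million of income to the holding company costs the taxing state $1.5 million in SCIT revenue.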

Other changes in state laws, such as combined filing and the introduction of throwback rules, may have also contributed to the evolution of the SCIT. In recent years, states have been shifting toward combined reporting. As more and more states adopt this method, it becomes less profitable for companies to engage in tax-avoidance strategies. The net impact of combined reporting on SCIT revenue is ambiguous, however. If the subsidiaries operating out of state incur losses, then the amount of income apportioned to a unitary state could be reduced.

The empirical literature is inconclusive in this respect. While Gupta and his colleagues did not find any significant association between combined reporting and SCIT revenue, Fox and Luna found that combined reporting tends to increase SCIT revenue. Concerning the throwback provision, the conclusions from Gupta and his colleagues indicate that the implementation of this rule has a positive impact on SCIT revenue, but in a 2010 report commissioned by the National Conference of State Legislatures, Fox and Luna claimed that the revenue effects tend to decline as the SCIT rate rises.

Finally, changes in federal tax laws ultimately affect the SCIT revenue. The calculation of state taxable corporate income generally begins with the amount of federal taxable income reported on the corporation's federal tax form. States introduce certain adjustments, but state taxable income mostly conforms to the federal tax base. As a consequence, any amendment to federal tax rules (for example, the enactment of more accelerated depreciation methods) would have an effect on state tax collections as well.

During the period 1992-2011, federal corporate income tax revenue decreased from 9 percent to less than 8 percent as a percentage of total federal tax revenue, and from approximately 24 percent to 17 percent as a percentage of pretax corporate profits. (See chart.) Such behavior does not seem to fully explain the declining importance of the SCIT, however. Research on this topic published in 2005 by Gary Cornia, dean of the Marriott School of Management at Brigham Young University, and some colleagues suggested that changes taking place at the federal level do not appear to be the cause of the decrease in state corporate income taxes.

[Chart: Federal Corporate Income Tax, 1992-2011. Series: FCIT/Corporate Profits and FCIT/Federal Tax Revenue, in percent. Sources: Bureau of Economic Analysis, U.S. Census Bureau, Office of Management and Budget]

Implications for the Future
As the SCIT tax base erodes and the performance of the SCIT weakens, state governments are pushed to evaluate alternative ways of financing government expenditures. Pressed by financial needs and state balanced-budget requirements, however, states are unlikely to eliminate the
SCIT completely, at least in the short term. If they did so,
states would face the major challenge of compensating for
the loss in state revenue (in 2011, the SCIT accounted for
5.3 percent of the total state revenue), and there would be no
assurance that the new financing alternatives would be less
distortive. Moreover, from a political standpoint, the SCIT
is still attractive to the extent that it grants state authorities
the opportunity to export part of the tax burden to out-of-state residents.
In this context, states have chosen to introduce partial
modifications to their SCIT systems. As noted earlier,
the literature is ambiguous about the net impact of these
changes on SCIT revenue. For example, the recent shift
toward a double-weight sales factor tends to reduce tax
revenue, the implementation of throwback appears to raise
tax revenue, and combined reporting does not seem to affect
tax revenue. At the same time, it is not obvious that all states
would be willing to adopt the same tax policies. Clearly, a
formula that gives a relatively large weight to the sales factor
(and, consequently, a low weight to the property or capital
portion) essentially penalizes those companies with higher
in-state sales, and benefits those that operate and produce
within the state’s borders. In contrast, the throwback
rule, regardless of its validity, tends to penalize companies that sell more out of state. Depending on the states'
objectives, some policies may be more appropriate than
others.
In the Fifth District, states have already adopted a
double-weight sales factor formula, and with the exception
of West Virginia (and, to some extent, North Carolina),
states do not have a throwback provision. In light of current
research, the state governments in the region seeking to
increase SCIT revenue could do so by choosing a more
balanced apportionment formula and by adopting a throwback rule.
EF

STATE DATA, Q4:12

                                         DC         MD         NC         SC         VA         WV
Nonfarm Employment (000s)             734.3    2,586.3    4,022.2    1,872.4    3,744.0      766.9
  Q/Q Percent Change                    0.6        0.5        0.9        0.9        0.5        0.3
  Y/Y Percent Change                    0.7        1.3        2.2        1.9        1.1        0.5

Manufacturing Employment (000s)         0.9      105.5      441.8      221.0      232.4       49.1
  Q/Q Percent Change                   -6.9       -2.9        0.4        0.6       -0.2        0.9
  Y/Y Percent Change                  -10.0       -5.1        1.9        1.5        0.7       -0.9

Professional/Business Services
Employment (000s)                     154.5      414.0      538.8      231.9      682.8       65.2
  Q/Q Percent Change                    1.2        0.9        0.8       -1.9        0.7        0.7
  Y/Y Percent Change                    1.7        2.2        3.3        0.3        1.8        1.0

Government Employment (000s)          241.7      505.1      715.5      350.7      715.1      154.8
  Q/Q Percent Change                    0.2        0.1        0.3        1.2        0.7        0.3
  Y/Y Percent Change                   -1.3        0.1        0.5        2.6        0.2        0.6

Civilian Labor Force (000s)           369.3    3,138.8        n/a    2,168.1    4,216.8      807.8
  Q/Q Percent Change                    1.4        0.5        0.6        0.2        0.3        0.2
  Y/Y Percent Change                    5.7        1.2        1.7        0.0        0.0        0.5

Unemployment Rate (%)                   8.5        6.7        9.4        8.7        5.7        7.5
  Q3:12                                 8.9        6.9        9.6        9.1        5.9        7.6
  Q4:11                                 9.8        7.1       10.0       10.0        6.3        7.5

Real Personal Income ($Mil)        41,264.2  267,487.5  316,753.3  141,640.5  337,257.0   55,568.0
  Q/Q Percent Change                    1.3        1.4        1.5        1.3        1.4        0.9
  Y/Y Percent Change                    2.6        2.8        3.8        3.4        2.7        1.8

Building Permits                      1,562      3,886     12,867      4,571      6,847        486
  Q/Q Percent Change                   20.0        4.4       12.5       -0.9        2.9        8.0
  Y/Y Percent Change                    1.9       25.0       61.3        7.9       60.3       15.4

House Price Index (1980=100)          598.8      408.1      302.2      305.2      398.0      214.5
  Q/Q Percent Change                    1.6        0.3        0.4        0.0        0.4        0.2
  Y/Y Percent Change                    4.5       -0.8       -0.8       -1.3       -0.1        0.1


[Charts, first quarter 2002 through fourth quarter 2012, change from prior year: Nonfarm Employment, Unemployment Rate, Real Personal Income, Building Permits, and House Prices, each for the Fifth District and the United States; Unemployment Rate and Nonfarm Employment for the Washington, Baltimore, and Charlotte metropolitan areas; and the FRB-Richmond Manufacturing Composite Index and Services Revenues Index.]
NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building permits: U.S. Census Bureau, http://www.census.gov.
House prices: Federal Housing Finance Agency, http://www.fhfa.gov.
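As a rough illustration of the diffusion-index arithmetic described in note 1, the sketch below computes a composite from made-up survey responses. The response counts and the component weights are assumptions for illustration, not the Bank's published figures.

```python
# A minimal sketch of a diffusion index: the percentage of firms reporting
# an increase minus the percentage reporting a decrease. All numbers here
# are hypothetical.

def diffusion(n_increase, n_decrease, n_total):
    """Diffusion index from raw response counts."""
    return 100.0 * (n_increase - n_decrease) / n_total

shipments  = diffusion(45, 30, 100)   # 15.0
new_orders = diffusion(40, 35, 100)   #  5.0
employment = diffusion(35, 30, 100)   #  5.0

# The composite is a weighted average of the three component indexes;
# these weights are placeholders, not the published weighting.
weights = (0.33, 0.40, 0.27)
composite = sum(w * x for w, x in zip(weights, (shipments, new_orders, employment)))
print(f"manufacturing composite: {composite:.1f}")   # 8.3
```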


METROPOLITAN AREA DATA, Q4:12

                             Washington, DC   Baltimore, MD   Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)           2,498.2         1,341.2           104.4
  Q/Q Percent Change                    1.1             1.9             1.1
  Y/Y Percent Change                    1.2             2.2             2.1
Unemployment Rate (%)                   5.5             7.1             7.7
  Q3:12                                 5.5             7.2             7.8
  Q4:11                                 5.9             7.5             8.5
Building Permits                      6,397           1,714             252
  Q/Q Percent Change                   15.6            -1.6            26.6
  Y/Y Percent Change                   43.2            12.8            93.8

                              Asheville, NC   Charlotte, NC      Durham, NC
Nonfarm Employment (000s)             174.4           868.6           286.6
  Q/Q Percent Change                    2.3             2.8             1.6
  Y/Y Percent Change                    2.4             3.3             1.9
Unemployment Rate (%)                   7.6             9.4             7.2
  Q3:12                                 7.7             9.5             7.3
  Q4:11                                 8.2            10.3             7.8
Building Permits                        265           3,110             519
  Q/Q Percent Change                  -31.0            -1.3           -55.7
  Y/Y Percent Change                   21.0           120.1           -19.0

                  Greensboro-High Point, NC     Raleigh, NC  Wilmington, NC
Nonfarm Employment (000s)             345.8           528.4           139.7
  Q/Q Percent Change                    1.7             0.8             0.9
  Y/Y Percent Change                    0.5             2.8             3.3
Unemployment Rate (%)                   9.9             7.6             9.6
  Q3:12                                10.0             7.7             9.7
  Q4:11                                10.4             8.4            10.4
Building Permits                        396           4,833             674
  Q/Q Percent Change                   10.0            87.3           -19.7
  Y/Y Percent Change                  -35.2           183.6            62.0

                          Winston-Salem, NC  Charleston, SC    Columbia, SC
Nonfarm Employment (000s)             208.2           306.3           357.7
  Q/Q Percent Change                    2.1            -0.2             1.6
  Y/Y Percent Change                    1.6             2.3             1.3
Unemployment Rate (%)                   8.9             7.0             7.6
  Q3:12                                 8.9             7.3             8.0
  Q4:11                                 9.4             8.2             8.6
Building Permits                        159           1,042             877
  Q/Q Percent Change                   -8.6            -6.5            -2.0
  Y/Y Percent Change                  -64.3           -15.9            41.9

                             Greenville, SC    Richmond, VA     Roanoke, VA
Nonfarm Employment (000s)             311.2           629.3           160.3
  Q/Q Percent Change                    1.8             0.3             0.9
  Y/Y Percent Change                    1.5             1.5             1.6
Unemployment Rate (%)                   7.1             6.1             5.9
  Q3:12                                 7.5             6.3             6.0
  Q4:11                                 8.1             7.0             6.6
Building Permits                        678           1,245             105
  Q/Q Percent Change                   15.9             0.1            11.7
  Y/Y Percent Change                   65.8            86.1            16.7

                 Virginia Beach-Norfolk, VA  Charleston, WV  Huntington, WV
Nonfarm Employment (000s)             751.1           147.8           115.3
  Q/Q Percent Change                    0.3             0.0             2.4
  Y/Y Percent Change                    1.4            -0.8             0.0
Unemployment Rate (%)                   6.3             7.1             7.3
  Q3:12                                 6.5             7.1             7.3
  Q4:11                                 7.1             6.9             7.9
Building Permits                      1,120              38               8
  Q/Q Percent Change                  -24.1            -2.6            33.3
  Y/Y Percent Change                   13.1           111.1           -68.0

For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org


OPINION

Watching Labor Force Participation
BY JOHN A. WEINBERG

A central concept in evaluating economic performance is how fully an economy is using its resources. Of particular interest in this regard is the utilization of labor resources, both because labor accounts for the bulk of production and because labor income is the key to the broad well-being of households. Accordingly, the unemployment rate, as a measure of unutilized labor resources, has always received considerable attention from policymakers, politicians, and the general public.

Like most of our efforts to measure the economy, however, the unemployment rate is an imperfect indicator of how effectively our labor markets are working. First, the nature of labor markets — the fact that employment is usually the result of a process by which workers and employers search for a match between workers' skills and employers' needs — means that there is always some amount of unemployment. And the amount of unemployment arising from that process may vary over time, depending on shifts in the supply and demand for different types of skills. The extent to which persistently high unemployment in the wake of the Great Recession is the result of increased difficulty in finding good matches — many refer to this as an increase in "structural unemployment" — has been the subject of considerable debate.

A second aspect of the unemployment rate that makes it hard to interpret as a measure of labor market performance lies in its definition. We define the unemployment rate as the fraction of the labor force that is not employed. The labor force, in turn, is defined as all employed people plus those in the working-age population who do not have jobs but are seeking employment. The unemployment rate is silent on people who, for whatever reason, are neither working nor searching.
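A minimal sketch of these definitions, using hypothetical round numbers (in millions), shows what each measure does and does not capture:

```python
# Hypothetical working-age population, in millions.
employed = 144.0
unemployed = 12.0           # jobless and actively searching for work
not_in_labor_force = 89.0   # neither working nor searching: students,
                            # retirees, caregivers, discouraged workers

labor_force = employed + unemployed
population = labor_force + not_in_labor_force

unemployment_rate = unemployed / labor_force    # silent on nonsearchers
participation_rate = labor_force / population   # counts them

print(f"unemployment rate:  {unemployment_rate:.1%}")    # 7.7%
print(f"participation rate: {participation_rate:.1%}")   # 63.7%
```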
Another measure of labor-market activity is the labor force participation rate — that is, the share of the population in the labor force. Unlike the unemployment rate, this measure does provide information about those who are not working or searching. Changes in this measure are usually dominated by demographics and other trends that play out over time periods longer than the typical business cycle. For instance, from the 1970s through the end of the 20th century, the participation rate rose from around 60 percent to 67.3 percent as women increasingly entered and remained in the workforce. But this shift had largely played itself out by 2000, and participation has been trending down since then, standing at 66 percent at the end of 2007. Then, during and since the recession, the pace of decline sped up, with participation currently at 63.3 percent, about where it was in 1980.

There are a number of reasons why potential workers might leave the labor force or remain outside of it. They may be pursuing education or caring for family members. They may be disabled or retired. Also, an individual is classified as not in the labor force if he or she is not searching for work out of a belief that it isn't possible to find a job. For example, the person may have tried to find work and given up. These so-called discouraged workers are in some ways more similar to the unemployed than they are to others who are not in the labor force: They represent potential labor supply that might be expected to quickly flow back into the labor force as conditions improve.

The behavior of labor force participation is central to how one interprets the evolving outlook for labor market performance. Suppose economic growth were to continue at the roughly 2 percent annual pace that it has averaged since the end of the recession. That pace of growth would likely continue to produce net employment gains similar to the post-recession average, around 180,000 jobs a month. If labor force participation remains low, or even continues its recent decline, such a pace of job growth might cause the unemployment rate to fall relatively quickly. If, on the other hand, participation picks up as discouraged workers flow back into the market, then for a given pace of job growth, the unemployment rate will fall more slowly or may even rise.
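A back-of-the-envelope sketch of this scenario, with hypothetical starting values and population growth ignored for simplicity, shows how the same pace of job growth maps to different unemployment paths:

```python
# Hypothetical starting values; population growth is ignored for simplicity.
population = 245.0e6      # working-age population
employed_now = 144.0e6    # currently employed
jobs_per_month = 180_000  # post-recession average pace cited above

def unemployment_rate(months, participation):
    """Unemployment rate after `months` of steady job gains."""
    labor_force = participation * population
    employed = employed_now + jobs_per_month * months
    return 1.0 - employed / labor_force

# After two years of gains at 180,000 jobs a month:
print(f"participation stays at 63.3%: {unemployment_rate(24, 0.633):.1%}")
print(f"participation rises to 64.5%: {unemployment_rate(24, 0.645):.1%}")
```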
Determining the sources of nonparticipation is difficult. Some recent work has suggested that a substantial share of the decline in participation is a product of the recession, which tends to be more consistent with the notion of people withdrawing from the labor force because of poor employment prospects. But a cyclical decline of the magnitude suggested by that work would be unusual in the historical record.

Regardless of whether the decline in labor force participation is rooted mainly in the recession or in structural changes in the economy, it is an important phenomenon, one unprecedented in our postwar experience. The behavior of labor force participation is likely to remain a challenging aspect of the economic data — for forecasters and policymakers alike — for some time.
EF

John A. Weinberg is senior vice president and director of research at the Federal Reserve Bank of Richmond.


NEXTISSUE
Digital Currency
In 2009, Bitcoin launched as a purely digital currency — it exists only as bits and bytes. Bitcoins aren't backed by any government or any physical commodity, yet they are accepted as payment at several dozen businesses in the United States and traded for dollars through online exchanges. Are digital currencies the next evolutionary step for money or something else entirely?

Jargon Alert
Analysis of "present value" is important — not just to winners of the lottery, but to anyone facing a major financial decision. This analysis helps people evaluate trade-offs between receiving income now versus later.

The Profession
Some say the review process at economics journals has become longer and more cumbersome, but economists have more ways to disseminate their research before the articles are formally published.

Is College Becoming a Riskier Investment?
Hundreds of studies agree: College is the most reliable way to increase your earnings. But what happens when the payoff becomes less certain? New research suggests that the returns to college might be smaller and more variable than they used to be — but that it's still a better investment than not going.

The High Point Initiative
For decades, High Point, N.C., was plagued by open-air drug markets and violent crime. In 2004, the police decided they needed a new strategy — so instead of putting drug dealers in prison, they offered some a second chance. Today, the drug markets are gone, and cities nationwide are adopting the High Point model.

Interview
John Cochrane of the University of Chicago
discusses financial regulation, monetary
policy following the Great Recession, and
the effect of fiscal stimulus on economic
growth.

Visit us online:
www.richmondfed.org
• To view each issue’s articles
and Web-exclusive content
• To view related Web links of
additional readings and
references
• To subscribe to our magazine
• To request an email alert of
our online issue postings

Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261

Change Service Requested

To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.

The Richmond Fed’s 2012 Annual Report features
an essay entitled, “Land of Opportunity? Economic
Mobility in the United States.”
The essay suggests that economic mobility has
decreased in recent years, particularly for people born
at the top and bottom of the income distribution.
Many factors contribute to the retention and
attainment of economic status. But for nearly
everyone, advancement depends on opportunities to
obtain cognitive and noncognitive skills, and those
opportunities are not as good for children born to
poor families. Initiatives that focus on early childhood education seem to yield high returns on investment, although their feasibility on a large scale is
unknown. Nonetheless, these efforts may have the
potential to help the United States achieve a more
inclusive prosperity.
In addition to the essay and the Bank’s financial statements, the Annual Report includes a summary of the
region’s economic performance in 2012 and an
overview of how the Bank’s regional information and
analysis contribute to monetary policy deliberations.

The Annual Report is available on the Bank’s website at
http://www.richmondfed.org/publications/research/annual_report/