
THIRD QUARTER 2017

FEDERAL RESERVE BANK OF RICHMOND

CYBERATTACKS and the
DIGITAL DILEMMA
Can economics shed light on why it’s so
difficult to defend against cyberthreats?

Subprime
Auto Loans

When is Inflation
Too Low?

Interview with
Douglas Irwin

VOLUME 22
NUMBER 3
THIRD QUARTER 2017

COVER STORY

8	Cyberattacks and the Digital Dilemma
	Recent high-profile hacks have renewed calls for improved security, but competing incentives pose a challenge

FEATURES

12	Subprime Securitization Hits the Car Lot
	Are fears of a "bubble" in auto lending overstated?

16	The Resurgence of Universal Basic Income
	Concerns about the effects of automation have brought an old policy proposal back into the limelight

DEPARTMENTS

1	Message from the Interim President/The Federal Reserve and Cybersecurity
2	Upfront/Regional News at a Glance
3	Federal Reserve/Waiting for Inflation
6	Jargon Alert/Yield Curve
7	Research Spotlight/Affirmative Action, On and Off
20	Interview/Douglas Irwin
26	Economic History/Soul City: Doing Development Differently
30	Policy Update/Unwinding the Balance Sheet
31	Book Review/#Republic: Divided Democracy in the Age of Social Media
32	District Digest/Preparing Unemployment Insurance for a Downturn: The Carolinas
40	Opinion/Do Low Interest Rates Punish Savers?

Econ Focus is the economics magazine of the Federal Reserve Bank of Richmond. It covers economic issues affecting the Fifth Federal Reserve District and the nation and is published on a quarterly basis by the Bank's Research Department. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia.

DIRECTOR OF RESEARCH
Kartik Athreya

EDITORIAL ADVISER
Aaron Steelman

EDITOR
Renee Haltom

SENIOR EDITOR
David A. Price

MANAGING EDITOR/DESIGN LEAD
Kathy Constant

STAFF WRITERS
Helen Fessenden
Jessie Romero
Tim Sablik

EDITORIAL ASSOCIATE
Lisa Kenney

CONTRIBUTORS
Charles Gerena
Richard Kaglic
Michael Stanley

DESIGN
Janin/Cliff Design, Inc.

Published quarterly by the Federal Reserve Bank of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org
www.twitter.com/RichFedResearch

Subscriptions and additional copies: Available free of charge through our website at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565.

Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Econ Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Econ Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 2327-0241 (Print)
ISSN 2327-025X (Online)

MESSAGE FROM THE INTERIM PRESIDENT

The Federal Reserve and Cybersecurity

What can we do about cybersecurity? That's the
central question of this issue’s cover story,
“Cyberattacks and the Digital Dilemma,”
which explores the incentives businesses, governments,
and consumers have to invest in and monitor the security
of their systems, applications, data, and online activities.
It’s a question of vital importance. In 2016, the FBI
received nearly 300,000 complaints from consumers about
cybercrime, at a cost to the victims of more than $1.3 billion.
That only includes people who reported the crime; the security-software company Symantec puts the total financial
cost to U.S. consumers at more than $20 billion. For U.S.
businesses, a data breach currently costs about $7.4 million
on average, according to a research study sponsored by IBM.
And these numbers pale in comparison to the potential
harm if hackers were able to infiltrate systems within our
country’s critical infrastructure sectors, including the financial services sector.
At the Fed, we are highly aware of cybersecurity risks
and the importance of maintaining trust and confidence
in the banking system. That’s why protecting the integrity of our data, systems, and applications underpins
everything we do, from sending an email to transferring
trillions of dollars each day between financial institutions.
Led by National IT’s Office of the Chief Information
Security Officer, and based here in the Fifth District, the
Federal Reserve Banks execute a comprehensive cybersecurity strategy anchored on three goals: defending Federal
Reserve System networks, applications, and data; developing the cybersecurity workforce skills necessary for
tackling tomorrow’s threats; and deploying threat-driven,
risk-based processes. Simply put, our goal is to reduce
cybercriminals’ financial motivation by making it too
costly to attack us.
The Fed’s physical and virtual footprints are quite
large, comprising 12 regional Reserve Banks, an additional
24 branch offices, and more than 22,000 employees —
not to mention the thousands of financial institutions
we supervise and provide payment services for. Among
other protections, we’ve deployed layers of sophisticated
security technologies at our internal and external network
entry and exit points, as well as protections at System endpoints. These are coupled with multiple layers of protection at the application level, including the most up-to-date
encryption technologies, for the core data themselves.
Even the best programs and technologies aren’t enough
if our employees don’t play an active and informed role.
That’s why we’re also committed to building our “human
firewall” as a core component of our defense strategy.
Every Fed employee undergoes extensive annual training
to stay up to date on our cybersecurity and data privacy

requirements. We also make
sure our employees have the
knowledge they need to spot
potential scams or “phishing”
attempts — emails that try to
trick the recipient into revealing personal data.
We’re confident that our
technologies, our processes,
and our people help to form a
strong layered defense against
hackers and other cybercriminals. As strong as that defense
is, however, we are well aware of the dangers of complacency. We’re always on the lookout for the next emerging
threat and adjusting our defenses accordingly.
This issue of the magazine also addresses a topic of
great interest to policymakers recently: the persistence
of inflation below the Fed’s long-run target of 2 percent.
Historically, lower unemployment rates have been correlated with higher inflation; given October’s unemployment rate of 4.1 percent, one might conclude the current
stance of monetary policy is too accommodative. But it’s
possible that relatively low inflation is a temporary circumstance. For example, some unusual events have lowered
specific prices substantially, which has an effect on the
overall price level. Since those unusual events are only
transitory, inflation is likely to increase in coming months.
Overall, the Federal Open Market Committee expects
inflation will rise and then stabilize at around 2 percent
over the next few years, and that additional increases in
the federal funds rate will be forthcoming.
That said, we don’t have a perfect understanding of the
current behavior of inflation. Some economists have proposed that the relationship between unemployment and
inflation has weakened over time, meaning it takes bigger
swings in unemployment to trigger changes in inflation.
Demographic changes could also be a headwind for wage
growth and inflation.
In addition to the articles on cyberattacks and inflation, I hope you will enjoy discussions of a universal basic
income, the subprime auto lending market, and an interview with trade economist Douglas Irwin. Thank you for
reading.
EF

MARK L. MULLINIX
INTERIM PRESIDENT AND CHIEF OPERATING OFFICER
FEDERAL RESERVE BANK OF RICHMOND

ECON FOCUS | THIRD QUARTER | 2017

UPFRONT

Regional News at a Glance

BY LISA KENNEY

MARYLAND — In late September, Baltimore’s Sparrows Point facility held
a groundbreaking ceremony to celebrate its first tenant. FedEx opened a
$58 million, 307,000-square-foot distribution center in the historic former
steel mill site. Processing up to 15,000 packages per hour, the FedEx center is
mostly automated, but it does bring more than 400 new jobs to the area. Other
tenants at Sparrows Point include apparel manufacturer Under Armour, which
is building a distribution center on the site, and car importer Pasha Automotive,
which has leased space to store imported vehicles.
NORTH CAROLINA — Industrial manufacturer NN, Inc. announced in
September that it will move its global headquarters to Charlotte in early 2018. The
$10 million headquarters will be the base for 200 workers, 175 of whom NN has
promised to hire locally. To lure the company, the city tentatively agreed to provide
more than $280,000 in property tax rebates over five years, and the state approved a
$3.7 million grant as well as more than $350,000 in community college training funds.
SOUTH CAROLINA — The late August solar eclipse was the state’s
biggest single tourist event ever, according to the S.C. Department of Parks,
Recreation and Tourism. South Carolina was the last place in the United States
to witness the “totality” of the eclipse. The department’s report found that
1.6 million people traveled to or within the state to watch the eclipse and spent
about $269 million. The most popular viewing locations were parks, mountain
sites, and the coast.
VIRGINIA — Twelve companies have been selected to participate in the
Virginia Economic Gardening Pilot Program, which is administered by the
Virginia Economic Development Partnership and is targeted at helping existing
Virginia businesses grow. The program focuses on second-stage companies,
which are young companies that have transitioned beyond being startups in terms
of revenue or employment. The program lasts for six to eight weeks and helps
businesses identify new markets and industry trends, refine business models, and
raise their online visibility. The participants’ revenue and employment growth
will then be tracked for 36 months in order to assess the long-term effectiveness
of the program.
WASHINGTON, D.C. — In a September report, the D.C. auditor found that
the district may be forgoing millions of dollars of tax revenue by not properly
regulating vacant properties. In D.C., vacant or blighted properties are taxed at
rates five to 12 times higher than properties in good repair. The auditor found
that the Department of Consumer and Regulatory Affairs improperly granted
exemptions and did not follow legal requirements, among other issues, leading to
an inaccurate count of vacant properties.
WEST VIRGINIA — Toyota announced in September that its plant in Buffalo
will become the first in the United States to make transaxles for hybrid cars.
The $115 million project won’t create new jobs, but it is expected to provide job
security for the 1,600 current employees, who will receive additional training.
Production of the transaxles is scheduled to begin in 2020.


FEDERAL RESERVE

Waiting for Inflation
When is inflation too low?
BY HELEN FESSENDEN

In the past several years, economists and policymakers alike have been increasingly absorbed by an unexpected puzzle: the stubbornness of low inflation. This might seem like a benign conundrum, given that the success of a central bank has historically been defined by its control of inflation. Indeed, price stability is one pillar of the Fed's dual mandate (the other being maximum employment). And given how unpopular and economically destabilizing high inflation is, a safe assumption is that long stretches of low inflation would be welcome.

But today, the United States and many other advanced economies are seeking to lift inflation off of very subdued levels. A common concern is that if inflation is too low for too long, interest rates will remain near zero. Because interest rates can't fall far below zero, this means that policymakers might have little room to stimulate the economy by cutting rates in case they need to address a negative shock. Moreover, low interest rates could induce investors to chase higher yields in riskier assets, or take on excessive debt, ultimately driving up risk throughout the financial system.

The Fed announced an annualized 2 percent inflation target in January 2012, based in part on the long-run average prior to the Great Recession. It's also a figure shared by counterparts such as the European Central Bank and the Bank of England. But what was once seen as a reasonable objective has, for some, become a challenge. While the United States has posted higher inflation, and stronger growth rates, than many other major economies, inflation has remained below 2 percent over a sustained period by most gauges. Among these is the Fed's preferred metric, "core" personal consumption expenditures (PCE) inflation, which excludes the volatile food and energy components. Since 2012, annual core PCE inflation has averaged around 1.6 percent and recently slipped to 1.3 percent. (See chart.)

[Chart: Inflation Is Lying Low — PCE inflation including and excluding volatile sectors, percent change from year ago, 2003-2017, showing headline PCE and core PCE. NOTE: Core personal consumption expenditures inflation excludes food and energy; headline PCE includes them. (Both are chain-type price indices with 2009 as their base year.) SOURCE: U.S. Bureau of Economic Analysis, National Bureau of Economic Research. Shaded area denotes recession.]

A debate is now unfolding over whether the long duration of low inflation — despite apparently loose monetary policy — requires fresh thinking by the Fed. This question has immediate policy implications in terms of how and when the Fed should act in continuing to tighten monetary policy. But it also raises the broader question of just what it means to "meet" or "miss" inflation targets. For example, does it matter if inflation remains modestly lower than the target? The Fed's inflation target is "symmetrical," but over what horizon should symmetrical fluctuations be expected to occur? And if the Fed considers inflation "too low" at some point, should it rethink its target or its tools?

A Question of Credibility

When the Federal Open Market Committee (FOMC) announced the 2 percent target in January 2012, it emphasized two objectives. One was that it would help "anchor," or firmly establish, long-run expectations that inflation would stay low and stable. The other was that it would let the Fed achieve more transparency and accountability in communicating monetary policy. With regard to anchoring, the undershooting of the target has caused some economists, and Fed critics more broadly, to ask whether the Fed can in fact remain credible if, in their view, it keeps missing the target — especially as it makes the case for higher interest rates.

On the FOMC, this concern has been most frequently expressed by Minneapolis Fed President Neel Kashkari, who contends that the Fed should be worried about missing the target — and if need be, hold off on tightening until inflation data are consistently moving higher. He sees the risk of holding off on further hikes (potentially leading to higher inflation) as more benign than tightening too soon (potentially hurting the recovery). While most labor market indicators have strengthened, he argues that there is still slack, most notably in the relatively low labor force participation rate for prime-age workers.

But many economists still share the view that this low average inflation doesn't constitute a true "miss." For


example, core PCE steadily rose from late 2015 to late 2016
to graze 2 percent. This group also notes the 2 percent inflation target is a long-run objective that smooths out price
volatility, whereas the recent inflation softness is likely temporary and driven by sector-specific price decreases — such
as in cell phone services, housing, and health care; these are
all examples of how a degree of volatility and uncertainty
is built into overall inflation measurements in the short to
medium term.
“This is a very important debate, but to call low inflation
a ‘puzzle’ at this point is overstated,” says Johns Hopkins
University economist Laurence Ball, who has argued for a
higher inflation target of 4 percent. “So much depends on
the time period in question and which measure you use.
The numbers bounce around a lot and there are large error
terms. If we’re seeing inflation at 1.6 percent instead of
2 percent, I’d call that normal statistical noise.”
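The readings being compared here are year-over-year percent changes in a price index, which is why small wiggles in the index translate directly into "noise" in measured inflation. A minimal sketch of the calculation, using hypothetical index levels rather than actual BEA data:

```python
def yoy_inflation(index_now, index_year_ago):
    """Year-over-year inflation, in percent, from price index levels."""
    return 100 * (index_now / index_year_ago - 1)

# Hypothetical chain-type index levels (2009 = 100):
print(round(yoy_inflation(111.3, 109.9), 1))  # 1.3
```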
To most on the FOMC, including Chair Janet Yellen,
these short-term fluctuations also don’t undermine the
view that long-run inflation will be moving back toward
2 percent in the next couple of years. Yellen has reasserted
this view in recent testimony and speeches, albeit with
some caveats.
“We continue to anticipate that inflation is likely to
stabilize around 2 percent over the next few years,” she
said in a Sept. 26 speech. “But our understanding of the
forces driving inflation is imperfect, and we recognize that
something more persistent may be responsible for the current undershooting of our longer-run objective.”
A Post-Recession Conundrum
Whatever the implications of low inflation may be, most
economists still agree that its persistence has been a surprise given other fundamentals. Since the recession, U.S.
growth has been steady, if slow, while unemployment has
fallen sharply. Most other labor-market indicators have
also tightened. In addition, monetary policy has been
highly stimulative since 2008 — benchmark interest rates
were near zero from 2008 to 2015, and the rate hikes ever
since have been incremental. The mystery is that this
stimulus, combined with the increase in labor utilization,
hasn’t been met by an uptick in inflation — the scenario
that most economists and the markets had expected.
Such consistently low inflation to date is also below
the Fed’s own inflation projections. Since the 2012 inflation-target announcement, the FOMC’s Summaries of
Economic Projections (SEP) — a quarterly report with
forecasts of key indicators — have regularly overestimated
future inflation as well as gross domestic product growth
and the committee’s expected trajectory of short-term
interest rate hikes (known as the “dot plot”), according to
a 2016 study by the Kansas City Fed. In essence, the Fed
projected a quicker return to strong growth and higher
inflation, which in turn would let the FOMC pursue “liftoff” — getting interest rates off the “zero lower bound”
— and eventually shrink its balance sheet holdings of
$4.5 trillion that expanded through its bond-buying campaign. But as the report also notes, the Fed was hardly alone
in assuming higher inflation and stronger growth — this
was also the consensus of private-sector projections.
One part of this surprise involves the relationship
known as the Phillips Curve, named after the British
economist A.W. Phillips. It states that when unemployment falls, inflation rises, one reason being that real wages
go up as available workers become scarcer. Higher wages
prompt employers to pass those costs on to consumers,
which causes prices to rise. When unemployment is high,
by contrast, employers have room to cut wages, which
eases inflationary pressure. Empirically, however, this
relationship has not been consistent over the decades. For
example, the correlation was stronger in the mid-to-late
1960s, whereas the current environment of falling unemployment amid low inflation is quite similar to the early
1960s and the late 1990s. Today, most economists agree
there is no tight, fixed correlation; rather, some argue
there are occasional circumstances when the correlation
is stronger, such as when the labor market is very tight.
Nonetheless, many economists and FOMC members
generally expected at the start of the recovery that inflation would rebound once the labor market healed. This
has not happened. Unemployment is now 4.1 percent,
down from the 2009 high of 10 percent, while inflation
has stayed quiescent. This apparent “flattening” of the
Phillips Curve has received much attention from economists. Among some tentative explanations is the rising
importance of long-term inflation expectations relative
to unemployment in determining actual inflation in the
short term; very low inflation expectations might keep
inflation muted even if unemployment is also falling.
Other economists point to the importance of understanding how different measurements of inflation, as well
as the type of workers who are unemployed, play a role
in shaping the curve. (Research by the Federal Reserve
Board of Governors suggests labor force slack did account
for a large part of the inflation “shortfall” below 2 percent
after the recession, but less so in recent years as more
transitory factors came into play.) Amid these competing
explanations, many economists today say that more study
is needed to understand the causal relationship between
inflation and unemployment — if there is one — and what
truly “anchors” inflation in the long run.
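The Phillips curve relationship and its apparent "flattening" can be illustrated with a minimal expectations-augmented version; the slope values and the 4.7 percent natural rate of unemployment below are hypothetical, chosen only to make the mechanics visible:

```python
def phillips_inflation(expected_inflation, unemployment, natural_rate, slope):
    """Textbook expectations-augmented Phillips curve:
    inflation = expected inflation - slope * (unemployment - natural rate)."""
    return expected_inflation - slope * (unemployment - natural_rate)

# With a steep curve (slope 0.5), 4.1 percent unemployment against a
# hypothetical 4.7 percent natural rate pushes inflation above expectations;
# with a flattened curve (slope 0.1), the same gap barely registers.
steep = phillips_inflation(2.0, 4.1, 4.7, slope=0.5)  # about 2.3
flat = phillips_inflation(2.0, 4.1, 4.7, slope=0.1)   # about 2.06
```

A flatter slope is one way to formalize why falling unemployment since 2009 has produced so little movement in inflation.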
The ‘New Normal’
Another reason why low inflation is unexpected lies on
the monetary policy side. Since the recession, the Fed and
most other major central banks have pursued exceptionally
accommodative policies by keeping benchmark rates near
zero. Inflation-adjusted (or “real”) interest rates have sometimes dropped below zero as a result of very low nominal
rates, while another key conceptual measure — the equilibrium or “natural” interest rate — has also dropped below
zero by most estimates.

The natural rate is important for understanding, among
other things, the degree of accommodation. It represents
the inflation-adjusted short-term interest rate when the
economy is at full employment. It’s not observed but is
estimated as a function of other variables such as productivity, savings, demographics, and expected long-term
growth. When it falls, it’s often interpreted as an indication that long-term growth prospects are also falling
— perhaps the result of an aging population or slowing
productivity. When short-term real interest rates fall
below the natural rate — which has generally been the case
during most of the recovery — monetary policy is considered accommodative. Most models see the estimated
natural rate as having slightly risen in the past couple years,
and this is one reason some economists argue that higher
nominal interest rates are now appropriate.
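The accommodation gauge described here — the short-term real rate sitting below the natural rate — can be written out directly. The numbers in the example are illustrative, not estimates:

```python
def real_rate(nominal_rate, expected_inflation):
    """Fisher approximation: real rate = nominal rate minus expected
    inflation (all in percent)."""
    return nominal_rate - expected_inflation

def is_accommodative(nominal_rate, expected_inflation, natural_rate):
    """Policy is considered accommodative when the short-term real rate
    sits below the (estimated) natural rate."""
    return real_rate(nominal_rate, expected_inflation) < natural_rate

# Illustrative: a 1.1 percent policy rate with 1.6 percent expected inflation
# implies a roughly -0.5 percent real rate, below a natural rate near zero.
print(is_accommodative(1.1, 1.6, 0.0))  # True
```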
Where is the natural interest rate today, and what
is its relationship to inflation? While estimates differ
somewhat, economists generally believe the natural rate
has fallen dramatically since the recession, both in the
United States and abroad. According to a well-known San
Francisco Fed model that incorporates data on inflation,
output, and nominal interest rates, the U.S. natural rate
averaged between 2 percent and 2.5 percent in the 2000s.
It then dropped from about 2 percent at the start of the
Great Recession to zero in late 2010 and has hovered
around zero since then, with a slight uptick in the last
few years. A Richmond Fed model produces a similar
trend with a slightly higher natural rate at present. And
while the natural rate is independent of inflation — and is
independent of monetary policy — a low natural rate may
push down inflation expectations by reinforcing the belief
among consumers and firms that monetary policy will be
constrained by the zero bound in the future.
In short, inflation has behaved in unexpected ways,
staying subdued despite growing labor market tightness
and a historic degree of accommodation. Some economists — pointing to the fact that low inflation, along with
a low natural rate, is actually a global phenomenon — say
this environment marks a “new normal.”
Raising Expectations
How much are inflation expectations changing in the
“new normal”? One challenge is that many different gauges
can come into play. For example, survey-based measures
that poll individuals or firms are more stable and tend
to give higher readings, while measures that are drawn
from financial market participants tend to be lower, and
some have shown a recent decline, according to recent
San Francisco Fed research. Some economists are pointing to these different trends to ask whether inflation
expectations, in the aggregate, are falling. In some recent
speeches, Yellen has suggested that when interest rates
are close to the zero lower bound, the management of
inflation expectations becomes even more important than
usual in controlling inflation. One tool she pointed to was

the Fed’s practice of “forward guidance,” which involves
making public statements that not only outline future
policy, but say which factors could change that policy. In
this “new normal” environment, she noted, such tools are
even more critical — and if a central bank seeks long-run
inflation at 2 percent, it has to understand how to move
long-run expectations upward as well. (See “When Talk
Isn’t Cheap,” Econ Focus, First Quarter 2013.)
“We need to know more about the manner in which
inflation expectations are formed and how monetary policy influences them,” said Yellen in a 2016 speech, noting
that both actual and expected inflation are ultimately tied
to the inflation target. But it’s not clear how this anchoring takes place, she added.
“Does a central bank have to keep actual inflation near
the target rate for many years before inflation expectations completely conform?” she asked. “Can policymakers
instead materially influence inflation expectations directly
and quickly by simply announcing their intention to pursue a particular inflation goal in the future?”
A Fresh Strategy?
As noted above, one important reason behind the Fed’s
2012 announcement of the 2 percent target was transparency: The Fed wanted to present a benchmark that would
convey to the public its view of how much inflation to
expect in the long run and anchor expectations accordingly.
And recent FOMC minutes indicate that almost all committee members still believe that this target is appropriate.
But there is an alternative approach, advocated by San
Francisco Fed President John Williams, known as "price-level targeting," which gives the Fed the flexibility to adjust
its inflation targets so it can “catch up” on future inflation
when it’s low — and vice versa. As he sees it, a more flexible
approach like a price-level target would shore up the Fed’s
credibility. This strategy would adjust the inflation target
to the trajectory of prices and deviations from the natural
unemployment rate rather than a fixed numeric target. In
essence, when inflation is unusually low, the Fed could set a
higher target; when inflation picks up, the Fed would adjust
the inflation target back downward. To Williams, price-level targeting can also work around the constraint set by
the zero lower bound because it signals to the public that
the Fed is willing to pursue higher inflation even when real
and nominal rates are around zero.
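Hypothetical numbers make the "catch up" logic concrete: under a price-level target, a shortfall in past inflation is not bygones but implies temporarily above-2-percent inflation to return prices to their reference path. This is a sketch of the arithmetic, not the Fed's actual procedure:

```python
def catch_up_inflation(path_level, current_level, horizon_years, trend=2.0):
    """Average annual inflation needed over `horizon_years` to return the
    price level to a reference path growing at `trend` percent per year."""
    required_level = path_level * (1 + trend / 100) ** horizon_years
    return 100 * ((required_level / current_level) ** (1 / horizon_years) - 1)

# If the price level sits 1 percent below its 2 percent path, catching up
# over two years implies inflation averaging roughly 2.5 percent, not 2.
print(round(catch_up_inflation(102, 101, 2), 1))  # 2.5
```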
“A price-level target provides greater clarity on where
prices will be 5, 10, and 30 years into the future, time horizons that people think about when buying a car, a home,
or planning for retirement,” Williams said in a presentation last May. “This should lend itself to greater transparency and clarity for the public — especially when interest
rates are constrained by the lower bound.”
Most other FOMC members, by contrast, have not
publicly embraced such an approach or any change to the
2 percent target. And Yellen has expressed skepticism
continued on page 19


JARGON ALERT
Yield Curve

A yield curve is a chart that plots the yield on a given
bond — that is, the interest rate offered at current market prices — at every available maturity.
Usually a yield curve displays the shortest maturities on
the left and the longest maturities on the right. It is a snapshot in time: You can replot a yield curve as fast as market
expectations change, meaning today’s yield curve may not
look the same as tomorrow’s.
The yield curve is a simple concept, but what it means
is much debated. The idea is to understand factors
influencing short- versus long-term interest rates, but
multiple factors affect both. Those factors include the
risk that the bondholder won’t be repaid (called the
bond’s credit risk); expected inflation (since inflation reduces the real
value of a bond, though some bonds
are indexed to inflation such that
their yields don’t include a premium
for inflation risk); a term premium
(compensation for tying up the
investor’s funds for the given time
period, though this can be a benefit if an investor wants to lock in a
given return); and the short-term
rates that are expected to prevail
over the life of the bond. Any time
expectations about those factors change, a yield curve is
liable to change shape. But it won’t necessarily be obvious which components have shifted to cause that change.
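Under the expectations view of these factors, a long-term yield is roughly the average of the short rates expected over the bond's life plus a term premium. A toy decomposition with hypothetical rates (ignoring credit and inflation risk for simplicity):

```python
def long_yield(expected_short_rates, term_premium):
    """Expectations-hypothesis sketch: an n-year yield is approximately the
    average expected short rate over n years plus a term premium (percent)."""
    return sum(expected_short_rates) / len(expected_short_rates) + term_premium

# Hypothetical: short rates expected at 1.25, 1.75, and 2.25 percent over
# three years, plus a 0.5 percent term premium.
print(long_yield([1.25, 1.75, 2.25], 0.5))  # 2.25
```

Rising expected short rates produce the familiar upward slope; expected cuts can flatten or invert it.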
An upward-sloping yield curve is the most common.
But if short-term interest rates are expected to fall, a
yield curve can flatten or even become downward sloping.
That’s often interpreted to suggest that market participants see a recession on the horizon, and with it, looser
monetary policy.
And yield curves have often been correct about recessions. In a 2009 study comparing yield curves to common
forecasting methods, San Francisco Fed President John
Williams and Executive Vice President and Senior Policy
Advisor Glenn Rudebusch found that the slope of the
yield curve does far better at predicting recessions three
and four quarters out than economists and professional
forecasters do using all the information and data available
to them. That said, the yield curve has falsely “predicted”
recessions, most notably in 1966 and in 1998, the Cleveland
Fed has pointed out. And the yield curve is generally flatter today than it used to be as inflation in many countries
has become lower and more stable.
Moreover, factors beyond the economic forecast also
affect longer-term yields. The Fed raised its policy rate no

fewer than 17 times between June 2004 and June 2006, but
long-term rates stayed steady and even declined. This was
especially unusual given the perceived health of the economy at the time. Then-Fed Chairman Alan Greenspan
famously called this bond market behavior a “conundrum,”
which remains partly a mystery.
For monetary policy, the yield curve can be both a predictive economic indicator and a measure of monetary
policy’s effects. Though the Fed directly controls only
very short-term interest rates, it does have tools for influencing longer-term rates, which are the rates that drive
much of economic activity — the rates on consumer
loans and major business investments, for example.
The Fed’s influence, or lack
thereof, over longer-term rates
became a focus after the Great
Recession. Once the Fed pushed
short-term policy rates as low as they
could go — the zero lower bound
— it sought to continue stimulating the economy by pushing down
longer-term interest rates through
three avenues. First, it signaled its
intent to keep monetary policy loose
for a long time to come, suppressing the expected path of short-term
rates. Second, it purchased large quantities of long-term
securities like treasuries and mortgage-backed securities
to push up their market price and push down their yields.
Third, the Fed exchanged short-term securities on its
balance sheet for longer-term ones to further “twist” the
yield curve.
Estimates differ on the extent to which these moves
successfully lowered long-term rates. The yield curve did
steepen dramatically when then-Chairman Ben Bernanke hinted at the end of
these extraordinary actions, which had the effect of tightening financial conditions. Some even wondered whether
this tightening would, ironically, force the Fed to delay its
return to conventional monetary policies. A focus today
has been on what will happen as the Fed returns to “normal” monetary policy in which short-term interest rates
are the key policy lever. (See “Time to Unwind,” page 30.)
Many observers don’t expect the same volatility this time
since the Fed’s next policy moves seem clearer to markets. (See “Unwinding the Fed’s Asset Purchases,” Econ
Focus, Second Quarter 2017.)
Still, as normalization continues, the Fed will likely
be watching the yield curve as one important measure of
how its policy changes are being interpreted by financial
markets.
EF

ILLUSTRATION: TIMOTHY COOK

BY R E N E E H A LT O M

RESEARCH SPOTLIGHT

Affirmative Action, On and Off

BY KODY CARMODY

“The Persistent Effect of Temporary Affirmative Action.” Conrad Miller. American Economic Journal: Applied Economics, July 2017, vol. 9, no. 3, pp. 152-190.

In 1965, President Lyndon B. Johnson signed Executive Order (EO) 11246, requiring that federal contractors take “affirmative action” to prevent discrimination in their hiring and employment practices; firms of a certain size and contract value were subject to more strict requirements, such as specifying goals and timetables for hiring of minorities. Conrad Miller, an economist at the University of California, Berkeley, has described EO 11246 as “arguably [one] of the most controversial labor market interventions in U.S. history.”

Current theoretical models of affirmative action in hiring tend to focus on statistical discrimination and human capital accumulation: If employers believe that members of a minority group are less productive or if employers have difficulty evaluating minority candidates, their hiring will be biased against members of that group. That minority group as a whole would then have less incentive to invest in human capital, such as education and training. If so, temporary affirmative action might have persistent positive effects on minority hiring by encouraging minority human capital accumulation.

Other economic models of affirmative action, often cited by skeptics, treat the policy as introducing inefficiency into labor markets by forcing employers to lower their hiring standards for minorities. Still another possibility is that EO 11246 has simply had little effect, possibly as a result of limited enforcement. An innovative 2016 article by Fidan Ana Kurtulus of the University of Massachusetts, Amherst, compared contractors to noncontractors, finding that the regulation had only small effects: a less than 0.1 percentage point increase in a firm’s black share of employees that disappeared as soon as four years after that firm became a federal contractor.

But in a recent article in the American Economic Journal: Applied Economics, Miller has argued that the more appropriate comparison is between firms that had ever been federal contractors and firms that had not; if affirmative action did have lasting effects on individual firms, then simply comparing contractors to noncontractors would obscure effects at firms that stopped contracting with the federal government but continued to increase their minority hiring.

Such an effect, both persistent and large, is what Miller found. In the five years after a firm became subject to EO 11246, its black share of employees increased by an average of 0.8 percentage point. To provide perspective, Miller noted that “a 0.8 to 1.3 percentage point increase in the black share of the U.S. workforce would eliminate the black-white jobless gap over this period.” Moreover, this effect persisted after a firm was no longer subject to the regulation; in the five years after a firm stopped being a federal contractor, its black share of employees grew, on average, by another 0.8 percentage point.

Why did the firms continue to increase their minority hiring when no longer required? One possibility is that they anticipated becoming federal contractors again; if there are adjustment costs to adopting affirmative action, an employer expecting a future contract might find it best to keep complying with the executive order. Firms might also think that compliance would increase their chances of winning contracts. Miller argued that these explanations were not supported: Whether a firm won subsequent contracts did not show any relationship with its black share of employees or its persistence in affirmative action compliance.

Thus, Miller argued, the persistent positive effect on hiring of black workers suggests that compliance with the executive order was profitable: Firms’ hiring of blacks might have been inefficiently low before the regulation, or there might just be multiple equilibria for the racial composition of new hires.

Miller argued that firms may respond to EO 11246 in a more complex manner than simply lowering their hiring standards for the affected groups. He put forward a model of “screening capital,” in which employers can respond to such programs by improving their recruiting and selection processes. These improvements include investments such as developing tests, employing and training personnel specialists, and building relationships with intermediaries such as employment agencies and schools.

Miller’s screening capital model makes two main predictions. First, the model predicts that screening capital investments will reduce all racial disparities in individual firms’ hiring rates; if employers tend to underestimate or have trouble screening certain racial groups, better screening should reduce that gap. Second, it predicts that affirmative action will increase the returns to screening capital (by improving the expected quality of minority hires).

Miller found that large employers, who tend to spend more time on screening and use more screening methods, also have a higher black share of workers among their employees. While data limitations prevented Miller from ruling out alternative mechanisms, he concluded that the evidence suggests that screening investments play a role in the persistent effects of affirmative action.
EF
ECON FOCUS | THIRD QUARTER | 2017

CYBERATTACKS and the
DIGITAL DILEMMA
Recent high-profile hacks
have renewed calls for
improved security, but competing
incentives pose a challenge
By Tim Sablik

Over the past year, Americans have been inundated
with news of one large-scale cyberattack after
another. The Democratic National Committee’s
email server was compromised during the 2016 election,
and the organization’s internal emails were posted publicly by WikiLeaks. An October 2016 attack temporarily
disrupted service to many of the most trafficked sites
on the Web, including Netflix, Amazon, and Twitter.
Ransomware — malicious code that locks a computer’s
files until users pay for a decryption key — infected business, government, and personal computers around the
globe in May and June 2017. And in September, credit
bureau Equifax disclosed that hackers accessed personal
data used to obtain loans or credit cards for as many as 143
million Americans — making it potentially the largest data
theft in history. No digital system seems safe.
According to Symantec’s 2017 Internet Security Threat
Report, more than 1 billion identities were exposed due to
data breaches in 2016 alone, and the number of large-scale
breaches (those that exposed more than 10 million identities) crept up from 13 in 2015 to 15 in 2016. Ransomware
threats have also ballooned. From 2015 to 2016, the average
ransom demanded by attackers rose from $294 to $1,077.
In response to these threats, organizations are pouring significant sums into cyber defense. International Data Corporation, an IT market analysis consultancy, forecasts that worldwide spending on cybersecurity software and services will surpass $80 billion this year. It predicts
that number will grow to more than $100 billion by 2020.
Despite that, successful attacks have shown no signs of slowing down. What makes cyber defense so difficult, and can
economic principles shed any light on how to improve it?


More Connected, More Exposed
One reason cyberattacks continue to be a problem despite
efforts to stop them is that there are simply more avenues
of attack than ever before. For instance, a growing array
of consumer devices — TVs, cars, ovens, and thermostats,
to name a few — are now connected to the Web, making
up what has been called the Internet of Things (IoT).
One estimate holds that there will be more than 8 billion
connected devices by the end of 2017 — more than one
for every person on the planet. By 2020, this number is
expected to grow to more than 20 billion. But these new
devices come with a trade-off.
“The more technology we accumulate to make our lives
easier, the more it opens us up to attack,” says Timothy
Summers, the director of innovation, entrepreneurship,
and engagement at the University of Maryland’s College
of Information Studies.
That’s the digital dilemma: With the power and convenience of greater connectivity comes greater vulnerability to intruders.
Last fall, hackers seized control of thousands of IoT
devices to create a “botnet” — an army of infected
machines. Botnets are typically used to launch what are known as distributed denial of service (DDoS) attacks, in which the enslaved computers overwhelm a target with enough requests to temporarily shut it down. The DDoS attack last October was launched from a botnet of IoT devices against Dyn, a major provider of domain name services. With that service knocked offline, many of the most highly trafficked sites on the Web became hampered or unreachable.
“Anytime you enable an operation, you’re creating a
potential path for a bad guy to carry out an operation as
well,” says Martin Libicki, the chair of cybersecurity studies at the U.S. Naval Academy and a researcher at RAND
Corp. “In the old days, if I wanted to set my thermostat,
I had to actually put my fingers on the thermostat itself.
That limits the amount of mischief someone can do. Now
that you can change your thermostat using your phone,
potentially everyone else can too. So in order for me to
have my convenience, I have to enable capabilities that
might get hijacked.”
Just as connecting household devices through the IoT
benefits consumers, interconnectivity offers firms many
benefits as well. Sharing data and system access with regular
business partners may improve supply chains. But expanding the range of trusted individuals or companies who have
access to a firm’s system increases the opportunities for bad
actors to access it. For example, hackers were able to gain
access to Target’s payment information in 2013 by compromising a system of one of the company’s contractors, an
HVAC vendor. (See “Cybersecuring Payments,” Econ Focus,
First Quarter 2014.)
Automating updates can ensure that a computer system’s defenses against hackers remain up to date — unless
those automated updates become the gateway for malicious actors to enter the system. The ransomware attack
that occurred this past June initially infected Ukrainian
computers by corrupting an automatic update to widely
used tax software.
Likewise, allowing employees to remotely access their
files or emails at home or on the road can increase productivity and create a more flexible workforce but at the
expense of more digital doors to defend.
Is it possible to reap the benefits of increased connectivity while minimizing our vulnerability?
Lack of Incentives
One often proposed solution to cyberattacks is to simply
increase security spending. Economic theory does offer
some insights into why individuals and firms might underinvest in cybersecurity from a social perspective. As the
botnet used in the October 2016 DDoS attack illustrates,
the owners of the breached systems are not necessarily the
ones who suffer the most. This creates a potential externality problem, which can skew the incentives to demand
or supply cybersecurity.
On the demand side, if consumers don’t bear the costs
of their devices being breached, they may demand more
open devices in the interest of convenience. Additionally,
they may believe their devices are more secure than
they actually are. Manufacturers, who know more about
the security of their products than buyers, may take
advantage of this information asymmetry to sell cybersecurity “lemons.” For example, Brian Krebs, a leading
cybersecurity expert and blogger, has reported that
many IoT devices come with weak security measures
out of the box. A recent Senate bill seeks to address this
situation by setting baseline security standards for any
Internet-connected devices sold to the government.
Both positive and negative externalities also may skew
the incentives for firms to invest in security. The Internet
is designed to allow all machines on the network to
interact with one another, and many devices share common software and operating systems. Once an exploit compromises one machine, an attack can quickly spread to
others on the network. In this way, each firm’s security
depends both on its own defenses as well as the aggregate
security of the entire network, what Howard Kunreuther
of the University of Pennsylvania and Geoffrey Heal of
Columbia University described in a 2003 article as “interdependent security.” This interdependency may result in
weaker network security for a couple of reasons. First,
since firms benefit from the security investments of
others, some may devote fewer of their own resources to
security than they would in a vacuum. If enough firms do
this, it weakens the security of the network as a whole,
potentially undoing the benefits of the firms that invest
more in cybersecurity. Second, even assuming each firm
invests in a level of security appropriate for its own needs,
it may still impose costs on other firms on the network
that value security more highly.
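The interdependent security logic above can be made concrete with a toy two-firm game, a minimal sketch in the spirit of Kunreuther and Heal rather than their actual model; all numbers (the cost `c`, the loss, and the breach probabilities) are invented for illustration:

```python
# Toy illustration of "interdependent security": investing removes a firm's
# own direct breach risk, but a firm can still be breached indirectly
# through an unprotected partner. All parameters are invented.
from itertools import product

c = 25          # cost of investing in security
loss = 100      # loss if breached
p_direct = 0.3  # chance of a direct breach when a firm does not invest
p_spill = 0.9   # chance the other firm's breach spills over to you

def expected_cost(my_invest, other_invest):
    """Expected total cost (spending plus expected breach loss) for one firm."""
    p_own = 0 if my_invest else p_direct
    p_other = 0 if other_invest else p_direct
    # Breached either directly or via spillover from the other firm.
    p_breach = 1 - (1 - p_own) * (1 - p_other * p_spill)
    return (c if my_invest else 0) + p_breach * loss

# Inspect every strategy profile: with these numbers, investing is a best
# response only when the other firm also invests, so both "everyone invests"
# and "no one invests" are self-reinforcing outcomes.
for mine, other in product([True, False], repeat=2):
    print(mine, other, round(expected_cost(mine, other), 1))
```

With these assumed numbers, a firm facing an unprotected partner is better off not investing, since it can be breached through the partner anyway; weak security elsewhere undercuts the return on a firm's own spending.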
On an individual level, firms also have incentives to
limit cybersecurity spending. In a 2002 article, Lawrence
Gordon and Martin Loeb of the University of Maryland
developed an influential model of information security
suggesting that firms maximize their benefits from cybersecurity by spending only a fraction of their expected
losses from a breach, similar to the rationale for insurance.
Often the most vulnerable systems or information are the
most costly and challenging to defend. Moreover, defenders face a great deal of uncertainty about where attackers
will strike. Attackers will always seek out the weakest link
in a system, but it may be difficult to identify weak points
ahead of time. Rainer Böhme of the University of Münster
and Tyler Moore of the University of Tulsa argued in a
2016 article that it may therefore be rational for firms to
wait for attackers to identify weak points for them and
respond after the fact.
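The Gordon-Loeb intuition, that optimal security spending is only a fraction of the expected loss, can be sketched numerically. This is a stylized example, not their exact model; the breach function and every parameter (`v`, `alpha`, `beta`, the loss figure) are assumptions chosen for illustration:

```python
# Stylized sketch of the Gordon-Loeb idea: a firm picks security spending z
# to maximize expected loss avoided minus the spending itself.
# All parameters below are invented for illustration.

def breach_probability(z, v=0.65, alpha=0.001, beta=1.0):
    """Probability of a breach given spending z; v is baseline vulnerability."""
    return v / (alpha * z + 1) ** beta

def net_benefit(z, loss=1_000_000, v=0.65, alpha=0.001, beta=1.0):
    """Expected loss avoided by spending z, minus the spending itself."""
    return (v - breach_probability(z, v, alpha, beta)) * loss - z

# Search a grid of spending levels for the optimum.
best_z = max(range(0, 1_000_000, 1000), key=net_benefit)
expected_loss = 0.65 * 1_000_000  # expected loss with zero security spending
print(best_z / expected_loss)     # optimal spending is a small fraction of the expected loss
```

Under these assumptions the profit-maximizing spending level comes out far below the expected loss from a breach, which is the qualitative point the article attributes to Gordon and Loeb.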
While these actions may be rational for individual
consumers and firms, they could result in less security
and more costly outcomes for society as a whole. In
response to a 2013 executive order from President Barack
Obama seeking to improve critical infrastructure cybersecurity, the Department of Homeland Security issued
a report exploring the incentives firms have to provide
adequate cybersecurity from the perspective of society
and how the government might better align those incentives. Options included using carrots, such as grants tied
to security improvements, and sticks, such as regulations
that hold entities liable for failing to meet minimum security standards.
Private actors have also tried to solve the externality
and interdependent security problems. After Google was
hacked in 2009 through a flaw in Microsoft’s Internet
Explorer browser, it began to examine its partners’ software more closely. In 2014, Google revealed Project Zero,
a team dedicated to notifying firms of flaws in their software. At times, Project Zero members have threatened to
disclose flaws publicly in order to pressure firms to patch
the holes in their programs.
Ultimately, better cybersecurity is unlikely to be simply
a question of resources alone. “No doubt there are companies that should be devoting more resources to cybersecurity,” says Josephine Wolff, a cybersecurity expert at the
Rochester Institute of Technology who studies the costs
of cyber incidents. “But often when you retrace where
things went wrong after a breach, the problems arise not
from how much or how little a victim spent on defending
itself but rather from what they spent their resources on.”
Hackers for Hire
There is certainly no shortage of cybersecurity options
for firms, governments, and individuals to choose from.
Firewalls, antivirus software, and encryption, to name a
few, are all aimed at keeping bad actors away from sensitive systems and data. As with the walls, gates, and moat
of a castle, firms often invest in multiple layers of cyber
defenses — a strategy known as “defense in depth.”
“We add as many barriers as we can in hopes that
maybe the hackers won’t be able to get in,” says Summers
of the University of Maryland. “And when they finally do
get in, we always say that if we just had one more barrier,
they wouldn’t have been able to get through. But there’s
always going to be a way in.”
In a 2016 article, Wolff described another potential
problem with simply accumulating multiple layers of cyber
defenses: It can be counterproductive. Different security
programs may interact poorly or prompt responses from
human users that defeat the purpose of the security. For
example, requiring users to regularly update passwords
can make systems harder to breach — unless it prompts
users to keep track of a multitude of passwords on notes
at their desk or to choose shorter, simpler passwords that
are easier to remember.
Therefore, it is important to have contingency plans in
place for when attackers do get through, says Summers.
One way firms have tried to do this is by hiring skilled
security personnel who can identify holes in defenses and
respond to attacks in real time. These “white hat” hackers
have many of the same skills as their criminal “black hat”
brethren, and demand for those skills is high.
White hat hackers may work for firms or government
agencies directly or freelance in the growing “bug bounty”
market. A number of third parties manage payouts
offered by tech companies for finding and reporting various types of software flaws. Rewards vary based on the
severity of the flaw, from hundreds or thousands of dollars to over a million dollars in some cases. HackerOne,
one of the largest platforms for bug bounties, has paid
out more than $20 million since 2012. Other platforms
also report year-over-year growth. Bugcrowd’s total payouts grew 211 percent since 2016 to more than $6 million,
and its average payout per bug rose to $451 from $295.
“It’s a big market,” says Libicki of RAND Corp. But it
isn’t the only market for hackers’ services.
Understanding Cybercriminals
As is the case with physical security, cyber defense is
inherently more difficult than offense. Defenders have
to protect every conceivable entry point into a system;
attackers only need to find one opening to succeed. And
as the market for cyber defense has evolved, so has the
market for cybercriminals.
“The hacker market — once a varied landscape of discrete, ad hoc networks of individuals initially motivated
by little more than ego and notoriety — has emerged as
a playground of financially driven, highly organized, and
sophisticated groups,” according to a 2014 RAND report.
Today’s cyberattackers don’t even need to be particularly tech savvy themselves. They can buy exploit kits
designed by someone else and rent botnets by the hour
to launch DDoS attacks, another source of revenue for
skilled hackers. Such services, which sell for hundreds or
thousands of dollars on the black market, can be broadly
affordable for attackers and lucrative for underground
coders. (See table.)
Just as the incentives for defenders matter for cybersecurity, so too do the incentives faced by attackers. In a
seminal 1968 article, the late Nobel laureate in economics
Gary Becker argued that criminals are rational economic
agents, weighing the costs and benefits of their actions.
For firms and governments concerned about cyberthreats,
there are a variety of ways they might attempt to change
the criminal calculus. For instance, given the right incentives for legal hacking, some hackers might be persuaded
to trade in their black hats for white ones.
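Becker's cost-benefit view of crime can be sketched in a few lines. This is a minimal illustration, not an estimate; the gains, penalties, detection probabilities, and bounty figure are all invented:

```python
# Minimal sketch of Becker's cost-benefit view of crime.
# All numbers are invented for illustration.

def expected_payoff(gain, p_caught, penalty):
    """Expected payoff of an attack: keep the gain if not caught, pay the penalty if caught."""
    return (1 - p_caught) * gain - p_caught * penalty

# A rational attacker offends only when the expected payoff beats the
# legal alternative (here, a hypothetical white-hat bug bounty payout).
bounty = 1000
attack = expected_payoff(gain=5000, p_caught=0.05, penalty=50_000)
print(attack > bounty)  # with a low chance of getting caught, crime pays more

# Enforcement that raises the chance of getting caught flips the calculus.
attack = expected_payoff(gain=5000, p_caught=0.2, penalty=50_000)
print(attack > bounty)
```

The same comparison shows why both carrots (raising the legal `bounty`) and sticks (raising `p_caught` or `penalty`) can tip the decision.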
As a self-described ethical hacker himself, Summers
has interviewed hundreds of hackers to better understand
what motivates them. “Many times, it’s really the challenge
that drives them more than anything else,” he says. “I think
that the bug bounty programs are just a little too focused
on the monetary aspects. If you think about our economic
system, there are many mechanisms that motivate people.
Multilayered incentives for cybersecurity are really lacking.”
Giving hackers more freedom to explore system exploits
in a legal setting could bolster defense against malicious
actors, but it might not necessarily reduce criminal activity. The anonymity of the Internet makes it hard to be certain that hackers aren’t “double-dipping” in both legal and
illegal markets. For example, Marcus Hutchins, a British
hacker who helped stop the spread of the “WannaCry”
ransomware attack in May, was recently arrested by the
FBI and charged with developing and distributing other
malware. (Hutchins has pleaded not guilty to the charges.)
Of course, carrots aren’t the only way to change criminal
incentives. Law enforcement can also raise the costs of
cybercrime. In a 2016 operation, U.S. and European law
enforcement agencies worked together to shut down thousands of domains associated with the Avalanche network, a
major global provider of malware. Authorities also identified and apprehended key administrators of the network to
ensure it couldn’t immediately rebuild. According to a study
by the Center for Cyber & Homeland Security at George
Washington University, the Avalanche takedown operation
temporarily disrupted the entire cybercrime ecosystem.
In addition to apprehending and prosecuting cyber
criminals, law enforcement — through anti-money laundering laws and “know your customer” laws in the banking
system — can also make it more costly for them to get at
their profits. Hackers have also become victims of their
own success and the forces of supply and demand within
black markets. For example, the average value of a stolen
credit card on the black market plummeted from $25 in
2011 to $6 in 2016, according to Intel Security. This may
help explain the recent rise of ransomware, which seeks to
sell stolen data back to the person most willing to pay for it — the victim.
Carefully weighing security options and reducing incentives for crime are two methods of managing cyberattacks.
A third option is simply to reduce the opportunities criminals have to access sensitive data in the first place.

Something for Everyone
Black market prices for cybercrime tools and stolen data

Malware and Services                               Price
Basic banking Trojan kit with support              $100
Password-stealing Trojan                           $25-$100
Android banking Trojan                             $200
Ransomware kit                                     $10-$1,800
DDoS service, short duration                       $5-$20
DDoS service, more than 24-hour duration           $10-$1,000

Consumer Data                                      Price
Single credit card                                 $0.50-$30
Airline frequent flyer miles account (10K miles)   $5-$35
Identity (name, SSN, and DOB)                      $0.10-$1.50
Scanned passports and other documents              $1-$3

SOURCE: Symantec 2017 Internet Security Threat Report

Openness vs. Security
Rethinking who should have access to data and what
should be accessible from the Internet lies at the heart of
the digital dilemma.
“Today’s attitude is largely that we want to have access
to everything, and if that creates security problems, that’s
what we have firewalls for,” says Libicki. “When an attack
happens, the response usually isn’t that we’ve made our
systems too accessible, it’s that we need to double down
on security.”
To be sure, reducing accessibility and interconnectivity would have costs, too, which would need to be
weighed against the costs of cybersecurity and the costs
of breaches. There is no doubt that the openness of the
Internet has had tremendous economic and social benefits. Weighing the benefits of openness and interconnectivity against the need for security will likely be a matter of
continuing deliberation in the coming decades.
“Cybersecurity is really a matter of three trade-offs,” says
Libicki. “How much are you willing to invest in security?
How much loss are you willing to accept? And how much
are you willing to change the way you do business?”
EF

Readings
Ablon, Lillian, Martin C. Libicki, and Andrea A. Golay. Markets
for Cybercrime Tools and Stolen Data: Hackers’ Bazaar. Santa Monica,
Calif.: RAND Corp., 2014.
Böhme, Rainer, and Tyler Moore. “The ‘Iterated Weakest Link’
Model of Adaptive Security Investment.” Journal of Information
Security, March 2016, vol. 7, no. 2, pp. 81-102.
Kunreuther, Howard, and Geoffrey Heal. “Interdependent
Security.” Journal of Risk and Uncertainty, March/May 2003, vol. 26,
no. 2/3, pp. 231-249.

Wainwright, Robert, and Frank J. Cilluffo. “Responding to
Cybercrime at Scale: Operation Avalanche — A Case Study.”
Center for Cyber & Homeland Security Issue Brief No. 2017-03,
March 2017.
Wolff, Josephine. “Perverse Effects in Defense of Computer
Systems: When More is Less.” Journal of Management Information
Systems, October 2016, vol. 33, no. 2, pp. 597-620.


Subprime Securitization
Hits the Car Lot
Are fears of a “bubble” in auto lending overstated?
By Jessie Romero

The car dealers deliberately inflated borrowers’
incomes — sometimes without the borrowers’
knowledge — to ensure the loan applications would
be approved and they’d make the sale. The lender knew
the applications were fraudulent and the borrowers were
likely to default, but it didn’t care because it could package
the loans into securities and sell them off to investors. At
least, that’s the version of events described in an action
brought by the attorneys general of Massachusetts and
Delaware against Santander Consumer USA, a subsidiary
of the Spanish bank Banco Santander that specializes in
auto financing. In March 2017, Santander agreed to a $26
million settlement that includes $19 million in relief to
more than 2,000 borrowers.
To many observers, Santander’s alleged lending practices look alarmingly similar to those that contributed to
the housing boom and bust a decade ago, lending weight
to broader concerns that rising delinquencies indicate an
auto lending “bubble” is about to burst. “Auto Loan Fraud
Soars in a Parallel to the Housing Bubble,” proclaimed one
headline. “Are Car Loans Driving Us Towards the Next
Financial Crash?” asked another.
Regulators and policymakers also have expressed
unease. In the fall of 2016, for example, the Office of
the Comptroller of the Currency warned that auto lending risk was increasing and that some banks did not
have sufficient risk management policies in place. Fed
Gov. Lael Brainard pointed to subprime auto lending as an
area of concern in a May 2017 speech; her concerns were
repeated — and amplified — the next month in a speech
by then-Gov. Stanley Fischer.
While it’s not obvious whether the increase in subprime
auto lending is a significant departure from past cycles, it
has raised eyebrows coming so soon after the mortgage
crisis — especially as delinquencies have begun to rise.
In addition, an increasing share of those loans have been
securitized and spread through the financial system, much
like mortgages before the housing bust. Still, even if the
auto finance industry were poised for a fall, the effects on
the financial system could be limited — although the auto
industry itself might take a hit.

Buy Now, Pay Later
In 1919, General Motors (GM) had a problem. Henry Ford’s introduction of the assembly line a half-dozen years earlier had made it cheaper and easier to build cars,
but that meant GM needed its dealers to buy in bulk —
and the dealers needed people to buy more cars. The solution was credit, but banks were leery of making loans for
a relatively new invention they didn’t know how to value.
(Around the same time, the Federal Reserve warned banks
against financing “automobiles that are used for pleasure.”)
So GM launched its own financing company, the General
Motors Acceptance Corporation (GMAC), to enable
dealers to stock more inventory and consumers to buy
more cars. Other car manufacturers eventually followed
suit, and today, every major auto manufacturer has its own
“captive” finance company.
The next major innovation in auto finance arrived
half a century later. Banks and credit unions had entered
the market by this point, but loans generally were only
available to borrowers with strong credit histories. That
began to change in 1972, when Detroit businessman Don
Foss founded Credit Acceptance, an independent finance
company, to finance sales at his network of used car dealerships. Credit Acceptance was the first company to specialize in auto loans to borrowers with limited or poor credit
history, known today as “subprime” loans, and its success
spawned numerous competitors.
One of those was Ugly Duckling, an Arizona-based
used car dealership that expanded quickly during the
1990s. (The company is now known as DriveTime.) Ugly
Duckling mainstreamed the “buy here, pay here,” or
BHPH, dealership format, in which the dealer is also the
lender, typically to borrowers with very poor or no credit.
BHPH dealerships often require borrowers to make their
payments in person, hence the name; interest rates may be
as high as 30 percent.
Today, roughly 86 percent of all new cars in the United
States are purchased via financing; about two-thirds of
those transactions are loans and one-third are leases.
Captives and banks issue the majority of new car loans and
leases; as of the second quarter of 2017, they had 53 percent

Motor Trends
Household auto debt fell during the Great Recession, as
did all types of household debt excepting student loans,
but has rebounded more quickly than other types. Between
the second quarter of 2010 and the second quarter of 2017,
outstanding auto debt increased nearly 70 percent, from
$700 billion to $1.2 trillion, according to the New York
Fed’s Quarterly Report on Household Debt and Credit.
In contrast, credit card debt increased just 5 percent, from
$7.4 billion to $7.8 billion. Auto loans are now the
third-largest form of debt behind mortgages ($8.7 trillion)
and student loans ($1.3 trillion).
Subprime auto debt contracted sharply during the Great
Recession but growth resumed soon after. While there is
no legal definition of prime or subprime, a credit score of
620 is generally the cutoff in auto finance; credit scores
range from 300-850. Between 2010 and 2015, average quarterly originations to subprime borrowers more than doubled, from $15 billion per quarter to $31 billion per quarter
(albeit still below the high of $34 billion per quarter in
2005), according to New York Fed data.
With the mortgage crisis fresh in many people’s memories, the increase in subprime auto lending garnered considerable attention. But the growth was comparable to growth
in other credit categories. Loans to borrowers with a credit
score between 660 and 719 increased from an average of
$17 billion per quarter to $31 billion per quarter. Loans to
“super prime” borrowers, those with a credit score above
760, grew less in percentage terms but have surpassed the
pre-recession peak. (See chart.) “The subprime pipe was
turned off after the financial crisis,” says Melinda Zabritski,
the senior director for automotive finance solutions at
Experian. “When the pipe got turned back on, the increase
looked dramatic, but we were coming out of a trough.”

[Chart: Subprime Revs Up. The return of subprime auto lending has garnered considerable attention. Auto loan quarterly originations ($billions), Q1 2004 to Q1 2017, by credit score band: <620, 620-659, 660-719, 720-759, and 760+. SOURCE: Federal Reserve Bank of New York, "Quarterly Report on Household Debt and Credit," Second Quarter 2017]

and 29 percent market share, respectively. Credit unions
currently finance around 13 percent of new cars, with the
remainder financed by independent finance companies,
BHPH dealerships, and other lenders. In the used car market, about 55 percent of cars are financed, the vast majority
via loans. At present, banks make 35 percent of used car
loans, slightly more than their share of the new car market. Credit unions, independent finance companies, and
BHPH dealerships play a much larger role in the used car
market than they do in the new car market, with 27 percent, 17 percent, and 13 percent market share, respectively.
While a consumer can work directly with a lender and
shop for a car with a pre-approval in hand, about 80 percent of car financing is arranged through dealerships. The
dealer sends the loan application to a number of lenders
with whom it has a relationship, and a lender who is willing
to make the loan will respond with a “buy” rate. The dealer
then has some discretion to either lower the rate and absorb
the difference in order to make the sale, or to charge the
purchaser a higher rate and keep the difference as compensation for serving as middleman.
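As a stylized illustration of this markup, sometimes called "dealer participation," the numbers below (loan size, term, buy rate, and contract rate) are invented for the example, not taken from the article:

```python
# Hypothetical illustration of a dealer marking up a lender's "buy" rate.
# All figures are invented for the example.

def monthly_payment(principal, annual_rate, months):
    """Standard amortizing-loan payment formula."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

amount, term = 20_000, 60      # $20,000 borrowed over 60 months
buy_rate = 0.05                # rate the lender is willing to accept
contract_rate = 0.07           # rate the dealer quotes the buyer

at_buy = monthly_payment(amount, buy_rate, term)
at_contract = monthly_payment(amount, contract_rate, term)

# The extra interest paid over the life of the loan is the dealer's
# compensation for arranging financing (in practice often shared with,
# or advanced by, the lender).
markup_total = (at_contract - at_buy) * term
print(f"payment at buy rate:      ${at_buy:,.2f}")
print(f"payment at contract rate: ${at_contract:,.2f}")
print(f"dealer markup over loan:  ${markup_total:,.2f}")
```

A two-point markup on a modest loan can be worth over a thousand dollars across the term, which is why arranging financing is a meaningful profit center for dealerships.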


Some of the growth was fueled by competition, particularly among captives and independent finance companies,
which have originated about 75 percent of outstanding
subprime loans. As the demand for auto loans grew during
the recovery, new finance companies entered the market
and existing finance companies sought to expand. In an
effort to reach new customers, these companies “started
buying a little ‘deeper’ and taking on more subprime
borrowers,” says Zabritski. Even GM, which had sold a
majority stake in GMAC to a private equity firm in 2006,
got back into the financing game by purchasing subprime
specialist AmeriCredit in 2010.
Subprime auto lending might already have peaked for
now, however. Beginning in 2016, bankers reported tightening auto lending standards in the Fed’s Senior Loan
Officer Opinion Survey, and even some traditional subprime specialists have taken steps to tighten credit. Overall,
average credit scores increased by four points for both new
and used cars between the second quarter of 2016 and the
second quarter of 2017, according to Experian. Average
quarterly subprime originations also decreased in 2016, for
the first time since 2009, to $30 billion per quarter.
The retreat is potentially a response to rising delinquencies. Between 2012 and 2016, average annual subprime
delinquencies increased from 2.5 percent to 4.3 percent — a
higher rate than in 2008, according to S&P Global Ratings.
Researchers at the New York Fed calculated that the share
of subprime loans that were 90 days or more delinquent
increased nearly 40 percent from the beginning of 2013
to the third quarter of 2016, for a total of about 6 million
consumers.
One reason for the rise in subprime delinquencies,
despite improvements in the economy and labor markets
overall, may be a change in the composition of subprime
borrowers. The foreclosure crisis affected both subprime
and prime borrowers, so consumers who may otherwise
ECON FOCUS | THIRD QUARTER | 2017

[Chart: Subprime Securitization. As a share of all auto ABS, subprime has surpassed its pre-crisis peak. Percent of total auto ABS by credit tier (prime, near prime, subprime, and other), 1985 to 2017. NOTE: "Other" includes ABS backed by leases, fleet sales, rentals, dealer floorplan loans, and motorcycle/RV loans. Data for 2017 are from the third quarter. SOURCE: Securities Industry and Financial Markets Association]

have been low risk could have seen their scores drop to subprime levels. Many of those borrowers’ credit scores have
since recovered, however. (See “The Missing Boomerang
Buyers,” Econ Focus, First Quarter 2017.) As a result, the
remaining pool of subprime borrowers may be riskier.
It’s also possible that increased competition led lenders
to lower their underwriting standards in other ways, such as
not requiring proof of income. In at least one batch of loans,
for example, Santander Consumer verified just 8 percent of
borrowers’ incomes. Lenders also have been increasing the
length of loans. In 2002, 36 percent of subprime loans had
an original loan term longer than 60 months. In 2016, more
than 83 percent had a term longer than 60 months. Some
lenders even offer 84-month — seven-year — car loans.
These loans are attractive to some buyers because they offer
a lower monthly payment, but they are also much more
likely to end in default.
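A minimal sketch of the tradeoff behind longer terms, using a hypothetical subprime-range rate and loan amount (neither figure is from the article):

```python
# Illustrative only: stretching the term lowers the monthly payment
# but raises total interest paid. Figures are hypothetical.

def monthly_payment(principal, annual_rate, months):
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

principal, rate = 25_000, 0.10     # hypothetical subprime-range rate
for months in (36, 60, 84):
    pay = monthly_payment(principal, rate, months)
    total_interest = pay * months - principal
    print(f"{months:>2} mo: payment ${pay:,.0f}, total interest ${total_interest:,.0f}")
```

Going from a 36-month to an 84-month loan roughly halves the payment in this example while more than doubling the interest paid, which is the appeal, and the risk, the article describes.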
Securitizing Subprime
Like other types of loans, including student loans and
credit card receivables, auto loans can be packaged into
securities and sold to investors. These “asset-backed securities,” or ABS, provide the lender with the cash (and an
additional incentive) to make more loans. ABS made up of
auto loans — “auto ABS” for short — are mostly issued by
independent and captive finance companies.
Auto loan securitization increased rapidly in the early
2000s, as did securitization in general. Between 2000
and 2005, the annual issuance of auto ABS increased from
$70 billion to $106 billion; total ABS (excluding collateralized debt obligations, which comprise multiple securities types, including mortgages) grew from $185 billion to
$280 billion. Issuance of auto ABS contracted sharply
during the Great Recession, but afterward, between 2010
and 2015, it grew from $58 billion per year to $96 billion.
Currently, auto ABS makes up about 45 percent of ABS
issuance, according to data from the Securities Industry and
Financial Markets Association (SIFMA).
An increasing share of those ABS are backed by subprime loans. In 2009 — the financial crisis trough — subprime loans accounted for just 11 percent of auto ABS. As of
the third quarter of 2017, the share had more than doubled,
to 22.5 percent — 4.5 percentage points higher than the
pre-recession peak. (See chart.) Moreover, many of those
loans were made to the riskiest borrowers; the share of
securitized subprime loans considered “deep subprime” —
to borrowers with a credit score in the mid-500s or below
— soared from 5 percent in 2010 to 33 percent in 2017.
The increase in subprime and deep-subprime securitization has continued despite worsening performance.
Subprime securitization net loss rates have increased
steadily since 2010; in June of that year, the average net
loss rate was 3.5 percent, according to market analytics
firm S&P Global. In June 2017, the rate was 6.2 percent.
The cumulative net loss rate — total losses since the security was issued — also has increased for successively later
“vintages” of security issuances.
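One common convention for the net loss rate nets recoveries against gross charge-offs as a share of the pool balance; actual deal documents define the metric precisely, and the figures below are hypothetical:

```python
# Illustrative net loss calculation for a securitized auto loan pool.
# All dollar amounts are invented for the example.

outstanding = 50_000_000     # current pool balance
charged_off = 4_000_000      # defaulted principal this period
recovered = 1_600_000        # proceeds from selling repossessed cars

net_losses = charged_off - recovered
net_loss_rate = net_losses / outstanding
recovery_rate = recovered / charged_off

print(f"net loss rate: {net_loss_rate:.1%}")   # lower recoveries push this up
print(f"recovery rate: {recovery_rate:.1%}")
```

The arithmetic makes the article's point explicit: even with defaults held constant, weaker resale values for repossessed cars raise the net loss rate.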
Lower performance in part reflects more loans being
made to borrowers with lower credit scores and their usually
higher delinquency rates; if average credit scores continue
to improve, performance might improve as well. But higher
loss rates also reflect lower recovery rates, meaning that
lenders are recouping less from the sale of repossessed cars.
Longer loan terms bear some of the blame, because they
make it more likely the loan’s outstanding balance exceeds
the car’s value if the buyer defaults in the early years of the
loan. In addition, record-high rates of vehicle leasing in
recent years have swelled the number of used cars on the
market, lowering the value of repossessed collateral. If those
trends continue, they might be a drag on subprime ABS
performance even if delinquency rates stabilize or go down.
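As a rough illustration of how negative equity arises on a long-term loan, the sketch below compares the amortizing balance to a depreciating car value; the 10 percent rate, 84-month term, and depreciation schedule are all invented assumptions:

```python
# Hypothetical sketch of negative equity: on a long-term loan, the
# outstanding balance can exceed the car's depreciated value for years.

def balance_after(principal, annual_rate, months, k):
    """Remaining balance after k payments on an amortizing loan."""
    r = annual_rate / 12
    pay = principal * r / (1 - (1 + r) ** -months)
    return principal * (1 + r) ** k - pay * ((1 + r) ** k - 1) / r

price, rate, term = 25_000, 0.10, 84
for year in range(1, 5):
    bal = balance_after(price, rate, term, 12 * year)
    # Assume a 20% first-year value drop, then 15% per year after that.
    value = price * 0.80 * (0.85 ** (year - 1))
    print(f"year {year}: balance ${bal:,.0f}, car value ${value:,.0f}, gap ${bal - value:,.0f}")
```

Under these assumptions the borrower owes more than the car is worth throughout the first four years, so an early default leaves the lender with a loss even after repossession.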
Toil and Trouble?
Higher loss rates don’t necessarily translate into higher
losses for investors, however. Some amount of loss is
built into issuers’ projections, and subprime issuers typically offer “credit enhancements,” such as establishing
a reserve fund or holding extra collateral, to cover those
expected losses. And since the Securities and Exchange
Commission issued a rule in 2014 requiring ABS issuers to
release loan-level detail about their bond packages, potential investors have substantial information with which to
evaluate the adequacy of those enhancements. Problems
are more likely to arise when the actual losses exceed the
expected losses.
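A simplified sketch of how such enhancements work; the structure and numbers below are hypothetical, not drawn from any actual deal:

```python
# Simplified sketch of credit enhancement absorbing pool losses in an
# ABS deal. Structure and percentages are invented for the example.

pool = 100_000_000            # face value of securitized loans
overcollateral = 0.08         # issuer holds 8% extra collateral
reserve_fund = 0.02           # plus a 2% cash reserve

enhancement = pool * (overcollateral + reserve_fund)

for loss_rate in (0.05, 0.10, 0.15):
    losses = pool * loss_rate
    investor_loss = max(0.0, losses - enhancement)
    print(f"{loss_rate:.0%} pool loss -> investor loss ${investor_loss:,.0f}")
```

In this stylized deal, investors are untouched until pool losses exceed the 10 percent enhancement cushion, which is why problems arise mainly when actual losses exceed expected ones.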
That’s what happened in the mortgage market a decade
ago; in hindsight, market participants underpriced the
amount of risk present in mortgage-backed securities
(MBS), perhaps in part because they had some expectation
the government would step in to protect them. There’s
little evidence of a similar expectation in auto lending;
although the U.S. Treasury purchased a large stake in

GMAC in 2008, the move was widely regarded as an
attempt to protect auto manufacturers rather than lenders
or investors. All else equal, the absence of government
support, explicit or implicit, would make the auto lending
industry more responsive to risk.
Other factors could contribute to lenders and investors
underpricing risk, however. For example, a car is a relatively easy asset to repossess compared to, say, a home.
Laws vary from state to state, but in general, lenders are
allowed to repossess a vehicle as soon as the borrower is in
default without providing any prior notice. Actually towing the vehicle takes just minutes, and some lenders and
car dealers even install so-called “kill switches” to prevent
a car from starting if the borrower misses a payment. In
contrast, in many states, a home foreclosure requires judicial action; even in nonjudicial states, the process can take
months or even more than a year to complete.
Cars depreciate rapidly, which means there’s likely to
be a gap between what the borrower owes and what the
lender can recoup. But in nearly every state, lenders are
allowed to sue borrowers for the difference. A lender who
obtains such a “deficiency judgment” is able to garnish a
borrower’s wages or seize other assets. Some states also
allow mortgage lenders to sue foreclosed borrowers, but
there are greater restrictions on obtaining a judgment
than in auto lending.
Even if subprime auto ABS performance does deteriorate beyond current expectations, there are reasons to
think it’s unlikely the effects would spill beyond the auto
finance sector into the broader financial system.
“People who say, ‘This is just like the subprime
mortgage crisis!’ are missing the boat,” says Christopher
Killian, managing director and head of the securitization
group at SIFMA. First, auto loans are a much smaller portion of consumer debt than mortgages: $1.2 trillion versus
$8.7 trillion in outstanding mortgage balances. And the
volume of auto ABS is dwarfed by the volume of MBS:
Mortgage-backed securities in the United States total
more than $9.1 trillion (including both residential and
commercial), compared to $201 billion in outstanding
auto-loan-backed securities. Perhaps most important,
Killian notes, auto ABS aren’t turned into collateralized debt obligations, the highly complex securities
that helped transmit MBS losses throughout the entire
system.
“Let’s imagine the subprime auto market craters,”
Killian says. “There will be losses, but there won’t be cascading losses.”

History also offers some reassurance; this is not the first
time investors have been enamored of subprime auto loans.
In the early 1990s, the securitization market contributed
to hundreds of new subprime lenders opening their doors;
between 1991 and 1994, there were more than 20 initial
public offerings. Just a few years later, a combination
of accounting irregularities, overleverage, and fraud had
contributed to massive stock price declines and numerous
bankruptcies. Stockholders and investors lost money, but
the effects on the broader financial system were nil.
The Effects on Detroit
Still, past performance is no guarantee of future results.
As Fischer noted in his June speech, the potential for
subprime auto lending to cause broader financial distress
seems moderate at first glance. But, he cautioned, “[O]ne
should remember that pre-crisis subprime mortgage loans
were dismissed as a stability risk… and not take excessive
confidence.”
The industry most likely to feel the pain from a decline
in subprime lending is the auto industry itself, which
accounts for about 3 percent of U.S. GDP and supports
nearly 4 percent of private employment. Vehicle sales
have been a bright spot during the relatively tepid recovery from the Great Recession, doubling from an annualized nadir of 9 million in early 2009 to an annual rate of about 18 million at the end of 2016. But during the first eight months of 2017, the annualized sales pace fell by about 2 million and
many manufacturers began cutting jobs. (Vehicle sales
spiked in September of 2017, in part because consumers were replacing hurricane-damaged cars.) Numerous
factors influence vehicle sales, including energy prices
and trade policy. But many observers noted the correlation between the decline in sales and tightening credit
conditions.
The availability of credit played a large role in the drop in
vehicle purchases during the Great Recession, according to
a 2017 article in the Quarterly Journal of Economics by Efraim
Benmelech of Northwestern University, Ralf Meisenzahl
of the Federal Reserve, and Rodney Ramcharan of the
University of Southern California. The authors concluded
that lenders’ lack of liquidity accounted for nearly one-third
of the decline in auto sales in 2009.
Barring other developments, it’s doubtful a pullback
from subprime auto lending and securitization would result
in an auto credit crunch as extreme as that experienced
during the financial crisis. Still, manufacturers are keeping
their fingers crossed the subprime pipe stays open.
EF

Readings
Benmelech, Efraim, Ralf R. Meisenzahl, and Rodney Ramcharan.
“The Real Effects of Liquidity During the Financial Crisis: Evidence
from Automobiles.” Quarterly Journal of Economics, February 2017,
vol. 132, no. 1, pp. 317-365.

“Semiannual Risk Perspective for Fall 2016.” Office of the
Comptroller of the Currency, Jan. 5, 2017.
“Quarterly Report on Household Debt and Credit.” Federal Reserve
Bank of New York, August 2017.

Fischer, Stanley. “An Assessment of Financial Stability in the
United States.” Speech at the IMF Workshop on Financial
Surveillance and Communication, Washington, D.C., June 27, 2017.

The Resurgence of
Universal Basic Income
Concerns about the effects of automation have
brought an old policy proposal back into the limelight
By Kody Carmody

The idea that technology will make human workers
obsolete is certainly not new. In the 1930s, John
Maynard Keynes wrote, “We are being afflicted with a
new disease of which some readers may not yet have heard
the name, but of which they will hear a great deal in the
years to come — namely, technological unemployment.”
Keynes thought that technology would replace workers
faster than workers could find new jobs. But he optimistically believed that this process eventually would lead to an
“age of leisure and of abundance.”
Today, a new set of techno-optimists argue that coming
advances in automation and artificial intelligence will finally
fulfill Keynes’ prediction, replacing most human labor.
Even if machines don’t cause widespread unemployment,
they have caused and surely will continue to cause substantial labor market shocks in specific industries. These
concerns have breathed new life into the discussion over a
policy now called universal basic income, or UBI.
Many variations have been proposed, but UBI generally
refers to regular cash payments that would go to individuals regardless of work status or income (that’s the “universal”) and would cover some minimum standard of living
(that’s the “basic”). Elon Musk, Mark Zuckerberg, and
other figures in the tech industry have publicly announced
their support for UBI as a result of their concerns about
job loss from automation. As workers are replaced by
machines, “we need to figure out new roles for what those
people do, but it will be very disruptive and very quick,”
said Musk in a 2017 speech in Dubai. “I think we’ll end up
doing universal basic income … it’s going to be necessary.”
At the same time, questions remain about how it could
be done and its effects.
UBI Meets U.S. Politics
It wasn’t technology leaders or futurists who first brought
UBI into mainstream U.S. political discourse — it was
economists. Milton Friedman first proposed the negative
income tax (NIT), a forerunner of UBI, in his 1962 book
Capitalism and Freedom. The NIT and UBI are closely related, the chief difference being that NIT benefits would decrease as a recipient's income increases and phase out entirely at a certain level, while UBI payments would be fixed regardless of income.
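To make the distinction concrete, the stylized schedule below compares the two; the guarantee level, phase-out rate, and UBI amount are invented parameters, not any actual proposal:

```python
# Stylized comparison of a flat UBI with a Friedman-style negative
# income tax (NIT). Parameters are illustrative only.

UBI = 10_000          # everyone receives the same payment
GUARANTEE = 10_000    # NIT payment at zero earnings
PHASE_OUT = 0.50      # NIT benefit falls 50 cents per dollar earned

def nit_benefit(earnings):
    """NIT benefit: the guarantee minus the phase-out, floored at zero."""
    return max(0.0, GUARANTEE - PHASE_OUT * earnings)

for earnings in (0, 10_000, 20_000, 30_000):
    print(f"earnings ${earnings:>6,}: UBI ${UBI:,}, NIT ${nit_benefit(earnings):,.0f}")
```

With these parameters the NIT benefit disappears entirely at $20,000 of earnings, while the UBI payment never changes.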
Economists from all over the ideological spectrum came
to support NIT proposals, including Friedman’s fellow
Nobel laureate Friedrich Hayek as well as liberal-leaning
economists like Nobel laureates Paul Samuelson and
James Tobin. In 1968, more than 1,200 economists signed
a manifesto advocating for a guaranteed income.
Support from economists and policy experts
eventually led to a political movement. At the urging of
Sen. Daniel Patrick Moynihan (D-N.Y.), President Nixon
presented the Family Assistance Plan (FAP) in 1969; the
program would have provided each family in America
$1,600 per year (roughly $10,650 in today’s dollars) subject
to some work requirements. Shortly after, a more generous
proposal called the Human Security Plan was proposed by
Sen. George McGovern (D-S.D.), part of his presidential
campaign platform as the Democratic nominee in 1972.
Despite economists’ support for the Moynihan plan, no
guaranteed income plan ever made it through Congress.
Proponents made many arguments for basic income.
One was that basic income would be more efficient
than the welfare system as it would require very little
bureaucracy. Although lower administrative costs might
be a benefit of UBI, it probably would not be a large one.
According to Jason Furman, chairman of the Council of
Economic Advisers during the Obama administration and
now senior fellow at the Peterson Institute, eliminating
the entire administration for unemployment insurance,
food stamps, housing vouchers and the like would provide
an annual UBI of only about $150 per person.
One of the main concerns about UBI has been its effect
on work and labor supply. In 1986, Alicia Munnell, then
senior vice president and research director at the Boston
Fed, said basic income schemes have been beset by “the
widespread fear that a guaranteed income would reduce the
work effort of poor breadwinners and, as a result, cost taxpayers a great deal of money.” This objection is still shared
by many today, but it was the exact opposite of what supporters expected: They thought that replacing the U.S. welfare system with a guaranteed income might actually give
the poor more reason to work. “I see the work incentive for
low-income families as the single biggest economic benefit
of replacing the current system with a UBI,” says Ed Dolan,
an economist at the libertarian-leaning Niskanen Institute
in Washington, D.C., and a prominent proponent of UBI.
Why the disconnect? It’s rooted in opposing beliefs
about how workers would respond to the payments — and
how they respond right now to welfare programs.
If you suddenly start receiving an extra check in the

[Chart: Many Poor Face High Marginal Tax Rates. Effective marginal tax rate for a hypothetical single parent with one child, 2014, by earnings ($0 to $70,000, shown as a percent of the federal poverty line), under the federal individual income tax system alone and under the full federal system of taxes and transfers. SOURCE: Congressional Budget Office]
The NIT Experiments
To test the net effects on labor and costs relative to welfare, the Nixon administration launched four NIT experiments in urban and rural areas across the United States
— the very first large-scale randomized controlled trials conducted in economics. At the same time, Canada launched
a similar experiment called Mincome in the province of
Manitoba. The five experiments lasted about three to
five years each, providing monthly payments to families
with children. The programs also varied in generosity; in
2013 dollars, a family of four would get anywhere between
$17,445 and $48,446 per year, with effective marginal tax
rates between 30 percent and 80 percent.


mail every month, such as a UBI, you can suddenly
consume more for any given amount of leisure, and you
can afford to work less. This — what economists call an
“income effect” — is what many skeptics have in mind
when they worry that UBI would cause people to work less
or stop working altogether.
But if that check comes as part of a means-tested program, like a traditional welfare program, then the payment
goes down as you earn more. From your perspective, the
declining welfare payment is equivalent to an increase in
marginal tax rates. The more you work, the less you get to
keep of each dollar earned, and you might rationally choose
to work less. This is a substitution effect: As work becomes
relatively less profitable, you substitute toward leisure.
The substitution effect is a major concern that many
economists have with the current U.S. welfare system:
Many poor people face high effective marginal tax rates.
Data from the Congressional Budget Office show how the effective marginal tax rate for a single parent with one child varied with earned income in 2014. When including
federal transfer payments, this effective marginal tax rate
nears 100 percent at low incomes — a hypothetical family
nearing 150 percent (about $23,000) of the federal poverty
line would keep less than 10 cents of each extra dollar
they earned. (See chart.) There are also other large cliffs
in effective marginal tax rates, which vary widely by state
and almost always fall below 150 percent of the poverty
line: losing eligibility for Medicaid, the Children’s Health
Insurance Program (CHIP), the Supplemental Nutrition
Assistance Program (SNAP, formerly “food stamps”),
Temporary Assistance to Needy Families (TANF), and
state transfers.
Thus, a poor family might be faced with the situation
of working longer hours for an extra $100 of income while
losing $90 in benefits. A UBI to replace this system might
have an income effect, depending on how generous the
benefit is, but it would almost definitely have a positive
substitution effect through lowering and smoothing the
effective marginal tax rates that poor families face. Theory
alone, however, can’t predict whether the income effect
from a UBI would be larger than the substitution effect
from welfare. That’s where experiments have come in.
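The hypothetical $100-earned, $90-of-benefits-lost family above can be expressed as a one-line effective marginal tax rate calculation; the 15 percent tax rate in the second case is an assumed illustration:

```python
# Stylized effective marginal tax rate (EMTR) when benefits phase out.
# The figures mirror the article's hypothetical family; the 15% tax
# rate in the UBI case is an invented assumption.

def emtr(extra_earnings, extra_tax, benefits_lost):
    """Share of an extra dollar earned that the household does not keep."""
    return (extra_tax + benefits_lost) / extra_earnings

# Working longer hours for $100 more while losing $90 in benefits:
rate = emtr(extra_earnings=100, extra_tax=0, benefits_lost=90)
print(f"EMTR under phase-outs: {rate:.0%}")   # family keeps 10 cents per dollar

# Under a flat UBI nothing phases out, so the EMTR is just the tax rate:
rate_ubi = emtr(extra_earnings=100, extra_tax=15, benefits_lost=0)
print(f"EMTR under a flat UBI: {rate_ubi:.0%}")
```

This is the substitution effect in miniature: replacing phase-outs with a flat payment lowers the effective tax on the next dollar earned, even though the income effect of the payment itself may still reduce work.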


Ioana Marinescu, an economist at the University of
Pennsylvania, recently reviewed evidence on the 1970s
NIT experiments as part of a larger project on unconditional cash transfers. She found that “the labor supply
effects are uniformly small to nonexistent, depending on
the study.” Only the Seattle/Denver program, the largest
and most generous of the five, saw a statistically significant
decline in the percentage of people with jobs (by about 4
percentage points). More significant was the reduction in
hours worked — between two and four weeks of full-time
employment over a year.
But Marinescu contends that there were two major
implementation issues. For one, participants would
underreport their earnings to qualify for more income.
Additionally, participants who did not reduce their hours
of work, and therefore didn’t get as much benefit, tended
to drop out of the study. Both of these problems exaggerated the labor supply effect, making it seem more negative
than it actually was.
On top of implementation, Marinescu points to several
conceptual problems with the NIT experiments. “First,
these experiments lasted for only about three years,” she
says. “That makes it hard to extrapolate to what would
happen in the long term.” Some participants might not
have quit their jobs, knowing that the NIT would only
be temporary. On the other hand, participants might
have taken more time off, treating the experiments as a
“sale on leisure,” where the cost of working less was temporarily reduced. The second is that, because they were
experiments, not everyone in the areas who qualified for
the NIT received it — researchers needed control groups
living in the same areas to compare against. Yet there might
be effects that only come from everyone in an area being
part of the program, such as macro-level effects on labor
demand or effects arising from social networks. Overall,
Marinescu says, “it is unclear on theoretical grounds which
way those effects would go.”
A 2017 analysis by two sociologists attempted to evaluate

the social effects. David Calnitsky at the University of
Manitoba and Jonathan Latner, then of the University of
Bremen, took advantage of the fact that Mincome, though
mostly randomized throughout the province of Manitoba,
was provided universally in the town of Dauphin. Calnitsky
and Latner compared the effect in Dauphin to the rest of
Manitoba and were able to attribute about 30 percent of
the labor force reduction to “social interaction” effects,
occurring only when the benefit was truly universal. For
example, some individuals reducing their work effort might
have made doing so more socially acceptable for everyone.
Also relevant is that the tax rate and level of the
guaranteed income are adjustable aspects of a NIT. The
experimental programs of the 1970s often had generous
benefit levels but also had high implicit tax rates — that
is, benefits fell sharply as income rose. Economists have
attempted to estimate the benefit and tax levels that
would leave work incentives intact, but it’s a hard task
without widespread experimental evidence. Most modern
UBI proposals leave benefits constant as income rises.
What About Automation?
The tremendous increases in automation seen in the past
two centuries largely validate Keynes’ prediction: The
“age of leisure and abundance” arguably is here. Per capita
real wages are more than 16 times greater than 200 years
ago, while the average workweek has fallen by half and the
share of one’s life spent working is far shorter. Meanwhile,
the average unemployment rate that has prevailed over
time has not increased. Economic theory suggests that
technology and productivity are the key to sustained
improvements in our standard of living.
But it can require some adjustment in the short term.
Dolan sees a role for UBI in smoothing out present and
future labor market shocks from automation and trade.
He argues that UBI would improve labor market flexibility: People might be more likely to take risks like moving
between states if they have an unconditional, reliable
safety net. Along similar lines, he reasons that UBI would
help smooth consumption for workers in the gig economy,
who generally have more variable income.
Whether machines cause widespread unemployment
or just shocks to certain markets, Marinescu doesn’t see
UBI as a long-term solution: “Any realistic UBI is going
to be so small, that, when someone loses any decent job,
it’s not going to make up for it. It would be better than
nothing, but it’s not going to do all that much for people
who are left out due to technological shocks.”
Indeed, the overall cost of a UBI remains one of its
opponents’ main concerns. In principle, a UBI could
be revenue neutral if it were funded by scrapping the
welfare state — but then it might not be large enough to
meet households’ “basic” needs or be what some would
consider a true safety net. A UBI with loftier goals, like
lifting families out of poverty, could require additional
funding and public support. A 2017 study by researchers
at the American Enterprise Institute, for example, found
that a UBI funded by cutting virtually all welfare and
transfer programs, including Social Security and Medicare,
would provide a UBI of $13,788 for adults and $6,894 for
children, with varying winners and losers across income
and age groups. Hillary Clinton recently stated that she
considered proposing a UBI program during her 2016
presidential campaign but didn’t think she could provide
a meaningful enough dividend with a realistic set of new
taxes.
Regarding UBI’s role in buffering adjustment to technological change, Marinescu points out that jobs play
an important non-pecuniary role in our lives: People get
value from the identity and social recognition that come
from a job. “I recently talked with two of my economist
colleagues — one from the left, one from the right —
and they both agreed that UBI is fine, that they weren’t
against it,” Marinescu says. “But at the same time, they
thought that investing in skills and generally finding ways
for people to be socially integrated was more urgent to
think about — that just having some small extra income
wasn’t going to solve that problem.”
Of course, UBI is not mutually exclusive to investing in
skills, and some have sought to justify UBI on grounds other
than automation. Foremost among these is reducing poverty. One possible benefit of UBI as a poverty-reduction
measure is that many welfare programs have surprisingly
high non-take-up rates — that is, many people qualify for
welfare but don’t take advantage of it. There is a range of
possible reasons for this, including lack of awareness, the
social stigma of welfare, and administrative hassle. By virtue of being universal and unconditional, UBI likely would
decrease these issues.
Dolan contends that a revenue-neutral UBI that
reduces poverty is possible; his proposed plan would
replace the current welfare state as well as other transfers
such as tax deductions, which primarily benefit the relatively affluent. Marinescu sees this as a political advantage
for UBI: The flat, universal nature of UBI might make it
a more palatable form of redistribution. “It gets around
some political issues that have recently been documented
in the economics literature — for example, people seem
less and less supportive of redistribution.”
That same feature could also be a political liability.
Even though certain UBI proposals might be more progressive than the current system of taxes and transfers,
UBI schemes are also more transparent about giving
money to the rich. For example, many of the tax deductions that some UBI proposals would replace are regressive and distortionary but are also very popular. A 2011
Gallup poll found that strong majorities of the public
opposed cutting these deductions either to lower taxes or
reduce the federal deficit. Liberal and libertarian critics
of UBI argue that the policy would be wasteful and that
government shouldn’t be giving money to the rich at all.
Furman, in a debate in March, put it this way: “If you give
somebody a dollar, that dollar has to come from somewhere. It has to come from cutting benefits that someone
is getting or raising taxes on someone.”
What’s Next?
Several new UBI trials have just been launched around
the world. One experiment, run by a nonprofit called
GiveDirectly, will continue for at least 12 years and provide some villages in Kenya with a truly universal benefit.
This might uncover some of the long-term and macro-level
effects that the NIT experiments couldn’t measure, but
it is unclear how applicable any results would be to the
United States. Closer to home, Y Combinator, a Silicon
Valley startup accelerator, is giving 100 Oakland families
a UBI for up to one year as part of a five-year study. These
projects, however, are for the most part small and short
term, revisiting the NIT experiments of the past, and
focused on labor supply and other micro-level statistics.
Marinescu argues that the next step should be to implement something larger, maybe at the state level, in the
United States so that researchers can evaluate macro-level
effects and interactions with other policies. A state-level
UBI isn’t totally without precedent: In 1976, Alaska
used revenue from oil extraction on state-owned land to
establish the Alaska Permanent Fund. This fund, popular
with Alaskan voters, provides all residents of the state
with $1,000 to $2,000 per year. Marinescu’s own research
has shown that the fund’s payments have had no effect on
the state’s employment rate and only a minor decrease in
hours worked; the income effect, Marinescu concludes,
might have been cancelled out by stimulation of labor
demand. But the amount of the payments is small compared to typical UBI proposals, so the effects of that program might not be a good predictor for a full-scale UBI.
In short, the practical questions surrounding a UBI
— especially its effect on work incentives and how large
a revenue-neutral payment could actually be — don’t yet
have clear answers. While empirical evidence seems to
suggest that concerns over work incentives may be less
serious than some have argued, it is largely drawn from the
NIT experiments of the 1970s, when the labor market and
economy looked radically different from today. Current
trials may provide better evidence of UBI’s labor supply
effects, especially how it would interact with recent economic phenomena like low male labor force participation,
while future large-scale projects might be able to shed
light on macroeconomic and social effects that research so
far has left open.
EF

Readings
Forget, Evelyn L. “The Town with No Poverty: The Health Effects
of a Canadian Guaranteed Annual Income Field Experiment.”
Canadian Public Policy, 2011, vol. 37, no. 3, pp. 283-305.

Garfinkel, Irwin, Chien-Chung Huang, and Wendy Naidich. “The
Effects of a Basic Income Guarantee on Poverty and Income
Distribution.” USBIG Discussion Paper No. 014, February 2002.

Marinescu, Ioana. “No Strings Attached: The Behavioral Effects of
U.S. Unconditional Cash Transfer Programs.” Roosevelt Institute
Report, May 2017.

Moffitt, Robert. “The Negative Income Tax and the Evolution of
U.S. Welfare Policy.” Journal of Economic Perspectives, Summer 2003,
vol. 17, no. 3, pp. 119-140.

Federal Reserve continued from page 5
that a varying or higher target would have made much difference during and after the recession, as well as concern
that the Fed’s commitment to stable inflation could come
into question if it changed the target “opportunistically.”
Beyond the relatively narrow question of the nominal
target, however, economists inside and outside the Fed
are giving fresh attention to understanding the relationship between inflation and inflation expectations and to
whether the anchoring process has changed. “Extreme
economic events have often challenged existing views of
how the economy works and exposed shortcomings in the
collective knowledge of economists,” noted Yellen in her
speech last fall, citing the Great Depression of the 1930s
and Great Inflation of the 1970s. “The financial crisis and
its aftermath might well prove to be a similar sort of turning point.”
EF

Readings
Ball, Laurence, and Sandeep Mazumder. “A Phillips Curve with
Anchored Expectations and Short-Term Unemployment.”
National Bureau of Economic Research Working Paper No. 20715,
November 2014.

Blanchard, Olivier, Eugenio Cerutti, and Lawrence Summers.
“Inflation and Activity: Two Explorations and their Monetary
Policy Implications.” National Bureau of Economic Research
Working Paper No. 21726, November 2015.

Christensen, Jens H.E., and Jose A. Lopez. “Differing Views on
Long-Term Inflation Expectations.” Federal Reserve Bank of San
Francisco Economic Letter No. 2016-11, April 4, 2016.

Kahn, George A., and Andrew Palmer. “Monetary Policy at the
Zero Lower Bound: Revelations from the FOMC’s Summary of
Economic Projections.” Federal Reserve Bank of Kansas City
Economic Review, First Quarter 2016, vol. 101, no. 1, pp. 5-37.

ECON FOCUS | THIRD QUARTER | 2017

INTERVIEW

Douglas Irwin

There is arguably no proposition more widely held among economists than that the free trade of goods across countries generally benefits the citizens of both the exporting and the importing countries. Yet support for trade often faces resistance among the public and policymakers. In the United States and other developed countries with broadly liberal trade policies, such skepticism, at least rhetorically, seems to have gained momentum recently.

Douglas Irwin, an economist at Dartmouth College, argues that nations would be well advised to retain or to adopt a commitment to free trade. The overall benefits remain large — and the costs of protectionism are often understated.

Moreover, Irwin notes, the arguments that proponents of protection frequently advance are often questionable. For instance, he acknowledges that the United States had relatively high trade barriers during the late 19th century, a time of rapid industrialization. But it seems likely that such economic growth was due to a number of other factors instead. In other cases, Irwin argues, protectionist policies, while unwise, have not been as destructive as some have claimed. For instance, the importance of the Hawley-Smoot Tariff of 1930 to the deepening of the Great Depression generally has been overstated.

Much of Irwin’s work falls at the intersection of economic history and trade theory. His most recent book is a comprehensive, more than 800-page history of U.S. trade policy, Clashing over Commerce. In addition to authoring and editing many other books and publishing widely in professional journals, Irwin occasionally writes for the popular press. He started his career at the Federal Reserve Board of Governors, moved to the University of Chicago’s Graduate School of Business, and has been at Dartmouth since 1997. Aaron Steelman interviewed Irwin on the Dartmouth campus in August 2017.

Editor’s Note: This is an abbreviated version of EF’s conversation with Douglas Irwin. For additional content, go to our website: www.richmondfed.org/publications

EF: Why did you decide to write a general history of U.S. trade policy — the first, as you note in the introduction, since the early 1930s?

Irwin: I have long had a general interest in trade and history, but what solidified my interest in U.S. trade policy in particular was spending a year at the Council of Economic Advisers (CEA) while in grad school. That was in 1986-1987, and it was a momentous period for U.S. trade policy. There were trade disputes with Japan, a lot of protectionist pressures to block imports of textiles and steel, and many other trade issues on the agenda. So that CEA experience, seeing how policy is made, and learning from people at the U.S. Trade Representative’s (USTR) office, got me very interested in how U.S. trade policy functioned. And then, pretty naturally, I became very interested in looking to the past.

The last major book of this sort was The Tariff History of the United States by Frank Taussig. It’s a great book, a classic, but it’s been a long time since his last edition. And I thought it could be updated on multiple dimensions — first of all, to discuss the Great Depression and then bring it up to the present. We have also learned a lot more about the trade history that he did cover. He was writing before cliometrics, before the use of statistical methods to test a lot of the propositions he was discussing, such as the effects of protectionism in the late 19th century. In addition, economists have become interested in the political economy of policy formation. There’s not a lot of political

economy in Taussig, so it can be a little bit dry sometimes as he’s going through changes in the wool schedule, the cotton schedule, and so on. I think people are less interested in that sort of detail than the bigger picture of how the political parties were functioning, the pressures members of Congress faced, and how we shifted toward freer trade. So that’s sort of how it came together.

More specifically, I distinctly remember being in my Chicago office in 1995 when Michael Bordo gave me a call (email was still a novelty) and asked if I would write a paper on U.S. trade policy during the Great Depression. I really hadn’t worked much on U.S. trade policy up to that point, though I had the latent interest. I thought it would be a really easy paper to write because I assumed that there would be a large literature on trade policy during the Great Depression. But when I did my literature survey I discovered — to my horror — that there was almost nothing really analytical on the period. So I actually had to write something like five background papers just to write this one conference volume paper. After that, I started doing a lot of analytical and empirical work on various episodes in U.S. trade policy history. Once I had written enough papers, it became obvious that I really ought to synthesize them and turn it into a book. That was around 2000. After various delays, I came close to finishing the book in 2006, but then 2007 came, and like many economists, my work got diverted by the financial crisis and I returned to looking at issues related to the Great Depression. After more delays, I finally got back to the book around 2013 and pushed it through to completion.

EF: You argue that the United States has gone through three major eras in trade policy — and structure the book accordingly. Could you describe those?

Irwin: I tried to start the book with principles about what government officials and representatives are trying to achieve with trade policy, and it seems to me that they use it to achieve three things. First, they are trying to raise revenue. Second, they are trying to protect domestic industries from foreign competition. Third, they are sometimes bargaining with other countries to reduce tariffs or retaliating against them by raising tariffs.

Those are the three Rs: revenue, restriction, and reciprocity. When I looked at the broad canvas of U.S. history, those three categories really apply to three different periods of U.S. trade policy history. Although all three elements are always present, to some extent, the question is: Which one is dominant at any given point? From the founding of the country to the Civil War, the debate was really about using the tariff to raise revenue. Under the Articles of Confederation, Congress did not have the power to levy taxes. The federal government was broke and couldn’t pay its bills, leading the country toward a crisis. So one of the major reasons for the Constitutional Convention was to give Congress the power to raise revenue. The Tariff Act of 1789 was really just a revenue measure to pay debts and to finance the spending of the federal government. Revenue remains the major issue in trade policy through the antebellum era.

Then, with the Civil War, of course, there is a transition of political power in the United States. The North becomes politically dominant, and it was the home of a lot of import-competing industries. Republicans from the North were overwhelmingly in charge of Congress, and so we get protection as a policy outcome. Once those high tariffs were in place, they become very hard to dislodge for a lot of reasons and they continue for a long time — long beyond when we actually become a net exporter of manufactured goods.

In 1929, we have another shock: the Great Depression that redistributes political power once again, this time away from the protectionist Republican Party to the more pro-trade Democratic Party, which at the time drew much of its political support from the South. Also, we have this trade war after the Hawley-Smoot Tariff of 1930, which leads many people to think trade policy should take a different direction. So President Franklin Roosevelt and Secretary of State Cordell Hull introduce the Reciprocal Trade Agreements Act (RTAA) in 1934 and we move on to this third era of reciprocity where we’re willing to reduce our tariffs in conjunction with other countries reducing their trade barriers as well.

EF: The Founders could have looked to other ways to raise revenue. Was the tariff broadly seen as simply the least bad way?

Irwin: Absolutely. There was a consensus among the Founders that it was the most efficient way of raising public funds as well as the most politically acceptable. Consider sales taxes in the early post-colonial period. They were very controversial and very costly to enforce; just think of the Whiskey Rebellion. An income tax just doesn’t make sense at this time for many reasons. But imports were coming into a relatively small number of ports, such as Boston, New York, Philadelphia, Baltimore, and Charleston. So it makes sense that if you have a lot of goods coming into a small number of places, you just tax them right there, which is pretty easy to do. In addition, people don’t easily see the tax because it’s built into the consumer price, so there is less political resistance to it.

Douglas Irwin

➤ Present Position
John French Professor of Economics, Dartmouth College (at Dartmouth since 1997)

➤ Previous Positions
University of Chicago Graduate School of Business (1991-1997); Board of Governors of the Federal Reserve System (1988-1991)

➤ Education
Ph.D. (1988), Columbia University; B.A. (1984), University of New Hampshire

➤ Selected Publications
Clashing over Commerce: A History of U.S. Trade Policy (University of Chicago Press, 2017); Trade Policy Disaster: Lessons from the 1930s (MIT Press, 2012); Peddling Protectionism: Smoot-Hawley and the Great Depression (Princeton University Press, 2011); The Genesis of the GATT, with Petros C. Mavroidis and Alan O. Sykes (Cambridge University Press, 2008); Free Trade under Fire (Princeton University Press, 2002); Against the Tide: An Intellectual History of Free Trade (Princeton University Press, 1996)

EF: Regarding regional cleavages surrounding trade policy, how important do you think the tariff issue was to the frictions that led to the Civil War?

Irwin: You do see the argument out there that trade restrictions were one of the principal reasons the South seceded, not so much among academic historians but among others who write on the topic. I think the tariff issue had very little, if anything, to do with the Civil War. After the 1828 Tariff of Abominations, South Carolina essentially said we’re not going to enforce this law and we may withdraw from the union unless the policy is changed. That precipitated a real crisis, and it was defused with the Compromise of 1833 proposed by Henry Clay, which gradually reduced tariffs. From 1833 until the Civil War, tariffs were basically on a downward path. We reduced the tariff further in 1846 and then again in 1857. A year before the Civil War, the average tariff was below 20 percent, which was about the lowest it had been in the entire antebellum period. So the South and the Democrats really held the cards in terms of trade policy right up to the Civil War.

What the revisionists of the Lost Cause group will say is, well, the Republicans assumed power and passed the Morrill Tariff in 1861 and that led to the conflict. But the only reason the Morrill Tariff passed was because most of the South had already seceded after the election of Lincoln. If their representatives had stayed in Congress, they could have stopped it. It wasn’t that the South left the Union because of the Morrill Tariff; we got the Morrill Tariff because they left. In fact, it wasn’t Lincoln who signed it but the Democrat James Buchanan before Lincoln took office. So I think there’s basically no evidence the tariff was a major cause of the Civil War.

EF: At the end of the book you discuss how predictions for U.S. trade policy have been really dire. But you offer some caution about such claims.

Irwin: It was a tricky matter for the book because I completed the manuscript in September of 2016. And I had every expectation that Hillary Clinton was going to be elected and there would be significant continuity in trade policy. When Donald Trump was elected, given his extreme rhetoric on trade, many people expected big changes in trade policy. I did have the opportunity to add a few paragraphs on Trump, and as you can see I tried to hedge my bets. If you listen to the rhetoric, it might be reasonable to think that there is a big shift coming for U.S. trade policy. But I also noted that if you look back over the past 250 years, you see that we have had these periods where trade policy sort of veers off and then eventually returns to the old status quo. For example, Democratic President Woodrow Wilson slashed tariffs dramatically and tried to introduce much freer trade, but the Congress soon reimposed high tariffs when the Republicans were returned to power. When you look at what Franklin Roosevelt did with the RTAA, the introduction of trade agreements was a policy of evolution, not an overnight revolution. The Reagan administration imposed a lot of protectionist measures in the 1980s, but those restrictions soon faded away.

As a result, I try to suggest in the book’s conclusion that there’s still a lot of status quo bias in the system. We can’t always believe the strong rhetoric, and maybe things won’t change as much as promised. And so far, as of August 2017, I think Trump hasn’t changed much in terms of U.S. trade policy. Yes, he pulled out of the Trans-Pacific Partnership, but maybe Hillary Clinton would have done so also; Bernie Sanders too. Trump did say he wanted to renegotiate bilateral agreements with these countries. There’s no evidence we’ve moved forward with that but that’s at least saying that he’s open to the idea of trade agreements. He hasn’t pulled out of the North American Free Trade Agreement (NAFTA), although the renegotiation of it is not likely to go well. He might go after China a bit, but consider his announcement: He signed an executive order for the USTR not to initiate an investigation but to look into initiating an investigation. So there’s nothing there yet. I think the administration is quickly learning that there is a process, there’s a reason why things operate slowly, and you have to work within the laws we have.

Also, any big change in trade policy — in any direction — is going to generate a lot of opposition. In relation to NAFTA, when you look at a map of where U.S. agricultural exports are produced, you see that a lot come from areas that the president carried and a lot head to Mexico. So hopefully government officials begin to realize pulling out of NAFTA would not only reduce imports to the United States, it would also lead to reduced market access for U.S. exporters. There are a lot of trade-offs in any policy change. It’s not a black and white process of you stop imports, you create jobs here, and that’s the end of the story.

EF: What do you think about Brexit and what it portends for trends in trade policy?
Irwin: A lot of people have said that it’s an indicator of
an antiglobalization backlash. Yet I don’t believe it represents a backlash against trade per se because Brexit proponents want to maintain Britain’s access to the European
Union (EU) market and have actually argued for even freer
trade outside the EU. So it wasn’t an anti-trade movement. I think immigration, regulatory, and sovereignty
concerns about the EU were dominant.
If they go through with it, however, Britain could be
making a big mistake. First, Europeans are not going to
give them free and easy access as they had before. Britain
has no trade negotiators because they outsourced that to
the EU. So all of a sudden they’re looking for qualified
staff to negotiate new trade agreements. Second, trade
agreements these days are much more about regulatory
harmonization and coordination than tariff levels. If you
were dealing with only tariffs, that would be much easier
to address. But these are really complicated policy measures where you really need a lot of expertise. To pull out
of the EU and try to replicate that — not just with the EU
but with a whole bunch of other countries — it’s going to
take a long time to repair those networks. With global
supply chains being so important, standing outside the
system for a while and then trying to get back in can do
big harm to a country.
There’s actually a cautionary tale here from the
American Revolution. After the United States won its
independence from Britain, American leaders thought
that the political settlement would restore U.S. access
to the markets of the British Empire. They were sorely
mistaken: Britain sought to punish the United States by
keeping it out of its markets, and the United States paid a
hefty economic price.
EF: How would you assess the claim that more restrictive trade policies in the late 19th century fueled
industrialization in the United States?
Irwin: This is one of the biggest questions in the history of
U.S. trade policy: Did protectionism foster U.S. economic
growth and development in the late 19th century? I’m
not convinced that we can attribute America’s industrial
advance in the 19th century to high tariffs or protection.
There are a couple points to make on this. There is certainly
a correlation between high tariffs and industrial growth in
the late 19th century, but we can’t leave it at that. That
would be a post hoc, ergo propter hoc argument. Instead,
we need to know the mechanism by which high tariffs
might lead to this growth. Usually the mechanism identified
is that agriculture is a relatively low value added per worker
sector and with the tariff you are going to shift resources
into manufacturing, which is a relatively high value added
per worker sector. So not only do you industrialize, but you

also raise national income because you get workers into
more productive activities. I have done some back of the
envelope calculations about how much labor could possibly
have moved across sectors as a result of the tariff, and the
numbers are pretty small in terms of any possible gain. And,
actually, this intersectoral switch is happening anyway. It’s
a natural process. A lot of the industrialization occurred
prior to the Civil War, between 1840 and 1860 when we
had low and declining tariffs. A lot of the growth in the late
19th century when we had high tariffs is extensive growth,
not intensive growth. In addition, there are so many other
things going on. We had open immigration, so there was a
lot of growth in the labor force. We revamped our banking
laws during the Civil War, finance became very important,
and we got capital deepening. That’s not because of the tariff; that’s because the whole financial system of the United
States was really developing.
Another point to be made is that when you look at the
high productivity growth sectors in the U.S. economy
in the late 19th century, John Kendrick and others have
shown they’re mostly in the non-traded goods, service
sector. Transportation and utilities were growing very
rapidly. It’s hard to see how the tariff would help the non-traded goods, service sector of the economy improve its
performance. Also, Steve Broadberry has done some work
showing that increasing productivity in the service sector
was very important to the United States catching up with
Britain in the late 19th century. That, too, doesn’t seem
to be tariff related. All of this doesn’t lend itself to an
easy story where the tariffs are the key factor behind U.S.
growth and industrialization.
In addition, when you look at particular manufacturing industries, such as iron and steel or textiles, once
again the story doesn’t seem to be particularly strong.
For example, I once looked at the tinplate industry. It’s
true that we didn’t have tinplate production until the
McKinley Tariff, but the reason we didn’t have it was
because we had high tariffs on imported iron bar, which
is an important input to tinplate. So you had a high cost
of production on your intermediate goods and that hurt
downstream producers. When you look at the whole
tariff code in the late 19th century, it’s not geared toward
the production of final manufactured goods. There are
high tariffs for everyone, including on intermediate
goods, and you’re not really helping out downstream producers when you do that.
EF: It is often asserted that the Hawley-Smoot Tariff
played an important role during the Great Depression.
What is your view?
Irwin: I would say most economists have been skeptical of
the claim that the Hawley-Smoot Tariff led to the Great
Depression or even exacerbated it to any great extent. In
their Monetary History of the United States, Milton Friedman
and Anna Schwartz hardly mention the tariff at all.

Whenever Friedman talked about the Great Depression,
he always said that it was a very bad piece of legislation,
but it didn’t cause the Great Depression, it didn’t generate
25 percent unemployment. I think that’s basically true.
There is something else going wrong in terms of monetary policy or other macroeconomic factors that cause
depressions. Tariffs change relative prices and reallocate
resources between industries but don’t change the level of
activity to that extent. There’s a lot of evidence for that
through history. For instance, in 1922 Congress passed
the Fordney-McCumber Tariff, which raised tariffs more
sharply than even the Hawley-Smoot Tariff, and yet an
economic boom followed. Now, the tariff certainly had
nothing to do with that boom, as the economy was recovering from tight monetary policies after World War I. But
the point is we have had a lot of tariff increases in the past
that didn’t lead to depressions and a lot of tariff reductions
that didn’t lead to booms.
My view of Hawley-Smoot is that it was unnecessary,
it was ineffective, and it was harmful. It was unnecessary
because it was introduced in the House at a time of almost
full employment, the spring of 1929. It was ineffective
because the motivation was to help out farmers, but we
were a big net exporter of farm goods so the domestic
price that they faced wasn’t going to be affected by import
duties. It was harmful because it led to a lot of retaliation
against the United States, so our farm and factory exports
were actually harmed.
EF: In addition to legislation like Hawley-Smoot, you
and Barry Eichengreen have looked at some other
factors in the rise of protectionism during the 1930s.
Irwin: Everyone knows a trade war broke out in the 1930s.
But what really caused it? The standard explanation is that
there was chaos and that everyone was trying to protect
their own market in light of the Great Depression. We
found something different. There is a very pronounced
pattern in terms of which countries were adopting protectionist policies and which weren’t. That hinged on something that naturally follows from Barry’s work — how long
you stay on the gold standard.
There’s a trade-off that different countries made. If you
are being confronted with a deflationary shock, you can
use monetary policy to adjust to that. But if you’re on the
gold standard and the hands of the monetary authorities
are tied, you look for other policy instruments to try to
prevent gold outflows and reflate the economy. Trade policy is one of them. So what you find is some countries are
breaking off the gold standard very early and they pursue
reflationary monetary policies. They are able to mitigate
the worst effects of the depression and they don’t face as
much protectionist pressure. In contrast, there are other
countries that stay on the gold standard and their economies remain relatively depressed. Those are the ones precisely where the protectionist pressures are really strong,

and they impose exchange controls and higher tariffs and
things of that sort.
EF: Why do you think protectionism has such enduring appeal, at least rhetorically?
Irwin: I think protectionism has always had a lot of
appeal because, politically, it’s sort of an “us versus them”
situation. You’re helping out your domestic firms against
foreign firms that are stealing our jobs. It reflects a nationalistic
view; many people naturally have a desire to try to help
their neighbors first.
Also, with protectionism it’s easy to see who’s helped
and harder to see who’s hurt. There are tangible benefits
to some group when you erect a trade barrier, but it’s
much harder to see those who are harmed or pay the price.
It’s a bit of a case of the seen versus the unseen. One way
I try to illustrate this in my classes is to explain one of the
most fundamental theorems of international economics,
the Lerner Symmetry Theorem. It states that a uniform
tax on imports is equivalent to a uniform tax on exports.
But just think about how this plays in the public mind.
If you went out into any city and asked people whether
we should impose an across-the-board tariff on imports
to protect jobs and stop foreign countries from taking
advantage of us, a lot of people would support that. But if
you went to the same people and asked whether we should
impose a uniform tax on all exports, on all farm exports
and manufactured exports, there would be very little
support for that. But the Lerner Symmetry Theorem says
they’re equivalent. So it’s the same policy, but how you
frame it determines the response you will get.
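Irwin’s classroom point can be written down compactly. The following two-good sketch is ours, not the article’s: with balanced trade, only the domestic relative price of imports to exports matters for decisions, and a uniform import tariff and a uniform export tax at the same rate distort that relative price identically.

```latex
% Two goods: an import M and an export X, with fixed world prices p_M^* and p_X^*.
% Import tariff at rate t: domestic buyers pay (1+t)p_M^* per unit of M, so the
% domestic relative price of imports is (1+t)p_M^*/p_X^*.
% Export tax at rate t: domestic sellers keep only p_X^*/(1+t) per unit of X, so
% the domestic relative price of imports is p_M^*/[p_X^*/(1+t)].
% The two expressions coincide, which is the Lerner symmetry result:
\frac{(1+t)\,p_M^{*}}{p_X^{*}} \;=\; \frac{p_M^{*}}{\,p_X^{*}/(1+t)\,}
```

Either policy raises the price of imports relative to exports by the same factor (1+t), so production, consumption, and trade decisions are identical under the two policies even though only one of them sounds, in Irwin’s framing, like a tax on ourselves.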
EF: What are your thoughts on the paper by David
Autor, David Dorn, and Gordon Hanson arguing that
rising Chinese import competition has had significant
effects on U.S. manufacturing?
Irwin: I think it’s an important contribution because it
shows us some of the real difficulties in terms of labor
market adjustments to big shocks. The finding that people
drop out of the labor force, retire early, or go on disability
and don’t necessarily move on to other jobs is an important
finding. While I think economists will debate the number
of workers who have been displaced because their estimate
is based on cross-sectional evidence, which is not the ideal
way to do it, we can be pretty confident that the number is
big. That said, here’s my take on it. First, the China shock
was a one-time shock. That is, you had big growth not
just in trade but in a shift of people from agriculture into
industry in China, at the same time as the working-age population was growing. That’s not going to repeat itself. The
rural to urban transition has slowed dramatically, and the
working-age population in China is now actually in decline.
It also was not an aggregate demand shock. Even though
they identify significant harm to certain communities in the 1990s and 2000s, those were periods of declining unemployment in the United States. So it really draws attention
to the problem with geographically concentrated production and the difficulties of getting workers to move to
different locations or to different industries. In this regard,
I would differentiate between the 1990s and the 2000s.
Autor, Dorn, and Hanson suggest the China shock was
occurring throughout this whole period, but at the end of
the 1990s we had an unemployment rate below 4 percent
with significant wage growth at the lower end of the wage
distribution. There’s actually some evidence that workers
in textile mills in the South who were displaced were getting higher-paying jobs elsewhere. The 2000s were a different period: the economy was far less robust than in the 1990s,
and the 2008 financial crisis just compounded the problems
for displaced workers. Also in the 2000s you had huge macroeconomic imbalances in China. We had a pretty sizeable
current account deficit during the 2000s, while China had
a current account surplus of 10 percent of GDP. It’s highly
unusual for a large developing country to have a massive
trade surplus like that, which raises the issue of currency
manipulation and so forth. I don’t think that we are going to
see something like that again, in terms of trade imbalances,
and if we begin to go in that direction, there should be
enough warning signs and policy will be different.
In short, the China shock was a big one-off event that
happened under unusual circumstances and is unlikely to be
repeated. We have learned a lot from it, but going forward
I don’t think it changes the consensus that there are still
large benefits to trade. We have always known that certain
communities or certain types of workers are going to be
hurt by trade. This just happened to be a pretty big example. More recent research has also provided some context
or some nuance to what they found. For instance, work by
Rob Feenstra and others has tried to pin down the benefits
to consumers from lower prices, particularly workers at the
lower end of the income spectrum. In addition, some of
the China shock was due to China’s unilateral reductions of
tariffs on inputs, which made its final goods producers much
more efficient. That’s not due to a change in U.S. policy —
that’s just China becoming more open and more efficient,
which ultimately is something we want to see.
EF: I know you have just finished a massive book, but
I was wondering what you are working on currently.
Irwin: I’m really excited about my next project, which is
looking at the political economy of trade policy reform
in developing countries. Arguably the biggest change in
the world economy over the past 30 or 40 years is the
increased participation of developing countries and their
unilateral decisions to open up and become part of the
world trading system. The biggest, of course, was China,
which wasn’t because of the World Trade Organization
or external pressure. Rather, in 1978-1979 Deng Xiaoping
decided to open up the economy. It was a unilateral decision — and that has been the story for a lot of developing countries.
There has been a lot of work looking at what happens
when you go from a closed to an open economy. Sachs and
Warner had a famous Brookings paper in 1995 that was
improved upon by Wacziarg and Welch. And there are
many others now using synthetic control methods to sort
of simulate what would happen to a country if it hadn’t
opened up. Basically all of these papers identify pretty big
effects to GDP, to investment, and obviously to trade.
There is heterogeneity, of course; not everyone is going
to get a big boost from it, but, on balance, a pretty significant positive impact. So the question I want to address is
what was behind the decision of those countries to open
up or not. What I’m doing is looking at various countries in terms of their political decisionmaking process,
starting with Taiwan in the late 1950s, which was really
the first developing country to open up, then Korea, then
Indonesia, then Chile, and so forth. New Zealand enacts
big trade reform in 1984, and there are a lot of countries
in the late 1980s and early 1990s. My initial read is that it’s
not so much that these countries’ policymakers, backed
by some economist, are thinking about the comparative
advantage gains from trade or things of that sort. What
they’re finding is that they have these import substitution
policies and overvalued exchange rates, which have stifled
their exports, and now their exports can’t pay for their
imports. And it’s not that they want to keep out imports.
They desperately want to import things like food and
fuel and especially capital goods. But they don’t have the
exports to pay for them. So they need to do something to
stimulate exports. That requires a devaluation, usually a big devaluation, and reducing tariffs, which by the Lerner Symmetry Theorem act as a brake on exports.
The reason why Taiwan and Korea initially moved in a
more open direction is that the United States was cutting
back their foreign aid. They had huge trade deficits that
were financed by U.S. foreign assistance. By the late 1950s
the United States was saying we’re in the postwar period
now, you’re not being threatened militarily, and so you’re
on your own. The countries realized, well, we can’t cut our
imports and our exports are virtually nothing. We’ve got
to do something about this, and that’s why they shifted
their policy. It’s fascinating to see the pressure that U.S. aid
withdrawal puts on foreign officials to rethink their policies. Also, sometimes it’s the International Monetary Fund
(IMF) providing advice but not necessarily a club over their
heads. And often there are policymakers groping for a solution who have been influenced by an economist. When you
get that sort of link, you sometimes can bring about these
significant changes in trade policy. So in the case of Taiwan
it was Sho-Chieh Tsiang, who was then an economist at the
IMF and who later taught at Rochester and Cornell. The
chief economic minister asked for a memo on what they
should do. Tsiang went there and said devalue and open up.
That’s what they did, and the results were astounding. EF
ECON FOCUS | THIRD QUARTER | 2017

ECONOMIC HISTORY

Doing Development Differently
Soul City was a bold experiment in rural North Carolina that collided
with the economic and social realities of the 1970s

Packed into a Dodge compact with five children
and her husband, Jane Ball-Groom arrived at the
Manson, N.C., post office one morning in January
1970. They had driven more than 400 miles from New
York City to work onsite at a community under development by her employer, McKissick Enterprises.
After asking for directions, the family drove a bit farther. “We crossed the railroad tracks and saw this little
cardboard sign on a wooden post. There was a barbed wire
fence with cows behind it. That was Soul City.”
Ball-Groom had traded her family’s apartment in a
public housing project for one of the single-wide trailers plunked in the middle of 1,810 acres of farmland in
Warren County, N.C. To her, it was paradise. She was
among the pioneers who wanted to build a place where
people of all races and classes could have a second chance.
Soul City was the brainchild of lawyer and civil rights
activist Floyd McKissick. In his late 40s, the Asheville,
N.C., native left the Congress of Racial Equality to
form McKissick Enterprises as an instrument of black
economic empowerment.
Soul City, the firm’s biggest endeavor, was unveiled at a press conference in January 1969. It would be the first of several planned communities in the South to reverse “the migratory pattern of rural people seeking to leave areas of economic and racial oppression,” as McKissick described it, especially blacks who had fled north during the “Great Migration” that had begun in the early 20th century.

[Photo caption: Soul City was marketed as an integrated community that would provide a “fresh start” for people and businesses.]

Using a combination of public support and private capital, McKissick and his team of mostly young and idealistic black professionals completed the first phases of Soul City’s
residential and industrial development. They also laid the
groundwork for future growth, including a regional water
system that taps into a nearby lake and serves the residents
of three counties today.
What Soul City couldn’t do was generate enough
jobs to satisfy the Department of Housing and Urban
Development (HUD). The agency withdrew its support
of the project in June 1979, exactly seven years after it
announced the award of a $14 million loan guarantee.
In hindsight, Soul City was an ambitious experiment that
ran out of time. McKissick and his team couldn’t overcome
the challenges of spurring economic activity in a relatively
rural part of North Carolina that lacked the essential ingredients for growth. Nor could they overcome the skepticism
that arose after years of slow progress caused, in part, by a
bureaucratic tangle that was as bad as kudzu.
“Was Soul City too bold, given the racial tension, the
economic conditions, or the lack of infrastructure in the
county? It probably was,” notes Eva Clayton, a former
congresswoman who worked on the Soul City project.
“But I don’t think it was a mistake to try a bold idea. There
is a need for new ideas in rural areas.”
Starting from Scratch
Like other civil rights leaders, McKissick saw economic
progress and independence as the next logical step for
blacks after making major gains for their legal rights in the
1960s. Soul City would be a vehicle for what he and others
called “black capitalism.”
In another century, black entrepreneurship had
emerged in the antebellum era and surged in response to
racial segregation in the post-Civil War South. But times
changed. Communities like Jackson Ward in Richmond
and Hayti in Durham, where blacks had formed their own
base of enterprise, were shattered in the name of urban
renewal in the 1960s. New opportunities were needed.
Like other proponents of the “new town” movement,
McKissick wanted to create an alternative to decaying
urban communities. “The black man has been searching
for his identity and destiny in the cities,” McKissick told
the New York Times in January 1969. “He should be able to
find it on the plains of Warren County.”
What McKissick imagined on those plains was a
new town, like the ones that emerged from the ashes of
European cities destroyed during World War II. He aided

BY CHARLES GERENA

in this rebuilding in France as an Army sergeant and visited
other sites in France and England. In a 1983 interview,
McKissick recalled telling his relatives and Army buddies
after the war, “If we can spend all this time over in Europe
building, we can sure go back down South and build.”
In 1971, McKissick applied for a loan guarantee under
HUD’s New Communities program, established in 1968
and expanded under Title VII of the Housing and Urban
Development Act of 1970. The program was intended to
support innovative solutions to suburban sprawl, urban
blight, and underutilization of rural communities. Like
the private developments that were built as part of the
new town movement, New Communities projects were
intended to be more self-sufficient than traditional suburbs,
providing the economic opportunities lacking in cities but
avoiding the haphazard growth that characterized suburbia.
McKissick got HUD’s final blessing in 1974, adding to
the financial backing he had from several lenders and his
general partners in the project — the National Housing
Partnership, a private entity created by Congress to
stimulate the development of low- and moderate-income
housing, and MMI Inc., the Cleveland-based development arm of a minority-owned architectural firm.
With financing in hand, McKissick was ready to deliver
on a dream he had envisioned since his Army days. His initial goal was to have an integrated community of 40,000
to 50,000 people and an industrial park teeming with
activity within 20 to 30 years.
But which should come first, the jobs or the people?
This is the classic chicken-and-egg question of developing
a self-sufficient community, particularly one like Soul City
that wasn’t within commuting distance of an urban hub. If
residential construction happens first, then the new residents need a place to work, unless they are retired or independently wealthy. If commercial development comes
first, then the new laborers need roofs over their heads.
Soul City’s developers were told to focus on industrial
recruitment, according to Devin Fergus, a professor of
history and black studies at the University of Missouri. In
a 2010 article in the Journal of Policy History, he wrote,
“Over the opposition of McKissick and other town founders, Washington required that industry and commercial
contracts be secured first before new residential subdivisions were built.”
So they completed a 73,000-square-foot building
dubbed Soul Tech I as the first phase of an industrial park.
To support future industrial activity, they also constructed
a regional water system, upgraded and expanded a wastewater treatment facility, and built a fire station.
Despite this progress and initial interest from several
companies, McKissick and his team couldn’t bring large
employers to Soul City. A New York-based manufacturer
produced backpacks and duffel bags for the U.S. military
at the Soul Tech I building for a year before the company filed for bankruptcy in 1979. After that, a chicken
hatchery, a small textile firm, a packaging company, and a

janitorial supplies producer occupied part of the building
at various times.
Several key players expected growth to spread to
Warren County from the Research Triangle region. But
that never happened.
Generating Jobs is Hard Work ...
There wasn’t much happening in Warren County. As of
1969, per capita income in the county was only two-thirds
of the per capita income for North Carolina. More than
40 percent of families lived in substandard housing.
Agriculture had brought prosperity to Warren County
before the Civil War. After the war and the collapse of the
plantation system, however, the local economy struggled.
The county was among the rural communities in the South
where millions of slaves had toiled on plantations and
had limited options beyond sharecropping or working on
someone else’s farm, communities that missed the prosperity of regions like the Research Triangle.
While Warren County had an ample supply of low-cost
labor and land in its favor, it had plenty of competition for
new industry. Two towns in nearby counties — Henderson
in Vance County and Oxford in Granville County — as
well as many others in rural North Carolina were also eager
to replace lost agricultural jobs.
Also, the county’s labor pool was older and unskilled,
and the educational system was ill equipped to prepare
residents for new careers. Soul City’s developers tried to
address this shortcoming by helping to secure funds for a
high school. In the meantime, the community was too far
away from population centers to provide additional labor
options for potential employers.
In general, Warren County’s relatively isolated location
in North Carolina was a handicap. Soul City was next to a
rail line and minutes away from Interstate 85 and U.S. 1. But
it was more than 50 miles, or an hour’s drive, from Durham
or Raleigh, too far from the amenities that prospective
employees and employers look for. To meet these needs,
the developers built a recreational center with basketball
and tennis courts, a bath house, and pool.
Soul City was the only project in the New Communities
program that was neither a satellite community that benefited from the amenities of a nearby city nor within an
urban area in need of revitalization. Most of the privately
funded new towns were satellites — Columbia, Md., was
less than 25 miles from Baltimore, while Reston, Va., was a
similar distance from the nation’s capital.
The proximity of Columbia and Reston to major cities offered them another benefit, says David Godschalk,
a former professor of city and regional planning at the
University of North Carolina at Chapel Hill. These new
towns saw migration from Baltimore and Washington.
“Near Soul City, there was no such metropolitan source
of families that wanted to move out. You weren’t going
to move from Raleigh [and] the little towns [in Warren
County] didn’t have the population supply.”

Godschalk knows this firsthand. He served on the
board of directors of the Warren Regional Planning
Corporation, a nonprofit formed to develop a general
land-use plan for Soul City and provide technical assistance to Soul City’s developers as well as minority and disadvantaged business owners in neighboring communities.
... Especially For a “New Community”
The economic challenges of developing Soul City were
exacerbated by the complexities of navigating HUD’s
New Communities program.
While McKissick and the developers of 12 other projects received loan guarantees from HUD and other federal
support, their initial investment was steep. There were the
upfront expenses of acquiring land and installing infrastructure in addition to the usual carrying costs of paying
property taxes on undeveloped land and interest on loans.
According to a 1975 report on the New Communities
program, “All other nations engaged in [new town] development employ public mechanisms for land assembly and
the provision of infrastructure. We are the only country
attempting to do this through private developers.”
In the case of Soul City, the developers had $10 million
of HUD’s $14 million loan guarantee and were awarded
more than $5 million in grants from HUD and millions
more from other federal agencies. But it took years before
the money started to flow.
McKissick applied for the loan guarantee in February
1971, almost two years after submitting a preliminary
application. HUD signed a letter of commitment in June
1972, more than a year later, and didn’t spell out the terms
and conditions for the guarantee until it signed a formal
agreement in February 1974.
“Months went by, along with delays in financing and
land acquisition, as the project stumbled into a ‘turf war’
between political operatives inside the White House and
more fiscally minded conservatives operating largely out
of the Office of Management and Budget,” noted Devin
Fergus in his 2010 journal article. “While the federal officials bickered among themselves, commercial real estate
prices rose.”
The developers finally sold their first $5 million in
bonds in March 1974. But they held title to only part
of the proposed 5,000-plus acres that were targeted for
development. The rest of the loan guarantee was needed
to pay for the upfront expenses and carrying costs of the
rest of the project.
Before the developers could issue additional bonds,
however, they had to meet several conditions that HUD
did not impose on other New Communities projects.
Soul City had to generate at least 300 jobs and meet other
requirements pertaining to land sales and infrastructure
development.
These restrictions weren’t lifted until December
1976. McKissick and his team sold another $5 million in
bonds, which was enough to complete several projects and
support the development of 32 houses between 1977 and
1979. But the last $4 million in guaranteed bonds were
never issued, and numerous lots purchased for development remain fallow today.
On top of these unique challenges, all of the developers in the New Communities program leaned into the
same headwinds faced by the rest of the country in the
1970s. The OPEC embargo of 1973 caused oil prices to
nearly quadruple between October 1973 and January 1974.
Double-digit inflation hit. Interest rates and unemployment soared, undermining the demand for new housing.
“In calculating our loan guarantee commitment, we
had to do financial projections that assumed a 6 percent
compounded rate of inflation for 30 years,” recalls Floyd
McKissick Jr., who worked for his father on the Soul City
project and studied regional planning and law before being
elected to the North Carolina legislature. “That sounded
good in 1971, but they didn’t anticipate the oil embargo”
raising the cost of petroleum-based products like asphalt
for roads and PVC pipes for plumbing in houses. “You
were stuck with that model, when we really needed to go
back and recalculate the numbers.”
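The constraint McKissick Jr. describes is simple compound growth; a minimal sketch of the arithmetic (the 6 percent rate and 30-year horizon come from the quote above; the function itself is illustrative):

```python
def compounded_factor(rate, years):
    """Cumulative growth in the price level under a constant
    annual inflation rate, as in the 6 percent assumption HUD
    required for the 30-year loan-guarantee projections."""
    return (1 + rate) ** years

# A steady 6 percent rate compounds to roughly a 5.7-fold price
# level over 30 years; the 1970s oil shocks pushed actual costs
# of inputs like asphalt and PVC pipe well past such projections.
factor = compounded_factor(0.06, 30)
```

The point of the example is that a fixed compounding assumption locks the financial model in place, which is exactly why the developers "really needed to go back and recalculate the numbers" once the embargo hit.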
Among the 13 projects funded through the New
Communities program, most faced financial difficulties.
The program was re-examined several times and closed
down in 1983 after awarding $590 million in guarantees
and grants. Only The Woodlands in Texas was able to
achieve viability over the program’s 15-year span. In the
private sector, the new town movement took decades to
yield any successes, including Columbia and Reston.
All Eyes on McKissick
Soul City had been under the microscope for years before
HUD ended the New Communities program. The Raleigh
News & Observer ran an investigative series on the project
in March 1975, a year after money started flowing from
the first bond issue. The articles questioned whether the
project benefited only the bank accounts of McKissick
and his colleagues. They also criticized the complex web
of organizations involved with Soul City’s development
and the multiple roles that people had in these groups as
potential conflicts of interest.
Shortly after the series ran, Sen. Jesse Helms, R-N.C.,
and Rep. L.H. Fountain, D-N.C., called for an audit of Soul
City. Helms had opposed the project from the very beginning. After being elected to Congress in 1972, he received
a note of congratulations from McKissick and soundly
rejected the overture, promising “to request a careful independent examination of expenditures” into Soul City. On
the floor of the U.S. Senate three years later, Helms called
Soul City a “gross waste” that exemplified the pitfalls of
government intervention in private development.
The General Accounting Office (GAO) released its
audit of Soul City in December 1975 and found no significant malfeasance. But the damage was done — the
project had come to a standstill. When work resumed,

the negative publicity from the News & Observer exposé
and the GAO audit gave the project a taint of scandal and
corruption that would haunt it until the end.
Some people believed McKissick was held to a different
standard. He was a businessman as well as an activist, yet
any hint of him, his family, or his associates making a profit
from Soul City was viewed suspiciously. An April 1979
article in the Wall Street Journal zeroed in on the
cars that McKissick’s family drove and the “big microwave
oven” in their kitchen.
It didn’t help that the overlapping organizations formed
to tackle the myriad community development efforts at
Soul City made the whole project much harder for outsiders to understand. “Had this project been next to Raleigh
or Charlotte, you could have probably pulled in some of
the business leaders from those communities for volunteer
boards,” notes David Godschalk. “This was a remote, rural
location in North Carolina ... that didn’t have many human
resources.”
Providing social services like a health clinic and securing
state and federal funding for projects like the regional water
system helped overcome the initial suspicions of public
officials in Warren County and surrounding communities.
“They saw the water and sewer lines going into the ground,”
recalls McKissick Jr. “They saw the positive impacts.”
At the same time, political leaders with entrenched
interests wanted to protect the status quo. It had been less
than a decade since the Civil Rights Act passed in 1964. “I
think more of the skepticism [about Soul City] came from
people who held prejudices and were not open to the idea
of encouraging black entrepreneurship,” adds McKissick
Jr. “They were skeptical of what changes it might bring.”
Ultimately, McKissick and his team remained outsiders, even in the eyes of much of the county’s black
population, despite their best efforts to reach out. “They
had been in this ditch so long that they’d given up hope,”
recalls Eva Clayton. “They couldn’t bring themselves to
believe that outsiders could come and do something that
hadn’t happened in years.”
A Dream Deferred
Floyd McKissick had the drive of a social entrepreneur to
face these challenges. He believed passionately that capitalism could produce innovative solutions to the world’s
social problems. Other developers like Robert E. Simon Jr.

and James Rouse, the creative forces behind Reston and
Columbia, respectively, also touted the social benefits of the
new town movement.
Where McKissick might have fallen short is being
what the Schwab Foundation for Social Entrepreneurship
describes as someone who “continuously refines and
adapts [his/her] approach in response to feedback.” In
other words, he was stubborn.
For example, many people didn’t like the name “Soul
City” and advised McKissick to change it in order to dispel perceptions of the project being for blacks only. He
refused.
Other lessons can be derived from the experiences of
Floyd McKissick and Soul City. His son points to the
30-year time horizon and all-at-once development approach
championed by the New Communities program. Instead, a
developer should look ahead only 10 years and be willing to
respond to changes in market conditions. “You have to be
nimble,” explains McKissick Jr.
Even with a shorter timeline, a developer also needs
deep pockets to build an entire community from scratch.
“Columbia ran in the red for years and years,” says
Godschalk. “Rouse put a lot of his money into it, and he
had a lot of good connections with banks that helped to
keep it afloat,” plus the financial support of Connecticut
General Life Insurance Co. Reston also lost “a considerable amount of money” before it was completed, but it had
the backing of Gulf Oil.
Most importantly, a development needs to have something that creates demand for housing and drives up
property values, advises McKissick Jr. The problem was
Soul City never got a chance to offer those amenities. The
developers had expended their financial and political capital on building infrastructure. Nor was that infrastructure
enough to lure the businesses that would create job opportunities for future residents.
There are certainly examples of cities that developed
in the middle of nowhere. But many of them followed the
path of progress — a railroad or a canal or a mountain pass
through a valley. Would McKissick have succeeded in
finding fertile ground on a former plantation?
David Godschalk believes that efforts to fight the economic tide to develop a community can succeed ... under
the right circumstances. “It is a very complicated effort,
but it’s not impossible.”
EF

Readings
Biles, Roger. “The Rise and Fall of Soul City: Planning, Politics,
and Race in Recent America.” Journal of Planning History, February
2005, vol. 4, no. 1, pp. 52-72.

Rhee, Foon. “Visions, Illusions and Perceptions: The Story of Soul
City.” Honors Thesis Submitted to Department of History at Duke
University, April 1984.

Fergus, Devin. “Black Power, Soft Power: Floyd McKissick, Soul
City, and the Death of Moderate Black Republicanism.” Journal of
Policy History, April 2010, vol. 22, no. 2, pp. 148-192.

Strain, Christopher. “Soul City, North Carolina: Black Power,
Utopia, and the African American Dream.” The Journal of African
American History, Winter 2004, vol. 89, no. 1, pp. 57-74.

Minchin, Timothy J. “ ‘A Brand New Shining City’: Floyd B.
McKissick Sr. and the Struggle to Build Soul City, North Carolina.”
North Carolina Historical Review, April 2005, vol. 82, no. 2, pp. 125-155.

POLICY UPDATE

Time to Unwind
BY HELEN FESSENDEN

T

his fall, the Fed is taking initial steps to unwind a
to the Fed. Once that bond is paid off, it’s taken off the
signature post-recession stimulus policy by trimFed balance sheet, while bank reserves decline by a corming back its massive balance sheet. Under quanresponding amount.
titative easing (QE), the Fed launched several rounds of
In the case of MBS, the process is similar. These secubond buying during and after the financial crisis, boosting
rities are backed by pools of mortgages purchased by the
its balance sheet from around $800 billion to $4.5 trillion.
government-sponsored enterprises (GSEs) Fannie Mae and
The primary goal of QE was to lower interest rates for
Freddie Mac, as well as Ginnie Mae (a federal agency). They
longer-term securities and mortgages, thereby making
repackage the mortgage debt into bonds and then sell the
borrowing cheaper and stimulating the economy. (The
MBS to investors and financial institutions. Under QE, the
Fed also kept its benchmark rates near zero throughout
Fed bought the MBS on the open market and then credited
this time, which affected short-term rates.) The Fed now
the reserve accounts of the GSEs and Ginnie Mae. By extenholds about $2.4 trillion in treasuries (17 percent of the
sion, the MBS roll off when the principal values are paid
market) as well as $1.7 trillion in mortgage-backed securidown for the mortgages underlying the MBS — for example,
ties, or MBS (29 percent of the market).
through normal payments, sale of property, or refinancing.
The Fed has long made clear that it would start to
In turn, the value of the Fed’s MBS holdings declines.
shrink its balance sheet once the process of raising short-term interest rates was well underway. In 2014, the Fed announced it would stop increasing its net bond holdings and instead maintain the size of its balance sheet by reinvesting bonds once they matured. Then, this past June, it said it would soon start allowing bonds to "roll off" — that is, mature and not be replaced by another security — so that its balance sheet would slowly shrink.
In October, this process began incrementally, with $6 billion in treasuries and $4 billion in MBS rolling off each month. Those sums will gradually increase to $30 billion and $20 billion per month, respectively. The aim of such a gradual and transparent implementation is to avoid the kind of disruption seen with the 2013 "taper tantrum," when markets were jolted on fears that the Fed would pull back quickly on its stimulus. This year, so far, Fed balance-sheet announcements have not sparked similar turmoil.
So what does "rolling off" actually look like? In some ways, it's just the bond buying process in reverse. Under QE, the Fed bought treasuries on the open market and paid for those purchases (which are assets on the Fed balance sheet) by crediting banks with reserves (which are Fed liabilities). This is the main reason why bank reserves also dramatically expanded under QE, from $900 billion before the crisis to $2.6 trillion at their peak (today, they are more than $2.3 trillion). Under the new policy, most treasuries on the balance sheet will still be reinvested. But for those that are slated to roll off, the Treasury Department, as the original bond issuer, will pay off the expiring debt in a process that ultimately transfers cash …
The Fed has not given a public estimate of how far this process will go. But many observers believe that the balance sheet's ultimate size, as well as the resulting amount of bank reserves, will be significantly larger than pre-crisis levels. One reason is that a central bank's balance sheet needs to be at least as large as the amount of currency in circulation. In the case of the U.S. dollar, total currency in circulation (home and abroad) has grown from $800 billion in 2008 to $1.5 trillion today. The Fed estimates this sum will expand to $2 trillion in the next five years.
Another reason has to do with the conduct of monetary policy. Traditionally, the Fed controlled short-term rates by a combination of adjustments in the quantity of reserves and the discount rate; these changes would affect supply and demand, respectively, in the fed funds market (the overnight interbank market). Since the crisis, however, excess reserves have grown so much that the interbank market has effectively disappeared, so such adjustments would have little effect on rates. The Fed has found that a more robust tool, also in effect since 2008, is adjustments in the interest rate paid on banks' excess reserves. This way, it has learned, it can control the range for the fed funds rate.
"It's not unreasonable to argue that the optimal size of the Fed's balance sheet is currently greater than $2.5 trillion and may reach $4 trillion or more over the next decade," wrote former Fed Chairman Ben Bernanke on his blog earlier this year. "In a sense, the U.S. economy is 'growing into' the Fed's $4.5 trillion balance sheet, reducing the need for rapid shrinkage over the next few years."
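The roll-off cap schedule described above can be sketched numerically. One detail is assumed here rather than taken from the article: under the FOMC's June 2017 plan, each cap steps up once per quarter until it reaches its maximum.

```python
def rolloff_cap(month: int, start: float, step: float, max_cap: float) -> float:
    """Monthly roll-off cap in billions of dollars, with month 0 = October 2017.

    Assumes the cap rises by `step` once every three months until it reaches
    `max_cap`; the quarterly cadence is an assumption (from the FOMC's June
    2017 addendum), not something the article itself spells out.
    """
    return min(start + step * (month // 3), max_cap)

# Treasuries: $6 billion per month at the start, capped at $30 billion.
treasury = [rolloff_cap(m, 6.0, 6.0, 30.0) for m in range(15)]
# MBS: $4 billion per month at the start, capped at $20 billion.
mbs = [rolloff_cap(m, 4.0, 4.0, 20.0) for m in range(15)]
```

Under that assumption, both caps top out twelve months in, for a combined maximum runoff of $50 billion per month.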
EF

ECON FOCUS | THIRD QUARTER | 2017

BOOK REVIEW

Echo Chamber Dot Com
#REPUBLIC: DIVIDED DEMOCRACY
IN THE AGE OF SOCIAL MEDIA
BY CASS R. SUNSTEIN, PRINCETON,
N.J.: PRINCETON UNIVERSITY PRESS,
2017, 286 PAGES
REVIEWED BY TIM SABLIK

The 2016 presidential election as well as tumultuous,
and sometimes violent, demonstrations recently
have had many asking: Is society becoming more
polarized along political lines? According to one study, parents in 1960 were much more likely to object to their child
marrying someone of a different race than from a different
political party; in 2010, the opposite was true. Another
study found that the political discourse of the two parties
in Congress has become more polarized over time. (See
“Interview: Jesse Shapiro,” Econ Focus, Second Quarter 2017.)
What is to blame for this apparent trend? In #Republic,
Cass Sunstein of Harvard Law School points to online
media. Today, individuals can find content for any number
of niche topics or viewpoints. Not only that, they are able
to filter it based on their interests and preferences — some
platforms even do so automatically based on users’ viewing
habits. While this may be a boon for consumers, who are
getting what they want, Sunstein contends that it has troubling implications for democracy.
The book’s title references Benjamin Franklin’s famous
statement on the type of government that the delegates to
the Constitutional Convention had designed: “A republic, if
you can keep it.” Keeping it requires a citizenry well-versed
in a variety of issues and viewpoints, according to Sunstein.
At first glance, the Internet would appear to be a great
enabler of such a society, with information on any topic
imaginable just a few keystrokes away. But Sunstein argues
that the Internet is being used for precisely the opposite
purpose: to reinforce pre-existing beliefs and filter out any
challenges to them.
Sunstein argues this point largely in philosophical terms,
but he also suggests it is a sort of market failure. He notes
that information is a public good because what you know
can freely be passed on to others, to their benefit. This
means information that may not benefit you directly could
still benefit others. From society’s perspective, to the
extent you fail to capture the benefit of the information
that aids others, you’ll underconsume information — or so
Sunstein contends. But it’s unclear that this effect is meaningful as a practical matter; one person underconsuming
information does not inhibit others from seeking it out to
their own benefit. And as Sunstein notes, the Internet does
include general interest news sites without pronounced

political slants, fostering serendipitous discovery.
It is certainly true that the Internet can be used to filter
content, creating “echo chambers” of likeminded individuals. The most chilling example of this, which Sunstein
sets out, is the way terrorist organizations have used social
media to radicalize and recruit members. Sunstein also
cites experiments in sociology showing that when people
are divided into likeminded groups, moderate members
are influenced by those who hold opinions more strongly,
becoming more extreme themselves.
But Sunstein fails to make a compelling case that most
individuals online are exclusively seeking out echo chambers. In fact, citing a study by Matthew Gentzkow of
Stanford University and Jesse Shapiro of Brown University
that finds only a small preference among online users
for news outlets that match their political persuasions,
Sunstein admits that “most people do not consume news in
a partisan way.”
Core to Sunstein’s thesis is his assertion that the Internet
has made it easier to avoid exposure to new information
and views than in the past. He contrasts modern society with a time when physical public forums like parks
and street corners played larger roles, allowing anyone to
engage freely with the public. But it is hard to think of the
Internet as anything but such a public forum writ large,
and avoiding unsought information online does not seem
as easy as Sunstein imagines. Many news sites include
comment sections at the end of each article and provide
links to other (often unrelated) material on the site. And
unless one befriends only likeminded individuals on social
media, exposure to novel information and opinions is likely
to occur more frequently online than on street corners.
Indeed, a recent paper by Gentzkow, Shapiro, and Levi
Boxell of Stanford University found that political polarization has grown most quickly since 1996 among older groups
who are least likely to use social media or read news online.
Sunstein readily acknowledges the many benefits of
social media and the Internet more broadly, and his proposed fixes are ultimately fairly mild. He suggests websites
with opposing views could agree to link to each other’s
content, or that social media services could provide users
with more content outside of their expressed interests.
But would such measures address the causes of polarization or just its symptoms? Some researchers have pointed
out that the divide in voting patterns between rural and
urban residents mirrors a similar divide in health and economic outcomes, suggesting there are deeper issues at
work than how we communicate with one another. Still,
Sunstein’s book is a thoughtful study of how media consumption tailored only to individual desires could exacerbate
the divides, even if it isn’t necessarily driving them.
EF

DISTRICT DIGEST

Economic Trends Across the Region

Preparing Unemployment Insurance for a Downturn: The Carolinas
BY RICHARD KAGLIC

In the aftermath of the Great Recession, the United
States saw unemployment rates rise to levels it had not
seen since the early 1980s as employers shed workers
by the millions. Workers who had lost their jobs could not
find other work and flooded into unemployment offices
around the nation applying for benefits to ease the shock
to their household income. Unemployment insurance
claims and payouts soared, straining programs from coast
to coast.
Before all was said and done, 36 states were overwhelmed
and saw their programs reach insolvency, requiring them to
borrow money from the federal government to continue
paying benefits to qualifying workers. The trauma to states’
unemployment insurance trust funds prompted policymakers in several states to make significant changes to their programs in order to place them on a more sustainable footing
for the next recession. Two of those states lie in the Fifth
District: North Carolina and South Carolina. This article
looks at the two states’ unemployment insurance programs
after the Great Recession and how they have changed as
a result. Moreover, it looks at how well prepared each is
to weather the next economic downturn and what recent
changes will mean for workers when it hits.
Employment During and After the Recession
The Great Recession had an uneven impact on employment and unemployment in the nation as well as in the Fifth
District. Twelve months into the downturn, employment
in the District was faring better than the nation as a whole.
The District of Columbia and, to a lesser extent, Maryland
and Virginia were buoyed by the stabilizing presence of the
federal government. Meanwhile, the nascent renaissance
in energy production was benefiting West Virginia. Thus,
none of these four jurisdictions saw job losses that matched
those of the nation. In fact, employment in Washington,
D.C., was actually higher than it had been when the recession got under way.
Without the omnipresence of the federal government
or an energy revolution of their own, North Carolina and
South Carolina felt the effects of the recession immediately and severely. And the severity of the job losses
persisted there throughout. By the time employment had
reached its trough in February 2010, 8.7 million jobs had
been lost nationally, amounting to a 6.3 percent decline,
but job losses in North Carolina and South Carolina
amounted to 7.8 percent and 8.2 percent, respectively.
The outsized employment reaction in the Carolinas
could have been expected given the severity of the
downturn and the region’s economic structure. Prior
to the recession, both states were much more heavily


concentrated in manufacturing and construction, two
sectors that were particularly hard hit and where job losses
were much more acute.
While the national rate of unemployment climbed
5.6 percentage points as a result of the Great Recession
(from 4.4 percent to 10 percent), North Carolina’s rate
jumped by 6.7 percentage points (to 11.3 percent) and
South Carolina’s by 6.0 percentage points (to 11.7 percent).
But even the rise in the unemployment rates did not
fully reflect the level of strain that was and would be placed
on the states’ unemployment insurance programs. One of
the more extraordinary facets of this labor market downturn was the record high percentages of workers who were
unemployed for more than 26 weeks — the long-term
unemployed. Even as the number of workers entering the
unemployment insurance pipeline began to wane in early
2009, still fewer were leaving it.
As a result, 36 states depleted the balance of revenues
within their unemployment insurance accounts at some
point during or after the recession and took out federal
loans to continue paying benefits. North Carolina and
South Carolina were among the most affected states.
At certain points during their programs’ financial crises,
North Carolina had the second-highest federal loan-to-total wage bill in the country, and South Carolina was once
ranked seventh.
Due to the severity of the trust fund crises, the two
states took steps to reduce benefit payouts to pay down
their debt to the federal government and put their programs on better-prepared footing.
Changes to Maximum Duration of Benefits
The federal-state unemployment insurance program is
currently celebrating its 82nd year of existence. In the
early decades of the program’s history, there was quite a
bit of variability in the maximum duration of unemployment insurance benefits that state legislators had written
into law. And in most instances, the maximum duration
was less than 26 weeks.
That began to change in the 1960s during President
Johnson’s “Great Society” and “War on Poverty” as states
began moving toward a consensus of 26 weeks. So since
the middle part of the 1960s up until the Great Recession,
every state had a maximum unemployment insurance
duration of at least 26 weeks.
Yet while the vast majority of state legislatures chose
to set the maximum duration of unemployment insurance
benefits at 26 weeks, there was (and is) nothing in federal
law that mandates a maximum duration of 26 weeks.
It is somewhat surprising that this basic structure of maximum duration of benefits has held for so long. According to the National Bureau of Economic Research, prior to the Great Recession, the U.S. economy had gone through six economic downturns since the middle part of the 1960s, and state legislators across the nation made no significant adjustments. Why?
There are a variety of reasons why states choose not to undertake such reductions in maximum durations. One of the biggest lies in the spirit of the program itself — to provide some support to workers who have lost their job through no fault of their own. The unemployment insurance payments help ease the blow to households' ability to continue spending to meet basic needs while simultaneously easing the shock to the broader macroeconomy, since consumer spending is such a big part of it.
Another reason maximum durations were not reduced during those prior downturns is that none were nearly as severe as the Great Recession, nor did they have the same impact on the solvency of states' unemployment insurance programs. Reducing the maximum duration during a "regular" downturn is politically unpopular, while reducing it during an expansion is not a political priority.
The severity of the Great Recession changed the math. States that reach insolvency are still required by law to make benefits payments to qualified recipients. To do so, they borrow money from the federal government. Those funds, however, do not come without their costs — employers in affected states are assessed an additional payroll tax until the state has paid off its debt, thereby increasing effective labor costs in the state.
Following the Great Recession, policymakers in North Carolina and South Carolina were forced to balance objectives that were somewhat at odds: paying off the federal debt, providing benefits to ease the burden of unemployment on households, and keeping taxes on employers low to stimulate job creation.
In 2011, South Carolina was in the first wave of states that passed legislation to decrease the maximum number of weeks of eligibility, reducing its maximum from 26 weeks to 20 weeks effective on June 14, 2011. This change remains in effect today. South Carolina's law change was pretty straightforward, with the maximum duration simply tied to the individual claimant's eligibility to continue receiving benefits.
North Carolina's changes were slower in coming, more complex, and somewhat controversial. In the 2013 legislative session, North Carolina's assembly passed a law that not only reduced the maximum duration for eligibility, but also instituted a variable maximum that is dependent on the state's unemployment rate. (See table.) The state's maximum eligibility ranges from 12 weeks (when North Carolina's unemployment rate is less than 5.5 percent) to 20 weeks (if the unemployment rate tops 9 percent). An individual's maximum benefits duration is determined by the state's seasonally adjusted unemployment rate at the beginning of a six-month "base period" in which the initial claim was filed. The six-month base periods begin in January and July each year.
Today, North Carolina's maximum benefit duration is 12 weeks because the state's unemployment rate was at 5.3 percent in January 2017, the benchmark used to establish the duration. If recent unemployment trends hold (the rate was 4.1 percent in July), North Carolina's maximum duration will still be 12 weeks in the first half of 2018, tying it with Florida for the lowest maximum duration of any state in the nation.

Labor Market Duress Surrounding Recessions in North Carolina
Months surrounding recession spent in unemployment range

% Unemployment   July '90 -    March '01 -   Dec. '07 -   Weeks of eligibility
                 March '91     Nov. '01      June '09     under current law
5.6 - 6.0            17             8            15               13
6.1 - 6.5             8            17             9               14
6.6 - 7.0             0             9             5               15
7.1 - 7.5             0             0             2               16
7.6 - 8.0             0             0             3               17
8.1 - 8.5             0             0             4               18
8.6 - 9.0             0             0             5               19
9.1 and above         0             0            46               20
SOURCE: Bureau of Labor Statistics

Changes to Maximum Weekly Benefits
The other conceivable step that states could have taken to reduce unemployment insurance benefits payouts in the aftermath of the Great Recession was to reduce the maximum benefit amount. During the period in which states' trust funds were under the most duress, however, there was a strong disincentive to take that step.
Early on in the recession, Congress passed a supplemental appropriations bill that created a temporary emergency unemployment compensation program, EUC08, to help hard-hit states by providing additional weeks of emergency unemployment benefits that were funded entirely by the federal government's general revenues fund. (This program was in addition to a permanent program already in place that extended unemployment insurance benefits; that program was funded by both the federal government and the states equally.) But one key stipulation in EUC08 was a "nonreduction rule" that prohibited states from receiving the federal funding if they "actively" changed the method by which maximum benefits were calculated in order to reduce that benefit. (Some states had laws on the books prior to EUC08's enactment that automatically reduced the maximum benefit amount in certain circumstances, most notably if the state's economy-wide average wage decreased. Such "passive" adjustments were allowed under the nonreduction rule.)

[CHART: Increases in Unemployment Rates During Recessions (percentage points), U.S. vs. NC vs. SC, for July '90 - March '91, March '01 - Nov. '01, and Dec. '07 - June '09. SOURCE: U.S. Bureau of Labor Statistics]
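North Carolina's sliding scale, as described above, can be expressed as a small function: 12 weeks when the rate is below 5.5 percent, one additional week for each 0.5 percentage point above that, and a 20-week cap once the rate tops 9 percent. This is only a sketch of the schedule as the article presents it; the function name and the one-decimal rounding convention are assumptions of mine.

```python
import math

def nc_max_weeks(unemployment_rate: float) -> int:
    """Maximum weeks of regular UI benefits in North Carolina under the
    2013 law, given the seasonally adjusted unemployment rate at the start
    of the January or July base period. A sketch of the statutory sliding
    scale as described in the article (name and rounding are assumptions).
    """
    rate = round(unemployment_rate, 1)  # published rates carry one decimal
    if rate <= 5.5:
        return 12
    if rate > 9.0:
        return 20
    # One additional week for each 0.5-point band above 5.5 percent.
    return 12 + math.ceil((rate - 5.5) / 0.5)
```

At the 4.1 percent rate cited in the article this gives 12 weeks, and the rate would indeed have to rise 1.5 percentage points, to 5.6 percent, before a 13th week kicks in.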
So reluctant were states to give up those federal funds
that only two chose to do so — New York and North
Carolina. In February 2013, North Carolina’s legislature
enacted legislation that included provisions to actively
reduce the weekly benefit amounts in the state beginning with claims filed on or after July 1, 2013. Important
changes to the program included reducing the maximum
weekly benefit amount from $535 to $350, eliminating
indexing, and making workers wait a week before receiving unemployment insurance benefits each time they file
a claim. In enacting this legislation, the state violated the
“nonreduction” rule, effectively ending North Carolina’s
participation in the program.
The impact of North Carolina’s efforts to reduce
its weekly benefits payments is noticeable in its rankings relative to other states. In the fourth quarter of
2012, North Carolina’s average weekly benefit amount
expressed as a percent of average weekly wages in the
state was 36.6 percent, the 21st highest in the nation. By
the first quarter of 2017, that percentage had fallen to
27.6 and its ranking to 43rd.
Implications for Public Finance and Workers
Both states’ unemployment insurance programs have
returned to solvency. A combination of improving economic conditions and reductions in benefits payments
allowed both North Carolina and South Carolina to clear
their program’s federal loan balances by the end of the first
quarter of 2015. With the federal debts paid off, employers in the two states are no longer paying the tax penalty.
Today, only California has not yet repaid all of its federal
program loans.
But while the states’ programs have recovered from the
prior economic downturn, how well prepared are they for
the next? And what might workers expect when it comes?
The national economic expansion is currently in its
ninth year, long in the tooth by historical standards, so it is
prudent for states to ponder these questions.

Regarding the first, analysts’ best measure of a state’s
unemployment insurance program preparedness is its average high cost multiple, or AHCM. The AHCM is based
on the state’s reserve ratio (total funds to total wages) and
its benefit cost rate (roughly speaking, benefits paid as a
percentage of total wages). In particular, the AHCM is the
state’s calendar year reserve ratio divided by its average high
cost rate (the average of the three highest calendar year benefit cost rates of the last 20 years, or last three recessions).
An AHCM of 1.00 is believed to be the minimum level of
funding for a state to withstand an average recession for
one year. In the first quarter of 2017, North Carolina’s trust
fund was barely minimally funded with an AHCM of 0.99
while South Carolina’s was not, with an AHCM of 0.60.
So what implications do these changes to state unemployment insurance programs have for unemployed workers when the next recession comes? In South Carolina, the
picture is pretty straightforward. Unemployed workers in
the Palmetto State can anticipate receiving up to 20 weeks
of regular unemployment benefits (as opposed to 26 weeks
during the Great Recession).
In North Carolina, the picture is much more complex,
as the maximum duration of weekly benefits will depend on
the unemployment rate in January and July each year. With
the unemployment rate currently at 4.1 percent, workers in
the state are eligible to receive up to 12 weeks of unemployment insurance compensation. For unemployed workers to
be eligible for an additional week, the state’s unemployment
rate would have to rise by 1.5 percentage points from its current level. And for workers to be eligible to receive the maximum benefits duration of 20 weeks, the unemployment
rate would need to increase by roughly 5 percentage points.
Since no two recessions are identical, it is difficult to
predict what will happen when the next one hits North
Carolina. But history shows that the state’s labor markets
tend to deteriorate more than the national average during
downturns. (See chart above.) Thus, applying today’s laws
to yesterday’s recessions can be illustrative.
During the labor recession of the early 1990s, North
Carolina’s unemployment rate increased from a low of
3.4 percent in January 1990 to a high of 6.1 percent,
a rate that prevailed from January 1992 until August
1992. The unemployment rate was above 5.5 percent (the
state’s new threshold for increasing the maximum eligibility duration by one week) for 25 months. It was above
6.0 percent for eight of those months. (See table on
page 33.) If the current law were in effect then, unemployed workers would have been eligible to receive
13 weeks of unemployment insurance compensation during
two base periods, and 14 weeks of eligibility for two others.
During the 2001 recession, the state’s unemployment
rate started its ascent at 3.0 percent, eventually rose as
high as 6.9 percent, and spent 34 months above 5.5 percent. If the current state law were in effect then, workers
would have been eligible to receive up to 13 weeks of regular unemployment insurance benefits in one six-month

base period, 14 weeks in two periods, and 15 weeks in two more.

[CHARTS: NC Employment Recovery Times and SC Employment Recovery Times, for the 1990, 2000, and 2007 recessions. NOTE: Month 0 is peak month of employment before recession. On Y axis, 1.00 is employment level for peak month (total nonfarm payroll employment). SOURCE: Bureau of Labor Statistics]

An Unemployment Insurance Squeeze?
The upshot is that if the next recession is anything close to the two that preceded the Great Recession, unemployed workers in North Carolina should not expect to receive more than 15 weeks of regular unemployment insurance benefits. Moreover, they can expect much less in average weekly benefits during the weeks they are unemployed.
In enacting laws that shortened the maximum duration of benefits, many policymakers argue that unemployment insurance creates a disincentive for unemployed workers to find suitable employment, creating a friction in the labor market and keeping the unemployment rate higher than it would be otherwise. It is important to recognize, however, that that argument is predicated on an assumption that there is a demand for labor.
A good proxy for the strength of labor demand in a state is growth in its total employment level. After all, firms hire additional workers when they see, or expect to see, an increase in the demand for the goods and services that they produce. Thus, changes in payroll employment can shed some light on the demand for labor.
Here again, recent historical experience is useful in thinking about how the changes in unemployment insurance programs may affect workers in the next economic downturn. During the recovery from the 1990-1991 national economic recession, a period that was famously referred to as the "jobless recovery," it took North Carolina 28 months to recapture all of the jobs that were lost during the downturn, on net. In South Carolina, 32 months passed before pre-recession levels of employment were restored.
As bad as the experience was during the jobless recovery of the early 1990s, it was even worse coming out of the recession that occurred a decade later. During the recovery from the brief, shallow economic downturn of 2001, approximately five years elapsed before employment in North Carolina and South Carolina returned to pre-recession levels. And following the Great Recession, it took almost seven years for pre-recession job levels to be restored in the United States and the Carolinas. (See charts above.)
It appears that firms in the United States, as well as in the Carolinas, have been slower to hire coming out of the last three recessions. Regardless of their unemployment laws, it is difficult for states to build economic momentum and increase the demand for labor in the absence of a more general improvement in national economic conditions.

Conclusion
In the Carolinas, labor markets are prone to be more susceptible to economic shocks than the nation as a whole, as evidenced by sharper job losses, bigger increases in unemployment rates, and longer recovery periods. That susceptibility is largely a function of two factors: an economic structure that is still more reliant on manufacturing and relatively lower educational attainment levels.
In the aftermath of the Great Recession, state policymakers faced the challenge of rebuilding their unemployment insurance trust funds to meet their obligations. In building a sustainable model for the future, policymakers were confronted by an age-old balancing act: On one hand, states want to keep taxes low to encourage hiring, while on the other, they want to provide benefits to unemployed workers to help them weather tough economic times. In the Carolinas, and particularly North Carolina, policymakers placed the priority on keeping taxes low.
While over the longer term, low unemployment insurance taxes may increase the demand for labor in one state compared to another, there is little to suggest that it would do so in the shorter term. Indeed, recent history suggests that labor demand recovers only slowly from economic recessions. Thus, at the end of the day, these changes to the unemployment insurance programs in the Carolinas are not likely to have a significant impact on reducing the ranks of the unemployed during the next economic downturn. But in the absence of further changes, they will increase the number of unemployed workers who will not be receiving benefits. And in North Carolina, they will also reduce the average amount of benefits payments.
EF

State Data, Q1:17

                                            DC        MD        NC        SC        VA        WV
Nonfarm Employment (000s)                788.2   2,747.7   4,383.9   2,075.6   3,954.0     746.9
  Q/Q Percent Change                       0.2       0.8       0.2       0.4       0.5      -0.3
  Y/Y Percent Change                       1.0       1.8       1.8       1.8       1.3      -0.6

Manufacturing Employment (000s)            1.2     103.3     462.1     243.1     232.6      45.6
  Q/Q Percent Change                       0.0      -1.0      -0.8       1.1       0.0      -1.9
  Y/Y Percent Change                       0.0      -0.7      -0.7       2.7      -0.4      -3.3

Professional/Business Services
Employment (000s)                        168.6     453.7     620.3     268.0     728.9      65.1
  Q/Q Percent Change                       1.0       1.8       0.5      -1.2       1.4       0.6
  Y/Y Percent Change                       2.7       3.5       3.3       2.3       2.6      -1.1

Government Employment (000s)             240.1     510.5     729.4     364.7     715.1     155.2
  Q/Q Percent Change                       0.1       1.0      -0.3       0.0       0.0      -2.2
  Y/Y Percent Change                       0.3       1.6       1.0       0.6       0.3       0.5

Civilian Labor Force (000s)              396.9   3,205.4   4,943.1   2,322.3   4,281.8     781.6
  Q/Q Percent Change                       0.9       0.8       0.6       1.1       0.4      -0.2
  Y/Y Percent Change                       1.3       1.4       2.1       1.2       1.5      -0.2

Unemployment Rate (%)                      5.7       4.2       5.1       4.4       3.9       5.2
  Q4:16                                    5.8       4.2       5.2       4.3       4.1       5.8
  Q1:16                                    6.2       4.5       5.2       5.4       4.0       6.3

Real Personal Income ($Bil)               47.2     317.3     392.7     179.4     405.7      60.8
  Q/Q Percent Change                       0.8       0.2       1.3       1.1       0.9       1.0
  Y/Y Percent Change                       1.3       1.5       1.9       2.0       1.2       0.2

New Housing Units                          677     3,786    15,843     8,290     7,244       675
  Q/Q Percent Change                     -36.7      27.0      21.7      20.2      17.9       5.3
  Y/Y Percent Change                      -2.9       8.0      37.9      21.7      10.8      32.6

House Price Index (1980=100)             821.0     451.6     349.0     358.0     437.5     229.3
  Q/Q Percent Change                       1.0      -0.2       0.7       1.3      -0.4      -1.1
  Y/Y Percent Change                       5.8       2.8       5.1       5.6       3.1       1.4
NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) New housing units and house prices are not seasonally adjusted; all other series are seasonally adjusted.
3) Manufacturing employment for DC is not seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics
Unemployment Rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
New Housing Units: U.S. Census Bureau/Haver Analytics
House Prices: Federal Housing Finance Agency/Haver Analytics

For more information, contact Michael Stanley at (804) 697-8437 or e-mail michael.stanley@rich.frb.org


[Charts: Fifth District vs. United States, First Quarter 2006 - First Quarter 2017]
- Nonfarm Employment (change from prior year)
- Unemployment Rate
- Real Personal Income (change from prior year)

[Charts: Major Metro Areas (Washington, Baltimore, Charlotte), First Quarter 2006 - First Quarter 2017]
- Nonfarm Employment (change from prior year)
- Unemployment Rate
- New Housing Units (change from prior year)

[Charts: FRB—Richmond Survey Indexes, First Quarter 2006 - First Quarter 2017]
- Manufacturing Composite Index
- Services Revenues Index

[Chart: House Prices (change from prior year), First Quarter 2006 - First Quarter 2017, Fifth District vs. United States]

Metropolitan Area Data, Q1:17

                              Washington, DC   Baltimore, MD   Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)          2,639.7         1,388.0         106.0
  Q/Q Percent Change                  -1.0            -1.6          -3.4
  Y/Y Percent Change                   1.8             1.2           1.1

Unemployment Rate (%)                  3.7             4.4           4.0
  Q4:16                                3.8             4.3           4.3
  Q1:16                                3.8             4.6           4.5

New Housing Units                    5,070           1,447           261
  Q/Q Percent Change                   9.4            25.8          11.1
  Y/Y Percent Change                  -1.1             5.8          31.8

                              Asheville, NC    Charlotte, NC   Durham, NC
Nonfarm Employment (000s)            187.8         1,161.5         306.3
  Q/Q Percent Change                  -1.4            -1.1          -0.5
  Y/Y Percent Change                   2.1             3.1           2.3

Unemployment Rate (%)                  4.0             4.7           4.4
  Q4:16                                4.2             4.7           4.5
  Q1:16                                4.0             4.9           4.6

New Housing Units                      473           4,978         1,099
  Q/Q Percent Change                   8.0            19.5          15.0
  Y/Y Percent Change                   7.3            22.5         -18.5

                              Greensboro-High Point, NC   Raleigh, NC   Wilmington, NC
Nonfarm Employment (000s)            359.2           605.9         123.5
  Q/Q Percent Change                  -1.2            -1.1          -0.7
  Y/Y Percent Change                   1.3             2.9           2.9

Unemployment Rate (%)                  5.1             4.3           4.6
  Q4:16                                5.2             4.4           4.8
  Q1:16                                5.3             4.4           4.9

New Housing Units                      870           3,870           432
  Q/Q Percent Change                  40.3            28.3         -26.3
  Y/Y Percent Change                  53.4            84.1           9.1
			
NOTE: Nonfarm employment and new housing units are not seasonally adjusted. Unemployment rates are seasonally adjusted.
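The Q/Q and Y/Y figures in these tables follow the standard percent-change formula. The sketch below is illustrative only; the prior-period value in the example is hypothetical, not taken from the tables, and the Bank's exact rounding conventions are assumed.

```python
def pct_change(current, prior):
    """Standard percent change: 100 * (current - prior) / prior."""
    return 100.0 * (current - prior) / prior

# Hypothetical illustration: a metro area with employment of
# 1,161.5 thousand now and 1,126.6 thousand a year earlier shows
# a year-over-year change of about 3.1 percent.
print(round(pct_change(1161.5, 1126.6), 1))  # 3.1
```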


                              Winston-Salem, NC   Charleston, SC   Columbia, SC
Nonfarm Employment (000s)            260.8           349.3         393.6
  Q/Q Percent Change                  -1.0            -0.5          -0.9
  Y/Y Percent Change                   0.7             3.3           1.3

Unemployment Rate (%)                  4.7             3.8           4.2
  Q4:16                                4.9             3.7           4.1
  Q1:16                                4.9             4.6           5.0

New Housing Units                      456           1,726         1,190
  Q/Q Percent Change                  95.7            27.6          14.9
  Y/Y Percent Change                  65.2            17.2          22.1

                              Greenville, SC   Richmond, VA   Roanoke, VA
Nonfarm Employment (000s)            407.7           661.5         162.4
  Q/Q Percent Change                  -1.9            -1.5          -1.0
  Y/Y Percent Change                   0.9             0.9           1.5

Unemployment Rate (%)                  4.0             3.9           3.7
  Q4:16                                3.9             4.2           4.1
  Q1:16                                4.7             4.1           3.8

New Housing Units                    1,190           1,671           N/A
  Q/Q Percent Change                  -6.7            75.3           N/A
  Y/Y Percent Change                  15.2            56.2           N/A

                              Virginia Beach-Norfolk, VA   Charleston, WV   Huntington, WV
Nonfarm Employment (000s)            762.7           117.4         137.0
  Q/Q Percent Change                  -1.5            -1.8          -2.4
  Y/Y Percent Change                   0.5            -0.8           0.7

Unemployment Rate (%)                  4.2             5.0           5.8
  Q4:16                                4.5             5.5           5.9
  Q1:16                                4.6             6.1           6.4

New Housing Units                    1,628              51            35
  Q/Q Percent Change                  25.2            -8.9         -23.9
  Y/Y Percent Change                   8.1             6.3        -144.8
					
				

OPINION

Do Low Interest Rates Punish Savers?
BY K A RT I K AT H R E YA

We typically hear the stance of monetary policy
described as “easy” or “accommodative” when
interest rates are relatively low, and “tight” or
“restrictive” when rates are high. This language embodies a
judgment that low rates are helpful to households and businesses. But in periods when the Fed keeps rates relatively
low, one often hears the concern that savers are harmed by
low interest rates. There is some truth to that statement:
When the Fed cuts interest rates, certain types of interest
income tend to fall. However, this is not the whole picture.
One way of looking at this question is by considering
the counterfactual: What position would savers be in today
had the Fed pursued different policies? Many readers will
know that the Fed’s monetary policy goals are to achieve
both maximum sustainable employment and low, stable
inflation. Economic models strongly suggest that the best
way a central bank can support the employment side of its
mandate is by achieving success on inflation, which creates
favorable conditions for investment and growth over time.
The interest rate policy that delivers this outcome tends to
recommend rates that track the so-called “natural real rate,”
a conceptual interest rate that is thought to produce stable
inflation and employment outcomes. Our best estimates —
including a measure provided by Richmond Fed economists
Thomas Lubik and Christian Matthes — indicate that the
natural rate has fallen in recent years and with it, the appropriate setting for the Fed’s policy rates.
In fact, over the last several years, many economic models were calling for far lower interest rates than the Fed was
able to implement due to the so-called “zero lower bound”
on interest rates. Had the Fed’s policy rates instead been
higher, the evidence suggests that economic outcomes
would have been considerably worse. From this perspective,
higher rates would likely have been detrimental to savers
and virtually all households.
It is certainly true, though, that the Fed’s policies have
unintended distributional effects. How someone is affected
depends on their situation. For example, workers obviously
are directly affected by labor market conditions, and households may feel the effects of inflation differently depending
on the assets they hold. Many observers note that seniors
on fixed incomes may be affected by low rates without
experiencing the direct benefit of a healthier labor market.
The Fed pays close attention to such effects in evaluating
how its policies are affecting the economy.
Fortunately, research suggests the effects of easier
monetary policy on seniors are relatively limited. For example, a 2013 study by Richard Kopcke and Anthony Webb
published by the Center for Retirement Research at Boston
College looked at the asset holdings of households aged
60-69 as of 2007. They found that the poorest 40 percent
of households headed by seniors held less than $3,000 in
financial assets on average. The wealthiest 20 percent of
seniors hold considerably more financial assets, but largely
stocks, which pay dividends and for which returns tend to
rise in response to low interest rates, all else equal. Research
suggests that even the seniors whose income seems most
affected by low rates — those in the middle-to-upper
income categories — still receive a relatively small share
of their income from investments. They tend to rely more
heavily on Social Security, real estate, and pensions.
There are many ways in which the Fed tries to minimize the inadvertent distributional effects of its policies.
For example, when it buys assets on the open market in
the conduct of monetary policy, it purchases mainly U.S.
Treasuries, which affect financial markets broadly with
minimal effects on relative asset prices. The extraordinary
period of the Great Recession changed this practice some,
but the Fed is taking action to move back toward more normal operation in monetary policy. (See “Time to Unwind,”
page 30.)
Moreover, savers are not just savers — they are also
participants in the overall economy. Many are workers:
As noted, if rates had instead been higher in recent years,
employment outcomes would surely have been worse,
and job loss is typically a more traumatic financial event
than the losses one faces when asset returns experience
a cyclical decline. Savers are also consumers, and lower
Fed policy rates generally mean lower loan rates for goods
like homes and automobiles, as well as lower interest
payments on variable rate loans. Finally, many savers also
hold assets whose values tend to rise in low-interest-rate
environments. Low rates tend to boost housing prices, for
example, and housing comprises a large majority — nearly
two-thirds — of assets for households in the middle of the
wealth distribution. This is especially true of older households preparing for retirement; roughly 80 percent of
households aged 65 and older own their homes, compared
to roughly 64 percent for the nation as a whole, according
to the Census Bureau.
In the end, the Fed is bound by Congress to focus on
the macroeconomic outcomes in its dual mandate. The Fed
does not have tools well-suited to targeting specific asset
returns or distributional outcomes. As economic models
tell us, the best way the Fed can help the greatest number
of households is by pursuing the monetary policies that best
support a healthy economy and price stability over time. EF
Kartik Athreya is executive vice president and director
of research at the Federal Reserve Bank of Richmond.

NEXT ISSUE
The University of BMW

A common complaint among firms today is the difficulty of finding
and retaining skilled workers in a tightening labor market. BMW,
which has been building cars in Spartanburg, S.C., since the 1990s,
has been getting attention for its extensive training programs in
coordination with regional universities and community colleges.
BMW’s experience raises the question of what the Bavarian
luxury carmaker can teach the United States about workforce
development.

Drug Spending

Prescription drugs cost far more in the United States than in
other developed countries. To some extent, high prices reflect
the high costs of drug development, and some argue that the
United States disproportionately funds innovation that benefits
the rest of the world. Are Americans paying too much for drugs,
or are people abroad paying too little?

The Family Footsteps

Children entering a parent’s career is a common phenomenon
in many fields — such as farming, medicine, sports, and
entertainment, to name a few. When this happens, is there an
economic explanation? Yes, more often than you might think.

Federal Reserve
Today, information crosses the globe in the
blink of an eye, but in the United States
payments move much more slowly. While
consumers can choose from an increasing
number of innovative platforms, actual
payment processing still relies on older,
slower methods. In 2012, the Fed began
collaborating with the private sector to
discuss how to develop a faster, more
efficient payment system.

Economic History
Once a prominent city, Petersburg, Va., has
recently become known for its urban decline
and associated severe fiscal problems. The
intensity of these problems may have been
tied to a combination of being “too close”
to the more prosperous city of Richmond
and an inability to shrink its borders and
adjust its urban infrastructure.

Interview
Nobel laureate Jean Tirole of the Toulouse
School of Economics on online platforms for
buying and selling, the future of jobs in an age
of robots and artificial intelligence, and his
new book Economics for the Common Good.

Visit us online:
www.richmondfed.org
•	To view each issue’s articles
and Web-exclusive content
• To view related Web links of
additional readings and
references
• To subscribe to our magazine
•	To request an email alert of
our online issue postings

Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261

Change Service Requested

To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.

Measuring Regional Economic Business Activity
[Charts: 3-month averages, 2007-2018, with recessions shaded]
- FRB—Richmond Manufacturing Composite Index (left axis) vs. ISM Manufacturing Index (right axis)
- FRB—Richmond Services Revenues Index (left axis) vs. ISM Non-manufacturing Index (right axis)

An important part of the mission of each Federal
Reserve Bank is to understand the economy
of its district. One of the tools the Richmond
Fed uses to understand the Fifth Federal Reserve
District (D.C., Maryland, Virginia, North Carolina, South
Carolina, and most of West Virginia) is a survey of manufacturing and service sector firms that are located
throughout the region.
Surveys are administered monthly, and participants
are asked whether business conditions improved, worsened, or stayed the same across a variety of indicators.
Through these surveys we are able to collect timely
information that is not otherwise available.
The survey of manufacturing firms began in June
1986 and took its current form in November 1993. The
manufacturing survey asks firms questions about shipments of finished products, new order volumes, order
backlogs, capacity utilization, lead times of suppliers,
number of employees, average work week, wages,
inventories of finished goods, and expectations of capital expenditures.
The survey of service sector firms began in 1993 and
asks questions regarding revenues, number of employees, average wages, and prices received.
Once we have gathered the responses to the survey
questions, we develop diffusion indices and publish a
monthly business conditions report.
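The diffusion indices above can be sketched as a net percentage: the share of respondents reporting an increase minus the share reporting a decrease. The snippet below is a minimal illustration assuming that common convention; the Bank's published indices may weight responses or apply seasonal adjustment differently.

```python
def diffusion_index(increased, decreased, unchanged):
    """Net percentage: percent of respondents reporting an increase
    minus percent reporting a decrease. Positive readings suggest
    expanding activity; negative readings suggest contraction."""
    total = increased + decreased + unchanged
    if total == 0:
        raise ValueError("no survey responses")
    return 100.0 * (increased - decreased) / total

# Example: of 100 manufacturers asked about shipments, 45 report an
# increase, 25 a decrease, and 30 no change.
print(diffusion_index(45, 25, 30))  # 20.0
```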

Check out our most recent reports here:
https://www.richmondfed.org/research/regional_economy/surveys_of_business_conditions

Is your firm interested in participating in our surveys?

We strive for a representative sample of firms across industry, size, and location throughout our
district (D.C., Maryland, Virginia, North Carolina, South Carolina, and most of West Virginia). The survey
takes just a few minutes and provides an opportunity to comment on your local business conditions.

Contact our survey team to sign up! Rich.RegionalSurveyTeam@rich.frb.org