
SECOND QUARTER 2018

FEDERAL RESERVE BANK OF RICHMOND

TARIFFS and TRADE DISPUTES
The Impact in the Region

When Inflation Hits Producers

Is Cash Still King?

Interview with Chad Syverson

VOLUME 23, NUMBER 2
SECOND QUARTER 2018

COVER STORY

10  Tariffs and Trade Disputes
How are recent moves affecting businesses in the Fifth District?

Econ Focus is the
economics magazine of the
Federal Reserve Bank of
Richmond. It covers economic
issues affecting the Fifth Federal
Reserve District and
the nation and is published
on a quarterly basis by the
Bank’s Research Department.
The Fifth District consists of the
District of Columbia,
Maryland, North Carolina,
South Carolina, Virginia,
and most of West Virginia.
DIRECTOR OF RESEARCH

Kartik Athreya
EDITORIAL ADVISER

Aaron Steelman

FEATURES

14  Producers under Pressure?
What role do producers’ costs play in determining inflation?

EDITOR

Renee Haltom
SENIOR EDITOR

David A. Price
MANAGING EDITOR/DESIGN LEAD

Kathy Constant

18  Is Cash Still King?
Despite new technologies for electronic payments, cash has
never been more popular. What’s driving the demand?

STAFF WRITERS

Helen Fessenden
Jessie Romero
Tim Sablik
EDITORIAL ASSOCIATE

Lisa Kenney


CONTRIBUTORS

Michael Stanley
Anthony Swaminathan
Sonya Waddell

DEPARTMENTS

1  President's Message/Trade and Trepidation
2  Upfront/Regional News at a Glance
3  Federal Reserve/Computer Models at the Fed
6  Jargon Alert/Network Effects
7  Research Spotlight/Misperceptions of Mobility
8  At the Richmond Fed/How Do Banks Use the Discount Window?
9  The Profession/The Economist as Public Intellectual
22  Interview/Chad Syverson
28  Economic History/The Great Telegraph Breakthrough of 1866
31  Book Review/The Future of Work: Robots, AI, and Automation
32  District Digest/The Opioid Epidemic, the Fifth District, and the Labor Force
40  Opinion/Great Expectations

DESIGN

Janin/Cliff Design, Inc.

Published quarterly by
the Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org
www.twitter.com/RichFedResearch
Subscriptions and additional
copies: Available free of
charge through our website at
www.richmondfed.org/publications or by calling Research
Publications at (800) 322-0565.
Reprints: Text may be reprinted
with the disclaimer in italics
below. Permission from the editor
is required before reprinting
photos, charts, and tables. Credit
Econ Focus and send the editor a
copy of the publication in which
the reprinted material appears.
The views expressed in Econ Focus
are those of the contributors and not
necessarily those of the Federal Reserve Bank
of Richmond or the Federal Reserve System.
ISSN 2327-0241 (Print)
ISSN 2327-025x (Online)

PRESIDENT’S MESSAGE

Trade and Trepidation

“If a foreign country can supply us with a commodity
cheaper than we ourselves can make it, better buy it
of them with some part of the produce of our own
industry, employed in a way in which we have some advantage.” Since Adam Smith wrote those words in 1776, it has
become an enduring consensus among economists that
trade makes us all better off by giving consumers and businesses access to more and cheaper goods, and by spurring
new efficiencies and innovations.
That doesn’t mean there aren’t costs. Recent research
suggests it can take a decade or more for a local labor market
to adjust to the job loss that results from foreign competition. In our district, many communities have been disrupted
by the loss of furniture, textile, and steel manufacturing. In
the long run, these disruptions may be outweighed by the
benefits of trade, but in the short run, figuring out how best
to support the people and communities that bear the costs
is an important objective for policymakers.
Policymakers sometimes try to curb foreign competition in the first place via trade restrictions such as tariffs
or quotas. Regulating trade is far outside the Fed’s purview, so it’s not our place to weigh in on the pros or cons
of any particular policy. But economic theory tells us that
restricting trade has a number of potential downsides.
One possible harm is that consumers pay higher prices,
either because there isn’t a domestic substitute for the
foreign good or because the higher price for foreign goods
enables domestic producers to raise their prices as well. In
addition, U.S. producers import a large share of their intermediate inputs; if those inputs get more expensive, firms
might have to raise their prices to recover their costs. We
might also see negative economic effects if other countries
impose their own trade restrictions to retaliate. That could
make U.S. exports less desirable, leading to an oversupply
of, and lower prices for, the affected goods. The resulting
lower profits for these manufacturers could put jobs at risk.
It’s not all downside; for example, firms in the industries being protected may create more jobs, as several
metal manufacturers recently have announced they will
do. But economic theory suggests those job gains could be
offset by job losses in other sectors.
The current trade disputes put several industries in
the Fifth District at risk, as Tim Sablik discusses in
“Tariffs and Trade Disputes” in this issue. (See page 10.)
Car manufacturers in South Carolina, soybean farmers
in Virginia, and pork producers and tobacco farmers in
North Carolina are all facing new tariffs on their products
in China. Maryland and West Virginia are both large
importers of steel and aluminum; tariffs could increase
costs for manufacturers in these states.

Of course, we don’t know
precisely what the effects of
these tariffs will be. Supply
chains have grown increasingly
complex, which makes it difficult to predict how changing
prices and costs will be dispersed. And if firms expect the
tariffs to be temporary, then
they might be less likely to
significantly alter their prices
or production processes.
But one area where I believe
we are seeing a clear impact is confidence. For the most
part, people feel pretty good about the economy. The
University of Michigan’s Survey of Consumer Sentiment
is back to pre-Great Recession levels, and the Conference
Board’s measure of consumer confidence is actually higher
than it was in the mid-2000s. At the same time, people
are increasingly worried about the future with regard to
trade. The share of households in the Michigan survey
who spontaneously mentioned trade as a concern has more
than doubled since May, from 15 percent to 35 percent, and
the Conference Board’s surveys document a widening gap
between people’s confidence about the present and their
expectations for the future.
Similar results are obtained from surveys of CEOs and
business owners. While many firms continue to project
high levels of hiring and investment, those projections
have fallen in recent months, and 95 percent of CEOs surveyed by the Business Roundtable were concerned about
the effects of tariffs on U.S. exports.
It’s certainly a concern I’ve heard from our business
contacts throughout the Fifth District. And I’m not alone;
in July’s Beige Book, a compilation of regional data from
each of the 12 Federal Reserve districts, every single Reserve
Bank specifically mentioned trade policy as a source of concern or uncertainty for businesses in its district.
Uncertainty is bad for business. So in addition to the
effects on sales and prices, the extent to which trade policy affects confidence is something I’ll be watching very
closely.
EF

TOM BARKIN
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND


UPFRONT

Regional News at a Glance

BY LISA KENNEY

MARYLAND — On Aug. 3, Guinness opened a brewing operation in
southwest Baltimore County, the company’s first in the United States since the
1950s. The Guinness Open Gate Brewery & Barrel House is touted as a beer
destination, with a brewery, restaurant, and taproom. It employs about 200
people, most in the restaurant and taproom but some in packaging. Guinness
estimates the brewery will bring 300,000 people to the area in the first year,
which could help the county’s revitalization efforts for the Route 1 corridor
from Elkridge to Laurel.
NORTH CAROLINA — The state’s captive insurance program had a $30 million
economic impact in 2017, the largest since its inception in 2013, according to a
new report from the North Carolina Department of Insurance. Captive insurance
is when a business creates its own insurance company to cover its risks, a form of
self-insurance; captive insurers set up in the state are regulated by the Department
of Insurance. North Carolina’s 232 active captive insurers pay premium taxes to the
state; the economic impact also comes from revenues to service providers (CPAs,
actuaries, investment managers, and the like) and hospitality businesses.
SOUTH CAROLINA — On June 20, Volvo opened its first U.S. plant in
Ridgeville. The $1.1 billion plant will have 1,500 workers by the end of 2018 and
4,000 by the end of 2021. Production will begin in the fall with the redesigned
S60 sedan, and starting in 2021, the plant will also produce a new XC90 SUV.
The plant is expected to make about 150,000 vehicles per year when it is at full
capacity.
VIRGINIA — In late June, Virginia announced it is partnering with the
Newport News Shipbuilding division of Huntington Ingalls Industries to provide
support to the shipyard in hiring and training new employees. Over the next five
years, Newport News Shipbuilding plans to hire 7,000 people, including creating
2,000 new positions, to support new and existing contracts. Current employees
will also be retrained on new technology. The initiative will be supported by state
agencies including the Virginia Economic Development Partnership, the Virginia
Community College System, the Virginia Employment Commission, and the
Virginia Office of Veterans and Defense Affairs.
WASHINGTON, D.C. — Twenty years ago, D.C.’s finances were overseen by a
financial control board due to a ballooning deficit and a “junk” bond rating. That
turmoil is now a distant memory: The District was awarded Moody’s Investors
Service’s highest credit rating, AAA, on July 12. That follows Standard and Poor’s
and Fitch upgrading D.C.’s general obligation bond ratings to AA+. Moody’s
noted that the ratings bump was due in part to D.C.’s expanding high-wage
economy and strong four-year financial plan.
WEST VIRGINIA — After the U.S. Supreme Court struck down a federal
law on sports betting in May, the West Virginia Lottery Commission got to
work crafting rules for the state’s casinos. Earlier in 2018, the state legislature
had passed a law allowing sports betting, which the Lottery Commission
estimates will have an economic impact of $5.5 million in its first year. On July
9, the commission released the emergency rules for implementation of the law,
including requirements for sports betting lounges and specifics on who can use
sports wagering apps. These rules will allow casinos to request the necessary
licenses immediately and begin securing vendors and equipment.

FEDERAL RESERVE

Computer Models at the Fed
Modeling the U.S. economy on computers has come a long way
since the 1950s. It’s still a work in progress
BY DAVID A. PRICE

One evening in the fall of 1956, Frank Adelman, a
physicist at the Berkeley Radiation Laboratory —
now the Lawrence Livermore National Laboratory
— came home from work with a question for his wife, Irma,
a Berkeley economist. He wanted to try writing a program
for the lab’s new IBM 650 vacuum-tube computer, but he
had found that all of the physics problems he considered
interesting were too complex. He asked Irma whether she
thought there was an economic model that he could use
instead.
“A few days later,” she remembered, “I presented him
with a copy of the book by Laurie [Lawrence] Klein and
Art Goldberger, An Econometric Model of the United States
1929-1952.”
Frank obtained approval from his boss for one free
hour of central processor time, with the stipulation that
they would have to reimburse the lab for any additional
time at an hourly rate of $600, several times her monthly
salary. The couple then set to work together on writing
code for Klein and Goldberger’s 25-equation model of the
U.S. economy. Their new side project was a journey into
uncharted territory: Before then, the results of such models had been worked out by human assistants — known
as “computers” or “computors” — wielding slide rules or
mechanical calculators.
Working in the lab’s computer room at night, loading
the code and data via punched IBM cards, the Adelmans
had an initial version ready to present at an economics
conference a little more than a year later. Frank’s boss,
impressed, allowed them a second free hour, which they
used to create a more elaborate version, the results of
which appeared in 1959 in the journal Econometrica.
From this modest start, the science — and, some would
say, the art — of computer modeling of the economy has
become indispensable to policymakers and businesses
seeking to forecast economic variables such as GDP
and employment or to analyze the likely effects of policy changes. The Fed’s main computer model since the
mid-1990s, known as FRB/US (commonly pronounced
“ferbus”), has about 380 equations covering the behavior
of households, firms, inflation, relative prices, numerous
interest rates, and government taxes and spending (at the
federal, state, and local levels), among other phenomena.
Yet even as large-scale macroeconomic models such
as FRB/US have attained a role probably undreamed of
by Irma and Frank Adelman, their usefulness is debated

within economics circles — a reflection of a rift, starting in
the 1970s, between many research economists in academia
and their counterparts in policymaking institutions and
businesses.
The Road to FRB/US
Modern econometric models are descendants of work
done by researchers at the Cowles Commission (later
the Cowles Foundation) at the University of Chicago
from 1939 to 1955. (The organization then moved to Yale
University, where it has remained ever since.) The Cowles researchers had the benefit of already-existing theories of the
business cycle, efforts by Simon Kuznets and others to
collect macroeconomic data, and pioneering attempts by
Jan Tinbergen to create models of the economies of the
United States and his native Netherlands.
From this starting point, the Cowles group established
an approach in which they represented the economy as
a set of simultaneous equations — that is, equations that
had to be solved together, not one by one. Each equation
specified how some economic variable (such as aggregate
personal consumption) on the left side of the equals sign
depended on some other variables, which reflected what
economic theory or the researcher’s judgment suggested
about the determination of that variable. The model could
then be estimated using statistical methods. This “estimated” model could then, in theory, be used to forecast
the path of the economy or analyze policy changes.
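As a purely illustrative sketch, and not the actual Klein-Goldberger system, a miniature model in this style might pair a behavioral consumption equation with an income identity, so that consumption and income are determined jointly:

C_t = a_0 + a_1 Y_t + a_2 C_{t-1} + u_t
Y_t = C_t + I_t + G_t

Here C_t is consumption, Y_t is income, I_t is investment, G_t is government spending, and u_t is an error term; the coefficients a_0, a_1, and a_2 are estimated from historical data. Because Y_t appears in the consumption equation and C_t appears in the income identity, neither variable can be solved for without the other, which is what "simultaneous" means here.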
Lawrence Klein, who joined the Cowles Commission
after finishing graduate school at MIT, continued the
Cowles approach to model building at the University
of Michigan, Oxford University, and the University of
Pennsylvania, eventually receiving a Nobel Prize for his
work. Writing in 1950, before the computer age had
reached econometrics, he noted that an “annoying problem” in such research was “the laboriousness and complexity of computation” — the problem that Irma and Frank
Adelman would address on the night shift later in the
decade using a model he had co-created.
At the Fed’s Board of Governors, work on an econometric model of the U.S. economy began in 1966 as a
collaboration between Fed economists and academics.
The resulting model, which was used by Fed staff starting in 1970, was known as “MPS” for the institutions
involved (MIT, the University of Pennsylvania, and
the Social Science Research Council). The staff started
work on a global model in 1975, which led to MCM, for
“multi-country model,” coming into use in 1979.
As it turned out, the collaboration on MPS in the
mid-to-late 1960s would be the high-water mark of joint
work between policymakers and academic economists on
macroeconomic models. Interest among academics in such
projects declined afterward — the result, in large part, of a
single article by Robert Lucas of the University of Chicago
that did not initially attract much attention. In the article,
published in 1976, Lucas presented what is now universally
called the “Lucas critique”: In simple terms, he argued that
Cowles Commission-style large structural models were all
but useless in analyzing the future effects of policy changes
because they failed to account for people’s and firms’ expectations, especially the possibility that their expectations
would anticipate possible policy changes. In his view, to the
extent that economic actors were able to anticipate policy
changes, and thus adapt to them, models that could take
into account only the prior behavior of individuals and firms
would generate “invalid” results.
FRB/US at the FOMC
In reaction to the Lucas critique, as well as various limitations that the Fed encountered in using the MPS and
MCM models, Fed economists began work on successors
to them in 1991 and 1993, respectively. The resulting
models, FRB/US and its international counterpart, FRB/
MCM, replaced the earlier ones in 1996.
FRB/US, which the Fed’s Board of Governors released
to the public on its website in 2014, added extensive and
complex mechanisms for factoring in expectations. When
using the model, Fed staff can determine the assumptions
they want it to make about how different players in the
economy — for example, financial-market participants,
nonfinancial firms, and households — form their expectations of the economy and policy and how accurate their
expectations are.
Todd Clark, a senior vice president in the Cleveland
Fed’s research department and head of its macroeconomics
group, says that FRB/US “was a product of trying to build
in a lot of the things that had been learned about macroeconomics since the old MPS model was put in place.”
The results of FRB/US simulations make their way into
monetary policymaking at the Fed in several ways. First,
they are used directly by Fed economists and Federal Open
Market Committee (FOMC) members to analyze the outcomes of possible policies. For example, then-Vice Chair
Janet Yellen noted in speeches in 2012 that she had used
FRB/US to obtain projections of how long inflation would
remain in abeyance if the Fed continued its policy of low
interest rates. Second, forecasts from FRB/US are included
in the Tealbook, the set of materials that the research staff
prepares for the FOMC in advance of committee meetings.
Finally, and probably most importantly, FRB/US forecasts
are one input into the staff’s own forecasts, which are a central part of the Tealbook.
The staff forecasts are “judgmental,” meaning the
staff makes its own subjective decisions about how much
weight to give various pieces of quantitative and nonquantitative information. Christopher Sims of Princeton
University reported in a 2002 article that these judgmental
forecasts have been “historically slightly better” than the
FRB/US forecasts; in interviews he conducted with Board
of Governors staff members, they told him that the superiority of the judgmental forecasts came, not from better
foresight on the humans’ part, but instead from superior
knowledge of the current state of the economy. All other
things equal, a more accurate starting point means better
forecasts.
In assessing the current state of the economy, according to Sims, one area of advantage for the staff over
FRB/US and other current computer models — beyond
the staff’s ability to assimilate unstructured quantitative
and nonquantitative information — is a better ability to
assess how unusual shocks to the economy are likely to play
out. Events that have not been defined within a model, or
are outside the statistical experience of the model, such
as an oil-price shock, a major terrorist attack, or a large-scale financial crisis, are beyond the model's ken. "Analysis
of such historically unusual disturbances — including
the determination of whether they really are historically
unusual — will inevitably involve an element of subjective
judgment,” Sims noted.
The Rivals
Outside the Fed, FRB/US has been criticized from a
number of directions. For some economists, such as Ray
Fair of Yale University, its way of handling expectations
disconnected it from the statistical theory underlying
the original Cowles Commission-style large models. For
others, FRB/US does not go far enough in addressing the
issues raised by the Lucas critique.
Two other families of macroeconomic models have
swept macroeconomic research in academia, largely
because they sidestep Lucas’ objections to traditional
models. One of these, known as DSGE models, for
“dynamic stochastic general equilibrium” models,
emerged in the 2000s. DSGE models generally embody
a world in which individuals and firms know a lot about
the future: While they don’t know specifically what will
happen, they do know all of the possible shocks to the
economy and the chances of each of those shocks actually occurring. Richmond Fed research director Kartik
Athreya, in his 2013 book Big Ideas in Macroeconomics,
explained, “DSGE, taken literally, just means a model

in which decision makers think about the future, where
that future is uncertain, and where the outcomes do not
surprise people beyond what the realization of uncertainty itself does.”
Use of DSGE models within the Fed has been growing. Economists at the Fed’s Board of Governors have
developed two, known as EDO (a model of the U.S. economy) and SIGMA (a multi-country model). The research
departments of several Reserve Banks — the Chicago Fed,
the New York Fed, and the Philadelphia Fed — have also
developed and used DSGE models.
The answer to the question of whether FRB/US or
DSGE models give better forecasts and policy analyses is
not yet clear. Economists at the Board of Governors fed
economic data from mid-1996 to late 2004 into EDO and
found that its forecasts were “as good as, and in many cases
better than, that of the forecasts of the Federal Reserve
staff and the FRB/US model.” But they noted that EDO,
having been developed after the period in question, benefited from previous research, including the Board’s own
research, “on what types of models are likely to explain
the data well.”
Although DSGE models avoid the limitations of traditional models with regard to expectations, they do have
limitations of their own. Current DSGEs assume a “representative” household — that is, they generally assume all
households behave identically.
Yale’s Ray Fair, a rare academic proponent of traditional large-scale macroeconometric models, contends
that the level of knowledge of the future assumed by
DSGEs is unrealistic. “That’s a highly restrictive assumption,” he says. “Sometimes stock markets and bond
markets are pretty good, but to say that the average person or the average firm has that kind of sophistication
seems highly unrealistic. And it makes a big difference:
Properties of the model are very sensitive to whether you
generally assume that or not.”
Apart from the trade-offs made by builders of DSGEs,
Fair argues, the significance of the Lucas critique as a practical matter has itself been overstated. “There’s nothing
wrong with the logic of it,” Fair says of the critique. “The
question is how empirically relevant it is. It may be that
the things Bob [Lucas] was worried about may be small
quantitatively relative to other things.”
The other major family of macroeconomic models that
has emerged in reaction to Lucas’ 1976 article is VARs, or
vector auto-regressions, first proposed by Princeton’s Sims
in 1980. In this approach, the researcher simply makes a

list of the variables that he or she believes are relevant to
whatever issue is being looked at. Beyond that list, there’s
no need for economic theory: The researcher doesn’t need
to specify how the variables are related to one another.
Loosely speaking, the variables and some prior values of the
variables are all regressed on past values of each other.
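As a stylized illustration, and not the specification of any particular Fed model, a VAR with p lags can be written as

y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + ... + A_p y_{t-p} + e_t

where y_t is the vector of chosen variables (say, output growth, inflation, and a short-term interest rate), c is a vector of constants, the coefficient matrices A_1 through A_p are estimated from the data, and e_t is a vector of forecast errors. The only modeling choices are which variables to include and how many lags to allow.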
Clark of the Cleveland Fed says all three families of
models have something to offer. “You see in modern
central banking the use of a range of models within the
Federal Reserve System,” he says. “There’s an old quote
from a statistician, George Box. ‘All models are wrong, but
some are useful.’ ”
Of DSGE models and models like FRB/US, Clark says,
“They are useful for helping us understand fundamental
issues with monetary policy and other policies. They’re
also helpful for telling a story around a forecast and giving
us insight into the structural forces that might be driving
the outlook.”
At the Richmond Fed, a type of VAR known as a
time-varying parameter VAR, built by Thomas Lubik and
Christian Matthes, is used to forecast the U.S. economy
and to analyze policy questions. An advantage of this type
of model, Lubik says, is that it can deal with nonlinear
behavior in the way some variables influence the economy,
such as the effects of interest-rate changes when interest
rates are near zero. To work on diagnostic questions about
the economy — what caused X to happen? — Richmond
Fed researchers use a variety of other models, including a
DSGE model.
One of the drawbacks of DSGEs and VARs, according
to Lubik, is that they are difficult to analyze and adapt to
the needs of the policymakers when they are implemented
on a large scale. While they enjoy academic respectability,
sometimes the utility of the theoretically imperfect model
makes it the better choice. “This has been the tension for
the last 10 to 20 years between academics and policymakers,” he says.
On the policymakers’ side, the theoretical limitations
of traditional models, and of hybrids like FRB/US, are well
understood. “But at some point, you need answers fast,”
Lubik says. “FRB/US in general tends to perform quite
well for forecasting and policy analysis.”
Whether quick and dirty or slow and theoretically clean,
computer models are essential to monetary policymaking
at the Fed. But when the next major negative shock to the
economy occurs, it may well be one that model-makers
didn’t envision — putting human judgment at a premium
over computer chips more than ever.
EF

Readings
Brayton, Flint, Andrew Levin, Ralph Tryon, and John C. Williams.
“The Evolution of Macro Models at the Federal Reserve Board.”
Carnegie-Rochester Conference Series on Public Policy, December 1997,
vol. 47, pp. 43-81.

Sims, Christopher A. “The Role of Models and Probabilities in the
Monetary Policy Process.” Brookings Papers on Economic Activity, 2002,
no. 2, pp. 1-62.

Fair, Ray C. “Has Macro Progressed?” Journal of Macroeconomics,
March 2012, vol. 34, no. 1, pp. 2-10.

JARGON ALERT

Network Effects

BY JESSIE ROMERO

In 1907, a group of investors that included J.P.
Morgan took control of the American Telephone
and Telegraph Company and named Theodore Vail
president. (Vail had also been AT&T’s president in the
1880s.) Roughly 6,000 independent phone companies had
sprung up since Alexander Graham Bell’s original patent
expired in 1894, and Vail quickly embarked on a new strategy of acquiring them. Had these competitors not become
part of the Bell system, Vail wrote in the company’s 1908
annual report, “each little system would have been independent and self-contained without benefit to any other.”
A telephone without a connection at the other end, Vail
explained, “is one of the most useless things in the world.
Its value depends on the connection with the other
telephone — and increases with the
number of connections.”
The term didn’t exist at the
time, but Vail was describing what’s
known today as a “network effect”
or, by some economists, as a “network externality.” Network effects
occur when “the utility that a user
derives from consumption of the
good increases with the number of
other agents consuming the good,” as
Michael Katz and Carl Shapiro of the
University of California, Berkeley
described in a 1985 article in the American Economic
Review. (Shapiro later wrote a book about network effects
with fellow Berkeley economist Hal Varian, now the chief
economist at Google.)
In general, there are two types of network effects:
direct and indirect. Direct effects occur when a good’s
value increases as the number of users goes up. Telephones
exhibit direct network effects, as did fax machines before
they were supplanted by email. Today, an oft-cited example of direct network effects is social media — the more
friends you have using a given platform, the more enjoyment you’ll get from it. An Internet search engine may also
exhibit network effects; more users enable the company to
refine the engine’s algorithm, making it more effective and
leading more people to use it. (See “Interview with Jean
Tirole,” Econ Focus, Fourth Quarter 2017.)
Indirect effects occur when an increase in consumers
using a good leads to the creation of more complementary
goods, thus making the original good more valuable. This
is common in platform situations. For example, as more
people use a particular videogame system, companies will
create more games compatible with that system. Greater
availability of games makes the system more attractive to
future players, and competition among game developers
drives down the price of games.
Robert Metcalfe, the electrical engineer primarily
responsible for inventing Ethernet local networks, is
widely credited with popularizing the idea of network
effects. In the 1980s, Metcalfe’s sales pitch for his new
technology stated that the effect would be proportional
to the square of the number of connected users of the
system, a formula that came to be known as “Metcalfe’s
law.” While there’s little empirical evidence to support the
law specifically, it’s often still used as shorthand to assess
technology companies’ values.
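The arithmetic behind the formula is simple, under the strong assumption that every potential connection is equally valuable. Among n connected users, the number of possible pairwise connections is

n(n-1)/2, which is roughly n^2/2 for large n,

so doubling the user base roughly quadruples the number of possible connections and, on that assumption, the value of the network.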
Network effects can contribute to a situation known
as “lock in,” in which a particular standard becomes dominant and consumers find it very
costly to switch. In these situations,
the producer of the standard may be
able to exercise monopoly power. In
1998, for example, the Department
of Justice sued Microsoft for
allegedly abusing Windows’ ubiquity as an operating system to
promote Internet Explorer. More
recently, critics have contended
that Google consistently manipulates its search results to direct users
away from competing services in
other markets that Google serves.
In addition, network effects don’t increase indefinitely.
Take a dating website, which initially becomes more useful as more people sign up and the number of potential
matches increases. But after a certain point, there might
be so many users that it’s difficult for people to sort
through the matches — a form of network “congestion.”
Congestion can also occur if a site or system’s infrastructure is insufficient to support the number of users. Or,
networks may become “polluted” if they reach a size such
that the quality of each additional user declines.
It’s also possible for an increase in users to create
more value for one side of the market while detracting
from the value for the other side. A website whose visitors increase will become more attractive to advertisers,
but the increase in advertisers might then turn away
some of those visitors. By many accounts, the ubiquity
of advertising contributed to the demise of MySpace,
which lost the social networking war to Facebook in the
late 2000s. Today, many Facebook users complain about
intrusive ads, but they continue using the site, in part
because everyone else does — a testimony to the power
of network effects.
EF


RESEARCH SPOTLIGHT

Misperceptions of Mobility

BY ANTHONY SWAMINATHAN

The American dream holds that with talent, good ideas, and hard work, anything is possible. In America, the common perception is that the market system is relatively fair and opportunities for mobility abound. Europeans, stereotypically, believe the opposite. There, the market system is viewed as fundamentally unfair; wealth is seen as the result of persistent socioeconomic advantages. Opportunities for mobility are supposedly few and far between.

Recent research on intergenerational mobility in the United States and Europe, however, shows that American optimism and European pessimism might be misplaced. Research shows that mobility in the United States may be lower than assumed, while mobility in Europe exceeds Europeans' perception of it. Indeed, new data show that the United States may have lower levels of mobility than most European countries.

"Intergenerational Mobility and Preferences for Redistribution." Alberto Alesina, Stefanie Stantcheva, and Edoardo Teso. American Economic Review, February 2018, vol. 108, no. 2, pp. 521-554.

A recent article by Harvard University economists Alberto Alesina, Stefanie Stantcheva, and Edoardo Teso in the American Economic Review tackled this issue of (mis)perception. The authors used survey and experimental data from the United States and Europe to compare perceptions of mobility with actual patterns and analyzed the relationship between individuals' perceptions of mobility and their support for redistributive programs. Their work built on previous research on the linkages between intergenerational mobility and preferences for redistributive policy, which highlights the importance of individual experiences, perceptions of inequality, beliefs about fairness, and self-fulfilling ideological models of mobility.

The main source of data for the article is an original survey administered in the United States and four European countries (Sweden, Italy, France, and the United Kingdom). The focus of the survey is questions about perceptions of mobility, including one asking respondents to indicate how many of 100 children from the lowest quintile in the respondents' country they believed would end up in each of the five income quintiles as adults. The survey also addressed participants' socioeconomic backgrounds, individual experiences of mobility, and views on fairness.

The survey results confirm that Americans and Europeans hold the stereotypical perceptions of mobility commonly ascribed to them. In general, Americans are more optimistic than Europeans. Moreover, Americans are generally too optimistic relative to reality, while Europeans are generally too pessimistic; Americans vastly overestimate the chances that those at the bottom will make it to the top, while Europeans underestimate those chances and overestimate the chances that those at the bottom will stay there.

Perceptions of mobility also correlate significantly with individual characteristics. In general, left-leaning respondents and the college-educated are more pessimistic. Women, parents, low-income respondents, children of immigrants, and those who have experienced mobility are generally more optimistic. Black Americans, though facing low real levels of mobility, are especially optimistic.

The survey data also show a significant correlation between individuals' perceptions of mobility and their support for redistribution. Pessimism is positively correlated with support for all dimensions of redistribution measured, while optimism is negatively correlated with most of them. Additionally, support for equality of opportunity policies, like investment in education and health care, is more sensitive to perceptions of mobility than support for equality of outcome policies, such as expanded safety nets or more progressive taxation. There are large differences between left- and right-leaning respondents, as the views of right-leaning respondents are much less sensitive to their perceptions of mobility.

To isolate the effect of mobility perceptions on redistributive policy preferences, the authors ran an experiment testing the effect of a pessimistic shift in perceptions of mobility. Participants in the experimental group watched two animations presented as summaries of recent research, one claiming that most poor children stay poor and few become rich and another claiming that most rich children stay rich and few become poor. The survey measure for perceptions of mobility was administered before and after the treatment. Overall, those who saw the films were more pessimistic relative to the control group.

The authors found no statistical difference in the effect of the films on perceptions of mobility between left- and right-leaning respondents. They did, however, find a difference between these groups in the effect of the treatment on redistributive policy preferences, as only left-leaning respondents subsequently increased their support for equality of opportunity policies (there was no effect on support for equality of outcome policies). Though they became more pessimistic, right-leaning respondents had no change in their support for any redistributive policies — perhaps, the authors suggest, because they view government as unable to fix the problem or perhaps as the problem itself.
EF

AT THE RICHMOND FED

How Do Banks Use the Discount Window?
BY HELEN FESSENDEN AND RENEE HALTOM

Highlighted Research

“The Fed’s Discount Window: An Overview of
Recent Data.” Felix P. Ackon and Huberto M. Ennis.
Federal Reserve Bank of Richmond Economic Quarterly,
First-Fourth Quarter 2017, vol. 103, nos. 1-4, pp. 37-79.

The discount window is the Fed's lending facility to
depository institutions, meant to provide short-term
loans to institutions with temporary liquidity shortfalls. But
should the Fed have a discount window open at all times,
including outside of widespread financial crises?
The potential costs of having a discount window have
long been recognized. As an example, renowned economist Anna Schwartz regularly expressed reservations
about having a discount window open and argued that
historically it has been used to lend to not just illiquid, but
insolvent banks. When this is the case, the discount window can have the effect of allowing uninsured depositors
to pull out of the bank before incurring losses — increasing
the costs of a bank’s failure on the FDIC and ultimately on
taxpayers. Forcing banks to rely only on private short-term funding sources can create greater market discipline.
Richmond Fed economist Huberto Ennis has been
studying these issues for several years. “We need to better
understand the role of the discount window and what it is
being used for,” he says. “Looking at recent transactions
data, for example, can help us determine if we should continue having a discount window open at all times.”
This has previously been difficult because the details
around discount window activity weren’t made public on
a regular basis. That changed with a provision in the 2010
Dodd-Frank Act that requires the Fed to publish transactions data with a two-year lag. In a recent article, Ennis
and research associate Felix Ackon analyzed 16,514 loans
from July 2010 to June 2015 to identify patterns.
The loans fall into one of the discount window’s three
programs. Primary credit and secondary credit are emergency credit programs that constitute a backup source of
funding for eligible financial institutions. In the former,
institutions in good financial standing can get overnight
loans with “no questions asked,” paying an interest rate
higher than the Fed’s policy rate. Institutions not eligible
for primary credit can access secondary credit; those loans
come at an even higher interest rate and with greater Fed
scrutiny. A third program, seasonal credit, is aimed at
smaller institutions with a predictable and demonstrable
seasonal pattern in their funding needs.
Ennis and Ackon found that even though this period
covers the post-crisis years, when banks generally were
awash with liquidity and large quantities of excess reserves,
many of them still borrowed nontrivial amounts from the
discount window.
To estimate just how common borrowing was, Ennis
and Ackon needed to filter out “test” loans, which depository institutions conduct to make sure the systems
involved in processing discount window loans are working
as expected. Because the data don’t state which loans are
tests and which aren’t, they assumed that loans in amounts
greater than $10,000 were of the nontest variety (while
noting that some smaller loans likely are actual loans, and
some larger loans might be tests). Roughly one-third of the
total loans were categorized as test loans.
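As a rough sketch of how such a screen could be applied, and assuming a hypothetical data layout and column names rather than the published data format, the $10,000 rule amounts to a simple filter:

import pandas as pd

# Hypothetical example rows: one discount window loan per row,
# with the loan amount in dollars.
loans = pd.DataFrame({
    "program": ["primary", "primary", "secondary", "seasonal"],
    "amount": [1_000, 3_800_000, 5_000, 250_000],
})

# Classify loans of $10,000 or less as presumed "test" loans,
# following the rule described above; larger loans are treated
# as nontest loans.
TEST_THRESHOLD = 10_000
loans["is_test"] = loans["amount"] <= TEST_THRESHOLD

nontest_loans = loans[~loans["is_test"]]
print(f"Share classified as test loans: {loans['is_test'].mean():.0%}")
print(nontest_loans)

As the authors note, the cutoff is only an approximation: some loans below it are likely real, and some above it may be tests.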
In the primary credit program — the biggest of the
three programs — there were almost 6,800 nontest loans
over the five-year period, mostly overnight, with an average
amount of $3.8 million. After 2012, primary credit borrowing dropped significantly (by 40 percent). Some banks were
frequent users: While almost 600 banks took only one
nontest loan during the five-year period, 28 banks took 30
or more nontest loans.
As might be expected given its higher interest rate, the
secondary credit program is used much less often than
primary credit. Of 650 total loans, only 39 were nontest
loans.
Discount window lending is collateralized, which
reduces the credit risk (to the Fed) of providing those
loans. Ennis and Ackon studied the composition of collateral that borrowers pledged with the Fed (including
consumer and commercial loans, securities, and other
bank assets) and the loan-to-collateral ratios. In general,
borrowing banks had more collateral than the amount they
borrowed, although in some cases, collateral utilization
was high, close to 100 percent.
Overall, Ennis says, “depository institutions do seem
to see routine provision of backup funding by the central bank as a valuable option for short-term liquidity.
However, a more clear understanding of the circumstances
that trigger discount window borrowing is needed to better assess the value of having the discount window open at
all times.”
Ennis and Ackon’s study is part of a broader range of
questions Richmond Fed researchers have asked about the
roles and implications of Fed lending. In 2016, Ennis and
policy advisor John Weinberg looked at the role of Fed
lending in the implementation of monetary policy. Ennis
has also studied how discount window stigma — the fear
banks may have that discount window borrowing connotes
poor financial health — could affect the ability of Fed
lending to smooth market distress.
EF

THE PROFESSION

The Economist as Public Intellectual

BY AARON STEELMAN

In 1977, Harvard University economist John Kenneth
Galbraith published The Age of Uncertainty. The book
was paired with a 12-part television series produced by
the British Broadcasting Corporation. Galbraith generally
took a skeptical view of the ability of unregulated markets
to produce either efficient or equitable outcomes. Three
years later, Milton Friedman of the University of Chicago
hosted a 10-part television series produced by the Public
Broadcasting Service based on Free to Choose, published the
same year and co-authored with his wife, Rose. In contrast
to Galbraith, Friedman argued that markets not only do a
good job of allocating goods and services, they also provide the best means for low- and middle-income people
to improve their circumstances. Galbraith and Friedman
were “public intellectuals,” presenting ideas on big topics
in an engaging, nontechnical manner to lay audiences.
Galbraith and Friedman had long had outsized voices
in the public arena. Galbraith had published The Affluent
Society, a best-seller, and was a founding member of
Americans for Democratic Action, which lobbies for
progressive causes. Friedman also had already published a successful book aimed largely at noneconomists,
Capitalism and Freedom, and had written regular columns
for Newsweek magazine, alternating with Paul Samuelson
of the Massachusetts Institute of Technology.
Both the economics profession and communications
technology have changed dramatically in the years since.
What has this meant for the role of economists as public
intellectuals?
As the growth of the Internet and other forms of
communication has exploded, the volume of economic
commentary has grown sharply as well — a boon for discerning consumers. Some have worried, though, that as supply
has increased, the caliber of discourse has declined, a trend
that could worsen. But this concern may be overstated due
to mechanisms that could foster quality control.
Economics faculties have an interest in monitoring the
output of their colleagues. They can’t formally prevent
others from publishing relatively brief articles that lack the
precise but often narrower statements that characterize
peer-reviewed academic papers. But they can make it plain,
especially to junior colleagues, that their professional interests would be best served if their popular writings were also
careful and measured.
In addition, economics has become increasingly formal and specialized. Friedman and Samuelson were giants
within the economics profession, but their interests were
broader than those of the typical economist then and certainly
today. As such, they were more inclined — and probably better equipped — to reach a general audience than
someone whose work is narrower and often doesn’t have
direct policy relevance.
Still, it is likely that the overall flow of opinions coming directly from economists to the public will increase.
For economists who have difficulty publishing in leading
journals or those who find academia unsatisfying for other
reasons, moving to positions in which they are rewarded for
speaking more directly to the public may prove increasingly
viable and desirable. Among those who stay, we may see
more economists writing nontechnical essays but on fairly
specific topics related to their academic work. In this vein,
Glenn Hubbard, an economist at Columbia University and
chair of the Council of Economic Advisers from 2001 to
2003, thinks that “people who contribute rigorous thought
to public discourse are well thought of (even though many
may disagree with their point of view)” and notes that the
most effective communicators, whether junior or senior
faculty members, “speak from a basis in their own scholarly
ideas and explorations.”
Some have asked: Might we see another Friedman or
Samuelson, a “superstar” economist in the prime of his or
her career who moonlights as a public intellectual? It seems
doubtful. Friedman published Capitalism and Freedom a year
prior to A Monetary History of the United States, 1867-1960
(co-authored with Anna Schwartz), a monumental book
and one of his most important academic contributions. But
it’s rare for someone to do work on the academic frontier
as well as work that speaks to a lay audience simultaneously. The process is more likely to be sequential: publish
significant academic papers and then turn to popular-level
writing. New York Times columnist Paul Krugman — like
Friedman and Samuelson, a Nobel Prize winner — started
writing primarily for a popular audience after he had done
most of his work on international trade and economic
geography cited by the Nobel committee. Similarly, Gary
Becker, also a Nobel laureate, greatly expanded his public
output after publishing his most pioneering work using
economics to analyze issues such as crime, the family, and
labor market discrimination.
There is considerable popular demand for economic
information and commentary. That much is clear. And,
says Hubbard, such communication is important: “Good
nontechnical writing on topics of economic importance is
vital to build support for good policy.” But the nature of
the rewards may be different in this new era too. Where
Galbraith and Friedman earned small fortunes from their
best-selling books, today’s public intellectual in economics
may have to be satisfied with the less tangible reward of
clicks and likes. As every economist knows, utility comes in
many forms.
EF

TARIFFS and
TRADE DISPUTES

How are recent moves affecting
businesses in the Fifth District?
By Tim Sablik

On July 6, 2018, a U.S. cargo ship raced across the Pacific
toward the port of Dalian in China. Its mission: make
landfall and unload its cargo of soybeans before a 25 percent
Chinese tariff went into effect at noon. Unfortunately for the
U.S. shippers and the Chinese buyers, the boat arrived a few
hours too late.
China’s tariffs on nearly $34 billion in U.S. exports — including food products, such as soybeans and pork, and other products,
such as cars — were a response to tariffs imposed by the United
States on a similar amount of Chinese exports of manufacturing
inputs and capital equipment. In late August, the United States
raised tariffs on an additional $16 billion of Chinese exports, and
China responded in kind.
President Donald Trump has made trade policy a focus of his
administration. His first major action this year came in March
when he implemented a 25 percent tariff on steel and a 10 percent tariff on aluminum. They are the first significant tariffs on
steel imports since President George W. Bush raised tariffs on
steel in 2002, later removing them in 2003. In recommending
the tariffs to President Trump, the Commerce Department said
that the measure was intended to increase domestic steel and
aluminum production. Initially, key U.S. trading partners such
as Canada, Mexico, and the European Union (EU) were exempt.
But the Trump administration ended the exemptions in June,
prompting Canada, Mexico, and the EU to respond with tariffs
of their own.
This flurry of tariff activity is significant in the modern era.
Recent decades have seen most developed nations move toward
opening up their markets to foreign trade. According to the
World Bank, the weighted average of U.S. tariffs across all imports
in 2016 was just 1.6 percent, similar to that of the EU. What is
behind the new rise of trade barriers, and how will they affect
businesses in the Fifth District?
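For reference, the import-weighted average tariff cited above is computed, in stylized form, as

weighted average tariff = (Σ_i τ_i × m_i) / (Σ_i m_i)

where τ_i is the applied tariff rate on product category i and m_i is the value of imports in that category, so categories with larger import values carry more weight in the average.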
The Trade Debate
For most of the postwar era, trade grew faster than world GDP.
After World War II, Allied leaders were interested in getting
the world economy back on track and avoiding the isolation
and protectionism that many blamed for the Great Depression.
Under the General Agreement on Tariffs and Trade, which later
became the World Trade Organization (WTO), member nations
agreed to work together to reduce tariffs and other trade
barriers. World trade accelerated rapidly in the 1990s
and early 2000s with the dissolution of the Soviet Union
and the entry of China into the WTO. (See “Goodbye,
Globalization?” Econ Focus, Fourth Quarter 2015.)
Most economists view this expansion of trade as a good
thing. For example, 85 percent of economists responding
to a 2012 survey by the University of Chicago’s Initiative
on Global Markets (IGM) Forum agreed that freer trade
allows firms to improve production efficiency and offers
consumers better choices. While some industries are
harmed by exposure to foreign competition, economists
generally agree that in the long run, the overall gains from
trade are much larger than the losses for some industries.
That said, some economists have recently noted that the
costs of open trade may be larger and more persistent for
affected industries and workers than previously thought.
Traditional economic models have assumed that workers in harmed industries could easily transition to businesses that benefit from trade. But in a series of research
papers, David Autor of the Massachusetts Institute of
Technology, David Dorn of the University of Zurich,
and Gordon Hanson of the University of California, San
Diego found that this transition process may not work as
smoothly as economists hypothesized.
Autor, Dorn, and Hanson found that China’s entry into
world markets beginning in the 1990s significantly hurt
manufacturing workers in southern states, such as North
Carolina, Tennessee, and Mississippi. Those regions experienced higher unemployment for a decade after the initial
China trade shock, and some workers in impacted industries
experienced lower annual earnings relative to workers in
regions that were less exposed to trade with China.
The Trump administration has also emphasized the
costs of unrestricted trade. To impose tariffs on China,
President Trump invoked the Trade Act of 1974, which
empowers the president to take action in response to
trade practices by foreign governments that either violate
international agreements or are “unjustified” or “unreasonable.” The Trump administration has alleged that China
has used improper practices to obtain intellectual property
from U.S. companies. President Trump has also voiced a
desire to reduce the U.S. trade deficit, which he attributes
to unfair practices on the part of U.S. trading partners.
In imposing the steel and aluminum tariffs, the president
cited national security concerns and the need to protect
America’s metal industry and its workers.
But tariffs entail costs as well. Tariffs imposed by the
United States on other countries raise the cost of imports.
They may also raise the price of the same goods produced
domestically since U.S. producers face less competition
from foreign producers subject to the tariffs. Tariffs
imposed by other nations on the United States raise the
costs domestic exporters face in those markets. What
costs will recent tariffs impose on importers and exporters
in the Fifth District?

[Figure: What We Export (2017). Goods exports in $ billions by state (NC, SC, VA, MD, WV) and category: chemicals, coal, machinery, transportation equipment, computers and electronics, plastics and rubber, and other.]

[Figure: What We Import (2017). Goods imports in $ billions by state (NC, SC, VA, MD, WV) and category: chemicals, transportation equipment, computers and electronics, machinery, metal manufacturing, apparel and accessories, and other.]

SOURCE: U.S. Census Bureau

Fifth District Manufacturing
South Carolina is one of the biggest exporters in the
Fifth District, shipping around $32 billion in goods
in 2017, roughly 15 percent of the state’s GDP. A significant portion of those exports came from South
Carolina’s growing manufacturing sector, specifically
transportation manufacturing. South Carolina’s largest
category of exports is transportation equipment, which
includes cars, car parts, airplanes, and airplane materials. BMW’s plant in Spartanburg, S.C., employs 10,000
people and was the largest U.S. automobile exporter
by value in 2017. Workers at Boeing’s facility in North
Charleston, S.C., assemble and ship the firm’s new
787 Dreamliners. All told, transportation equipment
accounted for more than half of the value of the state’s
exports in 2017. (See charts.)
Those industries stand to be directly hit by China’s
recently adopted tariffs. China was South Carolina’s top
trading partner for exports in 2017; in July, it raised its tariffs on U.S. vehicles to 40 percent, after previously pledging
to reduce its tariffs on all imported cars from 25 percent
to 15 percent. All told, the U.S. Chamber of Commerce
estimates that China’s recent tariffs could affect $2.8 billion of South Carolina’s exports. On the import side, U.S.
steel and aluminum tariffs may squeeze auto and aerospace
manufacturers in the state by increasing the cost of inputs.
South Carolina’s top two import commodities in 2017 were
machinery and transportation equipment.
So far, however, the impact has been minimal, says Scott
Baier of Clemson University. Baier has studied trade issues
and spoken with local business owners about the effects of
the recent tariffs. “Businesses are more concerned about
things that may be coming down the road,” he says.
In May, the Commerce Department initiated an
investigation into imposing tariffs on imported automobiles and parts. Car tariffs have been a point of contention for trade negotiations with the EU, which imposes
a 10 percent tariff on U.S. automobiles, compared to the
2.5 percent tariff the United States imposes on European
cars. Raising car tariffs would certainly affect South
Carolina’s auto industry.
West Virginia’s largest export is coal, which is on the list
of products targeted by China’s August tariffs. The state
also exported $157 million in aluminum products in 2017.
Domestically, metal manufacturers stand to benefit from
the aluminum tariffs on foreign competitors, but exporters
also face increased costs from retaliatory tariffs on metal.
According to the U.S. Chamber of Commerce, West
Virginia exports steel and aluminum products to Canada,
Mexico, China, and the EU, all of which have imposed
tariffs on metals in response to the U.S. tariffs. All told, the
U.S. Chamber of Commerce estimates that foreign tariffs
may affect $178 million in exports from West Virginia.
The steel and aluminum tariffs also matter for Maryland
manufacturers. As a share of total imports, Maryland is
the fourth-largest importer of steel and aluminum in the
country, according to the Brookings Institution. The
tariffs have already begun to impact the prices and supply
chains of Maryland firms that rely on those inputs, according to a report from the state Chamber of Commerce.
Additionally, the state imported about $11 billion worth of
cars in 2017, which would be exposed to any future escalation of auto tariffs.
Farming in the District
Like its southern neighbor, North Carolina is also home
to several aerospace manufacturers that exported nearly $3
billion in products and parts combined in 2017. But North
Carolina’s biggest exposure to tariffs so far is in the agricultural sector. The tariffs China imposed in July included a
variety of U.S. agricultural exports, such as pork, soybeans,
and tobacco. North Carolina is responsible for about one-tenth of all pork produced in the United States, making it
the second-largest pork-producing state in the country.
Andy Curliss, CEO of the North Carolina Pork
Council, says that pork exports to China have fallen since
April, but producers have shifted some of those exports to
South Korea. Mexico also imposed a 20 percent tariff on
U.S. pork, which may further disrupt exports.
“It remains to be seen how this will all shake out economically,” Curliss says.
Agriculture is also the sector of Virginia trade most
directly impacted by the current tariffs. It exported
nearly $600 million in soybeans in 2017, making it the
state’s leading agricultural export and third most valuable
exported commodity overall. More than half of those
soybeans went to China, making it the largest importer
of Virginia’s agricultural products. With so much of their
sales tied to China, Virginia farmers are approaching the
coming harvest season with concern.
“Already this year our exports of soybeans to China
have decreased by 50 percent,” says Stephanie Agee,
director of marketing and development for the Virginia
Department of Agriculture and Consumer Services.
In the short run, changes in prices for goods subject to
tariffs, such as cars or soybeans, are likely to be the most
visible effects of the tariffs. Global soybean prices fell to
their lowest in years on the news of the Chinese tariffs,
and car manufacturers such as BMW have stated that
they will raise the price of cars exported to China to pass
along the cost of the country’s higher auto tariffs. But in
the modern global economy, tariffs may disrupt more than
just the prices of the goods they target.
Ripple Effects
Econ 101 students learn that trade allows countries to
specialize in goods that they have a comparative advantage in producing. Each country can then trade with other
nations for the goods they lack. This simplified model
of trade imagines that all goods are wholly produced by
domestic firms and then traded in their final form.
In reality, modern multinational firms divide their production processes across many countries based on their
comparative advantages, and final goods may be assembled
from parts that cross foreign borders many times. These
global supply chains have been a big driver of world trade
and economic growth. According to a June 2018 article
in the Journal of Economic Literature, only a small subset
of firms export or import, but these firms are larger and
more productive than those that stick to purely domestic
production. Moreover, the largest and most productive
firms export and import a lot, accounting for a substantial
share of aggregate trade volume.
“Because of the reliance on global supply chains and
interfirm trade now, tariffs are more likely to be disruptive
than in the past,” says Clemson’s Baier. He is hardly the
only economist who thinks so. In a recent IGM Forum
survey, 77 percent of responding economists agreed that
import tariffs are likely to be “substantially more costly”
than they would have been a quarter of a century ago
because of the importance of global supply chains.
Complex global supply chains also mean that countries

targeted by tariffs are unlikely to be the only ones who
feel pain. For example, Alonso de Gortari of Princeton
University found in a 2017 paper that nearly 75 percent
of the foreign inputs used in Mexican vehicles exported
to the United States were produced in America. Using
this information, de Gortari estimated that when Mexico
exports cars to the United States, an average of 38 percent
of the value from those cars is actually domestic production
returning home. This share is much larger than economists
previously thought. If supply chains for other goods follow
a similar pattern, it suggests that tariffs on foreign imports
may substantially harm domestic firms as well.
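The arithmetic behind that estimate can be sketched in a few lines of Python. The dollar figure and the Mexican value-added share below are invented for illustration; only the roughly 75 percent U.S. share of foreign inputs comes from the article.

# Stylized value-added accounting for a car Mexico exports to the United States,
# loosely in the spirit of de Gortari's calculation. Numbers are illustrative.

car_price = 20_000                      # assumed value of one exported car
mexican_value_added = 0.49 * car_price  # assumed share added by Mexican assembly and parts
foreign_inputs = car_price - mexican_value_added

us_share_of_foreign_inputs = 0.75       # share of foreign inputs produced in the U.S. (article)
us_content = us_share_of_foreign_inputs * foreign_inputs

print(f"U.S. content returning home: {us_content / car_price:.0%} of the export's value")
# With these illustrative numbers, about 38 percent of the car's value is U.S.
# production crossing the border twice - the average the article cites - though
# the real estimate comes from detailed supply chain data, not a single example.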
Mary Lovely of Syracuse University and Yang Liang of
San Diego State University explored whether this might
be true of the recent tariffs in a May 2018 article for
the Peterson Institute for International Economics. They
found that many of the goods targeted by U.S. tariffs on
China are produced by multinational firms operating in
China rather than domestic Chinese companies. Moreover,
many of these products are purchased by American firms
as inputs into production processes here at home. Raising
the cost of those inputs through tariffs would likely harm
American production. In theory, firms can rearrange their
supply chains to avoid the added costs of tariffs, perhaps
choosing to obtain more inputs from American producers.
But this may not be so straightforward in practice.
“It’s costly for firms to change their supply chain,”
says Gary Hufbauer, a nonresident senior fellow at the
Peterson Institute for International Economics. “A lot
of their supplies have gone through a lengthy regulatory
approval process, and it’s not easy for firms to find an
alternative supplier who meets the same level of quality
and specifications.”
For example, in late July the EU agreed to buy more
U.S. soybeans, which could partially make up for lost
sales to China. But Agee of the Virginia Department of
Agriculture and Consumer Services says that Europe has
different standards for agricultural products than China,
which may limit the ability of farmers to shift products
originally grown for the Chinese market to Europe unless
those differences are addressed.
Firms also face uncertainty about whether to seek new
suppliers for imports and new markets for exports or
whether to ride out the higher cost of tariffs in the hope
that they prove to be temporary.

Uncertain Future
In the July 2018 Fed Beige Book, which summarizes
business conditions in each of the 12 Federal Reserve
districts, all Reserve Banks reported that businesses were
feeling direct effects or facing some uncertainty related to
changes in trade policy, compared with just three Reserve Banks a year earlier.
Should firms decide to act and seek new suppliers or new
export markets because of tariffs, those decisions could
easily outlast the policies that prompted them.
“If China establishes other sources for soybeans that
can meet their needs, why would they come back to the
United States?” says Agee.
In the simple case, trade disagreements could merely
reshuffle trading partners for a while. In the extreme case,
an escalating trade war between many countries could call
the whole global supply chain model into question.
“We haven’t seen very large global tariffs since the
1930s,” says Hufbauer. “If that happens, that’s going
to give a lot of multinational firms pause as they try to
figure out where the world economy is headed and how
they fit into it. It would be a real shakeup to the order
we know.”
So far, most firms appear to be taking a wait-and-see
approach. Only about 20 percent of national businesses
responding to a recent survey by the Atlanta Fed said they
were reassessing their capital expenditure plans as a result
of the tariffs. The share was slightly higher for manufacturers — about 30 percent — but the authors of the study
note that “tariff worries have had only a small negative
effect on U.S. business investments to date.”
And while most businesses have focused on the
potential downside from the tariffs, others have highlighted the potential upside. In a June 2018 survey, the
Richmond Fed asked businesses in the Fifth District
what they thought the effect of the steel and aluminum
tariffs would be on the overall economy. About half of
the respondents expected the effect would be negative,
but more than a quarter of business owners thought
the tariffs could ultimately be positive if they improved
domestic production or led to better trade deals in the
future.
“There is the promise of more talks with Europe aimed
at achieving zero industrial tariffs,” says Hufbauer. “If that
happens, that would be a big payoff. But right now it is just
a promise to talk, not a promise to act.”
EF

Readings
Autor, David H., David Dorn, and Gordon H. Hanson. “The China
Shock: Learning from Labor-Market Adjustment to Large Changes
in Trade.” Annual Review of Economics, October 2016, vol. 8,
pp. 205-240.

Lovely, Mary E., and Yang Liang. “Trump Tariffs Primarily
Hit Multinational Supply Chains, Harm U.S. Technology
Competitiveness.” Peterson Institute for International Economics
Policy Brief No. 18-12, May 2018.

De Gortari, Alonso. “Disentangling Global Value Chains.”
Manuscript, Nov. 26, 2017.

Mengedoth, Joseph. “Fifth District Firms Weigh In on Steel and
Aluminum Tariffs.” Federal Reserve Bank of Richmond Regional
Matters, Aug. 10, 2018.


PRODUCERS
UNDER
PRESSURE?
What role do producers’ costs
play in determining inflation?
By Jessie Romero

Between June 2017 and June 2018, trucking transportation costs in the United States increased
7.7 percent. Steel, aluminum, and copper were all
up more than 10 percent. Wheat prices climbed more
than 20 percent, and food processors paid 13 percent
more for chickens. Yet in many cases, rising costs for
businesses were not reflected in the prices paid by consumers. Finished appliances were up only 1.1 percent over
the same period; food prices increased just 1.4 percent.
Companies including Sysco, Procter & Gamble, and
Unilever all have reported difficulty raising prices in the
U.S. market.
How long will that last? Overall, the prices producers
pay for inputs, as measured by components of the Bureau
of Labor Statistics’ Producer Price Index (PPI), have been
outpacing consumer prices for more than a year (see chart

[Chart: Ups and Downs — Swings in the PPI don't necessarily translate into changes in the CPI. Percent change from a year ago, 1990-2018, for the CPI and the PPI for processed goods, unprocessed goods, and services. NOTE: Data on Services for Intermediate Demand are available beginning in 2010. Shaded areas denote recessions. SOURCE: Bureau of Labor Statistics via Federal Reserve Economic Data (FRED)]

above), which has led some observers to predict that more
rapid inflation is imminent. But while the PPI does paint a
picture of the costs facing various industries, it isn’t necessarily a good predictor of consumer measures of inflation,
such as the Consumer Price Index (CPI). In part, that’s
because the indexes are designed to measure different
things; in part, it reflects that firms make pricing decisions
based on many factors in addition to input prices. And
even to the extent the PPI does help predict consumer
price changes in the short run, in the long run, the overall
level of prices depends on monetary policy.
Piecing Together the PPI
To calculate the PPI, the Bureau of Labor Statistics (BLS)
surveys a sample of firms about the revenue they receive
on more than 10,000 goods and services, from rivets to
refrigerators to radio advertising. The BLS then aggregates that information into two main categories: final
demand and intermediate demand. Final demand is the
revenue domestic producers receive for the goods and services they sell to consumers, to the government, to businesses for capital investment, and for export — in other
words, for goods and services that are not used as inputs
to create other domestic products. Intermediate demand
is the revenue domestic producers receive for goods and
services that are sold as inputs into other domestic products. When a retailer sells a refrigerator to a homeowner,
the retailer’s revenue is counted in final demand; when
the manufacturer sold the refrigerator to the retailer,
the manufacturer’s revenue was counted in intermediate
demand, as was the revenue received by the companies
that supplied the manufacturer.
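The bookkeeping described above can be illustrated with a toy classification rule. This is only a sketch of the distinction between final and intermediate demand, not the BLS's actual system; the function and data layout are hypothetical.

# A minimal sketch of the final-demand vs. intermediate-demand split described
# above, using the refrigerator example from the text. Purely illustrative.

FINAL_BUYERS = {"consumer", "government", "capital investment", "export"}

def classify(sale):
    """Assign a producer's revenue from one sale to final or intermediate demand."""
    return "final demand" if sale["buyer_type"] in FINAL_BUYERS else "intermediate demand"

sales = [
    {"seller": "retailer",     "item": "refrigerator", "buyer_type": "consumer"},        # final
    {"seller": "manufacturer", "item": "refrigerator", "buyer_type": "domestic input"},   # intermediate
    {"seller": "parts maker",  "item": "compressor",   "buyer_type": "domestic input"},   # intermediate
]

for sale in sales:
    print(f'{sale["seller"]:>12}: {classify(sale)}')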
The BLS has two different systems for categorizing
intermediate demand. In the “commodity type” system,
the BLS calculates separate indexes for processed goods,
such as tires or cement; unprocessed goods, such as crude

An “Org Chart” for the Producer Price Index

The Bureau of Labor Statistics aggregates PPI data into multiple categories
petroleum or gravel; and services, such as warehousing or financial services. In the “production
FINAL DEMAND
flow” system, the BLS calculates indexes for goods
and services in four stages. Stage 1 goods are the
Personal Consumption • Export • Government • Capital Investment
first in the process; stage 4 goods are finished
INTERMEDIATE DEMAND
products and are sold to final demand.
Because the PPI for final demand includes
Stage 4 Goods
goods and services sold for personal consumpStage 3 Goods
tion, there is a high degree of overlap between
Processed
Unprocessed
OR
Services
Goods
Goods
Stage 2 Goods
items covered by that portion of the PPI for final
Stage 1 Goods
demand and its more famous sibling, the CPI —
the refrigerator sold in the above example would
be included in both. But, as the names suggest, the
fundamental difference between the two indexes is that
a senior vice president at the Cleveland Fed. “And it’s not
the PPI measures prices from the producer’s perspective
a crazy idea since one of the things the PPI measures is
while the CPI measures prices from the consumer’s perinput prices. But the linkage isn’t actually that strong.”
spective. This leads to a number of differences in the ways
Clark first studied the relationship between the PPI
data are collected. For example, the PPI does not include
and the CPI in a 1995 article. He found that, historically,
sales and excise taxes since these are not revenues that
changes in the PPI had to some extent preceded changes
accrue to a producer, but taxes are included in the CPI
in the CPI, but he also found that the PPI was of little
since they’re part of what a consumer pays.
value in forecasting future values of the CPI, which sugAnother difference between the two indexes is that
gested that the producer price changes weren’t necessarthe CPI includes only the health care costs consumers
ily driving the consumer price changes.
pay themselves, while the PPI also includes health care
Clark’s research preceded major changes the BLS made
paid for by a third party, such as an insurance company or
to its aggregation system in 2014, but more recent research
the government. The PPI also includes the interest rate
also suggests that changes in input prices are not a good precomponent of financial services, so changing rates change
dictor of future inflation. In a 2018 article, Mark Bognanni
the index; interest rates don’t directly affect the CPI. In
and Tristan Young, also with the Cleveland Fed, studied the
addition, owners’ equivalent rent — the amount homepredictive power of the ISM Manufacturing Price Index,
owners would have to pay to rent rather than own their
another measure of input prices; it did help to improve forehomes — is not included in the PPI, but it makes up about
casts of the PPI, but that did not translate into improving
one-quarter of the CPI. An additional significant differforecasts of changes in the index of Personal Consumption
ence is that the PPI, by definition, does not cover imports
Expenditures, or PCE (the consumer inflation measure gensince they are not domestically produced.
erally used by the Federal Open Market Committee).
Eyeballing the data also suggests that changes in the
From Producers to Consumers
PPI for intermediate demand don’t have much of relationDo changes in the PPI predict changes in the CPI? “This
ship to future values of the CPI; this is especially true for
is not an uncommon take on the PPI,” says Todd Clark,
unprocessed goods. (See charts.)

[Charts: Scattered — The CPI doesn't have a strong relationship with previous values of the PPI. Two scatter plots of the CPI against the PPI for intermediate demand two quarters previously, one for unprocessed goods and one for processed goods. Both series are percent changes from a year ago; data are for 1975-2018. SOURCE: Bureau of Labor Statistics via Federal Reserve Economic Data (FRED)]
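The comparison behind those scatter plots can be sketched in a few lines of Python: year-over-year CPI inflation paired with PPI inflation from two quarters earlier. The quarterly index values below are placeholders, not BLS data.

# A sketch of the exercise in the charts above: pair each quarter's CPI inflation
# with PPI (intermediate demand) inflation two quarters back. Toy data only.

import pandas as pd

quarters = pd.period_range("2015Q1", periods=12, freq="Q")
ppi = pd.Series([100, 103, 101, 99, 104, 108, 106, 103, 105, 110, 112, 109], index=quarters)
cpi = pd.Series([100, 100.5, 101, 101.4, 102, 102.6, 103.1, 103.5, 104, 104.6, 105.2, 105.7],
                index=quarters)

ppi_yoy = ppi.pct_change(4) * 100          # percent change from a year ago
cpi_yoy = cpi.pct_change(4) * 100
pairs = pd.DataFrame({"past_ppi": ppi_yoy.shift(2), "cpi": cpi_yoy}).dropna()

# Each row is one point in the scatter: CPI inflation vs. PPI inflation two quarters earlier.
print(pairs)
print("correlation:", pairs["past_ppi"].corr(pairs["cpi"]).round(2))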


In a 2016 article, Jonathan Weinhagen, an economist
at the BLS, found that price increases in earlier stages
of intermediate demand did help predict price increases
at later stages in the PPI. So why isn’t there a stronger
relationship between the CPI and the PPI? One reason
might be that both measures are averages across a large
number of different industries. There are some industries
in which higher input prices do translate directly into
higher consumer prices, but these “pass through” effects
could be masked when they’re averaged with industries
with different cost structures. For example, increases in
food-related PPIs tend to lead to increases in the CPI
for food purchased in grocery stores but not for food purchased at restaurants, where service and preparation are a
large part of the value.
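A toy calculation shows how that averaging can hide pass-through. All of the numbers here are hypothetical.

# How averaging across industries can mask pass-through, using made-up numbers.
# Suppose grocery prices fully reflect a 10 percent jump in food input costs,
# while restaurant prices, where preparation and service dominate, barely move.

input_cost_increase = 10.0            # percent rise in food-related producer prices

grocery_pass_through = 1.0            # groceries pass the increase through fully
restaurant_pass_through = 0.1         # restaurants pass through very little

grocery_weight = 0.4                  # hypothetical weights within a "food" aggregate
restaurant_weight = 0.6

avg_consumer_increase = input_cost_increase * (
    grocery_weight * grocery_pass_through + restaurant_weight * restaurant_pass_through
)
print(f"Average food CPI response: {avg_consumer_increase:.1f} percent")
# A 10 percent producer price increase shows up as only 4.6 percent in the average,
# so the aggregate link looks much weaker than the grocery-store link alone.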
The relationship, or lack thereof, between the PPI
and CPI also reflects the measurement differences. For
example, the exclusion of imports, which account for about
15 percent of GDP, means that the PPI doesn’t reflect any
cost savings producers achieve from buying intermediate
inputs overseas. Nor does it reflect cost increases if imports
become more expensive, for example because of tariffs. (See
“Tariffs and Trade Disputes,” page 10.)
In addition, growing global trade in intermediate
inputs means that the baskets of goods measured in the
CPI and the PPI have less and less in common over time.
In a recent working paper, Shang-Jin Wei and Yinxi Xie
of Columbia University documented a growing divergence between producer price indexes and consumer price
indexes in most industrialized countries, including the
United States, beginning around 2001. They attributed
this divergence to the increasingly global nature of many
companies’ supply chains.
Making the Markup
While there is some evidence that producers pass on cost
increases, intermediate input costs are just one factor
in a firm’s pricing decisions. A firm also has to consider
labor and capital costs and the competitive landscape,
all of which affect how much a firm marks up the prices
of its goods over intermediate input costs. “If all these
conditions were static, then yes, one might expect to see
a consistent and stable relationship between the prices
of materials inputs and the prices of finished goods,” says
Alex Wolman, vice president for monetary and macroeconomic research at the Richmond Fed. “But of course,
these conditions aren’t static.”
One of the most important considerations is the
customer. While it won’t come as a huge surprise to
most shoppers, a large body of research has demonstrated that different firms charge different prices for
essentially the same goods, and that the same firm may
charge different prices at different times. Nicholas
Trachter of the Richmond Fed, with collaborators
elsewhere, has shown how this price dispersion can
arise based on the variation in consumers’ abilities and
willingness to shop around, and how stores employ
pricing strategies in response. The more costly it is for a
consumer to search for a different seller, either because
other sellers are hard to find or because the consumer is
unwilling to spend much time searching, the higher the
price a given firm can charge.
In this respect, the Internet might be one factor
making it more difficult for producers to raise prices,
both by making it easier for customers to shop around
and by making it easier for new companies to set up
shop. Procter & Gamble, for example, announced last
year it was cutting prices on its Gillette razors by up
to 20 percent in response to competition from online
retailers. Fed Chairman Jerome Powell attributed low
inflation in part to the “Amazon effect” in his semiannual testimony before the Senate Banking Committee
in March. (The Internet isn’t the first technology to
affect prices; see “The Great Telegraph Breakthrough
of 1866,” page 28.)
Another factor potentially limiting firms’ abilities to
pass on cost increases is the concentration of the retail
sector: In 2017, the five largest retailers in the United
States accounted for 36 percent of the 100 largest retailers’ total U.S. sales. And when retailers get large enough,
they may be able to exercise what’s known as monopsony
power, where they are effectively the only buyer and can
dictate terms and prices to their suppliers.
Many manufacturers have reported being forced to
sell their products at lower prices lest they lose their
place on a store’s shelves. Some companies might even
get large enough for this to affect the economy as a
whole. In the early 2000s, for example, Jerry Hausman
of the Massachusetts Institute of Technology estimated
that Walmart and its ilk had lowered annual food price
inflation by three-quarters of a percentage point.
Retail isn’t the only sector that’s highly concentrated;
concentration has been increasing across all public firms
since the late 1990s. (See “Are Markets Too Concentrated?”
Econ Focus, First Quarter 2018.) Intuitively, one would
expect greater market concentration to enable firms to
raise prices, but economists studying market power have
come to conflicting conclusions about the extent to which
markups have increased economy-wide. One way that
increasing market concentration could coexist with low
prices is if the firms that have grown large are also the firms
that have built their strategy around low prices — and thus
exert their influence over their suppliers rather than their
customers.
In the Long Run
Another reason producers might not be willing or able to
pass on higher input costs could be the virtuous circle of
inflation expectations. Economists have found that one
of the most important determinants of future inflation is
what people expect inflation to be. So if firms believe the
central bank is committed to keeping inflation low and

stable, they won’t try to raise prices beyond that rate —
which, in turn, contributes to keeping inflation low and
stable. (See “Great Expectations,” page 40.) Most measures of inflation expectations have ticked up in recent
months, but they remain relatively low and well-aligned
with the Fed’s 2 percent target for the inflation rate.
Even if firms in sectors with rising input costs were to
pass on those costs to consumers, it wouldn’t necessarily
lead to inflation in the sense that monetary policymakers
use the word, to mean a persistent increase in prices across
the entire economy. In the short run, changing supply and
demand conditions might lead to higher prices for certain
goods and services. But in the long run, under this definition, inflation is determined by monetary policy, and those

supply and demand conditions affect only relative prices
of the particular goods and services. “If apples get more
expensive relative to oranges, that’s not inflation,” says
Clark. “Inflation is when prices increase for both apples
and oranges — and everything else.”
The relationship between the PPI and CPI illustrates
the complex interactions between costs and competition
that influence firms’ pricing decisions. And while the PPI
might not be a perfect harbinger of what’s to come, it’s still
a valuable indicator for policymakers. “It’s one of many
tools we can use to assess the overall state of the economy
and where we are in the business cycle,” Clark says. “It’s
useful even if it’s not predictive of the inflation measure
we’ve chosen to target.”
EF

Readings
Bognanni, Mark, and Tristan Young. “An Assessment of the ISM
Manufacturing Price Index for Inflation Forecasting.” Federal
Reserve Bank of Cleveland Economic Commentary No. 2018-05,
May 24, 2018.
Clark, Todd E. "Do Producer Prices Lead Consumer Prices?"
Federal Reserve Bank of Kansas City Economic Review, Third
Quarter 1995, pp. 25-39.

Menzio, Guido, and Nicholas Trachter. “Equilibrium Price
Dispersion Across and Within Stores.” Review of Economic
Dynamics, April 2018, vol. 28, pp. 205-220.
Weinhagen, Jonathan C. “Price Transmission within the Producer
Price Index Final Demand-Intermediate Demand Aggregation
System.” Bureau of Labor Statistics Monthly Labor Review,
August 2016.

Richmond Fed Research Digest 2018

The Richmond Fed Research Digest summarizes externally published work of the Bank's research department economists. Full citations and links to the original work are also included.

Summaries of work by economists in the Bank's Research Department published externally from June 1, 2017, through May 31, 2018.

Welcome to the seventh annual issue of the Richmond Fed Research Digest. The Federal Reserve Bank of Richmond produces several publications that feature the work of economists in its Research Department, but those economists also publish extensively in other venues. The Richmond Fed Research Digest, a mid-year annual, brings this externally published research together in one place with brief summaries, full citations, and links to the original work. (Please note that access to articles may require registration or payment.) The next issue of the Richmond Fed Research Digest will be published on June 28, 2019.

Featured summary: "Bankruptcy and Delinquency in a Model of Unsecured Debt," by Kartik Athreya, Juan M. Sánchez, Xuan S. Tam, and Eric R. Young, forthcoming in the International Economic Review. Consumer debt delinquency, unlike bankruptcy, is informal default: not paying back debt as initially promised. Delinquency occurs frequently, but many delinquent borrowers improve their credit status fairly quickly.

Visit: www.richmondfed.org/publications/research/research_digest

IS CASH STILL KING?
Despite new technologies for electronic
payments, cash has never been more popular.
What’s driving the demand?
By Tim Sablik

In Sweden, signs declaring "no cash accepted" or "cash free" are becoming commonplace.
In 2018, more than half of households surveyed by the Riksbank (Sweden’s central bank)
reported having encountered a business that refused to accept cash, compared with just
30 percent four years earlier. Many banks in Sweden no longer accept cash at the counter.
Customers can still rely on ATMs for their cash needs, but those are becoming increasingly
scarce as well, falling from 3,416 in 2012 to 2,850 in 2016.
In part, the country’s banks and businesses are
responding to changing consumer preferences. Use
of debit cards and Swish, Sweden’s real-time electronic payment system that launched in 2012, has
surged in recent years while cash usage has steadily
declined. Swedish law allows businesses to refuse
to accept cash, and many firms have championed
noncash payments as cheaper and safer than cash.
(Thieves have also responded to Sweden’s shift
toward a cashless society. According to a recent
article in The Atlantic, the country had only two
bank robberies in 2016 compared with more than
100 in 2008.)
Given the spread of payment innovations around
the world, one might expect that many other countries are following Sweden’s example. But when it
comes to cash, Sweden is an outlier. In a 2017 paper,
Clemens Jobst and Helmut Stix of Austria’s central
bank measured currency demand for the United

States and a handful of other countries going back
to 1875. They found that while currency in circulation as a share of GDP has fallen over the last 150
years, that decline has not been very large given the
evolution in payment technologies over the same
period. Moreover, starting in the 1980s, currency
demand in the United States actually began rising
again.
Over the last decade, dollars in circulation as a
share of GDP have nearly doubled from 5 percent
to 9 percent. Today there is $1.6 trillion in cash in
circulation, or roughly $4,800 for every person in
the United States. And the United States is hardly
unique; cash in circulation has surged in recent years
in much of the world despite the spread of new ways
to pay.
As the number of dollars in circulation continues to swell, it raises an important question: What
is driving the demand for cash? While monetary

authorities that issue currency, such as the Fed, have good data on how much currency is out there, determining what happens to cash once it's in the wild presents a much bigger challenge.

Medium of Exchange
One way to understand the demand for cash is to study how
people pay. Cash has a long history of facilitating exchange
going back to the coins minted from precious metals and
used by ancient civilizations. And despite the availability of
new electronic payment options today, cash remains popular with consumers. According to the Survey of Consumer
Payment Choice conducted by the Federal Reserve System,
consumers used cash in 27 percent of transactions in a typical month in 2017, making cash the second most popular
payment option after debit cards. That share has held fairly
constant since 2008 when the survey began.
The Diary of Consumer Payment Choice, also published by the Fed, provides a more detailed snapshot of
how consumers use cash. Participants are asked to record
information about every payment they make over a three-day period. According to the latest data from the 2016
Diary, consumers frequently relied on cash for low-value
purchases. Cash was used for more than half of all in-person
purchases costing less than $10. In contrast, consumers
used it in only 8 percent of purchases over $100.
“Consumers rate cash highly for being low cost and
easy to use,” says Claire Greene, a payments risk expert at
the Atlanta Fed who works on the Survey and the Diary.
“At the same time, there are other characteristics where
cash rates poorly. It’s dead last for record-keeping and
rates poorly in terms of security.” While consumers are
protected from fraudulent charges to their debit or credit
cards, cash comes with no such protection; once it’s lost or
stolen, it’s gone. This may explain why most consumers are
hesitant to carry enough cash for large purchases but are
happy to use it for small ones.
The data also indicate that cash is an important payment option for low-income households. According to
the 2016 Diary study, households that earned less than
$25,000 a year used cash for 43 percent of their payments.
Lower-income households are less likely to have access to
some payment methods such as credit cards, making cash
an attractive option. Indeed, a 2016 article by Zhu Wang
and Alexander Wolman of the Richmond Fed illustrated
the importance of cash for this demographic. Wang and
Wolman studied billions of transactions from a national
discount retail chain that primarily serves low-income
households. They found that while the share of cash
transactions declined from 2010-2013, cash was still used
in more than 70 percent of purchases at the stores.
These data indicate that cash remains an important
medium of exchange in the modern economy, but they
don’t explain the growing volume of dollars in circulation.
Since consumers mostly use cash for small-dollar purchases, they typically don't carry much cash on them. The respondents to the 2016 survey had an average of $219 in cash on their person or property. That still falls short of the $4,800 per capita of U.S. currency in circulation. Who holds the bulk of that money, and how is it being used?

[Chart: Currency Demand Driven by $100s — U.S. currency in circulation continues to grow despite new payment options. Value of $100 notes and of total notes in circulation, $billions, 1993-2017. SOURCE: Federal Reserve Board of Governors]

[Chart: $100s Recently Surpassed $1s — Volume of U.S. notes in circulation by denomination ($1, $2, $5, $10, $20, $50, and $100), billions of notes, 1994-2016. SOURCE: Federal Reserve Board of Governors]
Flight to Safety
In addition to being used for exchange, cash also acts as a
store of value. High-denomination notes are best suited
for this purpose, so tracking their circulation can provide a
sense of how important this aspect of cash is for explaining
currency demand.
In the United States, large-denomination notes seem
to be driving the growth in cash. The $100 bill accounts
for most of the total value of currency in circulation. (See
chart.) Demand for $100 bills has significantly outpaced
other denominations in terms of pure volume as well, averaging an annual growth rate in notes of nearly 8 percent
since 1995 compared with 3 percent to 4 percent for most
other notes. In fact, in 2017, the $100 bill surpassed the
$1 bill as the most widely circulated U.S. note. (See chart.)

While some of this demand may come from domestic
savers, researchers believe a significant share of $100 bills
are traveling overseas. Ruth Judson, an economist at the
Fed Board of Governors, has spent years attempting to
estimate how much currency is outside the United States
using available data on cross-border currency flows and
comparisons to similar economies whose currencies are
not as widely used abroad.
“We think that the significance of foreign demand is
unique to the dollar,” says Judson. “Other currencies are
also used outside their home countries, but as far as we can
tell, the dollar has the largest share of notes held outside
the country.”
One way to measure the importance of foreign demand
for the dollar is to compare currency circulation in Canada
and the United States. Both have similar payment technologies and are close to each other in geography and
economics, but the Canadian dollar is not as widely used
in other countries. In 2017, Canadian dollars in circulation
were equivalent to 4 percent of the country’s GDP, or less
than half of the U.S. share. Using this as a starting point,
Judson estimated in a 2017 paper that as much as 70 percent of U.S. dollars are held abroad. Additionally, Judson
estimated that as much as 60 percent of all Benjamins are
held by foreigners.
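The logic of that benchmark comparison can be sketched in a couple of lines. Judson's published estimates draw on much richer cross-border flow data, so this simple version comes in below her figure.

# A back-of-the-envelope version of the Canada benchmark described above. If
# domestic demand for U.S. cash looked like Canadian demand for Canadian cash,
# the "extra" U.S. currency could be attributed to holders abroad. Intuition only.

us_currency_to_gdp = 0.09        # U.S. dollars in circulation as a share of GDP (article)
benchmark_ratio = 0.04           # Canadian ratio, used as a stand-in for domestic demand

implied_foreign_share = 1 - benchmark_ratio / us_currency_to_gdp
print(f"Implied share of U.S. currency held abroad: {implied_foreign_share:.0%}")
# This crude benchmark puts the figure near 56 percent, below Judson's estimate of
# as much as 70 percent, which incorporates data on cross-border currency flows.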
“Overseas demand for U.S. dollars is likely driven by its
status as a safe asset,” says Judson. “Cash demand, especially from other countries, increases in times of political
and financial crisis.”
Some countries, such as Ecuador and Zimbabwe, have
adopted the dollar as their primary currency in response
to economic crises or pressures on their own currencies.
And U.S. Treasuries as well as dollars remain safe-haven
assets in times of global distress, like the financial crisis of
2007-2008. For example, Judson found that while international demand for dollars began to decline in 2002 after
the introduction of the euro, that trend reversed after the
2007-2008 crisis.
Crises prompt domestic households to seek the safety of
currency as well. In their 2017 paper, Jobst and Stix found
that even countries without strong international demand
for their currency experienced increased cash demand after
2008. Their analysis suggests that heightened uncertainty
following the global financial crisis may explain some of the
widespread currency growth over the last decade.
Another factor that may be contributing to the recent
growth in cash demand is the historically low cost of holding it. Inflation creates a disincentive to hold cash since
it erodes its value over time. But over the last decade,
the United States and much of the rest of the world have
experienced very low inflation and interest rates. Japan has
experienced low inflation and near-zero interest rates for
decades, which may partly explain why its ratio of currency
in circulation relative to its GDP is nearly 19 percent, the
highest among developed economies.
But even the best estimates of dollars held abroad or in
domestic safes or under mattresses still leave a significant
amount of cash unaccounted for. Some researchers argue
that there is another source for the growing demand for
high-denomination notes: the underground economy.
The Costs of Cash
“A key thing about cash is that it’s anonymous and hard to
trace,” says Kenneth Rogoff of Harvard University. In his
2016 book The Curse of Cash, he argued this makes cash the
ideal medium of exchange for consumers who value privacy, both for legitimate and illegitimate reasons. “There’s
a lot of evidence that cash plays a big role in tax evasion
and crime,” says Rogoff.
Even setting aside the U.S. dollars circulating overseas, Rogoff estimates that cash used in the domestic
economy to hide otherwise legal transactions from tax
authorities plays a significant role in roughly $500 billion
in lost federal revenues annually. Cash is also used in
illegal businesses like drug trade, human trafficking, and
terrorism.
In addition, high-denomination notes are targets for
counterfeiting, requiring monetary authorities to develop
new security features to stay ahead of counterfeiters.
Although authorities estimate that the volume of counterfeit dollars in circulation today is small, it has been
a costly problem for the United States in the past. (See
“The Counterfeiting Weapon,” Region Focus, First Quarter
2012.) And staying on top of new counterfeiting threats to
ensure today’s cash is genuine is not without cost.
The availability of large-denomination notes may also
impose costs on monetary policymakers, Rogoff argues.
During the Great Recession, the Fed lowered its interest
rate target to near zero, but some economists argued
it should have gone even lower. Cash poses a potential
problem for maintaining negative interest rates, however, because households and businesses can choose
to hold cash instead of assets that bear a negative rate
of interest. (See “Subzero Interest,” Econ Focus, First
Quarter 2016.) Of course, there is some cost to holding
large sums of cash, which means that in practice central
banks could reduce rates into slightly negative territory,
as the European Central Bank has done with its deposit
rate. Importantly, as Rogoff shows in his book, it is possible to use taxes and subsidies on deposits of cash at the
central bank to create significant space for negative rates
without otherwise changing anything about cash. But the
availability of cash, especially large-denomination notes,
nevertheless imposes some floor on how low interest
rates can go.
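The arithmetic behind that floor works roughly as follows; the storage cost used here is an assumption for illustration, not an estimate from the article.

# A sketch of the lower-bound logic described above, with made-up numbers. Holding
# large amounts of cash isn't free: vaults, insurance, and transport cost something
# each year. A saver abandons deposits for cash only when the negative deposit rate
# costs more than storing the cash would.

storage_cost = 0.005          # assume storing cash costs 0.5 percent per year
deposit_rates = [-0.001, -0.004, -0.006, -0.01]

for r in deposit_rates:
    better_off_in_cash = (-r) > storage_cost
    print(f"deposit rate {r:+.1%}: hold cash instead? {better_off_in_cash}")
# With a 0.5 percent storage cost, rates can fall modestly below zero (as the ECB's
# deposit rate has) before cash becomes the cheaper option; eliminating large notes
# would raise the storage cost and push that floor lower.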
Despite these costs, Rogoff doesn’t advocate for completely eliminating physical cash, at least not anytime in
the foreseeable future.
“It’s really about regulating it better,” he says. One
option to regulate cash would be to track it better at the
point of sale, using modern scanners to record serial numbers, for example. This would make cash less anonymous,

which Rogoff argues would largely eliminate much of the
demand for it.
“If there is no way for criminals to launder cash back
into the system, the demand for cash for tax evasion and
illegal transactions will drop,” he says.
Another way to potentially reduce some of the costs
associated with cash would be to eliminate higher-denomination notes. As data from the Fed Survey and Diary
studies suggest, cash in the legal economy is mainly used
for small-value purchases, which would be unaffected by
the elimination of large notes. On the other hand, underground economic activity is more reliant on the portability
of large-denomination notes. Eliminating large-denomination notes would also increase the cost of holding cash to
avoid negative interest rates, perhaps loosening the lower-bound constraint on monetary policymakers.
“For most people, it would be good to still have cash
for small transactions,” says Rogoff. “But that’s not an
argument for keeping $100 bills, many of which are concentrated in the wrong hands.”
Indeed, some regions have already moved to eliminate
high-denomination currency. The eurozone ended production of the 500 euro note in 2016, citing concerns that
it was being used to “facilitate illicit activities.”
Some have speculated that these steps could be
taken even further, by replacing cash with new digital
alternatives.
Cash 2.0?
Can new technology provide the benefits of cash without the costs? With the advent of cryptocurrencies like
Bitcoin, it’s a question more researchers have been asking. In light of the decline in cash use in Sweden, the
Riksbank has begun investigating the possibility of issuing
an electronic currency. A recent paper from the Bank
for International Settlements argued that issuing digital
currency could provide new monetary policy options for
central banks, but it would also raise new questions about
the central bank’s role in providing payment and banking
services to the public.
The Fed has stated it has no plans to issue a digital
currency, and in a forthcoming paper with Charles Kahn
of the University of Illinois at Urbana-Champaign and

Francisco Rivadeneyra of the Bank of Canada, Richmond
Fed economist Tsz-Nga Wong argued that the central bank wouldn’t have much comparative advantage in
issuing one anyway. Electronic money requires different
safeguards from cash to ensure that each transaction is
properly authorized and that payers are not attempting
to spend the same digital dollar twice. Decentralized networks, such as Bitcoin, solve this problem by recording all
transactions on a public ledger and relying on other users
to verify transactions. This verification process is slow and
energy inefficient — but moving the public ledger to the
central bank’s ledger wouldn’t make much difference.
Another solution is to rely on trusted intermediaries
to manage user accounts and verify transactions. This
system already exists in the private financial sector.
Whenever individuals make electronic payments using
the ACH network or a credit or debit card, financial
intermediaries verify the transaction and manage the
transfer of funds from payer to payee. In order to implement the same account-based verification system for
central bank-issued digital currency, individuals would
need to open accounts with the central bank. Today,
the Fed needs to settle only a relatively small number of
transactions between banks each day after banks have
aggregated their own transactions. It would be much
costlier for the Fed to directly manage a significantly
larger number of frequently used retail accounts for every
consumer in the country, a problem the private financial
system has already solved. Replacing bank accounts with
central bank electronic currency would also destroy the
social value created by the private financial system, which
reallocates balances in checking accounts and deposits to
business loans and investment.
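A toy sketch of that account-based verification: a trusted intermediary keeps the balances and refuses any payment the payer cannot fund, which is what rules out spending the same digital dollar twice. It is purely illustrative and does not describe any actual payment system.

# A toy intermediary ledger in the spirit of the account-based systems described
# above. The intermediary debits the payer and credits the payee only if the payer
# has the funds, so the same balance cannot be spent twice.

class Intermediary:
    def __init__(self, balances):
        self.balances = dict(balances)

    def pay(self, payer, payee, amount):
        if self.balances.get(payer, 0) < amount:
            return False                        # insufficient funds: payment refused
        self.balances[payer] -= amount          # debit payer
        self.balances[payee] = self.balances.get(payee, 0) + amount  # credit payee
        return True

bank = Intermediary({"alice": 100, "bob": 0})
print(bank.pay("alice", "bob", 80))   # True: the first $80 payment clears
print(bank.pay("alice", "bob", 80))   # False: the same $80 cannot be spent again
print(bank.balances)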
Physical cash also offers some advantages over digital
currency. It is not susceptible to theft or disruption from
cyberattacks, and it offers users anonymity that accountbased digital money lacks. While this anonymity facilitates illegal transactions, as Rogoff argues, it also grants
law-abiding consumers a measure of privacy. Overall, taking cash digital would not be a simple swap.
“I definitely don’t favor getting rid of cash anytime
soon,” says Rogoff. “In the end, it’s a cost-benefit analysis,
and the benefits of cash are not zero.”
EF

Readings
Jobst, Clemens, and Helmut Stix. “Doomed to Disappear?
The Surprising Return of Cash Across Time and Across
Countries.” Centre for Economic Policy Research Discussion
Paper No. DP12327, September 2017.
Judson, Ruth. “The Death of Cash? Not So Fast: Demand for U.S.
Currency at Home and Abroad, 1990-2016.” Paper presented at
the International Cash Conference, April 25-27, 2017.
Kahn, Charles M., Francisco Rivadeneyra, and Tsz-Nga Wong.
“Should the Central Bank Issue E-money?” Manuscript, May 2018.

O’Brien, Shaun. “Understanding Consumer Cash Use: Preliminary
Findings from the 2016 Diary of Consumer Payment Choice.”
Federal Reserve Bank of San Francisco Fednotes, Nov. 28, 2017.
Rogoff, Kenneth S. The Curse of Cash. Princeton, N.J.: Princeton
University Press, 2016.
Wang, Zhu, and Alexander L. Wolman. “Payment Choice and
Currency Use: Insights from Two Billion Retail Transactions.”
Journal of Monetary Economics, December 2016, vol. 84, pp. 94-115.


INTERVIEW

Chad Syverson
Editor’s Note: This is an abbreviated version of EF’s conver-

sation with Chad Syverson. For additional content, go to our
website: www.richmondfed.org/publications


Productivity growth drives economic growth, and for about the last 15 years, the United States and much of the world have experienced a significant productivity slowdown. The causes remain a puzzle to economists, and the predictions about when — or if — the United States will emerge from this slowdown vary widely.

Chad Syverson, an economist at the University of Chicago's Booth School of Business, has spent much of his career researching issues related to productivity at both the macro and micro levels. His research has shed light on why some firms are significantly more productive than others within the same industry, a long-standing question among economists working in the field of industrial organization. His work has also helped us better understand the process of learning by doing, why some firms have vertical ownership structures (and why those might not be very different from horizontal ownership structures), and the value of carefully done industry case studies. He recently has started researching the economics of artificial intelligence and what future developments in that area may mean for productivity growth.

Syverson joined the University of Chicago faculty in 2001, initially in the Department of Economics. In 2008, he moved to the university's Booth School of Business. He is currently an editor of the RAND Journal of Economics and was formerly an editor of the Journal of Industrial Economics. In addition to publishing prolifically in top professional journals, he is also the co-author of a microeconomics textbook with his colleagues Austan Goolsbee and Steven Levitt. Syverson earned undergraduate degrees in both economics and mechanical engineering and attributes his interest in productivity and firm dynamics to his engineering background.

Aaron Steelman interviewed Syverson in his office on the University of Chicago campus in June 2018.

EF: Some have argued that the productivity slowdown since the mid-2000s is due to mismeasurement issues — that some productivity growth hasn't been or isn't being captured. What does your work tell us about that?

Syverson: It tells us that the mismeasurement story, while plausible on its face, falls apart when examined. If productivity growth had actually been 1.5 percent greater than it has been measured since the mid-2000s, U.S. gross domestic product (GDP) would be conservatively $4 trillion higher than it is, or about $12,000 more per capita. So if you go with the mismeasurement story, that's the sort of number you're talking about and there are several reasons to believe you can't account for it.

First, the productivity slowdown has happened all over the world. When you look at the 30 Organization for Economic Co-operation and Development countries we have data for, there's no relationship between the size of the measured slowdown and how important IT-related goods — which most people think are the primary source of mismeasurement — are to a country's economy.

Second, people have tried to measure the value of IT-related goods. The largest estimate is about $900 billion in the United States. That doesn't get you even a quarter of the way toward that $4 trillion.

Third, the value added of the IT-related sector has grown by about $750 billion, adjusting for inflation, since the mid-2000s.
The mismeasurement hypothesis says that there are $4 trillion missing on top of that. So the question is: Do we think we're only getting $1 out of every $6 of activity there? That's a lot of mismeasurement.

Finally, there's the difference between gross domestic income (GDI) and GDP. GDI has been higher than GDP on average since the slowdown started, which would suggest that there's income, about $1 trillion cumulatively, that is not showing up in expenditures. But the problem is that was also true before the slowdown started. GDI was higher than GDP from 1998 through 2004, a period of relatively high-productivity growth. Moreover, the growth in income is coming from capital income, not wage income. That doesn't comport with the story some people are trying to tell, which is that companies are making stuff, they're paying their workers to produce it, but then they're effectively giving it away for free instead of selling it. But we know that they're actually making profits. We might not pay directly for a lot of IT services every time we use them, but we are paying for them indirectly.

As sensible as the mismeasurement hypothesis might sound on its face, when you add up everything, it just doesn't pass the stricter test you would want it to survive.

EF: What might we learn from past examples of the diffusion process of general-purpose technologies, such as electricity, when considering future productivity trends?

Syverson: I think there are a couple of lessons. One is that it is not unusual at all to have an extended period — and by extended, I mean measured in decades — of slow productivity growth, even after a major technology has been commercialized and a lot of its potential has been recognized. You saw that with the internal combustion engine, electrification, and early computers. There was about a quarter-century of pretty slow productivity growth before you saw the first acceleration in productivity coming from those technologies.

The second part is that you don't necessarily have just one acceleration and then it's over. There were multiple accelerations from electrification separated by a decade. To me, that says that just because we've had one IT-related acceleration, that doesn't necessarily mean it's over. We can have a second wave. Technologies don't just have to come, give what they have to give, and then go away. You can get multiple waves.

Why that would happen is tied to some of the complementarity stories where the first set of gains is driven by direct replacement of the old technology with the new technology. The second wave comes when people recognize there are completely different ways of doing things that the new technology made possible. So it's not that you are simply swapping the old widget for a better one. You are actually doing completely different things now that you have the new technology. This is related to Paul David's widely cited work on how the electric motor didn't just directly replace the steam engine. It eventually led to a complete change in the way factories were designed once people realized you could put a little motor on every single machine. The work didn't have to be stacked on many floors around the single power source any more.

EF: Would you consider artificial intelligence (AI) a general-purpose technology? If so, how do you assess the view that the returns on investment in AI have been disappointing?

Syverson: It's way too early. There are two things creating this lag for AI. First, aggregate AI capital right now is essentially zero. This stuff is really just starting to be used in production. A lot of it is simply experimental at this point. Second, a lot of it has to do with complementarity. People have to figure out what sorts of things AI can augment, and we're not anywhere down that road yet.

Erik Brynjolfsson, Daniel Rock, and I are going out on a limb a little bit by saying this, but we think AI checks the boxes for a general-purpose technology. And it seems that with some fairly modest applications of AI, the productivity slowdown goes away. Two applications that we look at in our paper are autonomous vehicles and call centers.

About 3.5 million people in the United States make their living as motor vehicle operators. We think maybe 2 million of those could be replaced by autonomous vehicles. There are 122 million people in private employment now, so just a quick calculation says that's an additional boost of 1.7 percent in labor productivity. But that's not going to happen overnight. If it happens over a decade, that's 0.17 percent per year.

About 2 million people work in call centers. Plausibly, 60 percent of those jobs could be replaced by AI. So when you do the same kind of calculation, that's an additional 1 percent increase in labor productivity; spread out over a decade, it's 0.1 percent per year. So, from those two applications alone, that's about a quarter of a percent annual acceleration for a decade. So you only need maybe six to eight more applications of that size and the slowdown is gone.
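A quick check of that back-of-the-envelope arithmetic, using the employment figures quoted above; the calculation simply treats the displaced share of private employment as a one-time productivity gain spread over a decade.

# Back-of-the-envelope productivity arithmetic from the two examples above.
# Small differences from the 1.7 and 0.17 percent figures in the interview
# reflect rounding.

private_employment = 122_000_000

vehicle_operators_replaced = 2_000_000
call_center_replaced = 0.60 * 2_000_000

for name, replaced in [("autonomous vehicles", vehicle_operators_replaced),
                       ("call centers", call_center_replaced)]:
    level_gain = replaced / private_employment   # one-time boost to output per worker
    per_year = level_gain / 10                   # spread over a decade
    print(f"{name}: {level_gain:.2%} boost, about {per_year:.2%} per year")

# Together the two examples add roughly a quarter of a percentage point to annual
# labor productivity growth for a decade, which is why a handful more applications
# of similar size would offset the measured slowdown.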
EF: Many explanations have been offered about why we observe very large productivity differences among firms in the same industry. As the use of micro-productivity data has grown, do you think economists have been converging on a consensus?
Chad Syverson

➤ Present Position
Eli B. and Harriet B. Williams Professor of Economics, University of Chicago Booth School of Business (at Chicago Booth and, previously, the Department of Economics, University of Chicago, since 2001)

➤ Education
Ph.D. (2001), University of Maryland; B.A., Economics, and B.S., Mechanical Engineering (1996), University of North Dakota

➤ Selected Publications
“Challenges to Mismeasurement Explanations for the U.S. Productivity Slowdown,” Journal of Economic Perspectives, 2017; “Healthcare Exceptionalism? Performance and Allocation in the U.S. Healthcare Sector,” American Economic Review, 2016 (with Amitabh Chandra, Amy Finkelstein, and Adam Sacarny); “Vertical Integration and Input Flows,” American Economic Review, 2014 (with Enghin Atalay and Ali Hortaçsu); “Toward an Understanding of Learning by Doing: Evidence from an Automobile Plant,” Journal of Political Economy, 2013 (with Steven Levitt and John List); “Market Structure and Productivity: A Concrete Example,” Journal of Political Economy, 2004; and numerous other papers

Syverson: An important fact is that the skewness of everything is increasing within industries. Size skewness, or concentration, is going up. Productivity skewness is going up. And earnings skewness is going up. To describe why our earnings are stretching out like this, why there is a bigger gap between the right tail and the median, I think you have to understand the phenomenon of increasing skewness in productivity and size. Is that technological? Is it policy? Is it a little bit of both? I don't think we really know the answer.

That said, I think it's less of a mystery now than it was when I started working on this many years ago back in graduate school. At that time, people would tell stories about maybe it's this, maybe it's that, maybe it's everything. There was a lot of speculation and not a lot of evidence. Since that time, I think the profession has been really good at systematically going after an answer.

The biggest change is the amount of work that has been done on management practice. There's still much more work to do, but increasing productivity dispersion seems related at least in part to management practices. Nick Bloom and John Van Reenen deserve a lot of credit for collecting systematic evidence on management practices in their World Management Survey program. The program has gathered information on tens of thousands of firms now. They and their co-authors have also been able to put supplemental management practice questions on the Census Bureau's annual survey of manufacturers.

So we have a lot more systematic data on that now, and there's no doubt productivity is correlated with certain kinds of management practices. People have also developed more causal evidence. There have actually been some randomized controlled trials where people intervened in management practices and saw productivity effects.

Is that all of the story? No, I don't think so. If I had to guess, it's probably 15 to 25 percent of the story. There's a lot more going on. I think part of it has to do with firm structure. I have done work on that.

I think we have gotten better at measuring quality differences in labor and a little bit better at measuring quality differences in capital, though I think capital mismeasurement is still the biggest issue with measuring productivity on the input side. A lot of work has also been done on the way we measure productivity on the demand side. We have learned about the importance of each side and what drives the fundamentals on both sides. That's going to help us get a more comprehensive answer of the causes of productivity dispersion within industries.

EF: Regarding management practices, it seems a little puzzling that lagging firms wouldn't have done more to replicate what more successful firms have done. You could imagine possible stories about why that may be the case, but it seems like an important question to answer.

Syverson: I agree, and there is some evidence we can look at from work done by Bloom and some colleagues. I'll call it the India experiments. They did a randomized controlled trial with textile producers in India. They provided management consulting practices to 28 plants — a small sample but still useful — and asked the management of every plant why they hadn't previously instituted some of the management practices that the consultants recommended. Basically, there were three classes of explanations. First, there was, I didn't know about them. The second was, I knew about them, but they're just not going to work here. The third was, they might work here, but I didn't have the time to put them into place. And then they tracked the plants over time and asked those who still had not adopted those practices why they hadn't. Obviously, plants are unlikely to still give the first answer, but you still had a lot giving answer two or three.

Now, maybe there's something special or unusual about the setting of that experiment. But I do think the fact that management is often just mistaken is a nontrivial factor. There is evidence coming out of this body of work that suggests companies don't know where they are in the distribution — they don't know whether they are well-managed or not. You can't fix yourself until you know you have a problem.

Also, I think even if you know you have a problem, a lot of firms can't simply say, well, we see this competing company over there has an inventory management tracking system that seems really useful, so we'll install it on our computers and our problems will be solved. That's not how it works. The firm that has adopted this practice has people trained in how to do it. It has changed its system, so that there's an interaction and a feedback loop between
what the system is recording and recommending and what
you do. If you just say, OK, we’re going to start collecting
these data now and then do nothing else, you’re not going
to get the productivity benefits that the company with the
complements is getting. I just think this stuff is way more
complex than people might initially think.
An example I talk about in class a lot is when many
mainline carriers in the United States tried to copy
Southwest and created little carriers offering low-cost
service. For instance, United had Ted and Delta had Song.
They failed because they copied a few superficial elements
of Southwest’s operations, but there was a lot of underlying stuff that Southwest did differently that they didn’t
replicate. I think that presents a more general lesson: You
need a lot of pieces working together to get the benefits,
and a lot of companies can’t manage to do that. It also typically requires you to continue doing what you have been
doing while you are changing your capital and people to do
things differently. That’s hard.
EF: It is often argued that the health care sector is
fundamentally different than other sectors of the
economy — and that these differences might produce relatively less variation in productivity within
the health care sector. What does your work suggest
about the idea of health care “exceptionalism”?
Syverson: In general, we think companies that do a better job of meeting the needs of their consumers at a low
price are going to gain market share, and those that don’t,
shrink and eventually go out of business. The null hypothesis seems to be that health care is so hopelessly messed
up that there is virtually no responsiveness of demand to
quality, however you would like to measure it. The claim
is that people don’t observe quality very well — and even if
they do, they might not trade off quality and price like we
think people do with consumer products, because there is
often a third-party payer, so people don’t care about price.
Also, there is a lot of government intervention in the
health care market, and governments can have priorities
that aren’t necessarily about moving market activity in an
efficient direction.
Amitabh Chandra, Amy Finkelstein, Adam Sacarny,
and I looked at whether demand responds to performance
differences using Medicare data. We looked at a number of
different ailments, including heart attacks, congestive heart
failure, pneumonia, and hip and knee replacements. In
every case, you see two patterns. One is that hospitals that
are better at treating those ailments treat more patients
with those ailments. Now, the causation can go either way
with that. However, we also see that being good at treating
an ailment today makes the hospital big tomorrow.
Second, responsiveness to quality is larger in instances
where patients have more scope for choice. When you’re
admitted through the emergency department, there’s still
a positive correlation between performance and demand,

but it’s even stronger when you’re not admitted through the
emergency department — in other words, when you had a
greater ability to choose. Half of the people on Medicare
in our data do not go to the hospital nearest to where they
live when they are having a heart attack. They go to one
farther away, and systematically the one they go to is better
at treating heart attacks than the one nearer to their house.
What we don’t know is the mechanism that drives that
response. We don’t know whether the patients choose a
hospital because they have previously heard something
from their doctor, or the ambulance drivers are making
the choice, or the patient’s family tells the ambulance drivers where to go. Probably all of those things are important.
It’s heartening that the market seems to be responsive
to performance differences. But, in addition, these performance differences are correlated with productivity — not
just outcomes but outcomes per unit input. The reallocation of demand across hospitals is making them more
efficient overall. It turns out that’s kind of by chance.
Patients don’t go to hospitals that get the same survival
rate with fewer inputs. They’re not going for productivity
per se; they’re going for performance. But performance is
correlated with productivity.
All of this is not to say that the health care market is
fine and we have nothing to worry about. It just says that
the mechanisms here aren’t fundamentally different than
they are in other markets that we think “work better.”
EF: What does your work tell us about why some
firms benefit from common ownership of production
chains, how those benefits can be measured, and how
large those benefits might be?
Syverson: In a paper with Enghin Atalay and Ali Hortaçsu,
we found that most vertical ownership structures are not
about transferring the physical good along the production
chain. Let’s say you are a company that owns a tire factory
and a car factory. When you look at instances analogous to
that, most of the tires that these companies are making are
not going to the parent company’s own car factory. They
are going to other car factories. In fact, when you look at
the median pair, there’s no transfer of goods at all. So the
obvious question becomes: Why do we observe all this
vertical ownership when it’s not facilitating the movement
of physical goods along a production chain? What we
speculated, and then offered some evidence for, was that
most of what’s moving in these ownership links are not
tangible products but intangible inputs, such as customer
lists, production techniques, or management skills.
If that story is right, it suggests a reinterpretation of
what vertical integration is usually about in a couple of ways.
One, physical goods flow upstream to downstream, but it
doesn’t mean intangibles have to flow in the same direction.
Management practices, for instance, could just as easily go
from the downstream unit to the upstream unit.
The second thing is that vertical expansions may not
be as unique as we have thought. They may not be particularly different from horizontal expansions. Horizontal
expansions tend to involve firms starting operations in
a related market, either geographically or in terms of
the goods produced. We’re saying that also applies to
vertical expansion. A firm’s input supplier is a related
business, and the distributor of its product is a related
business. So why couldn’t firms take their capital and say,
well, we think we could provide the input or distribute
the product just as well too? So, conceptually, it’s the
same thing as horizontal expansion. It’s just going in a
particular direction we call vertical because it’s along a
production chain. But it’s not about the actual object
that’s moving down the chain.
We were able to look at this issue, by the way, because
we had Commodity Flow Survey microdata, which were
just amazing. It’s a random sample of shipments from a
random sample of establishments in the goods-producing
and goods-conveying sectors of the U.S. economy. So, if
you make a physical object and send it somewhere, you’re
in the scope of the survey. We get to see, shipment by
shipment, what it is, how much it’s worth, how much it
weighs, and where it’s going. And then we can combine
that with the ownership information in the census to
know which are internal and which are external.
EF: You have done a lot of work examining the concrete industry. Why concrete? And what can we learn
about more general phenomena by looking at some
pretty narrow industries?
Syverson: And not just concrete, but ready-mix concrete in
particular. The reason is that it is a great laboratory for testing economic theory. It has a set of characteristics that not
many industries have. One, it’s geographically ubiquitous.
Two, because of the transport costs and the perishability of
the product, every one of these geographic markets is basically independent, and you can only ship this stuff so far.
So every city is basically a different market. Three, almost
all concrete is bought by the construction sector, but it’s
a small share of construction costs. What that means is
that construction activity is basically an exogenous mover
of concrete demand. Furthermore, there are a lot of firms
in the concrete business, so even a modest-sized market is
going to have multiple plants run by multiple companies.
This means that it is like an economist having a laboratory
full of petri dishes where you tweak each one and see what
happens differently in response to different stimuli. On top
of all that, the stuff is relatively easy to measure because it’s
physically homogeneous. It’s not a differentiated product,
so the prices are pretty comparable and the units are comparable. Just about everything you would want in an ideal,
clean case study exists in this industry.
So that’s why I have done so much work on concrete.
What can we learn more generally? You hear jokes about
people working in industrial organization (IO) looking at
case studies and discussing the ketchup literature, or the
yogurt literature, or in this case the ready-mix concrete
literature. I have tried to be clear about what I think the
broader lessons are from these case studies and what we
can learn from them. One of the first studies I did on
ready-mix concrete looked at whether variations in consumer scope for substitution show up in the equilibrium
productivity distribution. In other words, is it indeed
harder to be an inefficient producer in a market where
customers can more easily find the more efficient producers? The answer is yes. I think that is a more general
phenomenon; it’s just one I can measure much better in
that setting than in others. That said, I wrote a companion
paper that does look across manufacturing industries and
found similar things with different measures of substitutability to bolster the generalizability of the findings in the
earlier paper.
Also, Ali and I looked at vertical integration between
the cement and concrete industries. There is clearly an
element of industry specificity to that work. But, on the
other hand, those were sort of the poster-child industries
for the market foreclosure literature. So if you thought
that vertical mergers provide incentives for collusion and
anticompetitive foreclosures, this is where you would see
it. We looked, and we didn’t find it. That might make
you think differently about how likely you would find it in
other industries too.
I understand the case-study method, why it’s important
and what advantages it has. I don’t think people in IO
should cede ground to those who question the value of
individual case studies just because we haven’t done case
studies on the hundreds of other industries out there. We
should use what we know from a case study, along with
theory, to extend our understanding of economics as far
as we can.
EF: You were given access to detailed production
data from an auto assembly plant over the course of a
year. What were those data able to tell you about the
sources of learning by doing?
Syverson: Regarding the data, as a car is being made,
there are things constantly being recorded in the factory’s
information system, either in an automated fashion or by
workers manually inputting information. So Steve Levitt,
John List, and I were able to see every step of the way
whether the step went right or wrong. And then we looked
at subsequent defect rates for every car that was made –
about 190,000 over the course of a year.
Most of the empirical learning-by-doing literature has
looked at unit costs, such as how many worker hours it
took to make a unit, and then examined that over time and
traced out the learning curve that way — how fast people
adapted, for instance. Our more detailed data let us learn
something about where the knowledge resided inside the
organization and how it moved around.

There are a few facts that are important to understanding that in this setting. One is that a lot of learning
happened early, as is pretty common. So, for example,
defect rates fell 70 percent in the first two months of
production. Now, as it happens, the factory only ran on
one shift for the first two months of data we observed, and
then starting in the eighth week, the second shift started.
The second shift’s training was to watch the first shift
for one week. That was it. They weren’t on the line itself.
Once the second shift comes online, they are right at this
new, lower defect level that the first shift achieved. So you
immediately know that it’s not just being on the line for a
while that leads to improvements.
Two, there is a high correlation between defect rates
for a particular operation across shifts. Operations don’t
go wrong with equal frequency. There is a right tail of
processes that go wrong a lot of the time, and then there’s
a left tail where things never go wrong. That’s true across
shifts. So if some operation is problematic on the first
shift, it’s problematic on the second shift, even though the
workers are different.
Three, we were able to see absenteeism every day at the
factory and in which part of the production process the
absent workers were placed. There is a positive relationship between absenteeism rates and defect rates along a set
of operations on the line, but it’s very weak.
So those three things suggest it’s not the workers who
are carrying the knowledge, which, again, is substantial.
Defect rates over the course of the year came down 90
percent total.
What happened is the factory had a set of practices
to take knowledge from the workers and as quickly as
possible put it into the capital of the factory — either the
physical capital, such as changing a faulty part on the line,
or the organizational capital, such as workers conveying
information to each other.
EF: Following the accounting scandals of the early
2000s, there were proposals to require companies to
rotate auditing firms. You have looked at the possible effects of such a mandate. What did you find?
Similarly, what is the potential impact if one of the Big
Four firms were to fail, perhaps because of regulation
or legal action?
Syverson: As you said, Joseph Gerakos and I looked at
two things: mandated auditor rotation and what would
happen if one of the Big Four were to fail. The two issues
are related. A good way to start thinking about them is to
ask whether companies choose auditors based on certain
characteristics or do they just go with the lowest price.
The answer is clear that the auditors are differentiated to
the companies that hire them; companies are looking for
the best match.
When you move around prices exogenously, you see the
customer’s willingness to substitute based on those changes

in prices, and they’re not nearly as willing to substitute one
auditor for another as they would be if the auditors were
not differentiated. So it’s clear something is driving the
value of the match-specific relationship. What does that
mean? It means that if one of the Big Four were to fail,
there would be losses suffered by the audited companies
because you can’t just swap one for the other and not lose
that match-specific value. It also means if you mandate that
they switch auditors after a certain number of years, you
won’t have that match-specific value anymore.
All that said, there is another side to the mandated
switching policy. If you think too much coziness between
firm and auditor can create the potential for corruption,
there’s value in eliminating that. We are not trying to measure that or saying that it’s zero. We are simply saying that
on the other side of the scale is a real cost.
EF: What do you think are some of the big open questions in IO and understanding firm dynamics?
Syverson: With IO, I would like people to pay greater
attention to more general lessons we might be able to take
from case studies. That could involve adding some comment in the paper and maybe writing a companion paper. I
would also like people to avoid thinking that any empirical
work that involves more than one industry is ipso facto
flawed. I think there is a little too much stridency along
that line — not across the board, but I would like to see
people be more accepting of some broader approaches.
One really positive move I’ve seen in IO over the past 10
years is I think the field has moved toward answering more
important questions. That’s not to say the questions were
unimportant before, but I think we’re moving in a good
direction. As I tell people at IO conferences, other fields
are doing IO now. Look at macro and finance and development, just to name a few. They’re trying to answer IO
questions. And in part I worry that they’re doing it because
we haven’t done enough. I think people working in IO can
bring useful insights to the conversations people in other
fields are having.
In terms of firm dynamics, I think we still have further
to go to explain productivity dispersion, in particular
what’s creating this increase in skewness. I also think the
micro aspects of the productivity slowdown are still a
mystery. We have some understanding of these issues, but
there’s a lot we don’t know.
EF: Do you think being an engineer might have affected
your choice of research interests as an economist?
Syverson: There is no doubt. I got into productivity in
grad school because of my engineering background. I was
a mechanical engineer. I like looking at how systems work
together to produce something and how those systems can
be improved. Also, as an engineer, it’s simply fun to go to
factories and see how things are done.
EF
ECONOMICHISTORY

The Great Telegraph Breakthrough of 1866
The transatlantic telegraph cable amounted to the information revolution of
the day, tying global markets together in unprecedented ways
BY HELEN FESSENDEN

At the height of summer in 1866, U.S. newspapers
were abuzz with the news of a technological marvel:
A transatlantic telegraph cable successfully linked
the United States with Great Britain. Completed on July 27,
the cable generated congratulatory headlines across the
country and ushered in a new era of “real-time” journalism.
“Since Sunday morning we may say that America has
been in direct telegraphic communication with Europe,”
announced the New York Herald on July 31. “Intelligence of
vast importance to the interests of the latter continent … has
reached us on the submarine wire.”
Rather than taking a week or more by ship, this information was transmitted within a day. And it wasn’t just
about war and foreign intrigue but about the markets
connecting the two continents. In record time, the prices
of commodities traded on both sides of the ocean could
be transmitted to merchants who needed that information
to buy or sell their product. Newspapers at the time noted
this particular salience for commerce, with the New York
Herald commenting that the “cable and the news which
was flashed over it exerted a controlling influence in business circles,” including in grain, coffee, cotton, and gold.
What the Herald called a “controlling influence” has
relevance for economists today in understanding how technology and information intersect in the context of information frictions. These frictions occur when buyers and sellers
lack timely access to information that enables markets to
function efficiently, such as prices or the drivers of supply
and demand. In the context of trade, these frictions can
lead importers and exporters to misjudge markets and
misprice goods. This can produce a deadweight loss, when
diminished efficiency means that both sides are unable to
maximize the gains from trade — similar to the effect of
formal trade barriers, such as tariffs.
Economists have been increasingly studying the role
of technology, in particular, as a way to break down
information frictions and make markets more transparent. This field of inquiry applies not just to trade but to
any kind of economic activity, especially when real-time
information is critical but difficult to find. For example,
economists have looked at the effect of Internet shopping on life insurance markets — cheaper on net for
consumers, according to Jeffrey Brown of the University
of Illinois at Urbana-Champaign and Austan Goolsbee
of the University of Chicago. As these and other studies suggest, the speed and ease of online shopping can
reduce these frictions for consumers.
To anyone who surfs websites to shop, these insights
are intuitive. But as the case of the transatlantic telegraph
cable shows, history is rich with examples of how earlier
breakthroughs had similar effects. In a stroke, the cable
helped reshape many U.S. industries, including one of
the biggest exports, raw cotton, ultimately growing U.S.
exports through increased efficiency.
This story has special resonance in the Fed’s Fifth
District, especially in the Carolinas, where the cotton
industry recovered with surprising speed in the years
following the Civil War. Even though cotton production
and exports sharply fell during the war, both rebounded to
prewar levels by 1870. In particular, the communication
revolution that the telegraph ushered in helped turn splintered local markets into a national network, leading to the
1871 founding of the New York Cotton Exchange.
Missed Connections
By the time the cable joined the two sides of the Atlantic,
the telegraph’s reach had been expanding in the United
States for more than two decades. In 1844, inventor Samuel
Morse attempted an experiment to see whether electromagnetism could be applied to telecommunications,
resulting in the first telegraph line, between Washington,
D.C., and Baltimore, on which he famously tapped out “What
hath God wrought?” By 1851, there were 75 companies
that connected major U.S. cities through multilateral
monopolies, in which different lines often competed on
the same links but cooperated via connecting lines. This
hodgepodge of networks led to poor and overlapping service, which was gradually resolved through greater system
integration and horizontal integration by the late 1850s.
Despite this progress on the domestic front, it took
multiple attempts, starting in 1857, for engineers to succeed in laying the transatlantic cable amid challenges
posed by bad weather and deep-sea terrain. The string
of failures fed growing public pessimism; there was even
speculation that the idea of a working connection was
a hoax. But on the fifth try, under the supervision of
financier Cyrus Field, a cable between Newfoundland
and Ireland finally linked the two continents. The first
messages transmitted included a congratulatory note from
Queen Victoria, news of Otto von Bismarck’s victory
over the Austrian army — and cotton prices, which were
quoted in both New York and Liverpool.

Why were cotton prices so prominent in those
initial reports? Most cotton was sent to U.S. ports
for export, with New York City as the most important hub linking U.S. producers to importers in
England. In turn, British textile workers spun
raw cotton into finished cloth, which was sold for
domestic consumption and for export. Prior to the
transatlantic cable, however, there was often a lag
between the price of cotton quoted in Liverpool
and what was quoted in New York, often by a week
or more, depending entirely on ship travel. One
common problem was that the information on foreign demand that New York merchants got from
Britain was outdated, so it was difficult to make
accurate purchasing decisions. Moreover, foreign
demand fluctuated considerably, especially on the
European continent. (Building up storage capacity
could only partly address this issue, due to the fire
hazard posed by cotton and prohibitive construction costs.) In short, this was a classic case of information
frictions causing inefficiencies in trade.
At the same time, the cotton trade was adjusting to profound shocks on both the supply and demand side. Prior
to the Civil War, U.S. cotton production — supported
almost entirely by African-American slave labor — rapidly
expanded to meet growing demand abroad for textiles. In
1860, about 70 percent of U.S. raw cotton was shipped
to Britain, which came to almost 60 percent of all U.S.
exports in terms of dollar value. On Britain’s side, U.S.
cotton was an overwhelming share (almost 90 percent) of
all cotton imports and highly favored due to its strength
and high quality.
This changed abruptly with the onset of the Civil War
and the highly effective Union blockade, which caused
cotton exports to drop by more than 90 percent within
a year. One solution for Britain was to cultivate new
sources for cotton, including India, which soon became
a leading supplier. But once the war and blockade ended,
foreign demand for U.S. cotton rebounded. With the
abolition of slavery, sharecropping became the dominant
labor arrangement in the South. Postwar production and
exports grew quickly enough that by 1870 they reached
their volumes of the late 1850s.
What Hath Morse Wrought?
In several recent papers, Massachusetts Institute of
Technology economist Claudia Steinwender has studied
the effects of the transatlantic telegraph breakthrough of
July 1866, as a critical positive shock to cotton markets.
The fact that this shock was instant and independent of
outside economic conditions, she notes, makes it easier
to see how it affected prices and markets right away. And
indeed, by comparing prices on both sides of the Atlantic,
she found there was an abrupt change. Whereas the average difference between New York and Liverpool prices
was 2.56 pence per pound of cotton prior to the cable, it fell to 1.65 pence per pound — a drop of more than a third — right after. Furthermore, the transatlantic price differences were much less subject to major swings.

[Image: Transatlantic telegraph cable arrives at Heart's Content, Newfoundland, July 27, 1866. Engraving by unknown artist.]
In turn, thanks to more timely and accurate information, New York traders were better able to adjust export
volumes to meet fluctuations in foreign demand. Rather
than spend money on costly storage, which required
leaving some of their product idle, exporters could calibrate their shipments more efficiently. In Steinwender’s
calculations, this boosted average daily cotton exports by
37 percent. The variance in daily volume increased even
more, by 114 percent — reflecting the fact that exporters
were able to make these adjustments quickly. Overall,
she concluded, the cotton trade experienced an 8 percent
efficiency gain in annual export value, mostly from the
reduced variations in price differences due to the cable.
Put another way, this efficiency gain was equivalent to a
20 percent drop in storage costs, or the elimination of a
7 percent ad valorem tariff.
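As a check on the headline change, the narrowing of the price gap can be reproduced from the two pence figures quoted above. The snippet below is just that arithmetic, not a calculation from Steinwender's underlying data.

```python
# Quick check of the reported narrowing of the New York-Liverpool cotton
# price gap after the 1866 cable (figures as quoted in the article).
gap_before = 2.56  # pence per pound of cotton, before the cable
gap_after = 1.65   # pence per pound of cotton, after the cable

drop = (gap_before - gap_after) / gap_before
print(f"The average transatlantic price gap fell by about {drop:.0%}")
# Prints roughly 36 percent, i.e., "more than a third."
```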
“This is a case of how a technological breakthrough
addressed a classic puzzle in trade,” says Steinwender.
“Information about foreign demand is not a given.
Exporters don’t know how much those markets need and
how much they will pay. So how do you know how much
you can supply those markets?”
In a recent paper co-authored with Columbia University’s
Réka Juhász, Steinwender extended this analysis to see how
the telegraph’s information revolution affected the global
textile industry’s supply chain. They found that its impact
was especially concentrated in boosting trade in intermediate goods like yarn and plain cloth, for which information
could be most easily transmitted by telegraph rather than
require the inspection of physical samples. More broadly,
the telegraph helped diffuse information about the technology used in the production process.
The Real-Time Effect
As this work suggests, the transatlantic telegraph cable
had a profound impact on the cotton trade. But even
before 1866, the telegraph was reshaping domestic markets as well.
To be sure, the telegraph was too pricey for frequent
personal use. One reason why prices stayed relatively high
was that they were largely set by Western Union, which
had become the dominant provider during the Civil War
and consolidated its monopoly status by 1866; until 1900, it
enjoyed a market share of 90 percent or more in each state.
In those decades, rates fell from $1.09 to $0.30 per message,
but Western Union still netted $0.30 to $0.40 per dollar of
revenue. (For comparison, mail postage was only pennies,
while the average hourly wage in 1901 was around $0.25.)
Because of the telegraph’s real-time value, however, certain industries — notably railways, newspapers, and finance
— quickly found important applications in the 1840s and
1850s. The instant transmission of prices in commodities
markets and financial assets, for example, helped cut out
middlemen who used to benefit from arbitrage, while
wholesalers and retailers became more tightly linked in a
truly national economy. The telegraph also aided the railway industry by allowing single tracking through timely signaling, rather than requiring two tracks to avoid collisions.
This innovation facilitated the transport of goods across
the country as it became linked by rail; by the estimation
of economist Alexander Field, the efficiency gain came to
around 7 percent of GDP by 1890.
Meanwhile, beyond cotton, the transatlantic cable’s
effects could be seen in other pockets of global markets. One case in financial markets was the common
shares of the New York and Erie Railroad, which were
traded in both Britain and the United States. Economist
Christopher Hoag of Trinity College has studied how
the advent of the cable equalized share prices, finding
the telegraph was correlated with a reduction in the
transatlantic difference in prices from 5 percent to 10
percent before to 2 percent to 3 percent after. U.S.
bonds that traded in U.S. and London markets also saw
their prices converge. More broadly, the telegraph cable
played a direct role in stimulating trade in general in the
latter part of the 19th century, especially in the years
immediately after 1866, due to improved coordination of
shipping and timelier transmission of market-sensitive
information, according to Trent University economists
Byron Lew and Bruce Cater.

Cotton’s Revival
Postwar cotton production and exports in the South,
including in the Carolinas, both rebounded quickly even
as other cotton-producing countries expanded their reach.
Did the efficiency gains in exports resulting from the
telegraph cable play a role in this domestic recovery?
According to Steinwender, a very rough estimate is that
the United States benefited more on net than Britain,
receiving perhaps 75 percent of efficiency gains. “But as
to how this was distributed across producers, middlemen,
and speculators is harder to resolve,” she adds. “The data
don’t provide a clear answer on how the gains from higher
exports and higher prices were distributed domestically.”
More broadly, however, the telegraph’s information
revolution was one of the factors behind another market
innovation — the introduction of futures trading in 1871
with the New York Cotton Exchange. With a telegraph
network connecting London with New York and the
major cotton centers in the South, merchants could conduct spot and futures trading based on multiple reports
a day. The exchange played a leading role in cotton market integration in the following years in its function as a
clearing house, reducing the role of local middlemen (who
charged commissions) and helping regional growers market crops nationally. Notably, the exchange also allowed
merchants to hedge through futures trading, which was
especially important given the volatility of cotton prices;
once a commodity was hedged, it was easier for merchants
and shippers to secure credit. In turn, the growth of a
nationally integrated cotton market helped spur the development of North Carolina’s textile sector in the late 19th
century as raw cotton from across the South was diverted
to domestic textile production.
The disruptive role of technology in this era did not go
unnoticed by one observer at the time. In an 1870 report,
William Forwood, a Liverpool Chamber of Commerce official, addressed the Civil War’s effects on supply, demand,
and prices and the broader global response. Amid the turmoil in the cotton market, he concluded, the higher prices
resulting from the wartime drop in U.S. supply brought
in new producers, while advances in communication and
transportation encouraged activity in previously quiet markets, not to mention more efficient cultivation. “As water
finds its level, so will price regulate supply,” he wrote.
“[B]ut these maxims have never been so fully demonstrated
as during the crisis through which the greatest trade of the
world has gone during the past 10 years.”
EF

Readings

Field, Alexander James. “The Magnetic Telegraph, Price and
Quantity Data, and the New Management of Capital.” Journal of
Economic History, June 1992, vol. 52, no. 2, pp. 401-413.

Hoag, Christopher. “The Atlantic Telegraph Cable and Capital
Market Information Flows.” Journal of Economic History, June 2006,
vol. 66, no. 2, pp. 342-353.

Garbade, Kenneth D., and William L. Silber. “Technology,
Communication and the Performance of Financial Markets:
1840-1975.” Journal of Finance, June 1978, vol. 33, no. 3, pp. 819-832.

Steinwender, Claudia. “Real Effects of Information Frictions:
When the States and the Kingdom Became United.” American
Economic Review, March 2018, vol. 108, no. 3., pp. 657-696.

BOOKREVIEW

When the Robots Come
THE FUTURE OF WORK: ROBOTS, AI,
AND AUTOMATION
BY DARRELL M. WEST, WASHINGTON,
D.C.: BROOKINGS INSTITUTION PRESS,
2018, 205 PAGES
REVIEWED BY RENEE HALTOM

Retail cashiers make up roughly 2.2 percent of the
U.S. labor force. But technologists predict they
will eventually be replaced by microchips that
debit the accounts of shoppers as they exit stores.
The technology to replace harder-to-automate jobs —
paralegals, radiologists, even jazz musicians — is starting
to emerge. The prospects for employment in general are
bleak, argues Darrell West, director of governance studies at the Brookings Institution, in The Future of Work:
Robots, AI, and Automation. “Unless we get serious about
impending economic transformations, we may end up in
a dire situation of widespread inequality, social conflict,
political unrest, and a repressive government to deal with
the resulting chaos.”
West details advances that have the potential to change
virtually any personal or professional task through robotics, artificial intelligence, and the “Internet of things”
(generally defined as fixed and mobile devices connected
to the Internet). Most of this is thus far conceptual, so
the ultimate labor impacts are murky. Estimates of the
share of existing jobs that will be automated in the next
three decades range from around a tenth to one-half. Some
experts predict a high chance that robots will eventually
outperform humans at all tasks, from surgery to writing
best-selling novels.
Such trends clearly command some rethinking about
the role of work in our economy and society. At the same
time, displacement of workers can’t be the end of the
story. Mentioned but unexplored by West is what robots
and AI will free up humans to do. Who anticipates robots’
needs, determines how they’ll be used, and manages them?
Who translates their output into actionable business decisions, firm-level strategies, and the next big innovations?
Without wading into complacency, it is worth noting
that history is on the side of labor. The literature on immigration, for example, has found that in many cases, native
wages rise. And historically, job loss to innovation has been
met with wage growth and new jobs in previously unimagined areas. Many of the nation’s 7.4 million tech occupation
jobs did not exist 30 years ago (and they tend to pay wages
far higher than the national average). In the end, economic
theory suggests things depend on the extent to which technology becomes cheaper than labor in the long run.

For labor to win in that equation, workers will need
greater skills. West advocates a new culture of lifelong
learning, a challenging task since retraining programs and
apprenticeships have struggled to provide skills that stay
relevant over time. A promising avenue may be investment
in soft skills, which are inherently more transferable across
tasks. West cites Massachusetts Institute of Technology
economist Andrew McAfee’s argument that the educational system needs to produce graduates who can negotiate, motivate, provide compassionate service and great
experiences, and intuit the next business problem several
steps in advance. One avenue that West doesn’t mention
is investments in early childhood education; there is evidence that students who lack soft skills early on only fall
further behind in that dimension.
While not explicitly endorsing all of them, West offers
a range of possible ways to buffer the costs to workers. He
would like to see the nation consider health, retirement,
and other benefits tied to “citizen accounts” that are portable across jobs and that could be credited for socially
beneficial activities such as volunteer work (as is done in the
United Kingdom). He also cites paid family leave; revamping the earned income tax credit to help the working poor;
expanding trade adjustment assistance to include technology disruptions; providing a universal basic income; and
deregulation of licensing requirements so that it is easier for
workers to change industries. West advocates a “solidarity
tax” on high net worth individuals to pay for much of this.
Most of these prescriptions are not specific to technology, and many are things society may want to consider anyway. But West makes a familiar and compelling case that
the political system may be slow to act. Whereas society
responded to disruptions resulting from the industrial revolution — with reforms ranging from worker safety to the
creation of primary elections to break up political power
— today the combination of political polarization and economic inequality may make consensus and then productive
change more difficult. West believes recent populist movements spurred in part by economic disenfranchisement
are only the beginning. He advocates reforms to make the
political system more representative, and these, too, are
worth consideration regardless of the scale of automation
to the extent that they make politics more fair.
Though one wonders if labor will become quite as irrelevant as West imagines, his is a comprehensive, though
rather high-level, review of the coming challenges and
proposed remedies. It is hard to imagine that most people
won’t be left far better off due to technological progress.
But West makes a compelling case that the extent to
which they are depends on how public and private decisions alike prepare us.
EF
DISTRICTDIGEST

Economic Trends Across the Region

The Opioid Epidemic, the Fifth District, and the Labor Force
BY SONYA WADDELL

In 2016, there were more than 63,600 drug overdose
deaths in the United States, 70 percent more than
the number of motor vehicle deaths the same year.
The age-adjusted rate of overdose deaths has more than
tripled since 1999. Of the deaths in 2016, about two-thirds were related to opioids; those deaths increased
fivefold since 1999.
Certain states in the Fifth Federal Reserve District
— which includes the District of Columbia, Maryland,
North Carolina, South Carolina, Virginia, and most of
West Virginia — have been particularly hard hit by the
increased opioid use and misuse. The most striking data
come out of West Virginia. At 52 deaths per 100,000
people, West Virginia had the highest drug overdose
death rate in the country in 2016, followed by Ohio at
39.1 deaths. In fact, three district jurisdictions — West
Virginia, Maryland, and D.C. — were in the top seven
states for fatal drug overdoses, and most of those were
opioid-related. (See chart.)
Many have tried to quantify the economic impact of
the national opioid crisis. For example, in an October
2016 article, Curtis Florence, Chao Zhou, Feijun Luo,
and Likang Xu of the Centers for Disease Control and
Prevention (CDC) estimated the national economic burden of prescription opioid abuse in 2013 (including health
care costs, criminal justice costs, and lost productivity
costs) to be $78.5 billion. In a later paper, Alex Brill and
Scott Ganz of the American Enterprise Institute and
Georgia Tech estimated the 2015 per capita state- and
county-level economic burden of the opioid crisis. They
estimated that the per capita nonmortality costs were

[Chart: Drug Overdose Death Rates in the Fifth District, deaths per 100,000 people, 2014-2016, for DC, MD, NC, SC, VA, and WV. Source: Centers for Disease Control and Prevention]
highest in D.C. ($493) and New Hampshire ($360), and
the highest per capita total costs (including mortality)
were in West Virginia ($4,378) and D.C. ($3,657).
Apart from the obvious public health concerns created
by the crisis, there are two primary economic reasons why
a Federal Reserve Bank such as the Richmond Fed seeks
to better understand the impact of the opioid crisis. First,
a Reserve Bank is tasked with understanding economic
conditions in its region, and identifying any economic
impact of the use and misuse of opioids on the district’s
states and localities is part of that effort. Second, the
Fed’s dual mandate of maximum employment and stable
prices requires an understanding of any factor that might
affect labor markets. With historically low unemployment
and widespread stories of employers struggling to find
workers, it becomes even more relevant to understand the
extent to which the opioid crisis affects the pool of available labor throughout the nation.
Documenting the Crisis
The CDC looks at three primary categories of opioids:
natural and semisynthetic opioid analgesics that are often
available by prescription (such as morphine, codeine,
oxycodone, and hydrocodone); synthetic opioid analgesics
(such as tramadol and fentanyl); and heroin. According
to the CDC, processing and analyzing death certificates
indicates two distinct but interconnected trends in the
opioid epidemic: an increase in deaths from prescription
opioid overdoses over a 17-year period, and a recent surge
in illicit opioid overdoses driven mainly by heroin and illegally made fentanyl. (See chart.)
So what explains the national evolution of the
opioid crisis outlined by the CDC? First, there is
evidence that much of the addiction to opioids in
the United States began with a prescription. Three
out of four new heroin users report abusing prescription drugs before using heroin, and people who are
addicted to prescription opioids are 40 times more
likely to also be addicted to heroin. Further, opioid
prescription rates rose considerably for two decades
starting in the mid-1990s, just prior to the beginning
of the rise in opioid-related deaths.
In the Fifth District, overdose death rates have
been highest in West Virginia — where the rate of
opioid prescribing has also been high. Data from the
CDC indicate that at the peak of opioid prescribing in West Virginia (2009), medical professionals
in the state wrote 146.9 opioid prescriptions per
100 people. This was the highest prescription rate

in the country by a wide margin: The next highest rates were in Tennessee, Kentucky, and Alabama, which had rates in the 130s. On the other hand, conditions are changing. By 2016, the rate in West Virginia was down to 96 prescriptions per 100 people. This was still the highest in the Fifth District, and one of the highest in the nation, but it ranked below some states with much higher prescription rates: Alabama (121.0), Arkansas (114.6), Tennessee (107.5), Mississippi (105.6), Louisiana (98.1), Oklahoma (97.9), and Kentucky (97.2).

[Chart: Opioid Overdose Deaths by Type for the United States, 1999-2016, number of deaths from synthetic opioids, heroin, and natural/semisynthetic opioids. Source: Centers for Disease Control and Prevention/Kaiser Family Foundation]

[Chart: Opioid Overdose Deaths by Type for the Fifth District, 1999-2016, number of deaths from synthetic opioids, heroin, and natural/semisynthetic opioids. Source: Centers for Disease Control and Prevention/Kaiser Family Foundation]

Two other factors in the evolution of the crisis involved the reformulation of a specific drug and a decline in the price of heroin. One of the most widely prescribed drugs was OxyContin, made by Purdue Pharma, which contained a formulation that released the active ingredient (oxycodone) over the course of 12 hours. Users soon realized, however, that the extended release properties could be circumvented by crushing the pill into a powder that could be snorted, smoked, or liquefied and injected. In August 2010, Purdue Pharma stopped shipping its original formulation of OxyContin and began shipping exclusively a new formulation, what they called an abuse-deterrent formulation, which was much more difficult to abuse.

A few papers, including a January 2017 National Bureau of Economic Research (NBER) working paper by Abby Alpert of the Wharton School and David Powell and Rosalie Liccardo Pacula of the RAND Corporation, as well as a 2018 NBER working paper by William Evans and Ethan Lieber of the University of Notre Dame and Patrick Power of Boston University, indicate that rather than reduce overall opioid misuse or overdose deaths, this reformulation led to the substitution of heroin for other opioids. Evans, Lieber, and Power argue that each prevented prescription or semisynthetic opioid death was replaced with a heroin death. A big part of the reason was that the price of heroin fell from more than $3,000 per pure gram in 1981 to less than $500 per pure gram in 2012. This, in turn, was due primarily to vastly increased supply and increased purity, primarily coming from Mexico. In 2014, 79 percent of U.S. heroin came from Mexico, compared to 15 percent a decade earlier.

A final piece of the evolution came with the increase in overdoses from illicit synthetic opioids, such as fentanyl. Pharmaceutical fentanyl is a synthetic opioid pain reliever that is 50 to 100 times more potent than morphine and is thus often used to treat severe pain. But the increase in fentanyl-related overdoses and deaths in the United States arose from illicit fentanyl that is often mixed with heroin or cocaine, both with and without the user's knowledge. The rate of overdose deaths involving synthetic opioids other than methadone doubled from 2015 to 2016 and confiscations of fentanyl have been on the rise.

Although the national pattern in the evolution of the opioid crisis holds true in the Fifth District overall (see chart), it is not consistent across states. In West Virginia, for example, the natural and semisynthetic opioid deaths are only just being overtaken by synthetic opioid deaths, and heroin use is far lower. (See chart on next page.) In the District of Columbia, however, heroin overdose rates are well above those of prescription drug rates. (See chart on next page.)

Effect of Opioid Use on the Labor Force
In May 2018, the U.S. unemployment rate fell to 3.8 percent — a rate so low that it has been seen only a handful of times in the 70-year history of the series. Yet the share of the population aged 25 to 54 years — the prime working-age population — in the labor force has fallen from a high of almost 85 percent in the late 1990s to less
than 81 percent by the end of 2015, although it has since risen to around 82 percent. There are reports that drug use explains much of the decline in labor force participation, and, in fact, many employers report high rates of drug test failure among job applicants. The evidence, however, is mixed.

[Chart: Opioid Overdose Deaths by Type in West Virginia, 1999-2016, number of deaths from natural and semisynthetic opioids, heroin, and synthetic opioids. Source: Centers for Disease Control and Prevention/Kaiser Family Foundation]

[Chart: Drug Overdose Death Rates: Prescription vs. Heroin, deaths per 100,000 people in 2016, for DC, MD, NC, SC, VA, and WV. Source: Centers for Disease Control and Prevention]

Most of the work done to disentangle the relationship between opioid use and employment outcomes corroborates the intuition that higher overdose rates and higher prescription rates are correlated with worse employment outcomes. In one of the most cited papers, published by the Brookings Institution in 2017, Alan Krueger of Princeton University reported two major findings. First, in a survey of 571 prime-aged men out of the labor force, 31 percent reported taking prescription pain medication on the previous day. Further, nearly 80 percent of those who took prescription pain medication in the initial survey also reported taking it in a follow-up survey. Second, by linking 2015 county-level opioid prescription rates to individual labor force data in two time periods (1999-2001 and 2014-2016), Krueger finds that labor force participation is lower in areas of the United States with a higher rate of opioid prescriptions and that labor force participation fell more in the 15-year period in areas with a high rate of opioid prescriptions. These results hold when controlling for things like demographics, the share of employment in manufacturing, and fixed characteristics of counties.

Although the relationship between the high level of opioid prescription rates at the outset and sharper declines in labor force participation suggests the possibility of a causal link from opioid prescriptions to employment outcomes, that leap requires, among other things, differences in opioid prescription rates to be independent of factors related to the labor market. For example, both prescription rates and labor force participation rates could be related to, say, workers' health conditions. Krueger himself refers to the results as “preliminary and highly speculative.”

Another widely discussed work is that of Anne Case and Angus Deaton of Princeton University published in 2017. They document, among other things, a rise in mortality predominantly among white, non-Hispanic, lower-educated Americans due to drugs, alcohol, and suicide. They refer to these as “deaths of despair,” and they narrate, in their words, a “preliminary but plausible story in which cumulative disadvantage from one birth cohort to the next — in the labor market, in marriage and child outcomes, and in health — is triggered by progressively worsening labor market opportunities at the time of entry for whites with low levels of education.” With respect to opioids, they argue that the prescription of opioids for chronic pain was not a fundamental factor but added “fuel to the flame,” making the epidemic much worse than it otherwise would have been. In other words, the opioid epidemic is a symptom of a larger problem.

The question of whether bad economic circumstances lead to higher opioid use fits into a larger literature that works to understand the effect of changing economic circumstances on health outcomes. The results of these analyses are mixed. Some earlier work by Christopher Ruhm of the University of Virginia suggests that recessions might improve health outcomes because, for example, unemployed people may have more leisure time for physical activity. On the other hand, other researchers have shown a negative effect of individual job displacement on health outcomes. Recently, Kerwin Kofi Charles, Erik Hurst, and Mariel Schwartz of the University of Chicago found that a decline in manufacturing in a local area in the 2000s had large and persistent negative effects on employment rates, hours worked, and wages and that declining local manufacturing employment increased opioid use and
deaths. Further, Ruhm, together with Alex Hollingsworth and Kosali Simon of Indiana University, reported in a 2017 NBER working paper that increased unemployment in a county increases opioid fatalities. In this paper, however, they do not address the possibility of reverse causality — that is, whether an increase in opioid fatalities has an adverse effect on employment outcomes.
In other words, in this economy in which firms struggle to
find skilled workers to fill vacancies, is the opioid epidemic
further restricting our pool of available labor?
To answer this question, Janet Currie and Jonas Jin of Princeton University and Molly Schnell of Stanford University used quarterly county-level data on opioid prescription rates and employment-to-population ratios, employing an econometric technique designed to tease out causality. They find no effect of opioids on the employment-to-population ratio for men. For women, they find that a doubling of opioid prescriptions would lead to a 3.8 percent increase in employment in counties with education above the mean and a 5.2 percent increase in counties with education below the mean. Thus, they argue that although opioids are addictive and dangerous, they may allow some women to work who would otherwise leave the labor force.
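To illustrate the kind of county-level panel exercise described above, the sketch below runs a simple two-way fixed-effects regression of an employment-to-population ratio on an opioid prescription rate. The data, variable names, and ordinary-least-squares specification are hypothetical assumptions; this is not the estimation strategy used by Currie, Jin, and Schnell, whose approach is built to isolate causality rather than mere correlation.

```python
# Minimal sketch (illustrative only): a two-way fixed-effects panel regression
# relating county employment-to-population ratios to opioid prescription rates.
# The data and specification are hypothetical, not any paper's actual model.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical quarterly county panel: one row per county-quarter.
df = pd.DataFrame({
    "county":        ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "quarter":       ["Q1", "Q2", "Q3", "Q4"] * 3,
    "emp_pop_ratio": [0.58, 0.57, 0.57, 0.56,
                      0.61, 0.61, 0.60, 0.60,
                      0.55, 0.54, 0.53, 0.53],   # employment-to-population ratio
    "rx_per_capita": [1.10, 1.15, 1.18, 1.22,
                      0.80, 0.82, 0.83, 0.85,
                      1.40, 1.48, 1.52, 1.60],   # opioid prescriptions per person
})

# County fixed effects absorb time-invariant local characteristics;
# quarter fixed effects absorb shocks common to all counties.
model = smf.ols("emp_pop_ratio ~ rx_per_capita + C(county) + C(quarter)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["county"]})

# The coefficient is a conditional association; a causal reading requires more
# (for example, an instrument), which is exactly what the studies grapple with.
print(result.params["rx_per_capita"], result.bse["rx_per_capita"])
```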
In contrast, Dionissi Aliprantis and Mark Schweitzer
from the Federal Reserve Bank of Cleveland — whose
Fed district has also been particularly impacted by the
opioid crisis — published a working paper in May 2018
that finds evidence that opioid availability does decrease
both employment and labor force participation. They
do not find that the opioid prescription rate affects the
number of unemployed in the same way, but — consistent with anecdotal reports — they do find that opioid
prescription levels affect the individual’s decision to
participate in the labor force at all. In other words, an increase in opioid prescriptions reduces the chance that someone will be employed, but rather than joining the ranks of the unemployed, that person drops out of the labor force altogether — that is, he or she stops looking for a job. They also find that opioids reduce participation rates more for prime-aged men in geographies with high prescription rates than in geographies with lower prescription rates. If these results hold, the implications are particularly significant for West Virginia, which, in addition to having a high rate of opioid prescriptions and drug overdoses, also has the lowest labor force participation rate of any state.

So why the different results? The answer is not clear. The two papers used different estimation strategies and different data, and researchers are still investigating where the different approaches might have led to different conclusions. There does seem to be a relationship between labor market outcomes and opioid prescriptions, but understanding the nature of that relationship empirically matters for policymaking, and the question of correlation versus causality is still an open one.
Where Do We Go Next?
Much remains to be understood about the crisis and
its effects. One area of uncertainty is the quality of
the data that we use. Two of the most commonly cited
data sources are the National Survey on Drug Use and
Health, which relies on self-reporting and excludes the
incarcerated or those living on the street, and overdose
death rates, which can be understated since many death
certificates in drug overdose cases do not specify the
drug involved. Furthermore, while the data we have —
prescription rates and overdose death rates — might be
correlated with the phenomena we are seeking to study,
such as misuse, abuse, or nonfatal overdose rates, they are
not the same. Better data on misuse and not just deaths
would help researchers to better understand the impact
of the crisis.
In addition, data limitations thus far require analysis
to be done at the county level. Could there be counties
where misuse is high among residents but where
prescription rates or overdose rates are low because, for
example, the high-prescribing doctors are in neighboring
counties or there is less illicit fentanyl on the market?
What does data at the county level not tell us about an
individual’s use of opioids or an individual’s relationship to
the labor market?
The paper by Currie, Jin, and Schnell brings into
question the causal relationship between prescription
opioid use and employment-to-population ratios. But
they do not address the relationship between heroin use
and labor market outcomes; it is not unreasonable to
think that while, in many cases, a prescription for opioids might enable a person to keep working, heroin use
might be a different story. As the national crisis evolves
from a prescription drug epidemic to an illicit drug
epidemic, researchers will need to find a way to better
understand the relationship between illicit drug use
and labor market participation. In other words, there is
much left to learn.
EF


State Data, Q4:17

                                                      DC         MD         NC         SC         VA         WV
Nonfarm Employment (000s)                          793.8    2,722.1    4,440.6    2,103.8    3,955.7      747.4
  Q/Q Percent Change                                 0.5       -0.2        0.4        0.7        0.0        0.3
  Y/Y Percent Change                                 1.0        0.4        1.6        1.6        0.6        0.1

Manufacturing Employment (000s)                      1.3      107.2      467.5      241.9      234.8       47.0
  Q/Q Percent Change                                 0.0        0.6       -0.1        0.3        0.4        1.0
  Y/Y Percent Change                                 8.3        1.2        0.7        2.2        0.8        0.8

Professional/Business Services Employment (000s)   166.8      443.8      627.9      280.9      732.2       66.1
  Q/Q Percent Change                                 0.1        0.0        1.9        1.5        0.1       -1.0
  Y/Y Percent Change                                 0.8        0.4        3.1        1.8        1.8        0.5

Government Employment (000s)                       239.0      503.3      734.5      366.8      716.1      153.6
  Q/Q Percent Change                                -0.4       -0.1       -0.4        0.0       -0.2       -0.1
  Y/Y Percent Change                                -1.0       -0.2        0.8        0.8        0.1       -2.5

Civilian Labor Force (000s)                        401.4    3,222.2    4,967.4    2,319.1    4,319.5      781.9
  Q/Q Percent Change                                 0.0       -0.1        0.2        0.1        0.0        0.3
  Y/Y Percent Change                                 1.2        0.7        1.6        0.9        1.1        0.2

Unemployment Rate (%)                                5.9        4.1        4.5        4.2        3.6        5.4
  Q3:17                                              6.1        4.0        4.4        4.2        3.7        5.2
  Q4:16                                              6.0        4.4        5.1        4.6        4.1        5.7

Real Personal Income ($Bil)                         47.5      321.6      397.6      181.2      409.7       61.2
  Q/Q Percent Change                                -0.2        0.4        0.6        0.5        0.3       -0.2
  Y/Y Percent Change                                 1.4        1.6        2.6        2.1        1.9        1.8

New Housing Units                                  2,347      2,817     16,367      7,844      7,792        603
  Q/Q Percent Change                                92.2      -41.5       -9.3      -12.7       -4.5      -14.6
  Y/Y Percent Change                               119.6       -5.5       25.7       13.7       26.8       -5.9

House Price Index (1980=100)                       865.8      469.7      366.2      375.2      454.3      237.0
  Q/Q Percent Change                                 1.2        1.0        0.6        1.2        0.6        1.8
  Y/Y Percent Change                                 7.2        4.1        6.1        6.6        3.7        2.6
NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms
reporting increase minus the percentage reporting decrease. The manufacturing composite index is a
weighted average of the shipments, new orders, and employment indexes. (A brief worked example
follows the sources below.)
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.
3) Manufacturing employment for DC is not seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics
Unemployment Rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
Building Permits: U.S. Census Bureau/Haver Analytics
House Prices: Federal Housing Finance Agency/Haver Analytics
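To make the diffusion index arithmetic in note 1 concrete, here is a minimal sketch in Python. The response counts and the equal component weights are illustrative assumptions, not the Richmond Fed's actual survey data or weighting scheme.

```python
# Illustrative computation of a diffusion index and a weighted composite.
# The response counts and component weights below are hypothetical.

def diffusion_index(increase, decrease, no_change):
    """Percent of firms reporting increase minus percent reporting decrease."""
    total = increase + decrease + no_change
    return 100.0 * (increase - decrease) / total

# Hypothetical survey of 100 manufacturers on shipments, new orders, and employment.
shipments  = diffusion_index(increase=45, decrease=20, no_change=35)   # -> 25.0
new_orders = diffusion_index(increase=40, decrease=25, no_change=35)   # -> 15.0
employment = diffusion_index(increase=30, decrease=20, no_change=50)   # -> 10.0

# Composite = weighted average of the component indexes (equal weights assumed here).
weights = {"shipments": 1 / 3, "new_orders": 1 / 3, "employment": 1 / 3}
composite = (weights["shipments"] * shipments
             + weights["new_orders"] * new_orders
             + weights["employment"] * employment)
print(round(composite, 1))  # 16.7 with these made-up numbers
```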

For more information, contact Joseph Mengedoth at (804) 697-2860 or e-mail joseph.mengedoth@rich.frb.org


[CHARTS: Fifth District and U.S. indicators, Fourth Quarter 2006 to Fourth Quarter 2017. Panels: Nonfarm Employment (change from prior year); Unemployment Rate; Real Personal Income (change from prior year); Nonfarm Employment, Major Metro Areas (change from prior year); Unemployment Rate, Major Metro Areas; New Housing Units (change from prior year); House Prices (change from prior year); FRB—Richmond Services Revenues Index; FRB—Richmond Manufacturing Composite Index. Series shown for the Fifth District and the United States, and for the Charlotte, Baltimore, and Washington metro areas.]

Metropolitan Area Data, Q4:17

                               Washington, DC    Baltimore, MD    Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)             2,708.8          1,412.2            107.6
  Q/Q Percent Change                      1.0              0.6              2.5
  Y/Y Percent Change                      1.3              0.6             -0.3

Unemployment Rate (%)                     3.6              4.2              4.3
  Q3:17                                   3.7              4.2              4.3
  Q4:16                                   3.9              4.5              4.6

New Housing Units                       6,799            1,149              298
  Q/Q Percent Change                      3.5            -49.9            -16.8
  Y/Y Percent Change                     46.7             -0.1             26.8

                               Asheville, NC     Charlotte, NC    Durham, NC
Nonfarm Employment (000s)               194.3          1,206.9            313.8
  Q/Q Percent Change                      2.2              2.4              1.2
  Y/Y Percent Change                      1.9              3.2              1.3

Unemployment Rate (%)                     3.7              4.2              4.0
  Q3:17                                   3.7              4.2              3.9
  Q4:16                                   4.1              4.7              4.4

New Housing Units                         689            5,660            1,124
  Q/Q Percent Change                    -16.6            -14.9             -0.9
  Y/Y Percent Change                     57.3             35.9             17.6

                               Greensboro-High Point, NC    Raleigh, NC    Wilmington, NC
Nonfarm Employment (000s)               363.0            626.3            126.6
  Q/Q Percent Change                      1.9              1.2              0.1
  Y/Y Percent Change                     -0.2              2.6              2.0

Unemployment Rate (%)                     4.8              4.0              4.3
  Q3:17                                   4.7              3.9              4.2
  Q4:16                                   5.1              4.3              4.7

New Housing Units                         591            3,457              691
  Q/Q Percent Change                    -20.8              4.5             44.9
  Y/Y Percent Change                     -4.7             14.6             17.9

NOTE: Nonfarm employment and new housing units are not seasonally adjusted. Unemployment rates are seasonally adjusted.


                               Winston-Salem, NC    Charleston, SC    Columbia, SC
Nonfarm Employment (000s)               266.4            356.3            397.5
  Q/Q Percent Change                      1.7              0.5              0.7
  Y/Y Percent Change                      0.3              2.3             -1.2

Unemployment Rate (%)                     4.3              3.6              4.3
  Q3:17                                   4.3              3.6              4.1
  Q4:16                                   4.7              4.0              4.3

New Housing Units                         635            1,611            1,026
  Q/Q Percent Change                    -49.8             -2.0            -17.7
  Y/Y Percent Change                    172.5             19.1             -1.0

                               Greenville, SC    Richmond, VA    Roanoke, VA
Nonfarm Employment (000s)               421.5            675.9            161.0
  Q/Q Percent Change                      1.8              0.4              0.8
  Y/Y Percent Change                      1.7              0.4             -1.0

Unemployment Rate (%)                     3.8              3.8              3.7
  Q3:17                                   3.9              3.8              3.9
  Q4:16                                   4.1              4.2              4.2

New Housing Units                       1,192            1,416              N/A
  Q/Q Percent Change                    -26.0            -24.5              N/A
  Y/Y Percent Change                     -6.6             48.6              N/A

                               Virginia Beach-Norfolk, VA    Charleston, WV    Huntington, WV
Nonfarm Employment (000s)               781.2            117.4            140.3
  Q/Q Percent Change                     -0.6              0.4              1.7
  Y/Y Percent Change                      0.3             -0.9              0.1

Unemployment Rate (%)                     4.0              5.5              5.6
  Q3:17                                   4.1              5.2              5.6
  Q4:16                                   4.6              5.6              6.0

New Housing Units                       1,361               30               52
  Q/Q Percent Change                      9.6              0.0              0.0
  Y/Y Percent Change                      4.7              0.0              0.0
					
				
For more information, contact Joseph Mengedoth at (804) 697-2860 or e-mail joseph.mengedoth@rich.frb.org

OPINION

Great Expectations
BY JOHN A. WEINBERG

Why hasn't inflation increased more quickly, given the strength of the economy? Part of the answer might be that firms and households don't expect inflation to increase more quickly.
Let’s start with how individual firms set prices. Under
an assumption of perfect competition, as you learned from
your Economics Principles textbook, firms don’t have any
pricing power; they just accept the market price, which
is determined by the demand for, and supply of, the good
being sold. But a textbook is about the only place you’ll
find perfect competition; in the real world, goods aren’t
identical, entering or exiting a market can be costly, and
information is far from complete. That means firms have
opportunities to seek to maximize their profits given their
costs, the demand for their goods, and the behavior of
their rivals.
There is currently some debate about the extent to
which the market power of the largest firms has increased
economy-wide and the ensuing effect on the overall price
level. There is little debate among economists, however,
about the role of expectations in determining the price
level. Beginning in the 1960s, a large body of research has
investigated the role that expectations play in dictating
the future path of inflation — and the “stagflation” of the
1970s, when unemployment and inflation rose together,
demonstrated how inflation expectations, once they are
embedded in household and business decisions, can make
it hard to bring inflation down.
What does this have to do with firms and prices? In
addition to competitive factors, firms also have to factor
in future inflation when making pricing decisions. If a
firm expects prices on average to rise by 3 percent over
the coming year, it will take into account the expected
increase in the costs of inputs and the prices of substitutes
when setting its own prices today. Multiply that across all
the firms in an economy, and expected inflation directly
influences actual inflation.
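To see the aggregation logic in the preceding paragraph, here is a stylized toy calculation in Python, not drawn from the article: each firm sets its price growth equal to the inflation it expects plus an idiosyncratic shock, and average realized inflation ends up tracking the common expectation.

```python
# Toy illustration (hypothetical numbers): when every firm builds the same
# expected inflation into its price setting, average realized inflation
# tracks that expectation apart from idiosyncratic noise.
import random

random.seed(0)

def realized_inflation(expected_inflation, n_firms=1_000, shock_sd=0.5):
    """Average price change when each firm sets price growth equal to
    expected inflation plus a firm-specific cost/demand shock (percentage points)."""
    changes = [expected_inflation + random.gauss(0.0, shock_sd) for _ in range(n_firms)]
    return sum(changes) / n_firms

for expected in (2.0, 3.0):
    print(expected, round(realized_inflation(expected), 2))
# With 1,000 firms, realized inflation lands close to 2.0 and 3.0 respectively,
# illustrating how expected inflation feeds directly into actual inflation.
```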
Temporary shocks can alter the path of inflation in
the short run. For example, suppose there is a significant
increase in the intensity of competition in a large sector
of the economy that unexpectedly depresses prices in that
sector. The deviation in that one sector — if big enough —
could hold down overall measured inflation for a period of
time. But in the long run, if inflation expectations remain
well-anchored, the underlying trend of low inflation will
eventually reassert itself. That is arguably what happened
last year when competition drove down the price of wireless
telephone plans; by some estimates, that decline accounted for nearly half of the decline in core consumer price
index inflation. In recent months, however, inflation has
been moving back toward the Fed’s 2 percent target, as the
Federal Open Market Committee believed it would.
Economists and policymakers can obtain indicators of
inflation expectations by asking people what they expect, or
they can infer expectations from market activity. In the first
category, a well-known survey of consumers conducted by the
University of Michigan indicates that inflation expectations
have been fairly stable, between 2.2 percent and 2.8 percent
in the last three years. In the second category, an important
measure is the 10-year “breakeven” rate, which compares
the yield of a 10-year Treasury bond to the yield of its
inflation-indexed equivalent, the 10-year Treasury Inflation-Protected Security (TIPS). This spread has ranged between
1.2 percent and 2.2 percent in the last three years.
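For readers who want the arithmetic spelled out, the breakeven rate is simply the nominal Treasury yield minus the TIPS yield at the same maturity. The short sketch below uses hypothetical yields, not current market data.

```python
# Hypothetical yields, in percent; the breakeven rate is simply their difference.
nominal_10yr_yield = 2.9   # conventional 10-year Treasury yield (assumed)
tips_10yr_yield = 0.8      # 10-year TIPS yield (assumed)

breakeven_inflation = nominal_10yr_yield - tips_10yr_yield
print(f"10-year breakeven inflation: {breakeven_inflation:.1f} percent")  # 2.1 percent
# As noted in the text, this spread also embeds an inflation risk premium,
# so it is not a pure measure of expected inflation.
```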
Survey-based measures tend to be higher than market-based measures, which brings me to an important point:
We shouldn’t interpret the level of any given indicator of
inflation expectations as the precise level of expectations
for the Fed’s benchmark measure of inflation, the index for
personal consumption expenditures (PCE). Consumers, for
example, might place different weights on various categories of goods than the weights used to calculate the PCE.
And the spread between TIPS and nominal bond yields
contains not only inflation expectations, but also a risk
premium, which is hard to isolate. What matters, then, is
not necessarily the level of any measure per se, but rather
the changes in that level. Given that levels have remained
steady, current inflation expectations appear well-anchored
in line with the Fed’s target.
The Fed’s inflation target is symmetric, which means we
are concerned about inflation persistently above or below
2 percent. Because core PCE inflation was below target for
quite some time, some observers and policymakers have
argued that we should now allow inflation to run above
2 percent for a while. But expectations have not drifted down
decisively despite inflation being relatively low. So a period
of above-target inflation to ensure stable expectations may
not be necessary, since they’re reasonably steady to begin
with. At the same time, while it may be encouraging that
expectations have remained well-anchored despite a number of disinflationary impulses since the Great Recession,
this was accomplished in part by unprecedented and unconventional monetary policy actions. Now, as the impulses
to inflation appear to be pushing in the other (upward)
direction, we have relatively little in the historical record to
tell us what might make expectations less stable — which
means we shouldn’t take their stability for granted.
EF
John A. Weinberg is a policy advisor at the Federal
Reserve Bank of Richmond.

NEXT ISSUE
Help Wanted

Many companies say they’re having a hard time finding employees.
Is the problem that there aren’t enough workers — in other words,
we’re simply in a tight labor market — or that workers don’t have
the right skills? The answer has implications for productivity, wage
growth, and inflation.

Leaving LIBOR

Trillions of dollars of financial contracts are based on an interest
rate known as LIBOR. But LIBOR, which was at the center of a
market-manipulation scandal in recent years, may disappear after
2021. Is the financial system ready?

Sustaining Sovereign Debt

The U.S. debt-to-GDP ratio is high and projected to grow in
coming decades. Throughout history, many sovereign nations
have defaulted on debt obligations they could no longer
honor — sometimes repeatedly — yet creditors continue to
lend to them. What enables nations to issue debt in light of
this uncertainty, and what are the costs of default once a debt
burden becomes unsustainable?

Economic History
Founded in 1876, Baltimore’s Johns Hopkins
University quickly became America’s
first research university. Its emphasis on
advanced research, doctoral education,
and academic publishing created a model
that leading universities in the United
States emulated by the turn of the century.
Today, America’s top research universities
are considered the best in the world.

Jargon Alert
Machine learning is a hot technology
making inroads into insurance, retail, health
care, and other sectors. But what the heck is
it and how does it work?

Interview
Antoinette Schoar of the Massachusetts
Institute of Technology discusses her
research on entrepreneurship, the influence
of artificial intelligence and big data on the
financial industry, and whether the housing
crisis was really a “subprime” crisis.

Visit us online:
www.richmondfed.org
•	To view each issue’s articles
and Web-exclusive content
• To view related Web links of
additional readings and
references
• To subscribe to our magazine
•	To request an email alert of
our online issue postings

Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261

Change Service Requested

To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.

Richmond Fed Research 2018

Working Papers Series

Economists at the Federal Reserve Bank of Richmond conduct research on a wide variety
of economic issues. Before that research makes its way into academic journals or our own
publications, it is often posted on the Bank’s website as part of the Working Papers series.
January 2018, No. 18-01
The Evolution of U.S. Monetary Policy
Robert L. Hetzel
January 2018, No. 18-02
Allan Meltzer: How He Underestimated
His Own Contribution to the Modern
Concept of a Central Bank
Robert L. Hetzel

February 2018, No. 18-06R
On the Measurement of Large Financial
Firm Resolvability (Revised July 2018)
Arantxa Jarque, John R. Walter, and
Jackson Evert
March 2018, No. 18-07
Asset Bubbles and Global Imbalances
Daisuke Ikeda and Toan Phan

February 2018, No. 18-03
The Costs of (sub)Sovereign Default Risk:
Evidence from Puerto Rico
Anusha Chari, Ryan Leary, and Toan Phan

March 2018, No. 18-08
The Fed’s Discount Window: An
Overview of Recent Data
Felix P. Ackon and Huberto M. Ennis

February 2018, No. 18-04
Regional Consumption Responses and
the Aggregate Fiscal Multiplier
Bill Dupor, Marios Karabarbounis,
Marianna Kudlyak, and M. Saif Mehkari

March 2018, No. 18-09
Temperature and Growth: A Panel
Analysis of the United States
Riccardo Colacito, Bridget Hoffman,
and Toan Phan

February 2018, No. 18-05
Bubbly Recessions
Siddhartha Biswas, Andrew Hanson,
and Toan Phan

April 2018, No. 18-10
Regressive Welfare Effects
of Housing Bubbles
Andrew Graczyk and Toan Phan

April 2018, No. 18-11
Asset Pledgeability and Endogenously
Leveraged Bubbles
Julien Bengui and Toan Phan
July 2018, No. 18-12
A Composite Likelihood Approach for
Dynamic Structural Models
Fabio Canova and Christian Matthes
July 2018, No. 18-13
Labor-Market Wedge under Engel Curve
Utility: Cyclical Substitution between
Necessities and Luxuries
Yongsung Chang, Andreas Hornstein,
and Marios Karabarbounis
August 2018, No. 18-14
Monetary Policy across Space and Time
Laura Liu, Christian Matthes, and
Katerina Petrova

To access the Working Papers Series visit: https://www.richmondfed.org/publications/research/working_papers