
SECOND QUARTER 2014

FEDERAL RESERVE BANK OF RICHMOND

Crowded
The Economic
Effects of
Population
Growth
(It’s Not All Bad)

Islamic
Banking

The Making of
Bar Codes

Interview with
Nicholas Bloom

Volume 18
Number 2
SECOND QUARTER 2014

COVER STORY


Crowded
While more is not always merrier, population growth over the
last century has had many positive effects

Econ Focus is the
economics magazine of the
Federal Reserve Bank of
Richmond. It covers economic
issues affecting the Fifth Federal
Reserve District and
the nation and is published
on a quarterly basis by the
Bank’s Research Department.
The Fifth District consists of the
District of Columbia,
Maryland, North Carolina,
South Carolina, Virginia,
and most of West Virginia.
Director of Research

John A. Weinberg
Editorial Adviser

FEATURES

Islamic Banking, American Regulation
For some American Muslims, Sharia-compliant banks are an
important part of the financial landscape
             


Kartik Athreya
Editor

Aaron Steelman
Senior Editor

David A. Price
Managing Editor/Design Lead

Kathy Constant


Expanding Unemployment Insurance
Longer unemployment benefits often mean longer
unemployment spells, but economists say that’s not always
a bad thing  

DEPARTMENTS

President’s Message/Investing in People as an Economic Growth Strategy
Upfront/Regional News at a Glance
Federal Reserve/Will the Graying of America Change Monetary Policy?
Policy Update/Accounting for the Vagaries of the Wind
Jargon Alert/Adverse Selection
Research Spotlight/Benefits of B.A. Alternatives
Interview/Nicholas Bloom
The Profession/Breaking into the Mainstream
Economic History/Reading Between the Lines
Around the Fed/Take Your Pick on Jobs Stats
Book Review/The Battle of Bretton Woods: John Maynard Keynes, Harry Dexter White, and the Making of a New World Order
District Digest/The Unconventional Oil and Gas Boom
Opinion/Moral Hazard and Measurement Hazard

Staff Writers

Renee Haltom
Jessie Romero
Tim Sablik
Editorial Associate

Lisa Kenney


Contributors

R. Andrew Bauer
Jamie Feik
Charles Gerena
Robert L. Hetzel
Wendy Morrison
Karl Rhodes
Design

Janin/Cliff Design, Inc.

Published quarterly by
the Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org
www.twitter.com/
RichFedResearch
Subscriptions and additional
copies: Available free of
charge through our website at
www.richmondfed.org/publications or by calling Research
Publications at (800) 322-0565.
Reprints: Text may be reprinted
with the disclaimer in italics
below. Permission from the editor
is required before reprinting
photos, charts, and tables. Credit
Econ Focus and send the editor a
copy of the publication in which
the reprinted material appears.
The views expressed in Econ Focus
are those of the contributors and not
necessarily those of the Federal Reserve Bank
of Richmond or the Federal Reserve System.
ISSN 2327-0241 (Print)
ISSN 2327-025x (Online)

Cover Photography:
Huang Xin/Getty Images

President’s Message

Investing in People as an Economic Growth Strategy

It might not be obvious why the president of a Federal
Reserve Bank would be interested in workforce development — what does it have to do with interest rates
and inflation? But workforce development is intimately
related to part of the Fed’s legislative mandate, which is
promoting maximum employment. That has proven to be a
difficult task in the wake of the 2007-2009 recession, as I’m
sure you are all too aware. This has led me and other policymakers to ponder a difficult question: Given the limitations
of monetary policy, what can be done to improve labor market outcomes in the long run?
At the Richmond Fed, our research suggests that much
of what we’re currently seeing in the labor market reflects
structural trends rather than a primarily cyclical change
in labor market behavior. That has prompted us to think
about long-term strategies to prepare workers for the labor
market. We’ve been thinking about workforce development at the level of the individual: What can be done to
improve people’s skills and adaptability, which economists
call “human” capital?
To think about those strategies, it’s helpful to begin in
the early 1960s, when economists began seriously studying
the forces and decisions that lead people to differ in their
capabilities. They proposed thinking about knowledge and
skills as simply another form of capital that makes workers
productive, just like physical capital such as machines or
computers. Workers acquire this human capital by making
investments, such as attending school, getting on-the-job
training, or even receiving medical care.
More recently, a consensus has developed that human
capital is more than just the number of years spent in school
or on the job. Research suggests that noncognitive skills
— such as following instructions, patience, and work ethic
— lay the foundation for mastering more complex cognitive
skills and may be just as important a determinant of future
labor market success. These basic emotional and social
skills are learned very early in life, and it can be difficult for
children who fall behind to catch up: Gaps in skills that are
important for adult outcomes are observable by age 5 and
tend to persist into adulthood.
What does the economics of human capital imply for
workforce development programs? Several insights are especially relevant. First, it makes economic sense to concentrate
intensive human capital investment in the form of formal
schooling on the young: The earlier workers invest, the longer they have to profit from their investments. In addition,
because earnings typically increase with age, young people
attending school tend to sacrifice less by way of forgone
earnings than older workers. Another key takeaway is that
investments in early childhood can affect later decisions
about formal schooling. If the foundations for learning are laid very early, then even mild
delays in acquiring noncognitive skills might make skill
acquisition more challenging
later in life; after all, why try
as hard to get good grades,
stay in high school, or enroll
in college when those efforts
might not pay off?
Human capital economics
also implies that higher education should lead to higher
future wages, both because
education is costly to acquire and because it can elevate a
person’s productivity. Indeed, the data confirm that the
payoff to education is quite high.
Just as this view of workforce development points toward
investment early in life, it also points toward the challenges
confronting later interventions. Asking adults to reinvent
themselves in the face of a relatively short remaining working horizon, when early retirement and exiting the labor
force become viable options, is asking a lot of both the
workers and the workforce development professionals who
train them. And, indeed, research suggests that workforce
development efforts that focus solely on training or retraining adult workers might have only modest effects on employment and job retention.
Of course, this does not mean that adults cannot or should
not learn new skills; I am deeply sympathetic to the plight of
workers who have been laid off from jobs they performed
admirably for decades, and I commend those who wish to
complete or further their education. But we may need to be
cautious about treating older workers’ difficulties as remediable through training, when the appropriate course of action
may actually involve greater use of the social safety net.
We may be able to help a large number of future workers,
however, by expanding our focus and thinking about workforce development not as a cure for the short-term shocks
that individuals may experience, but rather as a long-term
vaccine that will protect them against future shocks.
EF

Jeffrey M. Lacker
President
Federal Reserve Bank of Richmond

This message was excerpted from a speech delivered June
26, 2014, and appears here as condensed by the Washington
Post on July 14, 2014.
Econ Focus | Second Quarter | 2014


UpFront

Regional News at a Glance

Coal Crunch

Massive Mining Layoffs Hit WV

During the summer of 2014, three major coal mining companies announced plans to lay off a total of
1,800 employees in West Virginia.
The largest announcements came from Bristol, Va.-based Alpha Natural Resources. In late July and early
August, the company put 1,129 employees on notice at
various subsidiaries in the southern half of the state,
where mine productivity is low compared with other
U.S. coal-producing regions.
The company cited several reasons for reducing its
West Virginia operations, including persistently weak
demand for coal, competition from lower-cost operators in other regions, competition from natural gas as an
alternative to coal for power generation, and new regulations from the Environmental Protection Agency.

Photography: United Mine Workers of America

Open-pit mining in Wyoming is far more efficient than
underground mining in West Virginia.


(For more on the prospects for West Virginia coal, see
“The Future of Coal,” Econ Focus, Fourth Quarter 2013.)
“EPA’s new MATS (mercury and air toxics standards) air emissions rule alone is expected to take more
coal-fired power generation offline next year than in
the previous three years combined,” the company predicted. “Much of that is in markets historically supplied
by Central Appalachian mines.”
Other major layoff announcements during the summer came from Cliffs Natural Resources of Cleveland
(397 employees) and Coal River Energy of Alum Creek,
W.Va. (280 workers). Coal River Energy blamed its
pending layoffs on “weak coal demand and government
regulations,” while Cliffs Natural Resources cited poor
market conditions for metallurgical coal (coal used to
make steel).
The summer’s total number of announced layoffs
represents 9.5 percent of the state’s jobs in coal mining
and coal mining support, but the industry’s employment
will not decline 9.5 percent because hiring will offset
some of the layoffs. The net loss of jobs during the past
two years, however, has accelerated a downward trend
that began in 2012. Coal mining employment in West
Virginia, including support positions, has plummeted
from an 18-year high of 24,928 jobs in 2011 to a 10-year
low of 19,040 jobs in the first quarter of 2014. The most
recent wave of layoff announcements suggests that the
number will continue to decline rapidly for at least the
rest of the year.
—Karl Rhodes

In for a Dollar

Discount stores engage in a high-price bidding war

The Charlotte area-based retailer Family Dollar
has been targeted for takeover by two of its competitors. In July, the company announced it was being
acquired by Dollar Tree, which is headquartered in
Chesapeake, Va., for $8.5 billion, or $74.50 per share.
In August, rival Dollar General offered to pay $78.50
per share, an offer that Family Dollar’s board of directors rejected on the grounds that the Federal Trade
Commission (FTC) would be unlikely to approve the
deal. Dollar General upped its bid to $80 per share,
or $9.1 billion, but Family Dollar spurned that offer
as well. On Sept. 10, five days after being rebuffed the
second time, Dollar General launched a hostile takeover bid. Family Dollar’s board is recommending that
shareholders reject Dollar General’s tender offer. The
shareholder vote is scheduled for December 11.
The three chains are the major players in the “super
discount” retail sector, which grew significantly during
the Great Recession and has continued to expand.
Dollar General is the largest of the three, with more
than 11,000 stores in 40 states. Family Dollar has about
8,000 locations, and Dollar Tree has about 5,000 locations in the United States and Canada. By comparison,
Wal-Mart has around 4,200 U.S. locations.
Despite the moniker “dollar store,” both Family
Dollar and Dollar General sell goods at a range of

prices, and Family Dollar says that proximity to a
Dollar General is a major factor in its pricing decisions.
According to Family Dollar’s board of directors, it’s
thus likely the FTC would block the deal on antitrust
grounds, or at the very least require a protracted review
process. “The government wants to prevent mergers
that transform the structure of a market in a way that
raises prices and thus injures consumers in that market,” says Alan Meese, an antitrust expert at William &
Mary Law School and former antitrust litigator.
Invoking antitrust concerns is a common tactic for
companies that don’t want to be bought, according to
Meese. “Raising antitrust concerns to thwart a more
generous bid can raise suspicions about the motives of
the target’s board.” Still, the Dollar Tree deal may be
more likely to pass muster with the FTC; Dollar Tree
caps its prices at $1 and has promised to divest itself of
as many stores as necessary to win regulatory approval.
Dollar General has agreed to sell up to 1,500 stores, but
so far it is unwilling to promise more. “In this context
Family Dollar’s directors have a fiduciary duty to obtain
the best deal for shareholders,” says Meese. “If they

have a well-informed good faith belief that the FTC
will block the more lucrative transaction, they should
recommend shareholders approve the sure thing.”
Just how much monopoly power a combined Dollar
General-Family Dollar would actually be able to exercise depends on how the relevant market is defined.
The dollar stores’ $48 billion market is only a tiny slice
of the total market for fast-moving consumer goods,
such as groceries and toiletries; Walmart’s U.S. sales
alone were more than $279 billion in fiscal year 2014.
And an analysis of shopping data for about 80,000
households by the company InfoScout suggests that
consumers have plenty of other options. In any given
month, nearly 93 percent of households also shopped
at a supercenter such as Walmart or Target, and
when asked, 81 percent of Family Dollar shoppers said
Walmart was a good substitute for Family Dollar.
Regardless of which company ultimately wins over
Family Dollar’s shareholders, the deal will come under
close FTC scrutiny to ensure that consumers can
stretch their dollars as far as they did before.
—Jessie Romero

It’s All Business

	NC expands the role of its business court with new law

North Carolina’s business court has been in existence since 1995, but it recently got quite the
facelift. On Aug. 6, Gov. Pat McCrory signed into law
an act aimed at modernizing and streamlining the state’s
specialized business court. Proponents believe these
changes will make the state more business-friendly by
establishing clear precedents and definitive case law.
Business courts are specialized courts that hear only
designated business cases. They currently exist in varying
forms in 20 states, with Delaware’s Court of Chancery
being the longest-running and most prestigious.
But the makeup of business courts differs greatly
from state to state in several respects. For instance,
North Carolina and Delaware have specialized business
courts, while some other states only have business divisions within their existing general courts; some business
courts are statewide and some are limited to metro areas,
such as Pittsburgh and Chicago; and the criteria for qualifying for business court differ in every location.
Despite this wide variety, each state with a business
court system generally creates it with the goal of improving efficiency and predictability in business litigation.
The new North Carolina law was spearheaded by
Republican state senators Tamara Barringer and Bob
Rucho, who told the Charlotte News & Observer in June
that their goal was to enhance the existing court and
“make the state more attractive to businesses, including

out-of-state companies looking to relocate.”
One of the ways that North Carolina hopes the law
will help it to compete is through new rules on holding
company reorganizations — that is, when a new corporation becomes the sole shareholder of an existing
corporation through a merger. In a page taken from
Delaware’s playbook, an entirely new section was added
that permits holding companies to reorganize without
shareholder approval as long as certain requirements are
met. Once the merger is complete, the shareholders will
maintain the same rights in the new holding company.
Other sections in the law deal directly with the operation of the business court. Business court appeals will
now go directly to the state Supreme Court, rather than
through the Court of Appeals. The law also creates a
category of mandatory complex business cases that are
required to be tried in business court: Cases valued at
more than $5 million involving corporate law, intellectual
property law, and certain other areas fall under this designation, as do business contract disputes worth more than
$1 million when all parties consent to the designation.
While the law does not create any new judgeships,
the 2014 Appropriations Act does call for two new
business court judges in 2015, bringing the total to five.
The updated law applies only to cases brought to the
court after Oct. 1, 2014, and most provisions of the new
law went into effect on this date.
—Lisa Kenney


FEDERAL RESERVE

Will the Graying of America Change Monetary Policy?
By David A. Price

Economists ponder whether demographic change will reduce the potency of the Fed’s interest rate moves

America, like many industrialized countries, is aging. The
Census Bureau projects that by
2030, over 20 percent of U.S. residents
will be 65 or older, up from 13 percent in
2010 and less than 10 percent in 1970.
For elder-law attorneys and hearing-aid
companies, the economic implications
of this trend are more or less obvious.
For fiscal policymakers, especially with
regard to programs like Social Security
and Medicare, the implications are also
obvious — although the precise extent
of the effect is up for debate. But what
are the trend’s implications for monetary policy?
Despite the certainty of the oncoming demographic change, little is known
about how it is likely to affect the Fed’s
policy tools. Some policymakers and
observers have expressed concern, however, that the Fed’s ability to stimulate
the economy may decline for demographic reasons, if it hasn’t already
done so. For example, New York Fed
President William Dudley suggested
in a 2012 speech that “demographic
factors have played a role in restraining
the recovery,” in part because spending
by older Americans is “less likely to be
easily stimulated by monetary policy.”
If the contentions of some economists are correct, the aging trend will affect asset markets in ways that will influence how the Fed conducts monetary policy, perhaps forcing the Fed to make bigger interest rate changes for the same amount of stimulus or tightening it wishes to apply to the economy. It could also lead the Fed to resort more frequently to unconventional tools such as massive purchases of assets — the so-called “quantitative easing” in which the Fed engaged after the Great Recession.

[Chart: America’s Aging. Old-age dependency ratio (percent), 1950-2050]
Note: The Census Bureau defines the old-age dependency ratio as the population aged 65 and over as a percentage of the population aged 18 to 64.
Source: U.S. Census Bureau, An Aging Nation: The Older Population in the United States (2014)


An Aging America
America’s aging trend reflects several
distinct causes. The most famous of
them, the baby boom, is the jump in fertility that took place following World
War II and continued for 18 years. Birth
rates during this period ranged from 24
to 26.5 per 1,000 people in the population, compared with 18 to 19 per 1,000
people during the Great Depression
years leading up to the war. The term
“baby boomer” commonly refers to people born in the United States from 1946
to 1964, when birth rates finally fell to
their pre-boom levels.
The baby boom wasn’t America’s
first postwar birth boom — a brief, shallow one took place during the two years
following World War I — nor was it a
historical peak in U.S. birth rates. What
has made it a powerful driver of today’s
aging trend is partly the sheer number
of baby boomers who were born in its
long duration, some 72.5 million in all.
Another reason for the aging trend is
the pattern of U.S. birth rates in the 50
years since the end of the baby boom.
During that time, birth rates never
returned to even the lowest levels of the
baby-boom years. They have hovered
around 15 per 1,000 people since the
early 1970s, declining further with the
2007-2009 recession. In 2012, the latest
year for which data is available, the rate
was down to 12.6 per 1,000.
Combined with the declines in birth
rates are the increases in our life expectancies, from 47.3 years in 1900 to 68.4
years in 1950 and 78.2 years in 2010.
While America is aging, it is far
from alone in doing so. The other large
developed countries are generally older.
In 2012, the populations of Germany,
Italy, and Japan were at least one-fifth
seniors aged 65 or older, a level that the
United States is not expected to reach
for decades.
To be sure, population forecasting
is not foolproof. John Maynard Keynes
asserted in a 1937 speech before the

Eugenics Society that Britain would soon face “a stationary or
declining level” of population — a prediction he made on the
eve of that country’s wartime and postwar baby booms. In the
case of the present-day United States, one of the variables that
will affect the age structure of the population is the course of
future immigration. Still, given the size of the baby-boomer
pig moving through America’s demographic python, there is
little debate that America will be getting older.

Defanging the Fed
People’s patterns of consumption and savings tend to vary
in predictable ways as they get older. That’s according to
the “life cycle hypothesis,” originated in the early 1950s by
Franco Modigliani, then an economics professor at Carnegie
Mellon University, and Richard Brumberg, a graduate student at Johns Hopkins University. The basic idea is simple:
Individuals try to smooth out their consumption over their
lifetimes by borrowing when they are young adults, building
up savings as their incomes increase during their working
years, and drawing down their savings after they retire.
For economists studying the effect of demographic
change on financial markets, the ages 40 to 64 are often considered the asset-accumulating years. Some economists have
argued that the long-term upward trends of recent decades
in the stock market and housing markets have been driven in
part by the rise of the baby boomers. Indeed, since the late
1980s, a number of economists, starting with Greg Mankiw
of Harvard University and David Weil of Brown University,
have suggested that the influence of life cycle effects may
lead to declining house prices as the baby boomers leave
their asset-accumulating years behind.
One aspect of the life-cycle effect with implications for
monetary policy is that older households tend to hold less
debt as a fraction of net worth, which could work to reduce
the sensitivity of their consumption to interest rates. “A
change in interest rates on a large sum of debt implies higher
interest payments,” International Monetary Fund (IMF)
economist Patrick Imam said in an email. “Therefore,
younger households have to cut their expenditure much
more to pay the higher interest payments than older households, and vice versa if interest rates go down.”
Another life-cycle effect that could dampen the influence
of monetary policy is the assumed tendency of older individuals to be more risk-averse in their investments than younger
ones, in line with the common advice of financial writers and
advisers to shift assets into less risky investment categories
as one ages. Such risk-aversion by a growing population of
older investors could create headwinds for the Fed because
its low-interest-rate policies get some of their effectiveness from a “risk-taking” channel of monetary policy: that
is, the tendency of some investors in a low-interest-rate
environment to reduce their holdings of safe assets such as
Treasuries in favor of riskier assets such as stocks and highyield bonds, a process sometimes known as a search for yield.
But that effect works only if people actually take greater
risks in response to easier monetary policy, and some economists believe that older households may be less willing to do
so. In this view, the less risky the investments that investors
move into in response to low Fed policy rates — if they move
their money at all — the less stimulus to economic activity
through the risk-taking channel of monetary policy.
“Financial entities and households have been found to
take more risk by borrowing more and investing in riskier
assets when interest rates fall and less when interest rates
rise,” Imam said. “Older people, who are more risk-averse
— as they cannot easily make up for losses — may be less
sensitive to the ‘search for yield’ effect than younger ones.
Elderly households would not want to invest as much in risky
sectors, thereby not allowing those sectors to take off on a
large scale.”

Into The Gray Unknown
Yet a number of complicating factors leave it unclear how
much the Fed’s policy tools will be weakened, or even
whether they will be significantly affected at all. As it turns
out, households don’t seem to dissave as much in retirement
as the classic life-cycle hypothesis predicts. Despite the theory, moreover, households increasingly keep borrowing even
in their later years.
“We have seen in the last couple of decades, as households have refinanced mortgages in midlife into their 50s
and sometimes even 60s, more households reaching traditional retirement age with mortgage debt on the books,” says
Massachusetts Institute of Technology economist James
Poterba, who has studied the effect of aging on financial
markets. “The days of people borrowing when they were
32, paying off the mortgage when they were 62, and burning
their mortgage have become fewer and fewer as more people
have refinanced.”
The risk-taking channel also doesn’t seem to behave
entirely in accord with the predictions of theory, Poterba
notes: The Fed’s Survey of Consumer Finances indicates
that older households continue to hold risky assets, such
as stocks, in significant amounts. “Even at the traditional
retirement age of 65, the typical household has quite a number of years left that it needs to draw its resources down
over,” Poterba says. “There probably is some shift toward
less risk appetite in those older years, but people don’t hit
retirement and say they don’t want risky assets anymore.”
A further complicating factor is that in an increasingly
open global economy, financial assets can cross borders.
Countries are not aging in lockstep: For example, China,
Japan, and continental Europe are aging faster than the
United States, which, in turn, is aging faster than many emerging-market economies. In theory, to the extent that changing
demographics leads to changes in asset prices and returns,
investors in aging, lower-return markets can be expected to
move their assets to younger economies in pursuit of higher
returns, somewhat muting the effects on asset markets of
demographic shifts within a country. But the extent to which
such movements would offset the influence of demographics
on the effectiveness of monetary policy is unclear.

“Our ability to model these cross-border macroeconomic
effects is still very inadequate,” says Brookings Institution
economist Ralph Bryant. “There are miles and miles to go
before we are in a better place to generate reliable conclusions about effects on policy.”
Finally, there is another channel through which life-cycle
behavior may affect the power of monetary policy — a wealth
effect that pushes in the opposite direction as the effect on
consumption by the young, possibly amplifying the influence
of interest-rate changes. A more familiar example of a wealth
effect is the effect on a household’s financial behavior when
it enjoys significant appreciation of its house, an increase
in its wealth that may lead it to spend more. In the context
of life-cycle behavior and monetary policy, the idea is that
although many older households are cash-strapped, older
households as a group tend to be wealthier than the young
and hold more financial assets. Older households, therefore,
are likely to be more exposed to the effect of interest-rate
changes on financial assets through changes in their wealth.
In an older society, that effect may increase the responsiveness of the household sector as a whole to monetary policy.
Which effects will prevail? It’s challenging to reach firm
empirical conclusions in this area because demographic
change is slow. One such effort, by Imam of the IMF, studied the effect of monetary policy shocks on inflation and
unemployment in the United States, Canada, Japan, the
United Kingdom, and Germany and found that their effect
has decreased over time. Imam further looked at whether
this effect was associated with the timing of the aging of
those societies and found “quite a strong negative long-run
effect of the aging of the population on the effectiveness of
monetary policy.” Imam estimated the change that a 1 percentage point increase in the old-age dependency ratio — the
ratio of people older than 64 to those of traditional working
age — would make in the effectiveness of a 1 percentage
point shock to interest rates by monetary policymakers. He
determined that a 1 percentage point increase in the old-age
dependency ratio reduces the effect of such an interest-rate
change on inflation by 0.1 percentage point and its effect on
the unemployment rate by 0.35 percentage point.
The Census Bureau estimates that the old-age dependency ratio in the United States will rise by 14 percentage
points from 2010 to 2030. If Imam’s estimates and the
Census Bureau’s estimates were to hold, they would imply
a 1.4 percentage point drop in the Fed’s ability to affect

inflation and a 4.9 percentage point drop in its ability to
affect unemployment. Over the course of a 20-year period,
such a change might be perceived as modest from one year
to another, but cumulatively it would amount to a strong
negative effect indeed.
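The back-of-the-envelope arithmetic behind those figures can be checked directly. The sketch below simply scales Imam's per-point estimates (0.1 and 0.35 percentage points) by the Census Bureau's projected 14-point rise in the dependency ratio; the variable names are illustrative, not from either source.

```python
# Attenuation of monetary policy implied by combining Imam's estimates
# with the Census Bureau's dependency-ratio projection.

# Per 1-percentage-point rise in the old-age dependency ratio, the effect
# of a 1-percentage-point interest rate shock weakens by (Imam):
INFLATION_LOSS_PER_POINT = 0.10     # percentage points, effect on inflation
UNEMPLOYMENT_LOSS_PER_POINT = 0.35  # percentage points, effect on unemployment

# Census Bureau: projected rise in the old-age dependency ratio, 2010-2030.
PROJECTED_RISE = 14                 # percentage points

inflation_drop = PROJECTED_RISE * INFLATION_LOSS_PER_POINT
unemployment_drop = PROJECTED_RISE * UNEMPLOYMENT_LOSS_PER_POINT

print(f"Implied drop in effect on inflation: {inflation_drop:.1f} pp")        # 1.4
print(f"Implied drop in effect on unemployment: {unemployment_drop:.1f} pp")  # 4.9
```

The linear scaling is the article's own simplification; Imam's regressions do not guarantee that a 14-point shift extrapolates this cleanly.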

Higher Expectations
If such a scenario occurred, the Fed would need to use its
policy tools in an increasingly aggressive way to achieve the
same results. In addition, any downward push from demographics on the Fed’s influence would increase the chances
that it will one day have to grapple again with the zero lower
bound — the assumed inability of monetary policy to reduce
nominal short-term interest rates below zero. This limitation has led to the use of some unconventional monetary
policy tools since the Great Recession, most notably quantitative easing. Because quantitative easing enables the Fed
to add further monetary stimulus to the economy even when
interest rates are at or near zero, it is possible that the ship
QE would sail more often in the future.
Demographic change would also affect Fed policy in
other ways. The fact that the elderly are more likely to be
out of the labor market would probably have ripple effects
on other features of the economy that Fed officials look at
to determine monetary policy, such as the natural rate of
unemployment (that is, the lowest level of unemployment
that the economy can maintain in the long run).
An older society may also bring the Fed a somewhat different set of political pressures. The disproportionate absence
of the elderly from the labor force would tend to lead them
to be more concerned about the Fed’s inflation mandate
than its employment mandate. Charles Bean, former deputy
governor of the Bank of England and its chief economist
before then, suggested in a 2004 speech that aging may affect
central banks by increasing the constituency for low inflation
in another way, as well. Given the higher asset holdings of an
older cohort, he predicted, with more of its wealth in bonds
than stocks, an older society will tend to favor low-inflation
policies (to the extent that bond holdings of seniors are not
inflation-protected). At the same time, Bean said, with the
decline of defined-benefit pensions, an older society will expect
more from its central bank in preventing falls in asset prices.
While the effects of aging on monetary policies are
uncertain for now, one prediction can be made with confidence: We won’t be getting any younger.
EF

Readings
Imam, Patrick. “Shock from Graying: Is the Demographic Shift
Weakening Monetary Policy Effectiveness?” IMF Working Paper
No. 13/191, September 2013.

National Research Council. Aging and the Macroeconomy: Long-Term
Implications of an Older Population. Washington, D.C.: National
Academies Press, 2012.

Kara, Engin, and Leopold von Thadden. “Interest Rate Effects
of Demographic Changes in a New-Keynesian Life-Cycle
Framework.” ECB Working Paper No. 1273, December 2010.

Poterba, James. "The Impact of Population Aging on Financial Markets," in Gordon H. Sellon Jr., ed., Global Demographic Change: Economic Impact and Policy Challenges. Kansas City: Federal Reserve Bank of Kansas City, 2005, pp. 163-216.

Miles, David. "Should Monetary Policy be Different in a Greyer World?" in Alan Auerbach and Heinz Herrmann, eds., Ageing, Financial Markets and Monetary Policy. Berlin: Springer, 2002.
Econ Focus | Second Quarter | 2014
Policy Update

Accounting for the Vagaries of the Wind

By Tim Sablik

On April 29, the U.S. Supreme Court upheld the Environmental Protection Agency's Cross-State Air Pollution Rule (commonly called the Transport Rule), the agency's third attempt in two decades to address the "Good Neighbor" provision of the Clean Air Act. That provision poses a tricky puzzle for regulators, requiring them to prohibit air pollutants emitted by sources in one state from "significantly" interfering with the ability of a downwind state to meet clean air standards.

The Transport Rule applies to 27 states in the eastern half of the United States that were found to have contributed at least 1 percent of sulfur dioxide (SO2) and nitrogen oxides (NOx) pollution to at least one downwind state. These upwind states were given an "emissions budget" for the pollutants, which took into account the cost effectiveness of implementing pollution controls within each state. A number of affected upwind states and power companies challenged the Transport Rule in the U.S. Court of Appeals for the D.C. Circuit. They argued, among other things, that the EPA's use of cost-effectiveness as a guide for pollution reduction would require some states to clean up more than their share of downwind pollution if that is the most cost-effective option.

In the case, Environmental Protection Agency v. EME Homer City Generation, the Supreme Court ruled that the EPA's cost-based approach was an "efficient and equitable solution" to the problem of cross-state pollution. Justice Ruth Bader Ginsburg, who delivered the majority opinion, noted that assigning blame to each state proportionally would require regulators to "account for the vagaries of the wind" — a nigh impossible task. For example, West Virginia contributes significantly to air pollution in a dozen states, and it receives pollution from about half a dozen.

This challenge is a classic example of what economists call a negative externality. The costs of a polluting coal-burning power plant, for instance, are not fully borne by the residents who receive its electricity because some pollutants blow downwind and damage residents in other states. This can artificially lower the price of the plant's electricity, leading to overproduction of both the electricity and the pollution byproduct.

There are a variety of ways to address such externalities. One proposed by early 20th century English economist Arthur Pigou is to place a tax on the polluter equal to the cost of the externality, thus requiring producers to account for the full cost of their products. Determining the right tax level is the challenge. Making the tax too low would fail to fully address the externality problem, and setting it too high would be costly and inefficient. Expecting regulators to determine the proper level may be unrealistic.

In light of this, Nobel prize-winning economist Ronald Coase proposed an alternative solution. In his famous 1960 paper "The Problem of Social Cost," he argued that externalities should be viewed simply as a market transaction. As with any transaction, externalities involve two sides: the producer of the externality and the recipient. As long as property rights were well-defined and transaction costs were minimal, both parties could negotiate an efficient solution to the problem. For example, if it were cheaper for downwind residents to pay a factory to stop polluting than to accept the pollution or relocate themselves, they would do so, and vice versa. In either case, the externality would be mitigated efficiently.

The EPA's Transport Rule incorporates some of Coase's insights by using cost-effectiveness to determine pollution limits. But by making those determinations itself, the agency has opened itself up to criticism from some states that may have to clean up more than their "fair share" of downwind pollution. "Most economists are going to say that the least-cost sources of pollution should be cleaned up first," says John Whitehead, chair of the department of economics at Appalachian State University. "But it's hard to argue with the fact this approach might not turn out as fair as some people would like."

In the case of other pollutants, such as carbon dioxide (CO2), states have established regional pollution credit markets to facilitate the negotiation envisioned by Coase. The first of these programs, the Regional Greenhouse Gas Initiative, covers northeastern states from Maryland to Maine. Polluting factories in these regions can either reduce their pollution to comply with environmental mandates or purchase offset credits from other factories, ensuring that overall pollution is reduced efficiently. Whitehead says a similar approach for SO2 and NOx would be optimal, and the EPA's Transport Rule does allow states to adopt this solution. Unlike harm from CO2, however, the damage caused by SO2 and NOx varies by distance traveled, making it harder to price pollution credits in a regional market.

This summer the EPA filed to lift the stay on the Transport Rule in light of the Supreme Court's decision, and the U.S. Court of Appeals for the D.C. Circuit granted that request on Oct. 23. Other challenges to the rule remain, however, and are scheduled for hearings through early 2015. EF
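Coase's insight can be illustrated with toy numbers (entirely hypothetical; nothing here comes from the Transport Rule itself): whichever side holds the property right, the pollution is abated exactly when abatement is cheaper than the harm it prevents; the right only determines who pays.

```python
# Toy sketch of Coasean bargaining with zero transaction costs:
# the efficient outcome (abate or not) does not depend on which
# side holds the property right; only the direction of payment does.
def outcome(abatement_cost, damage, factory_has_right):
    abate = abatement_cost < damage  # efficient action, either way
    if abate and factory_has_right:
        payer = "residents pay factory to abate"
    elif abate:
        payer = "factory abates at its own expense"
    else:
        payer = "pollution continues (abating costs more than the harm)"
    return abate, payer

print(outcome(40, 100, factory_has_right=True))
print(outcome(40, 100, factory_has_right=False))
print(outcome(150, 100, factory_has_right=True))
```

In both of the first two cases the pollution stops, because abating (40) is cheaper than the damage (100); in the third it continues, because abatement would cost more than the harm it prevents.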

Jargon Alert

Adverse Selection

Democrats and Republicans passionately disagree
about the pros and cons of the Patient Protection
and Affordable Care Act (ACA). But even the most
partisan policymakers can agree that the ACA debate has
brought a somewhat obscure economics concept — adverse
selection — into popular parlance.
In the market for individual health insurance, adverse
selection refers to the fact that, all else being equal, sick people are more likely to purchase health insurance than healthy
people. In many cases, health insurers cannot observe the
difference between sick people and healthy people. Prior to
implementation of the ACA, many insurers required customers to disclose extensive details about their health status.
Insurers then used this information to screen applicants and
set premiums.
Under the ACA, however, health insurers can set premiums only on the basis of age, which is a rough proxy for
health status. They can charge older people up to three
times more than younger people,
but even this price difference is not
enough to cover the cost difference between the average 64-year-old and the average 21-year-old. So
if insurance plans within an ACA
exchange fail to attract sufficient
shares of young people, they might
have to raise premiums for everyone, which would make it even
harder to attract and retain young
people. This could result in an
adverse selection “death spiral.”
Adverse selection occurs whenever asymmetrical information — information known to one party but not the
other — makes it difficult for potential trading partners
to distinguish between high-risk and low-risk transactions.
This problem is particularly endemic to insurance markets.
Without underwriting safeguards, for example, people could
delay buying homeowners’ insurance until their houses are
on fire. Likewise, people could postpone purchasing life
insurance until they are terminally ill. If insurance companies unwittingly assumed such risks, the resulting claims
would drive up the cost of insurance for everyone.
Adverse selection is most commonly studied in the context of insurance, but it applies to many other markets. For
example, a restaurant owner in Charlottesville, Va., decided
to replace his individually priced entrées with an all-you-can-eat buffet. He expected a certain amount of adverse selection — people with bigger appetites would be more likely to
select his restaurant — so he priced the buffet higher than
the entrées on his old menu. The owner was not surprised

by the copious quantities that his new customers consumed,
but he was shocked by the massive amounts they wasted.
Rather than risking a death spiral by raising the buffet price,
the owner added a surcharge for customers who did not
clean their plates.
Another hotbed for adverse selection is the used-car market. In 1970, economist George Akerlof made that connection in a Quarterly Journal of Economics article, “The Market
for ‘Lemons.’ ” He noted that as soon as a car’s owner learns
whether it is a lemon or not, “an asymmetry in available
information has developed.” Based on this premise, Akerlof
modeled a used-car market in which all cars have the same
price because buyers cannot discern between good risks and
bad risks. If a car is a lemon, its owner will sell it because
the market price exceeds the car’s true value, but if the car
is good, its owner will keep it because the market price falls
short of the car’s true value. When sellers know the quality
of individual cars and buyers know only the average quality
of all the cars, the market sputters
like a 1970 Gremlin. But when buyers and sellers are able to discern the
quality of individual cars, the market purrs like a late-model Honda.
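Akerlof's unraveling logic can be put in miniature. In this stylized setup (the numbers are illustrative, not from the 1970 paper), sellers value a car of quality q at q, buyers value it at 1.5q, and quality is uniformly distributed, so at any price only the worse half of cars is offered:

```python
# Numerical sketch of the "lemons" market: at price p, only owners
# whose cars are worth less than p will sell, so the average quality
# on offer is p/2 (quality uniform on [0, top_quality]).
def market_at_price(p, top_quality=2000.0, buyer_premium=1.5):
    avg_quality_offered = min(p, top_quality) / 2
    buyer_willingness = buyer_premium * avg_quality_offered
    return buyer_willingness >= p  # does trade happen at this price?

# No price clears the market: buyers' expected value (0.75 * p)
# is always below the asking price p, so the market unravels.
print(any(market_at_price(p) for p in range(1, 2001)))  # False
```

Raise the buyers' premium enough (or let buyers observe quality directly) and trade resumes, which is the article's point about symmetrical information.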
Flexible prices based on symmetrical information would guard
against adverse selection, but as
noted above, the ACA prevents
health insurers from discriminating
on the basis of health status. So they
use age as a rough proxy for health
status as they attempt to set premiums that are competitive and profitable.
In December 2013, a Kaiser Family Foundation study
estimated that young people (ages 18-34) comprise 40 percent
of the potential market for ACA insurance exchanges. And
at the end of the first open-enrollment period, 28 percent
of enrollees were from that age group. That share is only 3
percentage points better than the Kaiser study’s worst-case
scenario, but the national percentage is not as important as
the share of young people joining each exchange. As of late
April, the District of Columbia’s exchange ranked first with
45 percent. Utah was a distant second with 33 percent, and
West Virginia was last with 19 percent.
No one knows what percentage would signal a death spiral, but a report from the National Association of Insurance
Commissioners emphasized that states must be vigilant
against adverse selection under the ACA. The report warned
that “if the market outside of the exchange is perceived as
more attractive to younger and healthier people, the exchange
could become a ‘risk magnet’ and will ultimately fail.”
EF

Illustration: Timothy Cook

By Karl Rhodes

Research Spotlight

Benefits of B.A. Alternatives

By Wendy Morrison

In recent years, the pursuit of a bachelor's degree has become as common a part of rhetoric about the "American Dream" as homeownership. Indeed, many point to the estimated $1 million additional lifetime earnings of those who complete a traditional bachelor's degree, a figure made famous by a 2012 report from the Census Bureau, as evidence that college is likely a good bet for everyone. In addition to research on the returns to bachelor's degrees, there has been a substantial amount of research on the benefits of associate's degrees, which are often considered similar to the first two years of a four-year college curriculum. The results generally find substantial earnings increases linked to associate's degrees, as much as 24 percent for men and 31 percent for women.

Research looking at the value of a bachelor's degree or an associate's degree has generally measured the value of the degree relative only to that of high school completion, however. The literature has said little, if anything, about alternate forms of tertiary education like diplomas and certificate programs from community and technical colleges, despite more people receiving such diplomas and certificates every year than associate's degrees. In a recent Journal of Labor Economics article, Christopher Jepsen of University College Dublin, Kenneth Troske of the University of Kentucky and the Institute for the Study of Labor, and Paul Coomes of the University of Louisville attempt to fill this empirical gap by providing one of the first rigorous estimates of the labor market returns to community college diplomas and certificates.

"The Labor-Market Returns to Community College Degrees, Diplomas, and Certificates." Christopher Jepsen, Kenneth Troske, and Paul Coomes. Journal of Labor Economics, January 2014, vol. 32, no. 1, pp. 95-121.

Unlike associate's degrees, diplomas and certificates typically require significantly fewer credit hours to complete and are primarily awarded in technical programs. According to the authors, the few studies of the effects of certificates that do exist offer inconclusive evidence and often rely on small, unreliable samples. The authors use detailed administrative data on individuals within the Kentucky Community and Technical College System, which provides the ability to control for a variety of variables that might affect employment outcomes, such as employment experience, individual aspiration, innate ability, and race/ethnicity. Additionally, the authors believe that the richness of the data and the similarities among community college systems around the country make their findings more broadly applicable.

The authors use a traditional "fixed-effects" human capital model in order to discern the causal effects of different award attainments on average quarterly earnings. In other words, the model measures the effect of the award on the earnings of the individual student as compared with his earnings before obtaining the award. They measure the variation in individual earnings over time, as well as the variation between individuals, in order to capture the full effect of attaining each award. In addition to controlling for demographic variables like age and sex, the authors attempt to compare outcomes for individuals with similar anticipated earnings trajectories by capturing differences based on a student's initial aspirations and age.

The authors find substantial labor market gains associated with associate's degrees and diplomas, and more modest gains associated with certificates, whose returns varied highly among fields. One trend that characterized all the results was that awards had larger positive effects on the average earnings of female students than on those of male students. Men who pursued associate's degrees earned an additional $1,484 on average, whereas women earned an additional $2,363 on average. Average quarterly earnings increases associated with diplomas were comparable to associate's degrees, at $1,265 for men and $1,914 for women. Certificates were associated with a more modest but still positive effect of around $300 on average for both men and women. Income gains associated with certificates were more highly variable than gains for associate's degrees and diplomas, and they were the largest by far for men who entered vocational programs such as electrician and mechanic training and women who entered programs in health. Based on the results of a sensitivity analysis, the authors find that their results are indeed robust and speculate that the similarity between community and technical college programs across the United States means that their findings can be considered representative of analogous programs around the country.

These results suggest that human capital investments in alternate forms of tertiary education in technical and vocational fields have substantial labor market returns. Judging from the relative scarcity of economic literature on the effect of these programs and the longtime focus of policymakers on four-year degrees, further study of the benefits of these alternatives may be warranted. Such research may become increasingly relevant as the conventional wisdom on the value of bachelor's degrees is called into question amid rising tuition costs and rising levels of student loan debt. EF
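The "fixed-effects" idea described above, comparing each student's earnings with his or her own pre-award earnings, can be sketched on synthetic data. This is a minimal illustration of the within estimator, not the authors' actual specification; all numbers are made up:

```python
# Stylized fixed-effects (within) estimator on a synthetic panel:
# demeaning each person's earnings sweeps out fixed individual
# differences ("ability"), isolating the change after the award.
import numpy as np

rng = np.random.default_rng(0)
n, t, true_effect = 500, 8, 1500.0          # students, quarters, $ per quarter

ability = rng.normal(5000, 2000, size=n)    # individual fixed effects
treated = np.arange(n) < n // 2             # half earn an award...
award = np.zeros((n, t))
award[treated, t // 2:] = 1.0               # ...from quarter 4 onward

earnings = ability[:, None] + true_effect * award + rng.normal(0, 500, (n, t))

# Within transformation: demean by individual, then simple OLS.
y = earnings - earnings.mean(axis=1, keepdims=True)
x = award - award.mean(axis=1, keepdims=True)
estimate = (x * y).sum() / (x * x).sum()
print(round(estimate))  # close to the true effect of 1500
```

The large fixed differences in "ability" never contaminate the estimate, because demeaning removes anything constant within a person, which is exactly why this design suits administrative earnings records.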

Crowded

By Tim Sablik

While more is not always merrier, population growth over the last century has had many positive effects

The 1973 science-fiction film Soylent Green may be best remembered for Charlton Heston's line about the titular food source: "Soylent Green is people!" The story takes place in the year 2022, when severe overpopulation has exhausted nearly all natural resources and people scrape by in hot, dirty, crowded cities. Outside of theater walls, that future seemed even more imminent. In 1968, American biologist Paul Ehrlich published The Population Bomb, which opened with the prediction that "a minimum of ten million people, most of them children, will starve to death during each year of the 1970s." In 1973, then-president of the World Bank Robert McNamara declared that "the threat of unmanageable population pressures is much like the threat of nuclear war."
[Figure: The World Population Takeoff. World population in billions, 1 A.D. to 2000. Source: U.S. Census Bureau]

Why were Ehrlich, McNamara, and others so worried? In the last two centuries, world population underwent a previously unimaginable growth spurt (see chart). It took roughly 200 years for the population to double from 500 million in the 17th century to 1 billion around 1830. But within 100 years it had doubled to 2 billion, and then it doubled again by the mid-1970s — less than 50 years. This geometric growth, coupled with apparent resource shortages like the oil crises of the 1970s, alarmed both scientists and the public.

After releasing his book, Ehrlich co-founded the group Zero Population Growth to advocate reducing fertility rates to replacement level (slightly more births on average than deaths) either voluntarily or by government coercion if necessary. Indeed, some countries enacted extreme measures during this time to limit their population growth. In 1970, China's fertility rate
was 5.5 children per woman, and government officials feared that the population would soon overrun available resources. They began encouraging citizens to marry later, postpone having children, and have fewer children. This culminated in the announcement of the "one-child policy" in 1980, restricting most couples to one child with the goal of reducing China's population growth rate to zero by 2000.

Today, China's fertility rate is 1.6, and it is confronting a different problem: rapid population aging. Nearly 10 percent of the population is over the age of 65, and that is expected to more than double by 2045. Late last year, China's government announced a change to the one-child policy: Couples in which at least one parent is an only child are allowed to have two children.

Other developed nations are facing similar demographic shifts (see chart). According to an August report from Moody's Investors Service, the number of countries in which at least a fifth of the population is older than 65 will jump from three to 13 by 2020. Swelling retiree ranks are expected to strain tax-funded pension and health care programs, potentially slowing economic growth. In a July report, the Organization for Economic Co-operation and Development projected global economic growth will slow from 3.6 percent to 2.4 percent over the next 50 years, in part due to aging populations and stagnant or declining workforces.

So what happened? Why were the doomsayers so wrong? Did government policies go too far in averting an overpopulation crisis? Research shows that there never really was an overpopulation crisis in the sense that many feared. The demographic movements of the last two centuries were largely natural responses to advances in science and medicine, and population growth seems to have been a positive force for many countries.

False Prophets

Concerns about food and resource scarcities due to overpopulation were certainly not new to the 1970s. In fact, the predictions of Ehrlich and others in some ways echoed the writings of 18th century economist Thomas Malthus. In his 1798 Essay on the Principle of Population, Malthus observed that the Earth's supply of arable land was largely fixed. He believed that improvements to existing land could increase the yield of subsistence, but only gradually. On the other hand, population, when unbounded from any constraints, would double roughly every 25 years, quickly outpacing food supply.

"By that law of our nature which makes food necessary to the life of man, the effects of these two unequal powers must be kept equal," Malthus wrote. "This implies a strong and constantly operating check on population from the difficulty of subsistence." Malthus saw two possible types of checks: voluntary (choosing to marry later, have fewer children) or involuntary (famine, war). Malthus believed involuntary checks were typically not necessary because people took into account their ability to provide for children when deciding to have a family. But he saw little means for near-term improvement. Malthus thought that population would increase when food became more available and economic conditions were good and contract during lean times, resulting in a populace that always hovered around subsistence levels.

His view largely fit the pattern of human history to that point, but it failed to predict the two centuries that followed. Population and productivity of arable land increased dramatically, while the quantity of land used for agriculture remained largely the same. In fact, economic research suggests that gains in agricultural productivity may have occurred because of rapid population growth. In a 1999 survey of more than 70 studies of the impact of population growth on the land quality of developing nations, Scott Templeton of Clemson University and Sara Scherr, president of Ecoagriculture Partners (a nonprofit that supports sustainable agricultural development), found a "U-shaped" relationship between population density and land productivity. All else being equal, increases in local population density make existing land more expensive and labor cheaper.
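The doubling times in the population record map directly to annual growth rates. For exponential growth, the rate r and doubling time T satisfy r = ln(2)/T, a textbook identity rather than a figure from the article:

```python
import math

# Annual growth rate implied by each doubling time: r = ln(2) / T.
for years in (200, 100, 50, 25):  # 25 years is Malthus' unchecked scenario
    rate = math.log(2) / years
    print(f"doubling every {years} years -> {rate:.2%} per year")
```

The shortening doubling times, from roughly 200 years down to under 50, correspond to a roughly fourfold acceleration in the annual growth rate, which is the pattern that alarmed observers in the 1970s.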

[Figure: Aging Populations. Population by age group (ages 0-4 through 100+), 2014 vs. 2045, for China, Germany, and Japan. Source: U.S. Census Bureau International Database]

Initially, this can lead to some resource degradation in the form of deforestation as farmers use land more frequently or convert land to agricultural production. But as labor becomes comparatively cheaper, people begin to invest in techniques that economize on land, like soil fertilization or land improvements like terraces.

Similar economic processes can work to extend other natural resources as well. The late University of Maryland economist Julian Simon wrote in his 1981 book The Ultimate Resource that most natural resources were actually becoming more abundant in the 20th century despite rapidly growing populations. Simon argued that as long as markets were functioning, resource scarcity from higher populations would be reflected in higher prices, which in turn would prompt people to seek new ways to extract previously unprofitable resources or develop new ways to conserve and economize existing resources.

Simon famously wagered Ehrlich and his colleagues in 1980 that any raw materials of their choosing would be cheaper in 10 years after correcting for inflation, indicating that they had in fact become less scarce. Ehrlich selected $1,000 worth of five different metals, agreeing that the loser of the bet would pay the other the difference in value 10 years later. In 1990, all five metals were significantly cheaper, and Ehrlich sent Simon a check for $576.07. In some ways, Simon was lucky. Some of the metals Ehrlich chose were at cyclical highs. Had the bet been conducted during each decade of the 20th century, Simon would have come out ahead only about half of the time. And despite his overall optimism about the positive effects of population growth, Simon readily acknowledged that they were contingent on many other factors, like government institutions and functioning markets.

"A lot depends on the context," says John Pender, a senior economist at the U.S. Department of Agriculture who studied the impact of population growth in developing countries like Honduras and Ethiopia. In a contribution to the 2001 book Population Matters, Pender found that increased population was negatively associated with crop yields and land sustainability in Honduras. But the effects were minor compared with more important factors like underdeveloped infrastructure and inefficient government policies.

Population can also impact resource sustainability through its interaction with economic development. "In a densely populated, resource-dependent economy, the real problem is poverty," says Pender. "When you're depending on a very small number of assets, you may sometimes be led to degrade your resources."

Indeed, economists over the last 50 years have tried to pinpoint how population growth affects the economy.

Demography and Economic Growth

Does having more people help or hinder economic growth? As the typical economist refrain goes: It depends. Initially, there was little evidence that the rate of population growth played much role in economic development. But by looking at

both sources of population growth — rising fertility and falling mortality — economists have found that population does indeed influence economic potential in important ways.

The majority of the extraordinary population increase over the last century has been due to reductions in infant mortality and gains in overall life expectancy. In 1900, average life expectancy was 30 years, but by 2005, it had more than doubled to 66 years worldwide, and most demographers expect it to continue to rise. In addition to improving the quality of life of individuals around the world, such gains in lifespan have fostered economic growth. As people live longer, it becomes more profitable for them to invest in training and education. This means workers are better skilled when they enter the workforce and they live longer, healthier, more productive lives. And these gains have been widespread. According to research by Harvard University School of Public Health economists David Bloom and David Canning, infant mortality in poor countries is one-tenth to one-thirtieth as much as it was in countries with comparable levels of income in the 19th century.

[Figure: South Korea's Demographic Transition. Infant mortality (deaths per 1,000 births) and fertility rate (births per woman), 1960-2012. NOTE: Example of the rapidly falling mortality and fertility rates during the "East Asian miracle." Source: World Bank]

On the other hand, population growth driven by high fertility rates seems to be correlated with lower income, as measured by GDP per capita. The data seem to suggest that many countries fall into one of two "clubs": low income and high fertility, or high income and low fertility. Just as higher life expectancy increases incentives to develop human capital, higher fertility rates make it more difficult to do so.

"If families are very large, then households have less money to invest in their children's education," says Abdo Yazbeck, lead economist at the World Bank's Africa division. Having many children back-to-back also limits the opportunities for women to enter the workforce.

But the correlation between income and fertility runs in the opposite direction as well. The late University of Chicago economist and Nobel laureate Gary Becker showed that economic conditions influence family size decisions. In wealthier, developed nations where education and labor market opportunities for women are higher, the cost of forgoing wages to have children is greater, leading couples to have fewer children. Conversely, in nations with poor economic or education opportunities, women often marry younger and have more children at a younger age. This means the strong correlation in the data may reflect the tendency for countries to be pushed into one club or the other through positive or negative feedback effects. That is, good labor market and education opportunities reinforce lower fertility rates and vice versa.

The good news for developing nations is that mortality rates have been declining worldwide due to the spread of modern medicine, and there are also strong feedback effects between mortality and fertility rates. When mortality rates are high, families tend to "overshoot" their desired family size to insure against the possibility that some of their children may not survive. But as mortality rates fall, families adjust and fertility rates decline. Depending on the speed of adjustment, this process can create a "demographic transition," which creates the potential for significant economic gains.

"As both mortality and fertility decline, it changes the age structure of the population, impacting what is known as the dependency ratio," explains Yazbeck. The dependency ratio refers to the number of young people (up to age 14) and old people (age 65 and over) in an economy compared to the number of working-age individuals.

High fertility rates imply a higher dependency ratio, as there are a larger number of nonworking children per family. This can act as a drag on economic growth as more resources are required for education and childcare, potentially diverting them from more productive areas of the economy. But if fertility rates change quickly in response to declining mortality, then the dependency ratio can decline as a "baby boom" generation enters the workforce with fewer dependents to care for.

"The key is the speed at which this process takes place. If both legs of the transition move fast, we now have very good evidence to suggest the impacts on the economy are huge," says Yazbeck.

According to research by Bloom and fellow Harvard economist Jeffrey Williamson, this "demographic dividend" accounted for as much as a third of the economic growth enjoyed by a number of East Asian countries like Japan and South Korea between 1965 and 1990. During that time, the dependency ratio in East Asia fell from 0.77 to 0.48 as mortality and fertility rates both fell rapidly (see example in chart). Williamson estimated that a 1 percent increase in the growth rate of the working-age population is associated with a 1.46 percent increase in the growth rate of GDP per capita. Similarly, a 1 percent decrease in the growth rate of the dependent population is associated with a 1 percent increase in the growth rate of GDP per capita.

Of course, demography alone is not enough to produce an economic boom. In order to reap the rewards of the demographic dividend, a country must have the institutions in place to productively put individuals to work. For example, during the same period as the "East Asian miracle," demographic trends in Latin America resembled those of Southeast Asia. But episodes of high inflation, political instability, and restrictive trade or labor policies seem to have prevented those countries from benefiting from the demographic window in the same way. And for countries that do manage to capture the dividend, it doesn't last forever. As the large working-age cohorts approach retirement,
dependency ratios climb again.
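The dependency ratio arithmetic behind these numbers is straightforward; a short sketch can mimic East Asia’s decline from 0.77 to 0.48. The cohort figures below are hypothetical, chosen only for illustration:

```python
def dependency_ratio(young, old, working_age):
    """Dependents (up to age 14, plus 65 and over) per working-age person."""
    return (young + old) / working_age

# Hypothetical populations in millions: a large child cohort in 1965
# has aged into the workforce by 1990, echoing East Asia's transition.
ratio_1965 = dependency_ratio(young=385, old=45, working_age=560)
ratio_1990 = dependency_ratio(young=290, old=70, working_age=750)
print(round(ratio_1965, 2), round(ratio_1990, 2))  # 0.77 0.48
```

A falling ratio means each worker supports fewer dependents, which is the mechanical core of the dividend; as the boom cohort retires, the same arithmetic pushes the ratio back up.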

Demographic Challenges and Opportunities
The last stage of the modern demographic transition is population aging. Gains in life expectancy alone will increase
the number of retirees, but as “baby boomers” age, many
countries face a dramatic reversal of the dependency ratio
declines they enjoyed in previous decades. Japan, one of the
earliest East Asian countries to begin its demographic transition, is now undergoing rapid population aging. About one
in four people are currently over the age of 65, but by 2045
the number could be nearly two in five, according to the
Census Bureau’s international database. European nations
like Germany face similar patterns, as does China.
Just as elevated dependency ratios from high fertility
rates can slow economic growth, an increase in retirees
can have a similar effect. The European Union’s Economic
Policy Committee wrote in 2010 that the increase in the
proportion of retirees will “amplify expenditure on public
pensions and health and long-term care and thus puts a burden on maintaining a sound balance between future public
expenditure and tax revenues.” In addition to the challenges
they pose for public finance, older individuals tend to work
and save less, which means a decline in both labor and capital
for developed economies.
In a 2011 working paper, Bloom, Canning, and fellow
Harvard economist Günther Fink looked at the economic
growth of countries between 1960 and 2005 (when dependency ratios were falling) and estimated what that growth might
have looked like under the projected demographic trends for
2005 to 2050. Out of the 107 countries they analyzed, about
half would have grown more slowly under the aging population trend. The authors estimated that OECD countries
would have grown at 2.1 percent per year rather than the observed 2.8 percent. This means that the average OECD income
per capita of $10,000 in 1960 would have grown to $25,500 in
2005, about $10,000 less than actually observed.
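Those dollar figures follow from simple compound growth over the 45 years between 1960 and 2005, and can be checked directly (a quick verification, not the authors’ own code):

```python
def compound(income, annual_rate, years):
    """Income per capita after growing at a constant annual rate."""
    return income * (1 + annual_rate) ** years

# Average OECD income per capita of $10,000 in 1960, compounded to 2005.
observed = compound(10_000, 0.028, 45)        # actual 2.8 percent growth
counterfactual = compound(10_000, 0.021, 45)  # aging scenario, 2.1 percent
print(round(observed), round(counterfactual))
```

The counterfactual path ends near $25,500, roughly $9,000 below the observed path’s approximately $34,600, in line with the article’s “about $10,000 less.”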
But the authors note that their estimates likely overstate
the effects for a number of reasons. For one thing, populations will adjust to changing demographics. As workers live
longer, healthier lives, they may work longer. Additionally,
other demographic groups may enter the labor force in
greater numbers in response to increased demand for labor
as baby boomers retire. Finally, the demographic shift that
produced the dividend may also help to soften the blow of
population aging: Because of declining fertility, the cohorts
that followed the boom generation have higher levels of
human capital as families and governments invested more in
each child. Their higher productivity could then offset some
of the losses from the large number of retirees.
In contrast, many developing nations have just begun
their demographic transition. Youth dependency ratios in
sub-Saharan Africa appear to have peaked in 1985, about
20 years after East Asia. Fertility and mortality rates have
been falling steadily in many African countries, presenting
the opportunity for an economic growth dividend from
falling dependency ratios. In a 2011 article in Population
Studies, University of Sussex economists Robert Eastwood
and Michael Lipton estimated that between 1985 and 2025,
sub-Saharan African countries may enjoy a demographic dividend equal to 0.32 percent per capita GDP growth per year.
That dividend is smaller than the one enjoyed by East Asia,
but given that demographic changes happen slowly, there is
still time to build up markets and institutions to take even
greater advantage of positive demographic forces.
“In general, the story is quite hopeful,” says Yazbeck.
“But the reality is that this is a country-specific process, so
some countries in Africa will be able to capture a sizable
demographic dividend, and some probably will not.”
Yazbeck and other economists stress that having the
correct policies in place — opportunities for human capital development, robust market economies, and access to
modern health care — is the key to reinforcing and taking
advantage of the demographic changes that have been occurring over the last two centuries. The upside for policymakers
is that many of these policies are beneficial in and of themselves. Reinforcing growth-enhancing demographic changes
is a free bonus.
EF

Readings
Birdsall, Nancy, Allen C. Kelley, and Steven Sinding (eds.). Population
Matters: Demographic Change, Economic Growth, and Poverty in the
Developing World. New York: Oxford University Press, 2001.
Bloom, David E., David Canning, and Günther Fink. “Implications
of Population Aging for Economic Growth.” National Bureau of
Economic Research Working Paper No. 16705, January 2011.
Eastwood, Robert, and Michael Lipton. “Demographic transition
in sub-Saharan Africa: How big will the economic dividend be?”
Population Studies, March 2011, vol. 65, no. 1, pp. 9-35.

Malthus, Thomas Robert. An Essay on the Principle of Population, 1st
edition. London: J. Johnson, St. Paul’s Churchyard, 1798.
Simon, Julian L. The Ultimate Resource 2. Princeton, N.J.: Princeton
University Press, 1996.
Templeton, Scott R., and Sara J. Scherr. “Effects of Demographic
and Related Microeconomic Change on Land Quality in Hills and
Mountains of Developing Countries.” World Development, June
1999, vol. 27, no. 6, pp. 903-918.

Islamic Banking,
American Regulation
For some American Muslims,
Sharia-compliant banks are an important
part of the financial landscape
By Renee Haltom

Most Americans don’t have to think about whether basic banking services
are available. If anything, it feels like the choices in savings accounts, auto
loans, mortgages, and investment vehicles are overwhelming.

Not so for a certain segment of the U.S. population. There were roughly 2.8 million
Muslims in the United States as of 2010, according to the Pew Research Center’s
Religion and Public Life Project, though estimates vary (see map on next page). The
most recent study published by the Association of Statisticians of American Religious
Bodies estimates that Islam was the fastest-growing religion in the United States
between 2000 and 2010. Yet there are relatively few financial products available here
for those followers who require their financial contracts to comply with Islamic laws
and moral codes, called Sharia law.
Muslim Population in the United States

Notes: There were 2,106 congregations and 2.6 million Muslim adherents reported in 592 counties in 2010. The numbers in parentheses indicate the number of counties that fall
within each range. Only two counties — Harris County, Texas, and Cook County, Ill. — were reported as having more than 100,000 adherents.
Source: 2010 U.S. Religion Census: Religious Congregations and Membership

Islamic finance is rooted in the principle that investments should create social value and not merely wealth. The
Quran, the 1,400-year-old text that governs followers of
Islam, prohibits riba, the charging or receiving of monetary
benefit from lending money, interpreted in modern terms as
a prohibition against interest. Islamic finance also prohibits
excess risk or uncertainty (gharar), gambling (maysir), and
sinful activities (haram). Transactions generally must be tied
to real, tangible assets.
Globally, the Islamic finance industry is between $1 trillion and $1.5 trillion in size, according to the World Bank, in
the vicinity of Australia’s or Spain’s gross domestic product.
It’s unsurprising, perhaps, since Muslims are almost a quarter of the world’s population. That’s an upper bound on the
demand for Islamic finance, since not all Muslims demand
Sharia-compliant contracts. But in Muslim-majority countries like Bangladesh, Islamic financial products constitute
as much as two-thirds of total financial sector assets. There
are more than 400 Islamic financial institutions across 58
countries. Roughly 5 percent of total Islamic financial assets
are housed in non-Muslim regions like America, Europe, and
Australia.
The United States’ Muslim population is roughly equal
to that of the United Kingdom, a country that houses $19
billion in Islamic financial institution assets, more than
20 banks, and six that provide Sharia-compliant products
exclusively. Yet our market for Islamic financial products
is much smaller. There’s no single list of participating firms
or aggregate estimate of assets, but one can find roughly
a dozen firms that routinely offer Islamic banking and
investment products to businesses and consumers, though
several don’t even market such products on their websites.
At the same time, this is an industry on the rise. Just 20
years ago, there were few Islamic financial products being
offered at all in the United States. The industry is rapidly
growing and adapting to American regulation. Should
we expect it to be a large presence in our future financial
landscape?

How Does it Work?
Islamic finance may be rooted in ancient texts, but as an
industry it is relatively young.
The broader field of Islamic economics originated
in 1930s India, when the country’s Muslim population,
then about one-fifth of its total, feared marginalization
by British colonialism and the Hindu-led movement for
Indian independence. Heavily indebted Muslim farmers
throughout the country were at risk of losing their land.

Scholars blamed an abandonment of Islamic principles and
called for a return to “true” Islam. Economist Timur Kuran
of Duke University, author of several books and articles on
Islamic economics, has argued that this revival was part
of a broader movement to restore Muslims to their faith,
carve out an identity for Muslim minorities, and generally
protect Muslim interests. The state of Pakistan emerged
from the same effort.
Usury discussions in religious texts far predate this movement, of course. Many followers of Islam, along with other
religions, loosened usury restrictions over time, until the last
century when older notions of usurious interest were revitalized. Still, what constitutes riba has long been controversial.
To some scholars, it means excessive interest — which led
poor, indebted citizens to slavery in medieval times — while
to others, it means any interest at all. Scholars have also
disagreed on the virtues of charging interest for business
investment versus consumption, allowing for inflation compensation, and a host of other matters.
Modern Islamic finance takes the narrower interpretation that no interest is permissible. Three alternative
products are available in the United States. One of the
most common contracts is musharaka, in which the lender
and customer own an asset together, with the borrower’s
share of the property increasing gradually with his payments
until he assumes ownership entirely, with profits and losses
shared. In a murabaha contract, the lender purchases an
asset — a home or even commercial equipment — on behalf
of a borrower, who gradually pays back the principal plus
an agreed-upon markup and assumes ownership at the end.
Ijara contracts resemble a lease-to-own arrangement that
includes both repayment of principal and a rental fee for
exclusive use of the asset.
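To make the musharaka structure concrete, here is a stylized sketch of the buyer’s ownership share rising with each payment. The figures are invented for illustration, and real contracts also charge rent on the financier’s remaining share, which is omitted here:

```python
def ownership_path(price, down_payment, monthly_buyout, months):
    """Customer's ownership share after each monthly equity purchase
    (stylized musharaka: each payment buys out part of the lender's stake)."""
    equity = down_payment
    shares = []
    for _ in range(months):
        equity = min(price, equity + monthly_buyout)
        shares.append(equity / price)
    return shares

# Hypothetical $200,000 home, $40,000 down, $1,000 of equity bought monthly.
shares = ownership_path(200_000, 40_000, 1_000, months=160)
print(round(shares[0], 3), shares[-1])  # 0.205 1.0
```

Once the share reaches 1.0, the financier’s stake is gone and the customer owns the asset outright, mirroring the description above.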
The first bank following Islamic law opened in Egypt in
1963. Following the global oil boom, the industry developed
in earnest in the Middle East in the mid-1970s. In the 1990s,
the first international accounting standards were developed
for Islamic finance, and the first market emerged for Islamic
bonds. Those bonds, called sukuk, tie investments to tangible
assets that issue payment streams based on their revenues,
much like securitized equity financing.
Islamic finance came to the United States in the 1980s
when two institutions opened on the West Coast. Their
investment and home finance services were available only
regionally. The market broadened considerably in the late
1990s, paralleling the Muslim population growth in the
United States: 50 percent in the 1990s, and two-thirds in
the 2000s.
The institutions operational today provide services in
several states, most prevalently where the Muslim population is concentrated. University Islamic Financial (a subsidiary of University Bank) based in Ann Arbor, Mich., serving
the large Muslim population of metropolitan Detroit and
surrounding states, is the first and only exclusively Sharia-compliant bank in the United States — it offers no other
products. Devon Bank in Chicago is the only other bank regularly offering Islamic financing products. Reston, Va.-based Guidance Residential is the largest nonbank financial
institution offering Islamic finance services, having provided
more than $3 billion — which it claims is nearly 80 percent
of the total — in musharaka mortgage financing in 22 states
since its doors opened in 2002. California-based LARIBA is
another large Islamic mortgage lender, and it also provides
business financing.

Is it Really Islamic?
To critics, Islamic finance is a distinction without a difference. According to research by Feisal Khan, an economics
professor at Hobart and William Smith Colleges in upstate
New York, most Islamic finance transactions are economically indistinguishable from traditional, debt- and interest-based finance. Where there is principal and a payment plan,
there is an implied interest rate, Khan argued in a 2010
article. He is not the first economist to make such a claim.
Many Islamic scholars argue that murabaha contracts don’t
share risk and thus are not Sharia compliant — and experts
estimate that such contracts constitute up to 80 percent of
the global Islamic finance volume.
Other economists have noted that the terms of Islamic
financial contracts often move with market interest rates. In
the United States, Islamic financial products are frequently
marketed with information about implied interest rates to
allow customers to compare prices or simply to comply with
American regulation. A study of Malaysia, the world’s largest Islamic finance market, found that Islamic deposit rates
fluctuate in step with market interest rates.
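Khan’s observation that any principal-plus-markup payment plan embeds an interest rate can be made concrete by backing out the internal rate of return of the payment stream; all figures below are hypothetical:

```python
def implied_annual_rate(price, payment, months):
    """Find the monthly rate equating the payment stream's present value
    to the financed price (bisection), then report the effective annual rate."""
    lo, hi = 1e-9, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        pv = payment * (1 - (1 + mid) ** -months) / mid
        if pv > price:
            lo = mid  # payments worth more than the price: rate must be higher
        else:
            hi = mid
    return (1 + mid) ** 12 - 1

# Hypothetical murabaha-style plan: a $100,000 asset resold for
# $120,000, repaid in 120 equal monthly installments of $1,000.
rate = implied_annual_rate(100_000, 1_000, 120)
print(f"{rate:.1%}")  # an implied effective annual rate near 3.8%
```

Pricing the markup this way is precisely how such a contract can be compared with a conventional amortizing loan.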
To economists, it would not be surprising if Islamic and
traditional finance tended to converge. A tenet of banking
theory is that debt contracts with collateral minimize risk
better than equity contracts when it is costly for banks
to identify borrower-specific risks. Equity contracts, by
comparison, entail greater monitoring costs or more risk.
If equity contracts are less efficient, then one would expect
banking institutions to gravitate away from them.
But to Islamic finance advocates, equivalent pricing does
not create an equivalent product. Stephen Ranzini, president and CEO of University Bancorp, the holding company
of the Islamic bank, acknowledges that there are firms that
market themselves as Sharia compliant but that are taking
standard loan documents and replacing the word “interest”
with “lease.” But he says this does not describe the majority
of Islamic financial service providers, who are concerned
with the intent behind Islamic law. “True Islamic finance is
absolutely not the same as traditional finance. The contracts
are different; the risks are different.”
Ranzini also notes that Islamic lending is designed to protect borrowers who fall on hard times: Recourse if a borrower
is unable to pay is rare, and firms generally cannot profit from
a borrower’s financial distress since late fees in most cases can
only cover the cost of collection. Most Islamic financial institutions have a supervisory board of Sharia scholars to review
and approve the details of contracts.
Islamic investment firms have some more obvious differences from traditional finance. Their holdings must not involve alcohol, gambling, pork-based food — and according to some Islamic scholars, defense and weaponry, tobacco, or entertainment. Perhaps surprisingly, the United States is the fourth-largest domicile of Islamic investment funds, due almost entirely to the Amana Mutual Funds Trust based in Bellingham, Wash., whose income and growth funds hold almost $3.5 billion in assets. As a group, Islamic investment funds hold primarily equities. Many employ a third party to screen the investments for Sharia compliance, or halal, often as defined by the Accounting and Auditing Organization for Islamic Financial Institutions, a body that sets global finance standards. Eligible investments typically must not derive more than 5 percent of income from activities considered unethical.

Regulatory Challenges
Regardless of whether Islamic finance is truly distinct, its economic similarities to traditional finance have opened doors in the United States.

Banks here are normally prohibited from taking on partnership or equity stakes in real estate, a provision meant to limit speculation. But in Islamic finance, the bank assumes formal ownership. Regulators in the United States have held, however, that Islamic finance is compatible with the prohibition on real estate investments in some cases. In 1997, the United Bank of Kuwait (UBK), which then had a branch in New York, requested interpretive letters from its regulator, the Office of the Comptroller of the Currency (OCC), on ijara and murabaha mortgage products. The OCC approved them on the very grounds that they were economically equivalent to traditional products. In the OCC’s view, because the purchase and sale transactions are executed simultaneously, the bank’s ownership is merely for “a moment in time,” and therefore the Islamic contracts avoid the type of risk that real estate restrictions were intended to limit. (The joint ownership that defines musharaka contracts, on the other hand, is not currently approved for use by banks and is used in the United States only by nonbank mortgage lenders.) From an accounting standpoint, the transaction appears as a loan (an asset) on the bank’s balance sheet. The borrower is responsible for maintaining the property and paying all expenses, and in the event of default, the bank may sell it to recover what is owed, as in a mortgage. UBK left the U.S. market in 2000 after financing the purchase of 60 homes, but regulators have since applied the OCC’s guidance to other institutions.

In other ways, however, Sharia requirements have made proliferation of Islamic finance difficult. Possibly because the products are unfamiliar to many investors, there is a smaller secondary market for Islamic financial products, so it has been harder for Islamic mortgage lenders to remain liquid, hindering the market’s growth. In the United States, housing agencies Freddie Mac and Fannie Mae started buying Islamic mortgage products in 2001 and 2003, respectively, to provide liquidity, and they are now the primary investors in Islamic mortgages. By 2007, one firm, Guidance Residential, was relying on more than $1 billion in financing from Freddie Mac.

Overall, there are few opportunities to take advantage of economies of scale with Islamic finance. “There’s not a big enough market now for large, national banks to offer Islamic products, and only in states with the largest Muslim concentrations is it worthwhile for the smaller banks to expand into that market,” says Blake Goud, Islamic finance expert with the Thomson Reuters Islamic Finance Gateway.

Moreover, traditional deposit insurance — which banks rely on for stability — is at odds with Sharia law. In 2002, Virginia-based SHAPE Financial Corp. sought FDIC deposit insurance for an Islamic deposit-like product for which returns would fluctuate with the bank’s profits and losses. The FDIC refused because the deposit could decline in value, so SHAPE had to alter the product to be based solely on profit — not loss — sharing. This is now the United States’ only Islamic deposit product, currently being offered by one institution, University Bank. Muslim depositors have been known to donate undesired proceeds to Islamic charities, a way to offset, or perhaps make peace with, a degree of Sharia noncompliance.

Prospects in the United States
Though the growth rate of the American Muslim population may have peaked due to demographics, it’ll remain high in the near term. Globally, the Muslim population is forecast to grow twice as fast as the non-Muslim population through 2030. They’ll continue to be small minorities here but will still more than double in that timeframe.

Some factors seem to suggest there is large latent demand for Islamic financial products in the United States. On average, Muslims in the United States are relatively high income and highly educated. They are also significantly younger than the average population — the median Muslim in North America is just 26, but the average American is 37 — and thus still approaching peak earning years.

But there is little data on what fraction of the U.S.
Muslim population actually demands Sharia-compliant
financial services. “There are not even consistent estimates
of the size of the Muslim population,” Goud notes. A 1998
study from LARIBA contended that at most 2 percent of
American Muslims will use only Islamic financing. The alternative is to not use financial services, to use conventional
Western financial products, or to rely on informal avenues,
such as borrowing and investing among family and friends.
There are limited data from countries with larger Muslim
populations. A 2013 World Bank study of 64 such countries found that Muslims were significantly less likely than
non-Muslims to have formal banking accounts, but they
were no less likely to use financial services overall. It’s not
clear whether that suggests simply a preference for informal
financial services, or rather that customers could be drawn in
if the right compliant products were available. Four percent
of unbanked people in non-Muslim countries cite religious
reasons, according to the World Bank, but the number is
7 percent in Organization of Islamic Cooperation countries,
suggesting that Muslims may be somewhat more likely to
have religious reasons for avoiding formal financial services.
There are no data on whether U.S. Muslims are relatively
unbanked. Only one-third of U.S. Muslims own their homes,
compared with 58 percent of the general public, although
that discrepancy could be partly explained by the relatively
young age of the U.S. Muslim population (the average first-time American homebuyer is 34 years old).
At the same time, there’s no reason Islamic financial
products must be restricted to Muslims, Ranzini says. For
example, there is considerable overlap between Islamic
finance and so-called “socially responsible” investing, such
as mutual funds that buy equities of environmentally friendly
or tobacco-free companies. A 2013 survey commissioned
by Abu Dhabi Islamic Bank found that between 12 percent
and 20 percent of customers in Turkey, Egypt, the United
Arab Emirates, and Indonesia said they would bank only
with Sharia-compliant institutions. But up to half said they
preferred ethical investing, whether or not it was Islamic. If
anything, Goud argues, Islamic standards are more restrictive because “socially responsible” investment products generally do not exclude leverage.
Because of restrictions on leverage, proponents argue
that Islamic finance could be good for financial stability.
“Islamic investors sold their stock in Worldcom and Enron
as those companies’ leverage levels rose. Some potentially
bad behaviors — excessive leverage and excessive financial
engineering — wouldn’t even be possible in Islamic finance,”
Ranzini says. Globally, Islamic finance assets have grown
by more than 20 percent annually since the financial crisis,
according to the Islamic Financial Services Board (IFSB), a
multinational assembly that sets international standards for
the industry.
It’s not that Islamic banks are better performers as a rule,
since what they gain in safety, they may lose in efficiency.
Where the differences seem to matter is during crises. A
study by international economists Thorsten Beck, Asli
Demirgüç-Kunt, and Ouarda Merrouche of several hundred
institutions in 22 countries found that while Islamic banks
tend to be less efficient, they are less prone to disintermediation during financial crises, when they remain better
capitalized with lower loan losses. Separate studies by the
International Monetary Fund and the IFSB also found superior performance following the 2007-2008 crisis.
Another factor is that non-Muslim governments are
moving toward issuing sukuk to draw the investment of
oil-rich Muslim countries. In June, the United Kingdom
issued more than $330 million in sukuk — compared with
more than $100 billion in global sukuk offerings in 2013 —
becoming the first country outside the Islamic world to do
so. Prime Minister David Cameron said he wanted to make
London “one of the great capitals of Islamic finance anywhere in the world.” Luxembourg, Hong Kong, and South
Africa have announced plans for their own offerings. Sukuk
may also provide liquid assets to help domestic Islamic
banks manage their balance sheets.
Whether or not Islamic finance continues to grow in the United States, the market is already a small but significant segment of the American financial system.
EF

Readings
Beck, Thorsten, Asli Demirgüç-Kunt, and Ouarda Merrouche.
“Islamic vs. Conventional Banking: Business Model, Efficiency,
and Stability.” Journal of Banking and Finance, February 2013, vol.
37, no. 2, pp. 433-447.

Chong, Beng Soon, and Ming-Hua Liu. “Islamic Banking: Interest-Free or Interest-Based?” Pacific Basin Finance Journal, January 2009, vol. 17, no. 1, pp. 125-144.

Khan, Feisal. “How ‘Islamic’ is Islamic Banking?” Journal of
Economic Behavior & Organization, December 2010, vol. 76, no. 3,
pp. 805-820.
Kuran, Timur. “The Genesis of Islamic Economics: A Chapter
in the Politics of Muslim Identity.” Social Research, Summer 1997,
vol. 64, no. 2, pp. 301-338.


Expanding
Unemployment
Insurance

Longer unemployment benefits often mean longer unemployment
spells, but economists say that’s not always a bad thing
By Tim Sablik

As unemployment surged during the 2007-2009
recession, individuals who lost jobs turned to
unemployment insurance (UI) for support. In
normal times, states provide up to 26 weeks of UI benefits funded by a tax on employers. On average, these
benefits replace about half of a worker’s previous weekly
wages. Since the 1970s, states and the federal government
have also shared the cost of providing an additional 13
or 20 weeks of benefits to states with exceptionally high
unemployment. During the last recession, the federal
government took on 100 percent of the cost of these emergency benefits. Congress also enacted a series of additional
extensions based on individual state unemployment rates.
The combined programs meant that unemployed workers
in many states could receive an unprecedented 99 weeks of
UI benefits between 2009 and 2012 (see chart).
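The 99-week maximum is just the sum of the layered programs; the split of the federal emergency weeks below is illustrative of a high-unemployment state around 2010 rather than an exact statutory schedule:

```python
# Stylized tally of maximum UI weeks in a high-unemployment state, c. 2010.
regular_state = 26       # standard state benefits
extended_benefits = 20   # Extended Benefits in high-unemployment states
emergency_federal = 53   # congressional emergency extensions (illustrative)

total_weeks = regular_state + extended_benefits + emergency_federal
print(total_weeks)  # 99
```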
Proponents of the UI extensions argue that they provide
valuable assistance to individuals struggling to find work
in a weakened labor market. This allows the unemployed
to maintain their consumption, supporters say, which also

[Chart: Maximum Available Weeks of UI Benefits in Median State, 2008-2014 (weeks of UI benefits, 0 to 120). Source: Katharine Bradbury, “Labor Market Transitions and the Availability of Unemployment Insurance,” Federal Reserve Bank of Boston Working Paper No. 14-2.]
helps boost the economy. But critics of the large extensions
argue that UI provides a disincentive to look for work until
the benefits expire, prolonging unemployment spells.
The emergency benefits expired on Dec. 28, 2013, returning the maximum duration for benefits to 26 weeks in most
states. (North Carolina cut its benefits six months earlier; see
“Moral Hazard and Measurement Hazard,” p. 44). Lawmakers
who favored the expiration say that labor market conditions
have improved five years after the official end of the recession
and that eliminating emergency benefits will improve conditions further by prompting more job seekers to find work.
They point to the drop in unemployment from 6.7 percent to
6.1 percent in the seven months since the program expired as
evidence of this improvement. But others in Congress want
to reinstate the emergency benefits, arguing that labor market
conditions are still weak and the falling unemployment rate
reflects job seekers giving up rather than finding work; job
seekers still need the additional help, they say.
Most economists agree that UI extensions contribute to
longer unemployment spells, but the magnitude and importance of that effect are debated. Empirical evidence from the
Great Recession suggests that the extended UI benefits had a
small impact on unemployment duration, but there are other
factors to consider as well when evaluating the program.

Insurance and Incentives
Searching for a job while unemployed is costly. Without
access to income, job searchers must rely on accumulated
savings or borrow to cover expenses while they find a new
job. Research has shown that the average household in the
United States does not have enough saved to weather prolonged joblessness. This means that laid-off workers might
be forced to drastically reduce consumption, increase debt,
or take the first job for which they qualify — even if they
are overqualified. The latter is inefficient, resulting in lost
productivity. UI benefits ease these constraints, allowing

recipients to search longer and find a better-fitting replacement job. Labor economists call this the “liquidity effect,”
and to the extent it drives the longer unemployment spells
associated with UI, it’s not a bad thing.
“If what we see is just the liquidity effect, it means that
we’ve helped job seekers better optimize their own welfare
and society’s welfare,” says Jesse Rothstein, an economist at
the University of California, Berkeley.
Like all insurance programs, however, UI runs the risk of
encouraging the thing it is insuring against: unemployment.
Because UI protects recipients from a portion of their wage
losses, they may have less incentive to search for a replacement job until those benefits expire. Under this “moral
hazard” interpretation, UI extends the duration of unemployment spells not because recipients are benefiting from
reduced liquidity constraints to find a better job match,
but because they are essentially “milking the system” before
beginning their job search in earnest.
How do economists distinguish between these two
effects? One way is to survey how UI recipients actually
spend their time. In a 2010 Journal of Public Economics
article, Princeton University economist Alan Krueger and
Columbia University economist Andreas Mueller looked
at data from the American Time Use Survey, which asks
participants to keep a journal of how they spend their time
each day. Krueger and Mueller found that UI recipients
significantly increased job search efforts as their benefits
approached expiration, while job seekers who were ineligible
for UI benefits exhibited no such spike.
While such evidence points to moral hazard, there is
also evidence that supports the liquidity effect as a driving
factor of extended unemployment duration. Economists
have compared UI to unemployment programs that do not
suffer from moral hazard risk, such as lump-sum severance
payments. Since severance payments provide cash up front,
there is no incentive for recipients to extend their unemployment duration.
In a 2007 Quarterly Journal of Economics article, David
Card of the University of California, Berkeley, Raj Chetty of
Harvard University, and Andrea Weber of the University of
Mannheim found that UI and severance payments in Austria
extended unemployment duration by similar amounts. This
suggests most UI recipients are not motivated to abuse the
system.
“From that evidence, one can conclude that it’s generally
beneficial to provide relatively generous unemployment
insurance,” says Mueller.
It’s possible that different effects dominate depending on
economic conditions, however. During recessions, when the
labor market is weak, UI recipients may not have the ability
to pick and choose among job offers, and the moral hazard
effect may consequently be much less pronounced. In a 2011
paper, Johannes Schmieder of Boston University, Till von
Wachter of the University of California, Los Angeles, and
Stefan Bender of the Institute for Employment Research
looked at data from Germany over a 20-year period to see

if the effects of UI varied across the business cycle. They
found very little difference in UI’s effect on unemployment
duration across the cycle, though disincentive effects were
slightly smaller during downturns.
But even if the effects of UI on unemployment duration
were entirely driven by moral hazard, the overall effect may
not be very large. Rothstein looked at data from the Great
Recession and found that UI extensions raised the unemployment rate by at most half a percentage point in early
2011. Several other studies have found similar or smaller
effects.
“Even if none of what we observe is driven by the liquidity
effect, the moral hazard is still much smaller than what we
previously thought,” says Rothstein.

Macroeconomic and Long-Term Effects
Proponents of expanding UI benefits during economic
downturns also argue that it helps the broader economy,
not just individual recipients. To the extent that recipients
are liquidity-constrained, increasing benefits allows them
to smooth consumption. In addition to making recipients
better off, proponents argue this elevates consumption levels for the overall economy. In a key study from 1994, MIT
economist Jonathan Gruber found that UI benefits helped
recipients in the United States maintain consumption close
to their pre-unemployment level. Without the benefits, recipients' consumption would have fallen by 22 percent, three
times more than it did.
But just as UI affects individual incentives, it can also
shape the incentives of employers to create jobs, which can
have a negative effect on the broader economy. UI eases the
liquidity constraints of job seekers and allows them greater
ability to hold out for higher-paying jobs. All else equal, that
pushes up the average threshold wage that would persuade a
worker to take a job. Since the marginal profit from hiring is
reduced, employers may post fewer vacancies.
Mueller says that macro effects like these are very difficult to assess empirically, but it is important to keep them in
mind when determining how much — and for how long —
to expand UI benefits. “The disincentive effects from UI are
not that large,” he says. “But it is important to scale benefits
down at some point because of the possibility that providing
high benefits for a very long time changes cultural norms
such that people begin to rely more on the program. If that
were to happen, the disincentive effects might become
larger than what we measure now.”
Indeed, there is some evidence that keeping expanded
benefits in place for too long can change job seeker behavior
over time. Thomas Lemieux of the University of British
Columbia and W. Bentley MacLeod of the University of
Southern California studied the effects of a
major expansion in UI generosity implemented in Canada
in 1971. The Canadian government reduced the duration of
previous work required to qualify for the program from 30
weeks in a two-year period to eight weeks in a single year,
continued on page 35


Interview

Nicholas Bloom

Editor’s Note: This is an abbreviated version of EF’s conversation with Nicholas Bloom. For the full interview go to our website:
www.richmondfed.org/publications



EF: “Uncertainty” is a broad term. What does it mean in
your research, and how can we measure it?
Bloom: There isn’t a standard accepted definition. The
average Joe on the street would say that uncertainty is
not knowing the future. For example, the outcome of the
Giants-Royals World Series is uncertain when it’s happening. And that definition works well in most contexts.
In economic models this can be formally represented as
the “stochastic [random] volatility” of factors — such as productivity or demand — that drive economic activity. When
volatility is higher, uncertainty would be higher. That’s the
definition financial economists would use and I typically
have used when modelling uncertainty shocks.
There is another definition going back to Frank Knight,
the late Chicago economist. He defined “risk” as when you
have a known distribution for a future outcome and uncertainty as when you have an unknown distribution. For example, the outcome of a coin flip is risky, while the economy
was uncertain post 9/11 because it was almost impossible to
predict what would come next. This definition of uncertainty is often called Knightian Uncertainty.
In terms of measuring uncertainty in the economy,
we currently only have proxies — stock market volatility,
newspaper mentions of uncertainty, or the volatility of macroeconomic data. But that’s something I hope will improve
over time.
The old example of an uncertainty shock that I used
in my Ph.D. work in the early 2000s was 9/11. This event

Photography: Winni Wintermeyer

There’s no question that the policies used to treat the
Great Recession and its aftermath were extraordinary.
After the housing decline and financial crisis cast
doubt over trillions of dollars in financial assets worldwide, policymakers responded in kind with large-scale,
unprecedented policies that generated uncertainty
about future policy.
One question on many people’s minds was, to what
extent was policy uncertainty making the recession
worse? And exactly how large had policy uncertainty
become? Some said policy had created too much uncertainty, while others said policymakers hadn’t done
enough to mitigate the economic uncertainty caused
by the recession.
This debate put Stanford University economist
Nicholas Bloom’s research in the spotlight. When Bloom
started his Ph.D. at University College London in
the mid-1990s, he was mainly interested in adjustment
costs: how expensive it is to hire or fire a worker, or to
buy a piece of equipment and get rid of it. Bloom thought
adjustment costs would be even more important in an
uncertain environment, which would make mistakes
more likely. He has devoted much of his research career
since then to quantifying uncertainty and measuring how
it affects the economy, with several measures displayed
on the website PolicyUncertainty.com.
After earning his doctorate in economics in 2001,
Bloom worked at the management consulting firm
McKinsey & Co. and became interested in a second
hard-to-measure phenomenon: the effect of good versus
bad management practices on the productivity of firms.
With co-authors, he launched the World Management
Survey, which documents management practices across
more than 10,000 firms worldwide in manufacturing,
retail, schools, and hospitals.
Large-scale measurement, Bloom says, is the next
frontier in research on both uncertainty and management. It wasn’t long ago that economists were skeptical
of efforts to accumulate comprehensive datasets over
time, such as the measures of aggregate economic activity that Simon Kuznets pioneered in the 1940s. Today,
it is hard to imagine policymaking without them. With
Bloom and his co-authors’ continued efforts, research
on uncertainty and the effects of management may follow the same path.
Renee Haltom interviewed Bloom via videoconference in October 2014.

generated a spike in every measure of uncertainty. Then the Great Recession hit, and this made the 9/11 uncertainty spike look like a small blip. Measures of uncertainty — like the VIX index of stock market volatility [the Chicago Board Options Exchange Market Volatility Index], which measures the market's expected volatility over the next 30 days — went up by about 500 percent. Similarly, newspaper indices of uncertainty jumped up by about 300 percent. Even the Federal Reserve's Beige Book had a surge of discussion of uncertainty — before the Great Recession, each month had about three or four mentions of the word "uncertain," but after the Great Recession it hit nearly 30.

Interestingly, the Great Depression of 1929-1933 was another period where there was broad concern over uncertainty. Newspaper coverage of uncertainty and stock market volatility rose sharply in this period. In fact, one of Ben Bernanke's key papers before he became Fed chairman was, amazingly, on how uncertainty can impair investment. Christina Romer, chair of President Obama's Council of Economic Advisers during the Great Recession, had studied uncertainty too. So some of the key policymakers in Washington at the time were acutely aware of what uncertainty could do to an economy.

EF: What are the most important things we learned in the Great Recession and its aftermath about the effects of uncertainty?

Bloom: One obvious lesson is that high uncertainty can indeed slow economic growth in the short run. The basic idea is that firms and consumers struggle to make decisions if they are really uncertain about the future. The reason being that bad decisions, such as investments or hires that you come to regret in the future, are often costly to reverse. In economics terms, firms face "adjustment costs." So when uncertainty spikes, the natural response is to pause to avoid making a costly mistake. And of course, if every firm and consumer in the economy pauses, a recession ensues.

Therefore, the second lesson is the medical principle of "first, do no harm." It may be that policy actions generate more uncertainty damage than help. One reason is that policymakers have an incentive to be policy hyperactive. I saw this when I worked in the U.K. Treasury. Politicians had to be seen as acting in response to bad events; otherwise, the public and media claimed they were not responding or, worse, claimed they didn't care. So politicians would act, often based on partial information or hastily developed ideas, when often the best course would be to stay calm and inactive.

So hasty or unpredictable policy responses to recessions can actually make the recessions worse. A classic example is the accelerated depreciation allowance that Congress debated introducing for several months after the 9/11 attacks. Many commentators argued that this delayed the recovery as businesses waited to see what the decision would be. In fact, the Nov. 6, 2001, FOMC minutes even contained an explicit discussion of the damaging policy uncertainty this introduced.

EF: To what extent does uncertainty cause recessions, versus recessions causing uncertainty?

Bloom: This is a key question in the literature. Economists love clean models and clean stories, but I think in this case we have to recognize that causation runs both ways.
Recessions typically start with a nasty shock — like an oil shock, a financial crisis, or a war — a negative "first moment" shock, in the language of economics models. These shocks also induce uncertainty, known as a "second moment" shock. For example, both of the oil shocks in the 1970s pushed the economy into recession through higher oil prices, but they also increased uncertainty over future oil prices and global economic growth. Likewise, the recent U.S. and European housing and financial crises were both bad news but also increased economic uncertainty.

Moreover, recessions tend to induce uncertainty on an ongoing basis. As conditions worsen, businesses slow down, firms fail, and consumers change behavior. Likewise, as policymakers try to revive growth, they tend to try increasingly extreme policies, which have the negative side effect of increasing uncertainty. So recessions and uncertainty are tied together in a vicious cycle. Uncertainty leads to recession, which increases uncertainty, making the recession worse.

EF: How big a factor was policy uncertainty in the severity of the Great Recession and its slow recovery?

Bloom: That's a very tough question to answer. The full experiment is this: If you held everything else constant and did not have the rise in uncertainty, what would have happened to the drop in economic output? I think, based on some rough calculations I lay out in my 2014 Journal of Economic Perspectives paper, that the recession would have been about one-third less. So I think uncertainty was a major factor, though not the biggest factor, which I think was a combination of the housing and financial crises.

If you then break out policy uncertainty from uncertainty, it's even harder to tell. From my paper with Scott Baker and Steve Davis, the best evidence that it matters is when we look at individual sectors. We interact our policy uncertainty measure with sector-level measures of the exposure to


government, meaning the share of sector revenue that comes from government contracts. The share is very high for defense, health care, and construction. When policy uncertainty was higher, those sectors had much more stock market volatility and had far bigger reductions in investment and employment. That's even after controlling for other factors, like the level and forecast of government spending. So policy uncertainty does appear to be damaging, particularly in government-dependent sectors like health and defense.

But aggregating those numbers, from one sector to the overall economy, is hard. My guess would be that policy uncertainty caused 10 to 20 percent of the recession, but that's a pretty wild guess. And even if we can show there's a negative effect of policy uncertainty overall, it's hard to talk about the effects of one individual policy or another. Hopefully that'll be the end game for this research, but we're not there yet.

➤ Present Position
Professor of Economics, Stanford University Economics Department; Professor by Courtesy, Stanford University Graduate School of Business; Co-Director of the Productivity, Innovation and Entrepreneurship Program, National Bureau of Economic Research

➤ Selected Previous Positions
Research Fellow, Centre for Economic Performance, London School of Economics (2003-2006); Management Consultant, McKinsey & Company (2002-2003); Business Tax Policy Advisor, Her Majesty's Treasury (2001-2002); Research Economist, Institute for Fiscal Studies (U.K.; 1996-2002)

➤ Education
B.A., Cambridge University (1994); MPhil., Oxford University (1996); Ph.D., University College London (2001)

➤ Selected Publications
"Does Management Matter: Evidence from India," Quarterly Journal of Economics, 2013 (with Benn Eifert, Aprajit Mahajan, David McKenzie, and John Roberts); "Identifying Technology Spillovers and Product Market Rivalry," Econometrica, 2013 (with Mark Schankerman and John Van Reenen); "The Impact of Uncertainty Shocks," Econometrica, 2009; numerous other articles in journals such as the American Economic Review, Review of Economic Studies, and Journal of Public Economics

EF: Another branch of your research has focused on how management practices affect firm and country productivity. Why do you think management practices are so important?

Bloom: My personal interest was formed by working at McKinsey, the management consulting firm. I was there for about a year and a half, working in the London office for industrial and retail clients.

There's also a lot of suggestive evidence that management matters. For example, Lucia Foster, John Haltiwanger, and Chad Syverson found using census data that there are enormous differences in performance across firms, even within very narrow industry classifications. In the United Kingdom years ago, there was this line of biscuit factories — cookie factories, to Americans — that were owned by the same company in different countries. Their productivity variation was enormous, with these differences being attributed to variations in management. If you look at key macro papers like Robert Lucas' 1978 "span of control" model or Marc Melitz's 2003 Econometrica paper, they also talk about productivity differences, often linking this with management.

Economists have, in fact, long argued that management matters. Francis Walker, a founder and the first president of the American Economic Association, ran the 1870 U.S. census and then wrote an article in the first year of the Quarterly Journal of Economics, "The Source of Business Profits." He argued that management was the biggest driver of the huge differences in business performance that he observed across literally thousands of firms.

Almost 150 years later, work looking at manufacturing plants shows a massive variation in business performance; the 90th percentile plant now has twice the total factor productivity of the 10th percentile plant. Similarly, there are massive spreads across countries — for example, U.S. productivity is about five times that of India.

Despite the early attention on management by Francis Walker, the topic dropped down a bit in economics, I think because "management" became a bad word in the field. Early on I used to joke that when I turned up at seminars people would see the "M-word" in the seminar title and their view of my IQ was instantly minus 20. Then they'd hear the British accent, and I'd get 15 back. People thought management was quack doctor research — all pulp-fiction business books sold in airports.

Management matters, obviously, for economic growth — if we could rapidly improve management practices, we would quickly end the current growth slowdown. It also matters for public services. For example, schools that regularly evaluate their teachers, provide feedback on best practices, and use data to spot and help struggling students have dramatically better educational outcomes. Likewise, hospitals that evaluate nurses and doctors to provide feedback and training, address struggling employees, and reward high performers provide dramatically better patient care. I teach my Stanford students a case study from Virginia Mason, the famous Seattle hospital that put in place a huge lean-management overhaul and saw a dramatic improvement in health care outcomes, including lower mortality rates. So if I get sick, I definitely want to be treated at a well-managed hospital.

EF: How much of the productivity differences that you just discussed are driven by management?

Bloom: Research from the World Management Survey that Raffaella Sadun, John Van Reenen, and I developed

suggests that management accounts for about 25 percent of
the productivity differences between firms in the United
States. This is a huge number; to give you a benchmark,
IT or R&D appears to account for maybe 10 percent to
20 percent of the productivity spread based on firm and
census data. So management seems more important even
than technology or innovation for explaining variations in
firm performance.
Coincidentally, you do the same exercise across countries
and it’s also about 25 percent. The share is actually higher
between the United States and Europe, where it’s more like
a third, and it’s lower between the United States and developed countries, where it’s more like 10 to 15 percent.
Now, you may not be surprised to learn that there are
significant productivity differences between India and the
United States. But you look at somewhere like the United
Kingdom, and it’s amazing: Its productivity is about 75
percent of America’s. The United Kingdom is a very similar
country in terms of education, competition levels, and many
other things. So what causes the gap? It is a real struggle to
explain what it is beyond, frankly, management.
EF: What can policy do to improve management practices?
Bloom: I think policy matters a lot. We highlight five
policies. One is competition. I think the key driver of
America’s management leadership has been its big, open,
and competitive markets. If Sam Walton had been based
in Italy or in India, he would have five stores by now,
probably called “Sam Walton’s Family Market.” Each one
would have been managed by one of his sons or sons-in-law.
Whereas in America, Walmart now has thousands of stores,
run by professional nonfamily managers. This expansion of
Walmart has improved retail productivity across the country.
Competition generates a lot of diversity through rapid entry
and exit, and the winners get big very fast, so best practices
spread rapidly in competitive, well-functioning markets.
The second policy factor is rule of law, which allows
well-managed firms to expand. Having visited India for the
work with Benn Eifert, Aprajit Mahajan, David McKenzie,
and John Roberts, I can say this: The absence of rule of law
is a killer for good management. If you take a case to court in
India, it takes 10 to 15 years to come to fruition. In most developing countries, the legal system is weak; it is hard to successfully prosecute employees who steal from you or customers
who do not pay their invoices, leading firms to use family
members as managers and supply only narrow groups of trusted customers. This makes it very hard to be well managed — if
most firms have the son or grandson of the founder running
the firm, working with the same customers as 20 years ago,
then it shouldn’t be surprising that productivity is low. These
firms know that their sons are often not the best manager, but
at least they will not rampantly steal from the firms.
The third policy factor is education, which is strongly correlated with management practices. Educated and numerate

employees seem to more rapidly and effectively adopt efficient management practices.
The fourth policy factor is foreign direct investment, as
multinational firms help to spread management best practices around the world. Multinational firms are typically
incredibly well run, and that spills over. It’s even true in
America, where its car industry has benefited tremendously
from Honda, Toyota, Mitsubishi, and Volkswagen. When
these foreign car manufacturers first came to America, they
achieved far higher levels of productivity than domestic
U.S. firms, which forced the American car manufacturers to
improve to survive.
The fifth factor is labor regulation, which allows firms to
adopt strong management practices unimpeded by government. In places like France, you can’t fire underperformers,
and as a result, it’s very hard to enforce proper management.
EF: Management practices can be viewed as “soft” technologies, compared to so-called “hard” technologies
such as information technology. Do you see anything
special about the invention and adoption of these “soft”
technologies relative to “hard” technologies?
Bloom: The only distinction is that hard technologies, like
my Apple iPhone, are protected by patents, whereas process
innovations are protected by secrecy.
The late Zvi Griliches, a famous Harvard economist,
broke it down into two groups: process and product innovations. Most people who think of innovation think of product
innovations like the shiny new iPhone or new drugs. But
actually a lot of it is process innovations, which are largely
management practices.
Good examples would be Frederick Winslow Taylor and
scientific management 100 years ago, or Alfred Sloan, who
turned a struggling General Motors into the world’s biggest
company. Sloan pushed power and decision-making down to
lower-level individuals and gave them incentives — called the
M-form firm. It seems perfectly standard now, but back then
firms were very hierarchical, almost Soviet-style. And then
there was modern human resources from the 1960s onward
— the idea that you want to measure people, promote
them, and give them rewards. Most recently, we have had
“lean manufacturing,” pioneered by Toyota from the 1990s
onward, which is now spreading to health care and retail. This
focused on data collection and continuous improvement.
These have been major milestones in management technologies, and they’ve changed the way people have thought. They
were clearly identified innovations, and I don’t think there’s
a single patent among them. These management innovations
are a big deal, and they spread right across the economy.
In fact, there’s a management technology frontier that’s
continuously moving forward, and the United States is pretty
much at the front with firms like Walmart, GE, McDonald’s,
and Starbucks. And then behind the frontier there are a bunch
of laggards with inferior management practices. In America,
these are typically smaller, family-run firms.


EF: What are the key challenges for future research on management?

Bloom: One challenge is measurement. We want to improve our measurement of management, which is narrow and noisy.

The second challenge is identification and quantification: finding out what causes what and its magnitude. For example, can we quantify the causal impact of better rule of law on management? I get asked by institutions like the World Bank and national governments which policies have the most impact on management practices and what size impact this would be. All I can do is give the five-factor list I've relayed here; it's very hard to give any ordering, and there are definitely no dollar signs on them. I would love to be able to say that spending $100 million on a modern court system will deliver $X million in extra output per year.

One way to get around this — the way macroeconomists got around it — is to gather great data going back 50 years and then exploit random shocks to isolate causation. This is what we are trying to do with the World Management Survey. The other way is a bit more deliberate: to run field experiments by talking with specific firms across countries.

EF: Speaking of the World Management Survey, is there any precedent for it, or is it the first of its kind?

Bloom: I'm not aware of anything long lasting. There have been previous attempts to do cross-country management surveys, but what happened is they ran one or two waves and then hit serious issues with comparability and sustainability. You've got to be very consistent on methodology across countries and across time, which is very hard. The alternative model is to have each country fund and run its own survey, but then you've got an apples and oranges problem. I think we're the first to be very systematic by trying to apply tightly the same methodology across countries.

The U.S. Census also ran a management survey in 2010. It's called MOPS, the Management and Organizational Practices Survey, and it surveyed 50,000 American factories. We're working with them on redoing that in 2015 to start tracking differences. The Germans, the Pakistanis, and the Canadians are also putting management questions into their censuses.

EF: You've spent a lot of your career trying to quantify the seemingly unquantifiable, such as uncertainty and the effects that trust and management practices have on productivity. Is that a coincidence?

Bloom: Anything that can be said to be "high" or "low" can be quantified, and economics is good at this; it's one of our strengths as a social science.

I chose these two topics — uncertainty and management — more by good luck than by design. During my Ph.D. studies, I became interested in estimating adjustment costs and from that moved into the literature on real options, which naturally led to uncertainty. I realized the empirical literature on uncertainty was relatively small compared to the theoretical literature, and I started to work on that. I was fortunate to have been doing that in the early 2000s, before the Great Recession, which kicked this topic up into public consciousness. And my interest in management came from working at McKinsey as a consultant and noticing the huge differences in management practices across firms and how this seemed to drive massive performance differences, but management was mostly ignored by economists.

There's an old saying: What gets measured gets managed. I think in economics it's what gets measured gets researched. A great example is the patents database at the National Bureau of Economic Research, put up by Bronwyn Hall, Adam Jaffe, and Manuel Trajtenberg. The database is unbelievable and has really generated enormous growth in the innovation field. Likewise with management — we hope if we can build a new multifirm and multicountry database, we can spur the development of the field.

EF: What are you working on next?

Bloom: A range of topics, but focused on uncertainty and management in particular. One is trying to improve our measurement and understanding of uncertainty. As I mentioned earlier, we currently only have proxies. I hope to more directly measure firm-level uncertainty, which is what ultimately drives business decisions, and use this to measure and model the impact of uncertainty on the economy. This measure would be based on the expectations of firms. I have been working with the Atlanta Fed and the Census Bureau to develop large-scale, monthly surveys of distributional expectations of many thousands of U.S. firms across the country.

A second area is trying to improve our time-series and cross-country measurement of management to get at many of the policy questions we've discussed. To understand, for example, the impact of the rule of law or competition on management and growth, we need to collect data before and after major reforms. Building large international panel datasets is the best way to do this. Alongside this, I am continuing to work on field experiments on management in the United States and abroad to try to pinpoint some key drivers in a laboratory-style environment.

As you've seen in the questions you've asked, on uncertainty in particular, it's still hard to address some of the policy questions on these topics. For both uncertainty and management, I think measurement is the way to get at causation and policy implications.

EF

26

E co n F o c u s | S e c o n d Q u a rt e r | 2 0 1 4

The Profession

Breaking into the Mainstream
By Aaron Steelman

In 2003, administrators at the University of Notre Dame
decided to split the Department of Economics into
two: the Department of Economics and Policy Studies
(DEPS) and the Department of Economics and Econometrics
(DEE). Why the divide? In large part because there were significant differences in methodological approaches and fields
of study within the department.
Those who considered themselves within the “mainstream” of the profession, generally using a neoclassical
framework to examine issues such as economic growth
and industrial organization, tended to move to the DEE.
Those whose work was generally considered more “heterodox” or “pluralistic,” employing a variety of methodological
approaches to address questions regarding race and gender,
inequality, and the development of economic thought,
among others, tended to form the nucleus of the DEPS.
Less than a decade later, the DEPS was closed by university
administrators and what was simply called the Department
of Economics emerged again.
Faculty within the DEE tended to neatly fit into the new
department, while many faculty members within the DEPS
moved to various departments throughout the university.
Developments at Notre Dame reflect divisions within the
economics profession more broadly. Heterodox economists
have formed roughly 20 associations around the world,
including the Union for Radical Political Economics and the
Society for the Advancement of Socio-Economics. Most of
their members are considered to be on the left of the political spectrum and have clustered in a relatively small number
of Ph.D.-granting institutions around the country, including
American University (AU), Colorado State University, the
University of Massachusetts at Amherst (UMass), and the
University of Missouri-Kansas City.
Not all departments with a strong heterodox presence
are generally considered left-leaning, however. For instance,
at Clemson University, Florida State University, and George
Mason University (GMU), all of which also offer Ph.D. programs, students can work with faculty members interested in
Austrian economics, public choice analysis, and experimental economics. Many of the prominent figures in those fields
are thought of as vigorous defenders of the free market.
Does that mean such departments are explicitly ideological, as some have charged? Not necessarily. Economists at
those institutions, like nearly everyone in the profession,
have opinions about what the world ought to look like. “We
all come to the study of economics with a set of predispositions,” says Robert Pollin, an economist at UMass. “I am
quite open about my commitment to egalitarianism as a
general pre-analytical social commitment. I think it is fair
to say that virtually all of my UMass colleagues share that

commitment in various specific ways.” But economists who
claim to be “apolitical centrists,” Pollin says, also have “a
pre-analytic vision, no less than being an egalitarian.”
Moreover, holding relatively strong normative views
doesn’t mean that ethical principles necessarily trump scientific investigation. Pete Boettke, an editor of the Review
of Austrian Economics, did his Ph.D. at George Mason and
taught at three institutions, including New York University,
once considered the leading center of study in Austrian economics, before returning to GMU.
Boettke notes that “GMU is often misunderstood by
outsiders because so many of our faculty are significant
national and international voices in defending the free
market that outsiders tend to think of the place as rather
homogeneous.” But, he says, “when it comes to economic
theory and economic methodology, GMU is one of the most
diverse scientific environments,” a setting where economics,
not ideology, is stressed. Pollin adds that if “one wants a
solid grounding in mainstream economics and one wants to
develop technical skills necessary to operate effectively as a
professional economist, then UMass is a truly outstanding
place to drink in all that economics has to offer.”
No heterodox department is generally considered to
be among the top 50 departments in the country. But that
doesn’t mean the students drawn to them are mediocre.
Mieke Meurs is an economist and a former Ph.D. program
director at AU. “Every year we attract students of a quality
that one would not expect, given our ranking. These students come to AU because they want to study a variety of
approaches to economic questions,” she says. “One former
student explained it this way: If the only tool you have is a
hammer, every problem looks like a nail. I use this analogy to
talk about the usefulness of heterodox approaches.”
Nearly all students from heterodox departments find jobs
as professional economists. But most find employment at
liberal arts colleges, branch campuses of state universities,
and nonprofit institutions. Not many are able to break into
departments at highly ranked research universities.
Will today’s heterodox departments generally stay on the
outside looking in at the heart of the profession? Probably
for quite some time. But many heterodox economists point
to an example from the 1950s and early 1960s. At that time,
economists at the University of Chicago such as Milton
Friedman and George Stigler led the challenge to the prevailing Keynesian orthodoxy. Within two decades they were
at the forefront of the profession and had built Chicago, an
already strong department, into a powerhouse. No one is
predicting a similar ascendancy for AU, GMU, or UMass,
but the more optimistic envision a time when they, too, will
find a place within the mainstream of the profession.
EF

Economic History

Reading Between the Lines
How the grocery industry coalesced behind the UPC bar code

An early bar code scanner at a Marsh supermarket in Troy, Ohio, on June 26, 1974.

Dallas, May 1971 — the city was hosting the largest gathering of the grocery industry, the annual convention of the Super Market Institute (now the Food Marketing Institute). Reporters roamed the convention floor while friends reacquainted themselves.
R. Burt Gookin, president and CEO
of the H.J. Heinz Company, was a featured speaker. He was scheduled to
provide an update on an industrywide
effort to devise a standard product
code, something that past workgroups had tried and failed to do. What the attendees didn't know was that the executive
would be laying the groundwork for a
multiyear push into new technology,
an effort that would put his industry
connections to the test.
In his speech, Gookin urged every
grocery manufacturer and retailer
to adopt a Universal Product Code
(UPC) that would help modernize the
labor-intensive grocery business. “We
hadn’t had anything like this,” recalls
Thomas Wilson Jr., a former consultant
at McKinsey & Company who helped
Gookin and his group come up with

the UPC. “Technology was stumbling
along in the grocery industry. A number
of good friends who were top executives
came up to me afterward and said, ‘This
is a big deal, isn’t it?’”
Indeed, the UPC and the ubiquitous
bar code that represents it have transformed the supply chain, not only in
the grocery industry but also in other
sectors of the economy. Goods are better managed at every step, from the
supplier’s truck to the store’s shelf to
the customer’s bag.
The grocery industry, which was
more fragmented in the 1970s than it
is today, agreed upon the system ­­— a
numerical code for storing information
about a product and a symbol to represent that code ­­— in less than three
years. Business history in the United
States has plenty of examples of firms
that couldn’t coordinate their efforts to
develop an industry standard without
lengthy wars, such as that between the
Betamax and VHS videotape standards
in the 1970s and 1980s and between the
Blu-ray and HD-DVD formats in the
2000s.
What made the difference with the
UPC bar code? Technological advances
provided the means. Economic pressures provided the motivation to align
the competing interests of grocery manufacturers and retailers behind a single
standard. The pragmatism and determination of key executives like Gookin
helped overcome the industry’s inertia.
IBM played a major role, specifically through its retail store systems division, now owned by Toshiba but still based in Research Triangle Park, N.C. The company proposed the bar
code design that was chosen to represent the UPC and developed one of the
first supermarket scanners in its Raleigh
facilities. (IBM even commandeered a
supermarket in the Cameron Village
shopping center to take a publicity
photo for its new scanner.)

Photography: Courtesy of Marsh Supermarkets

By Charles Gerena

Lines in the Sand
The idea of automating the checkout process dates back to
the 1930s. But it wasn’t pursued until a pair of graduate students in Philadelphia, Bernard Silver and Joseph Woodland,
decided to take up a challenge posed by the CEO of a local
retailer in the late 1940s. They came up with a pattern of
thick and thin lines to represent information, similar to how
groups of dots and dashes sent over a telegraph can carry a
message. The inspiration came during Woodland’s trip to a
beach when he idly drew lines in the sand.
In 1949, the pair filed for a patent for a bull’s-eye variation of their idea that encoded information using a pattern
of concentric circles. Two years later, Woodland joined
IBM but was unsuccessful at selling the patent to the
multibillion-dollar corporation. He eventually sold the bar
code patent to Philco, which later sold it to RCA.
There were a couple of reasons why no one was interested
in Silver and Woodland’s idea. First, the technology didn’t
exist to reliably read bar codes. Second, bar codes didn’t
have much economic value without a standard for how that
information was stored and read by a machine.
Flash forward to the late 1960s and early 1970s. Grocery
retailers were being squeezed by inflationary pressures. They
made less than a penny on every dollar of sales after taxes,
says Bill Selmeier, founder of a virtual museum of the bar
code called IDHistory.com. Selmeier helped market the
UPC at IBM.
With such razor-thin margins, grocers looked to reduce
costs wherever they could. According to Selmeier, labor
costs of checkout clerks were a significant percentage of
a store’s operating expenses. “There were almost as many
clerk hours as there were backroom hours,” he explains. The
cost of mistakes in ringing up purchases was also high, as was
the cost of individually pricing goods.
The high inflation of the 1970s also complicated pricing.
“Grocers wanted the flexibility to change prices without
having to peel off all the price stickers on items in inventory
and applying new stickers, and risking some cashiers not
paying attention and charging the old price after all,” says
Emek Basker, an economist at the University of Missouri
who has studied the economic effects of bar codes.
Around the same time, several manufacturers of front-end equipment for grocery stores began talking to their
clients about modernizing the checkout process. They
were working on something that could automatically read
product information into a computer system ­­— an electronic scanner. Stop & Shop and Sylvania teamed up to test
a scanner that used incandescent light. RCA approached
Kroger about developing a scanner that used the company’s

laser and machine-readable symbol, which was shaped like a
bull’s-eye and based on Silver and Woodland’s design.
The problem was the lack of a standard product code.
Grocery manufacturers and retailers had different numbering
systems, while each chain of stores had its own. “That would
have been an impossible problem for the grocery manufacturers to tackle. They would have had to have inventory that was
different for each chain,” notes Barry Franz, a former associate director at Procter & Gamble, during an oral history
interview. Franz was one of the executives who represented
grocery manufacturers during the UPC’s development.

Setting the Standard
Earlier, in the 1960s, workgroups within the Grocery
Manufacturers Association (GMA) and the National
Association of Food Chains (NAFC) joined together to
tackle the issue of standardization. While they agreed
that something needed to be done, they couldn’t agree on
much else.
Manufacturers wanted a standard that would be cheap
to implement, so their proposed code consisted of five digits that were equal to the item numbers they already used
and five digits that would be unique to the manufacturer.
Retailers wanted just a five-digit product code that would
be quicker to key into an electronic cash register. “The two
sides tended to meet, argue, and go home without any resolution,” recalled Tom Wilson in an oral history interview
recorded by IDHistory.com.
To break the impasse, NAFC president and CEO
Clarence Adamy turned to McKinsey & Company, a management consulting firm that frequently worked with the
grocery industry, in 1968. McKinsey came back to Adamy a
few months later with recommendations for both a product
code and a machine-readable symbol to represent it. The
first phase of the standardization effort would require five
months and $100,000. Adamy said his group didn’t have the
money and passed.
Instead, Adamy worked with the heads of five other
trade associations in the grocery industry to put together
another workgroup to do the job. The Ad Hoc Committee
on a Uniform Grocery Product Code consisted of 10 well-respected executives representing the manufacturing, distribution, and retail sides of the business.
What made this standardization workgroup different was
the decisionmakers were at the table from the very beginning rather than relying on technical experts who “were not
empowered to solve the problem,” said Franz in an oral history interview. “This was something that was going to have to
be done at a fairly high level.” Also, the focus was on resolving
big-picture questions on the economic viability of a standard
product code, not on the details of implementing it.
In August 1970, the ad hoc committee met for the first
time at a hotel near the end of a runway at Chicago’s O’Hare
Airport. In addition to advisers they brought from their
respective firms, they agreed to hire McKinsey to facilitate
the committee’s work.
Seven months later, the committee concluded that a
10-digit, all-numeric code would be economically feasible.
The first five digits would identify the product manufacturer and be assigned by a central authority. The second
five digits would identify the product and be assigned by
the manufacturers.
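The committee's two-part structure is easy to see in code. The sketch below is purely illustrative (the function name and sample digits are invented, and it ignores the number-system and check digits that the later 12-digit UPC-A symbol added); it simply splits a 10-digit code into the centrally assigned manufacturer prefix and the manufacturer-assigned product number.

```python
def parse_upc10(code: str) -> dict:
    """Split a 10-digit code of the kind the ad hoc committee
    specified: five manufacturer digits plus five product digits."""
    if len(code) != 10 or not code.isdigit():
        raise ValueError("expected a 10-digit, all-numeric code")
    return {
        "manufacturer": code[:5],  # assigned by a central authority
        "product": code[5:],       # assigned by the manufacturer
    }

# Hypothetical code, for illustration only.
print(parse_upc10("1234567890"))
# → {'manufacturer': '12345', 'product': '67890'}
```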
Before Gookin made his big announcement at the Super
Market Institute’s convention, McKinsey helped drum
up support. Wilson and Larry Russell presented the committee’s recommendations to dozens of groups of grocery
manufacturers and retailers between April and May 1971.
They also met one-on-one with the industry’s top executives
to secure their commitment to the standardization effort
­­— in writing. The last written confirmations came the night
before Gookin’s speech in Dallas.
Even before the ink was dry on those confirmations, the
committee got to work on the visual representation of the
UPC. In March 1971, they formed a Symbol Standardization
Subcommittee chaired by Alan Haberman, chief executive
of a Massachusetts-based supermarket chain, to research
and evaluate the alternatives. Seven manufacturers submitted proposals, including RCA, Singer, and Pitney Bowes.
IBM also threw its hat in the ring. Back in the mid-1960s, the company had developed a 60-pound electromechanical behemoth that enabled checkout clerks to enter
a code with product information for each item purchased.
The company decided not to market the system. “It
became obvious that the key entry system wasn’t going to
pay off,” recalled Marvin Mann, former IBM vice president, during an oral history interview. “It would slow down
the checkout operation [by] having to key in more digits
than just the price.”
Then the UPC effort came along. Mann began working
with the ad hoc committee while IBM’s development team
in Raleigh started working on a scanner that would read a
symbol.
The evaluation of the proposed UPC symbols and scanners took two years, focusing on both the economic viability
of the solutions and how well they met the demands of a
typical checkout counter. The symbol had to be as small as
possible — 1.5 square inches — so that it wouldn't take up valuable real estate on the package. Yet it had to be reproduced easily using current printing techniques and read
accurately regardless of how the package was positioned as
it moved across the scanner.
Prototype scanners and symbols were tested at Battelle
Institute’s labs in Columbus, Ohio. At the same time,
says Selmeier, grocery manufacturers brought their marked
goods to Raleigh to verify on IBM’s equipment that they
could be scanned properly. “Grocery manufacturers were
terrified that they were not going to make good symbols,” he
notes. “That would reflect poorly on their product.”
The subcommittee also insisted on real-world testing at
grocery stores. For example, RCA began testing a prototype
at a Kroger store in suburban Cincinnati in July 1972.
The evaluation process culminated with three days of
presentations to the subcommittee in January 1973. Two
months later, the subcommittee agreed upon a version of
the bar code developed by a team at IBM that included Joe
Woodland, who was still working at the company. A press
conference was held in New York to announce the winning
symbol in April 1973.
The other leading contender for the UPC symbol was the
bull’s-eye proposed by RCA. The bar code “could be made
smaller than the bull’s-eye” yet still was scannable from a
variety of angles, recalled Mann during a September 1999
celebration of the UPC’s 25th anniversary. “And it was adaptable to widely varying printing requirements, which was the
make-or-break issue for any of the proposed symbols.”
The bar code could also pack more information into a
given space than the bull’s-eye. That density did require
more computer power to decode, however. IBM’s team
addressed that issue during its 20-minute presentation to the
symbol standardization subcommittee.
“Bob Evans pulled out of his pocket a round silicon disk”
the size of a silver dollar, recalls Franz. “He said, ‘You’re
probably wondering just what we are going to do to be able
to decode [the UPC symbol]. The power of each integrated
circuit on this disk is equal to some of the current moderate
sized computers of today. We’re going to use this power at
each checkout stand.’ ”

The Chicken-and-Egg Dilemma
With the standards for the UPC’s format and visual representation set, the really hard part began: persuading
everyone in the grocery industry to use it. According
to an analysis by the ad hoc committee’s consultant,
McKinsey & Company, manufacturers had to mark at least
three-quarters of their goods with a bar code in order for
the technology to be cost effective. At the same time, at
least 8,000 supermarket locations, about one-quarter of

the total in operation, needed to install scanners.
But who would make the costly investment first? In general, when a technology standard is widely adopted, it tends
to generate “network externalities” ­­— economic benefits
that accrue to users by virtue of the fact that many other
parties are also using it. (For instance, the more people who
connect to a social media network, the more valuable the
service becomes to its users as a means of communicating.)
But these benefits accrue over time and require implementation costs upfront.
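One stylized way to see the adoption problem: under a Metcalfe-style assumption (mine, not the article's) that each pair of adopters generates a fixed unit of benefit while every adopter pays a flat upfront cost, the network's total value starts below its total cost and overtakes it only once enough parties have joined. The cost figure below is hypothetical.

```python
# Stylized illustration of network externalities: benefits grow
# with the number of adopter pairs, while implementation costs
# grow only linearly with the number of adopters.

def network_value(n: int) -> int:
    """Total benefit: the number of distinct adopter pairs, n*(n-1)/2."""
    return n * (n - 1) // 2

def total_cost(n: int, cost_per_adopter: int = 50) -> int:
    """Upfront implementation cost, flat per adopter (hypothetical)."""
    return n * cost_per_adopter

# Value lags cost at small scale, then dwarfs it as adoption spreads.
for n in (10, 100, 1000):
    print(n, network_value(n), total_cost(n))
```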
“Grocery manufacturers did not want to redesign
their labels as long as only a few supermarkets had scanners,” explains the University of Missouri’s Emek Basker.
“Supermarkets did not want to invest in this expensive technology as long as only a few manufacturers had bar codes on
their labels.”
A number of factors helped the bar code reach critical
mass. The ad hoc committee spent a lot of time and money
winning the support of most grocery manufacturers before
the UPC was announced. In the ensuing years, committee
members were in positions of power to push the skittish
managers back at their corporate offices.
Also, in a convenient twist of fate, the U.S. Food and
Drug Administration issued requirements in 1973 for foods
with added nutrients or that carried nutritional claims to
have additional information on their labels. Since many
processed foods were required to have updated packaging, it
was easier to justify adding a UPC bar code at the same time.
As for the supermarket chains, store managers weren’t
convinced the productivity savings of bar codes would outweigh the substantial costs of implementation, especially at
smaller chains. So McKinsey devised a compelling business
case that focused on two areas where retailers could achieve
short-term, quantifiable savings from implementing the
UPC bar code ­­— reduced labor costs at the checkout stand
and reduced costs associated with pricing and repricing
goods. (The grocery industry was expected to reap $1.4 billion in “hard” savings, with most of the savings accruing to
retailers.) Then, the committee members toured the country
to present their case.
McKinsey also identified long-term, harder-to-quantify
savings from improvements to processes, such as inventory
management. “The grocery manufacturer had much better
information,” says Selmeier. “Because of the cost of data
collection, all the retailers knew was how many cartons of
what product they had shipped to a store.” Still, McKinsey
downplayed these “soft” savings since the ad hoc committee
knew that retailers would be far more interested in boosting
their bottom line immediately.

Beyond the grocery industry, unions opposed the adoption of the UPC bar code because they feared it would lead to widespread job losses. Consumer advocates
feared that goods would be mispriced and the technology
could be used to track people’s purchases.
Eventually, both groups worked together to urge the
passage of item-pricing legislation. By 1976, California,
Michigan, New York, Connecticut, Massachusetts, and
Rhode Island required supermarkets with scanners to continue labeling individual items with price stickers.
“The net effect of the legislation was the reduction of
potential benefits of the UPC, thereby lengthening the
payback period for the investment in scanner technology,”
noted a 1999 report by PricewaterhouseCoopers published
on the 25th anniversary of the UPC. “With the extremely
high cost of capital and unstable economic environment of
the late 1970s and early 1980s, a number of grocery chains
decided to hold off on investing in the new technology.”
According to the PricewaterhouseCoopers report, it
would take a normalization of economic conditions in the
latter part of the 1980s as well as a “drop in computing costs,
improvements in scanner technology [and the] elimination
of price-marking legislation” for scanners to become widely
used. When Kmart and Wal-Mart started requiring apparel
makers to mark their goods with bar codes during the 1980s,
UPC registrations spread like wildfire throughout the broader
retail industry.
In a November 2004 paper, economist James Mulligan
at the University of Delaware and Nilotpal Das, a former
visiting professor at Hood College, examined the adoption
of scanners by supermarkets. They concluded that, in certain
situations, the diffusion of new technology is slower when it improves the quality of service rather than lowering the cost of production. Typically, a firm is motivated to do something
new when it sees competitors reaping cost savings. But when
a new technology is actually more expensive but adds value
to an existing product, firms may stay on the sidelines if they
believe their customers wouldn’t respond to that added value.
This phenomenon was observed in the adoption of expensive high-speed ski lifts during the 1980s and 1990s. Resort
owners didn’t install them to reduce their costs but to cater
to avid skiers and those who highly valued their time.
Das and Mulligan also found this tendency in the diffusion of scanner technology. During the mid-to-late 1970s
when NCR and IBM released their first scanners, stores in
higher-income areas were more likely to adopt them, perhaps because some of those stores saw a boost in sales from
consumers who placed a high value on their time and liked
the faster checkout process. But stores didn't see lower costs initially. Especially in communities with lower-income families that value price over speed of service, store managers
didn’t think scanners were worth the expense. It wasn’t until
IBM and others released scanners in the 1980s that could
read bar codes more accurately ­­— even those that were
partially damaged, crinkled, or wet ­­— before supermarkets
could reap savings that could be passed along to price-sensitive consumers.
In the subsequent decades, consumers have benefited
from the labor savings yielded by the adoption of the
UPC bar code. Economist Emek Basker found through her
research, detailed in a June 2013 paper, that “grocery prices
fell considerably in the first decade of checkout automation.
The largest price effects are for produce and meat, perishable items over whose prices store managers tend to have
the most discretion.”

Meanwhile, the grocery industry has realized the hoped-for hard savings from reduced labor costs. It has also
reaped some of the soft savings related to process improvement. The UPC bar code has empowered grocery retailers,
enabling them to design displays to optimize item movement or stock up on a popular item before the manufacturer
realizes that it is in high demand.
Grocery manufacturers have been empowered as well.
Every time a bottle of Head & Shoulders is scanned by a
Wal-Mart associate, information flows from the checkout
stand directly to Procter & Gamble. The company uses
that information from Wal-Mart to determine if additional
shampoo needs to be shipped to a particular location and if
the production line needs to be ramped up.
Basker notes, “Bar codes started an entire revolution at
the back end of supermarkets.”
EF

Readings
Basker, Emek. “Change at the Checkout: Tracing the Impact of a
Process Innovation.” University of Missouri Working Paper No.
13-02, June 2013.
Brown, Stephen A. Revolution at the Checkout Counter: The Explosion
of the Bar Code. Cambridge: Harvard University Press, 1997.
Das, Nilotpal, and James G. Mulligan. “Vintage Effects and
the Diffusion of Time-Saving Technological Innovations: The
Adoption of Optical Scanners by U.S. Supermarkets.” University
of Delaware Working Paper No. 2004-06, November 2004.

Garg, Vineet, Charles Jones, and Christopher Sheedy. “17
Billion Reasons to Say Thanks: The Twenty-fifth Anniversary
of the UPC and Its Impact on the Grocery Industry.”
PricewaterhouseCoopers, 1999.
Haberman, Alan L., ed. Twenty-Five Years Behind Bars: The
Proceedings of the 25th Anniversary of the U.P.C. at the Smithsonian
Institution, September 30, 1999. Cambridge: Harvard University
Press, 2001.

Check out our October Economic Brief

Investing over the Life Cycle: One Size Doesn’t Fit All
This essay questions the conventional wisdom that young people should
invest more heavily in risky assets. Financial advisers commonly
recommend this strategy because young investors can expect
long-run returns on risky assets to outweigh short-term losses,
but the Fed’s Survey of Consumer Finances shows that young
people generally do not follow this advice. Instead, they invest
little or nothing in risky assets initially and increase their holdings
gradually as they approach retirement. Economists find that
accounting for other risks that young people face can help explain
this behavior.
The Economic Brief online series includes essays based
on research by Richmond Fed economists.
To access these articles and other research publications,
visit http://www.richmondfed.org/publications/research/


Around the Fed

Take Your Pick on Jobs Stats
By Lisa Kenney

“Which Estimates of Metropolitan-Area Jobs Growth
Should We Trust?” Joel Elvery and Christopher Vecchio,
Cleveland Fed Economic Commentary, April 1, 2014.

When it comes to unemployment statistics, you can
have them fast or you can have them accurate
— take your pick. That’s according to a recent Economic
Commentary from Joel Elvery and Christopher Vecchio at
the Cleveland Fed, which examines three different employment estimates across four states and six metro areas in the
Fed’s Fourth District to determine which one is the most
accurate measure.
The estimates produced by the Bureau of Labor Statistics
are the monthly State and Metro-Area Employment, Hours,
and Earnings (initial SAE); the annual revision of the initial
SAE (final SAE); and the Quarterly Census of Employment
and Wages (QCEW).
The initial SAE is the timeliest — it is released five weeks
after the end of every month — but the Cleveland Fed finds
that it is the least accurate. In fact, the BLS itself cautions
against using this measure for analysis because of the heavy
revisions it undergoes. It also has a large margin of error and
relies on a sample size that is too small to generate accurate
estimates. Though it may be hard to resist data that is available so quickly, the authors caution that “wrong data can be
worse than no data.”
Another option is the final SAE, the second of two
annual revisions of initial SAE data, released four to 15
months after the initial SAE. The final SAE covers up to
September of the prior year. This tends to be the most
accurate data available. It revises the initial SAE to correlate with the QCEW data and includes an estimate of
other kinds of employment, such as self-employment. The
authors note that sometimes the revisions are so large, they
wipe out the “typical average year-over-year changes in jobs
for those metro areas” that may have been reported in the
initial SAE.
There is also the QCEW, released four to nine months
after the end of a quarter. It is a highly accurate measure
that is an actual count of employment and covers 98 percent of all employment. The QCEW has a high correlation
rate with the final SAE across both metro areas and states.
In looking at the three measures for margin of error,
revision size, and accuracy at both metro and state levels,
Elvery and Vecchio conclude — unsurprisingly — that the
final SAE is the best choice for employment data. Because
the final SAE takes a much longer time to produce, however,
the authors acknowledge that there may be times when the
initial SAE and the QCEW are the only estimates available.
In these cases, they say, “the QCEW is the better choice.”

And although the authors echo the BLS in advising against
overreliance on the initial SAE, they note that it may have
some use as an early indicator of state-level employment
trends.

“The Evolution of U.S. Community Banks and its Impact
on Small Business Lending,” Julapa Jagtiani, Ian Kotliar,
Ramain Quinn Maingi, Philadelphia Fed Working Paper
No. 14-16.

In a recent working paper, Julapa Jagtiani of the
Philadelphia Fed and co-authors Ian Kotliar and Ramain
Quinn Maingi of Rutgers University investigate whether
the decline in the number of community banks over the
last decade has affected small business lending. The authors
argue that this is a potential concern because of community
banks’ “special role in supporting small businesses in their
local communities.”
Between 2001 and 2012, more than 1,000 community
banks were acquired by larger institutions or shuttered,
while the number of large banks rose from six to 18. Jagtiani,
Kotliar, and Maingi define community banks as those with
less than $1 billion in total assets and large banks as having
more than $100 billion in total assets.
To determine whether acquisitions have affected small
business lending, the authors analyze risk characteristics of acquired community banks, compare pre- and post-acquisition performance of those banks, and examine stock
market reactions to acquisitions.
So has the decline of community banks eliminated unique
support for small businesses and damaged overall small business lending? In short, their answer is no.
The authors find that many of the community banks
that were acquired during the 2007-2009 financial crisis
had unsatisfactory ratings from regulators and were in poor
condition; therefore, “mergers involving community bank
targets so far have enhanced the overall safety and soundness of the overall banking system.” And since large banks
more than doubled their small business lending market share
between 2001 and 2012, the paper finds that these mergers
did not have a negative effect on small business lending.
“Larger bank acquirers have tended to step in and play a
larger role in SBL [small business lending].”
On a policy note, the authors conclude that policies discouraging mergers between large firms and community banks
“could result in a potential unintentional effect on the supply
of SBL lending.” Allowing these sorts of mergers to continue
will result in healthier and more efficient banks overall, they
suggest, not just in regard to certain kinds of lending.
EF
Econ Focus | Second Quarter | 2014

33

BOOK REVIEW

The Birth of Bretton Woods
The Battle of Bretton Woods:
John Maynard Keynes, Harry
Dexter White, and the Making of
a New World Order
By Benn Steil
Princeton, N.J.: Princeton
University Press, 2013, 370 pages

Reviewed by Robert L. Hetzel

Benn Steil of the Council on Foreign Relations tells
the story of the birth of the Bretton Woods system
of fixed exchange rates at Bretton Woods, N.H., in
July 1944. The United States had already decided on the
design of the system in advance. At the actual conference,
the American architect of the plan, Harry Dexter White,
used his control of the agenda to railroad the American
blueprint past the largely parochial and befuddled delegates
from the 44 Allied countries represented. The United States
wanted a system of international commerce that would
allow its ascendant export industry unfettered access to foreign markets. That meant restricting the ability of foreign
countries to devalue their currencies relative to the dollar
and to impose tariffs in order to advantage their own export
industry.
The Battle of Bretton Woods makes this story of American
power both engaging and instructive. It is engaging because of
the way in which Steil portrays the competition over the design
of Bretton Woods as a contest between two extraordinarily
strong personalities: Harry Dexter White and John Maynard
Keynes. It is instructive because of the way in which he uses
these two individuals to tell the intellectual history of the first
half of the 20th century.
Keynes was 31 years old when, in 1914, the start of World
War I brought an end to the international gold standard.
The era of the gold standard had encouraged a “classical
economics,” which emphasized free trade and free markets.
This intellectual orthodoxy associated the international gold
standard and its free movement of gold and capital with free
trade, laissez-faire, and the quantity theory. In the economic
sphere, governments should give free rein to market forces.
In the 1930s, when the world economy crashed, so did
support for classical economics among both the public and professional economists. The demand was for government to
master the destabilizing market forces that had presumably
brought down the world economy. At the radical left end,

intellectuals demanded central planning. White gravitated
toward this end in principle. Keynes led the right end with
his program to manage trade and the economy. Almost no
one supported free markets.
The debate over the design of a postwar monetary system
played out in this environment. There was a desire to return
to the sense of security and stability that had characterized
the earlier gold standard era. At the time, the consensus
was that such stability required the fixed exchange rates of
the gold standard. Earlier, Keynes, in his 1923 book A Tract
on Monetary Reform, had pointed out that a country had to
choose between internal stability of prices and external stability of the foreign exchange value of its currency.
A system of fixed exchange rates (providing for external
stability) that precluded recourse to deflation (providing
for internal stability) would then build in a fundamental
contradiction. Keynes’ design for the postwar system of
international payments would have had fixed exchange rates
but would have been made to work through “management”
by technocrats to avoid deflation. Moreover, Keynes, like
other contemporary observers, saw the capital outflows
that forced countries on the gold standard, like Britain,
into deflation and eventually into devaluations as evidence
of the destabilizing influence of market forces rather than
as symptoms of monetary disorder. Keynes’ system would
have allowed countries like Britain liberal use of devaluations
against the dollar and exchange controls in order to deal with
trade deficits. This discretionary “management” ran directly
counter to the American agenda of wide open markets for
American exporters.
In their capacity as negotiators, both Keynes and White
pursued the agendas of their respective countries. In his
capacity as British negotiator and in his own personal capacity, Keynes wanted to preserve what he could of the old
system with London at the center of the world financial system. That meant resisting complete American hegemony.
Debate then centered on two issues.
First, what would be the unit of account and the means
of payment in the new system? The United States held most
of the world’s monetary gold. Also, every country after the
war would need dollars to finance the imports required for
basic commodities like food and energy and for reconstruction. The United States wanted a system based on gold and
the dollar. Keynes wanted an entirely new paper currency,
which he called bancor, a term combining the French words
for bank and gold.
Second, Keynes did not want Britain to be completely
dependent on American aid after the war. To pay for its postwar imports, Britain would need to export. Keynes wanted
Britain to be able to retain its imperial preferences, which
restricted the ability of its colonies to import from countries

other than Britain. Steil recounts how, earlier, in opposing the
nondiscriminatory trade clauses contained in the Atlantic
Charter, which would have ended the preferences, “Keynes
exploded in rage in front of the State Department’s Dean
Acheson,” reacting to the “lunatic proposals” of Secretary of
State Cordell Hull.
The forceful, combative, yet complex personalities of
White and Keynes provide a Technicolor background to
the narrative. White wanted to be close to power to exercise influence. Completely apart from the role he played in
securing the American agenda at Bretton Woods, White
wanted to hasten a new economic order based on the Soviet
model of state control. As summarized by Steil, based on an
unpublished essay written by White, White believed that
after the war the American economic system would move
toward the Soviet model. Steil quotes White from the
essay, “Russia is the first instance of a socialist economy in
action. And it works!”

Keynes could be alternately charming and insolent. He
referred to the negotiations at Bretton Woods as the “monkey house.” He said of White, “He has not the faintest
conception of how to behave or observe the rules of civilized
intercourse.”
In the end, Keynes knew that Britain would be dependent on American aid after the end of the war. During the
war, Britain could not sustain its military without Lend-Lease. Assuring Lend-Lease meant cooperating with White,
the U.S. Treasury, and the American demand for a postwar
monetary order of fixed exchange rates and free trade.
Keynes knew he had to accept the White plan and sell it to
British politicians.
Benn Steil has written a book full of historical insight and
human color.
EF
Robert L. Hetzel is a research adviser at the Federal
Reserve Bank of Richmond.

Expanding Unemployment Insurance continued from page 21
and it increased the benefit duration and generosity sharply.
Lemieux and MacLeod hypothesized that workers would
gradually become aware of the more generous benefits as
they were exposed to the program through involuntary
unemployment, and over time this would change their
incentives to supply labor. From 1972 to 1992, unemployment and UI use trended upward, and the authors found
evidence that first-time UI recipients were more likely to
use the system again throughout their lifetime.

Evaluating UI
Determining the desirability of UI as a social insurance
program involves a number of considerations. As with any
insurance program, the possibility of misuse is real. But
many labor economists argue that UI does a reasonable job
of minimizing moral hazard.
“In order to be eligible for UI, you must have an established job history,” says San Francisco Fed labor economist
Robert Valletta. In most states, eligibility for UI is determined based on employment and wages during a 12-month
period preceding unemployment. “So, these are people coming from a career who are just trying to stay afloat during a
difficult period of dislocation.”
Valletta and Rothstein also argue that UI serves a unique
welfare function. In a 2014 working paper, they explored
whether households are able to supplement their income
from UI using other safety net programs once their eligibility for UI benefits expires. They found that in both the
2001 and the 2007-2009 recessions, once UI benefits were
exhausted, family incomes fell significantly and the share of
families below the poverty line nearly doubled.
In the end, evaluating UI may depend on how one views
its intended purpose. If UI is seen more as a program of
social insurance designed to keep middle-class families out
of poverty, then it seems to be largely a success. As a program
of economic stabilization, the evidence is mixed, especially
when one considers the potential long-run costs of expanding benefits for extended periods. It’s also not clear that
UI is the best program to deal with every unemployment
spell. Ultimately, societies must weigh the negative effects
of UI against the benefits when considering changes to the
program.
EF

Readings

Card, David, Raj Chetty, and Andrea Weber. “Cash-on-Hand and Competing Models of Intertemporal Behavior: New Evidence from the Labor Market.” Quarterly Journal of Economics, November 2007, vol. 122, no. 4, pp. 1511-1560.

Gruber, Jonathan. “The Consumption Smoothing Benefits of Unemployment Insurance.” National Bureau of Economic Research Working Paper No. 4750, May 1994.

Krueger, Alan B., and Andreas Mueller. “Job Search and Unemployment Insurance: New Evidence from Time Use Data.” Journal of Public Economics, April 2010, vol. 94, no. 3-4, pp. 298-307.

Lemieux, Thomas, and W. Bentley MacLeod. “Supply Side Hysteresis: The Case of the Canadian Unemployment Insurance System.” Journal of Public Economics, October 2000, vol. 78, no. 1-2, pp. 139-170.

Rothstein, Jesse. “Unemployment Insurance and Job Search in the Great Recession.” Brookings Papers on Economic Activity, Fall 2011.


DISTRICT DIGEST

Economic Trends Across the Region

The Unconventional Oil and Gas Boom
By R. Andrew Bauer

A dramatic shift is taking place in the U.S. energy
sector. For decades, analysts and policymakers
assumed that as U.S. reserves of oil and natural
gas dwindled, domestic production would decrease gradually and imports would increase steadily. But technological
advances in extracting oil and natural gas along with higher
energy prices have changed those assumptions. U.S. energy
production has risen sharply in recent years and is expected
to continue to grow at remarkable rates in coming decades
— with benefits for the U.S. economy overall as well as
within the Fifth District.
One of the most active regions is the Marcellus Shale,
which underlies most of West Virginia, western Maryland,
and parts of Ohio, Pennsylvania, and New York. The
Marcellus Shale is a rock formation located deep beneath
the earth’s surface that contains vast amounts of natural
gas. West Virginia and Pennsylvania have been actively
encouraging the development of this resource in recent
years, and the growth in production and the impact on local
economies has been tremendous. In December 2013, the
Marcellus region provided 18 percent of total U.S. natural
gas production — a remarkable increase from almost no
production just six years earlier. For the local communities
at the epicenter of this production boom, the demand for
workers and housing has jumped to the point where both are
often in short supply. While this transformation of the energy sector is still in its early stages, the potential long-term
impact on the regional and national economy is expected to
be considerable.
There are a number of concerns surrounding the oil and
natural gas boom, particularly its potential effects on the
environment and nearby communities. The sector would
face regulatory challenges if research indicated that current
methods of production create substantial health or environmental risks. The potential benefit of these resources is so
great, however, that additional regulations would likely only
slow development in the sector.

Unconventional Oil and Natural Gas
The U.S. energy boom is due to the development of “unconventional” oil and natural gas. Unconventional refers to the
fact that the oil and gas are trapped in rock formations with
very low permeability and alternative methods are needed to
extract them. Examples include “tight oil” and “tight gas,”
which are found in rock formations such as siltstone, sandstone, limestone, and dolostone; shale gas, which is natural
gas found in shale, a fine-grained sedimentary rock with very
low permeability; and coalbed methane, which is natural gas
found in coalbeds. All of these hydrocarbons are extracted in
ways that differ from “conventional” wells where the oil and

natural gas naturally flow or can be pumped from an underground reservoir to the surface.
Three factors came together to make production of
unconventional oil and natural gas economically viable:
horizontal drilling, hydraulic fracturing, and increases in
oil and gas prices. While horizontal drilling and hydraulic
fracturing are not new, significant technological advances in
recent decades have allowed developers to better target and
more efficiently extract the oil and natural gas. The process
of horizontal drilling and hydraulic fracturing (commonly
referred to as “fracking”) is more expensive than drilling a
conventional vertical well, but higher prices for natural gas
have made these techniques economically viable. At some of
the early unconventional formations (or “plays”), such as the
Barnett Shale in Texas or the Bakken formation in Montana
and North Dakota, it wasn’t until the mid-2000s, after energy prices rose sharply, that there was more widespread usage
of horizontal wells and hydraulic fracking. In the Barnett
Shale, one of the nation’s most developed shale plays, the
number of producing horizontal wells rose from less than
400 in 2004 to more than 10,000 in 2010.
So what exactly is fracking? Fracking involves injecting
fluids into rock formations to create fractures in the rock
that allow the oil and natural gas to flow through the well
to the surface. A horizontal well that utilizes fracking techniques is dug in several stages. The well is drilled vertically to
a predetermined depth, depending on the depth of the rock
formation, and then the well is “kicked off” or turned at an
angle until it runs parallel within the reservoir. The well can
extend up to three miles through the reservoir, allowing for
a greater number of access points. In drilling the well, several
casings are cemented into place to provide stability and to
ensure that the fracking fluids and the hydrocarbons do not
escape into the surrounding soil.
There has been a lot of controversy surrounding fracking,
particularly about its potential impact on the environment.
There are concerns that the oil and gas could pollute the
groundwater through faulty well design or construction or
through migration to the surface. In addition, there are concerns about the fracking fluid used in the process. Fracking
fluid is roughly 98 percent water and sand, but the chemicals
it contains could pollute drinking water if released. Faulty
well design or construction could result in the fluid escaping into the surrounding environment. Improper handling
of the fluid that returns to the surface through the well is
another issue. This fluid is injected into disposal wells that
are thousands of feet underground, but there are concerns
that the fluid could migrate upward into groundwater.
There are also concerns related to fracking and earthquakes. According to the U.S. Geological Survey (USGS), fracking causes earthquakes that are typically too small to be noticed. The USGS has found, however, that the injection of wastewater into disposal wells has the potential to induce larger earthquakes. Of the 40,000 disposal wells in the United States that are related to oil and gas activities, there were roughly a dozen cases where larger earthquakes were detected. The USGS is currently researching the issue to better identify induced earthquakes, understand why they occur in some places but not in others, and determine what should be done once they occur.
Another issue is the amount of stress placed on nearby towns and cities, which typically experience increased traffic, greater use of local water resources, and more air and noise pollution.

[Chart: Natural Gas Production (Billion Cubic Feet), United States and West Virginia, 2001-2012. Source: U.S. Energy Information Administration]
Overall, while there are risks and concerns, there does not appear to be an inherent problem with this particular method of energy extraction. In 2004, the Environmental Protection Agency (EPA) published the results of a study on hydraulic fracturing used in coalbed methane reservoirs to evaluate the potential risks to underground sources of drinking water. The study focused on coalbed methane reservoirs because they are typically closer to the surface and to underground sources of drinking water. The EPA concluded that “the injection of hydraulic fracturing fluids into [coalbed methane] wells poses little or no threat to [underground sources of drinking water].” In 2012, in response to persistent environmental concerns about the surge in fracking, the EPA began a new study to “understand the potential impacts of hydraulic fracturing on drinking water resources.” The agency will release a report for peer review and comment in 2014.

The Boom in Unconventional Production
The boom in unconventional oil and natural gas production is expected to increase in coming years. U.S. tight oil production has increased from under 500,000 barrels per day in 2008 to over 2.5 million barrels in 2013. Total U.S. crude oil production increased from 5 million barrels per day in 2008 to 7.4 million barrels per day in 2013, a 49 percent increase. The U.S. Energy Information Administration (EIA) is forecasting production to reach 9.6 million barrels per day in 2015 — which would match the record U.S. production level reached in 1970. As these formations are slowly exhausted, production is anticipated to then gradually decline in subsequent decades to 7.8 million barrels per day in 2040, slightly higher than 2013 production levels. Notably, as these plays are developed, additional supplies are being found, resulting in sharp increases in proved reserves. Proved oil reserves increased from 19 billion barrels in 2008 to 26.5 billion in 2013, an increase of 39 percent.
The outlook for natural gas is even more astounding (see chart). U.S. shale gas production increased from roughly 1 trillion cubic feet (tcf) in 2006 to more than 8 tcf in 2012. The EIA expects this trend to continue. Shale gas production is expected to reach 17 tcf by 2040. As a consequence, total natural gas production is forecast to increase from roughly 24 tcf in 2013 to more than 33 tcf in 2040. And as was the case with oil reserves, proved natural gas reserves increased sharply in recent years, from 200 tcf in 2004 to 350 tcf in 2013 — an increase of 75 percent in less than a decade.
With the increase in oil and natural gas production, U.S. energy imports have declined sharply. In 2013, U.S. net energy imports were the lowest in more than 20 years. With an abundance of natural gas, the EIA forecasts the United States to become a net exporter of natural gas in 2015. Total U.S. energy consumption is expected to continue to outstrip total U.S. energy production in coming decades, however, mostly as a result of domestic demand for petroleum outweighing domestic production. Yet the gap between production and consumption is expected to narrow considerably. Total energy production is forecast to satisfy all but 3 percent of total consumption by 2034 — a significant improvement from a 16 percent gap in 2012.

Economic Benefits of the Boom
The benefits from greater U.S. production of oil and natural gas are expected to be extensive and long-lasting. There already has been strong growth in the energy sector from increased extraction, distribution, and refining. Much of that growth has been centered in the regional economies where there is active exploration and extraction. In many of the areas where there are active shale plays — including parts of North Dakota, Montana, and Texas — unemployment rates are among the lowest in the country.
The benefits from energy extraction are not limited geographically, however. Many upstream and downstream industries located across the country have benefited, including the fabricated metal industry, pipe manufacturers, the machining industry, oil and natural gas equipment manufacturers, and truck and construction equipment manufacturers.
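The reserve growth rates quoted above are simple percent changes over the stated figures. As a quick illustrative check (this sketch is ours, not part of the article), they can be recomputed directly:

```python
# Recompute the reserve growth rates quoted in the text.
# All figures below are the ones stated in the article.

def pct_change(old, new):
    """Percent change from old to new, in percent."""
    return 100 * (new - old) / old

# Proved oil reserves: 19 billion barrels (2008) to 26.5 billion (2013)
oil = pct_change(19, 26.5)

# Proved natural gas reserves: 200 tcf (2004) to 350 tcf (2013)
gas = pct_change(200, 350)

print(round(oil), round(gas))  # -> 39 75, matching the article's figures
```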


The production boom in natural gas in recent years also
has resulted in lower natural gas prices. The price of natural
gas in the United States averaged $3.76 per 1 million Btu
(British thermal unit) over the past three years, compared
with $10.15 in Europe and $13.88 in Japan. According to the
EIA, nearly half of household energy consumption in 2009
was in the form of natural gas while roughly 30 percent of
energy consumption in 2010 in the manufacturing sector was
natural gas. In addition, 27 percent of electricity was generated using natural gas in 2013 — a percentage that has been
increasing in recent years as electrical power companies have
been switching away from coal in favor of natural gas by converting old coal-fired units to natural gas ones, shutting down
coal-fired plants, and expanding capacity at existing natural
gas plants or building new ones. Lower natural gas prices
result in lower costs to generate electricity and lower electricity prices that benefit consumers, businesses, and manufacturers. Manufacturers in energy-intensive industries such as
refining, iron and steel, cement, food, and chemicals stand to
gain even greater benefits from lower electricity costs.
Energy-related chemical industries are also likely to benefit greatly from the boom in natural gas production. Natural
gas liquids such as ethane, propane, and butane (known as
associated gas or “wet gas”) are found in some natural gas
reservoirs. These liquids are key chemicals that are used widely in a number of manufacturing industries. Ethane and propane can be processed into ethylene and propylene, which are found in a myriad of consumer products including food packaging, bottles, trash bags, toys, tires, carpets, insulation, and clothing, as well as in construction materials such as siding and PVC. Natural gas or natural gas liquids are used to produce ammonia, plastics, fibers, pesticides, dyes, and other chemicals, as well as many household cleaning solutions. Greater production of natural gas and natural gas liquids has resulted in lower prices for these key chemical components. Given the disparity between natural gas prices in the United States and the rest of the world, chemical manufacturers in this country are likely to enjoy a cost advantage against their overseas competitors.

[Map: Marcellus Shale in West Virginia. Source: West Virginia Geological and Economic Survey, 2014]

The Natural Gas Boom in the Fifth District
The Fifth District has experienced the oil and gas boom
in recent years as a result of the Marcellus Shale in West
Virginia (see map). The natural gas sector has seen a dramatic increase in production in recent years, and those areas
of the state where most of the exploration and production
is taking place have seen a significant pickup in economic
activity, including business creation and job growth. In
addition, given the importance of natural gas and natural gas
liquids to the chemical industry, the higher output of natural
gas may drive additional investment
in chemical production, which, in
turn, would attract additional manufacturers to the state.
Natural gas production in West
Virginia increased 140 percent
between 2006 and 2012. It rose by
39 percent in 2012 alone, in part due
to infrastructure improvements
that allowed producers increased
access to markets. Within the
Marcellus Shale, the number of
producing wells rose from 19 in
2006 to over 1,250 in 2012, and the
amount of natural gas produced
increased from less than 100 million cubic feet in 2006 to roughly
330,000 million cubic feet in 2012.
Although an insignificant contribution to total production in 2006,
Marcellus Shale gas represented
62 percent of all natural gas produced in West Virginia in 2012.
Over this time period, production
from horizontal wells soared from
less than 1 percent of total production in 2006 to 84 percent in 2012.
While this is a dramatic increase, most of the new production has come from a relatively concentrated area in the northern part of the state. The top natural gas producing counties in West Virginia in 2012 — Harrison, Wetzel, Doddridge, and Marshall — accounted for roughly two-thirds of all Marcellus Shale production. The nearby counties of Upshur, Taylor, Marion, Tyler, Monongalia, and Ohio combined for another 17 percent of production in 2012.

West Virginia Oil & Gas Sector

                                            Employment                       Average Salary (in 2013 dollars)
                                   2007     2013  ’07-’13   ’03-’13        2007     2013  ’07-’13   ’03-’13
                                                 % change  % change                      % change  % change
Total (All Industries)          569,774  564,746    -0.9       4.1       37,697   39,519     4.8       6.5
Oil and Gas Extraction            2,442    2,439    -0.1      40.7       65,553   81,970    25.0      45.1
Support Activities for Oil
  and Gas Operations              2,496    4,463    78.8     189.8       49,902   68,081    36.4      50.8
Natural Gas Distribution            917      742   -19.1     -29.9       75,181   64,559   -14.1     -11.3
Oil and Gas Pipeline
  Construction                      965    3,247   236.5     356.0       60,889   80,183    31.7      76.5
Total Oil & Gas Sector            6,820   10,891    59.7     115.9       62,881   73,698    17.2      34.1
Oil & Gas: % of Total               1.2      1.9                            167      186

Source: Bureau of Labor Statistics
The oil and gas sector has been a source of growth for
the state economy with gains in employment, wages, and
business establishments (see table). Between 2007 and 2013,
total employment in the oil and natural gas sector increased
from 6,820 to 10,891, an increase of 60 percent. In comparison, total employment in the state went down by 5,028 jobs,
or 0.9 percent, during that period. Within the oil and gas
sector, the gains were concentrated in two key subsectors:
support activities for oil and gas operations, which increased
79 percent, and oil and gas pipeline construction, which rose
to 3,247 jobs from 965 in 2007. Other subsectors within the
energy sector saw no growth or experienced a loss, however.
Employment in oil and gas extraction was flat over the period, while employment in natural gas distribution declined by
175 jobs, or 19 percent.
Wage growth in the energy sector also outpaced the
statewide average. In 2007, the average salary in West
Virginia was $37,697 (in 2013 dollars), while the average
wage in the energy sector was $62,881, roughly 1.7 times
greater. The average wage in the energy sector rose
17 percent from 2007 to 2013, considerably faster than
the 4.8 percent increase for the average wage across all
industries. And just as employment growth was greatest in
support activities for oil and gas operations and oil and gas
pipeline construction, average salary growth was greatest in
those sectors as well. In the oil and gas operation support
sector, the average salary rose 36 percent; in pipeline construction, the average salary rose 32 percent and, at $80,183, was
more than twice the state’s average salary of $39,519 in 2013.
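The wage comparisons in this paragraph follow from simple arithmetic on the table's salary figures. As an illustrative check (this sketch is ours, not the magazine's):

```python
# Verify the wage-growth comparison in the text
# (salary figures from the article, in 2013 dollars).
state_2007, state_2013 = 37_697, 39_519    # average salary, all industries
energy_2007, energy_2013 = 62_881, 73_698  # average salary, oil & gas sector

growth = 100 * (energy_2013 - energy_2007) / energy_2007
ratio = energy_2007 / state_2007

print(round(growth, 1))  # -> 17.2, vs. 4.8 percent for all industries
print(round(ratio, 1))   # -> 1.7, energy wage relative to state average, 2007
```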
Similarly, business creation in the energy sector outpaced the overall economy. The number of establishments
in the energy sector grew by 30 percent from 2007 to 2013,
compared with 2.1 percent for the state overall. And as was
the case with job and wage growth, the strongest increases
were in support activities for oil and gas operations and oil
and gas pipeline construction sectors, up 54 percent and 94 percent, respectively.
While gains in employment, wages, and business creation in the energy sector occurred throughout the state,
the northern half of the state saw the greatest benefits. The
unemployment rate in West Virginia in 2013 was 6.5 percent,
considerably less than the 7.4 percent rate for the United States, in large
part due to the boom in oil and natural gas. The unemployment rate for the top 10 natural gas-producing counties in
the northern half of the state was 5.5 percent, well below the
state average.
West Virginia's chemical industry is also expected to
benefit from increased Marcellus Shale gas and natural gas
liquids production. The West Virginia manufacturing sector
has a relatively high concentration of chemical manufacturing,
particularly in the resin, rubber, and artificial fibers
sector, as well as in basic chemical manufacturing. There
are 90 chemical manufacturing establishments in West
Virginia, including 45 in basic chemical manufacturing
and 11 in resin, rubber, and artificial fibers manufacturing.
Many of the largest chemical manufacturing companies in
the world have a presence in West Virginia, including Dow
Chemical, DuPont, Bayer, SABIC, and Braskem. With the
prospect of a steady supply of cheap natural gas and natural
gas liquids, additional investment in the chemical industry is
expected in the coming years and decades.

Conclusion
The unconventional oil and natural gas boom has reversed
the outlook for the U.S. energy sector. Instead of declining
production and shrinking reserves, U.S. energy production
has jumped in recent years and is expected to
continue to increase, with oil production reaching highs
not seen since the 1970s and the United States becoming
a net natural gas exporter. Since natural gas is widely used
across sectors, the prospect of relatively inexpensive natural
gas may translate into broad gains for the U.S. economy.
Greater production of natural gas and natural gas liquids,
for example, will allow the U.S. chemical industry to enjoy
cheaper input costs and a relative cost advantage over its
competitors overseas. With the Marcellus Shale in West
Virginia, the Fifth District is squarely in the center of this
transformation of the energy sector.
EF
Econ Focus | Second Quarter | 2014


State Data, Q4:13

                                          DC       MD       NC       SC       VA       WV
Nonfarm Employment (000s)              747.9  2,604.2  4,100.0  1,916.8  3,766.9    764.5
  Q/Q Percent Change                     0.2      0.3      1.1      0.9      0.0      0.2
  Y/Y Percent Change                     0.8      0.7      2.2      2.4      0.2      0.0

Manufacturing Employment (000s)          0.9    105.3    442.8    228.4    229.3     48.5
  Q/Q Percent Change                   -13.3     -0.8      0.2      1.7     -0.6     -0.2
  Y/Y Percent Change                   -13.3     -1.9      0.2      3.1     -0.9     -0.3

Professional/Business Services
Employment (000s)                      155.7    418.4    563.0    239.6    668.2     65.3
  Q/Q Percent Change                    -0.2      0.5      2.1     -1.2     -1.3      0.3
  Y/Y Percent Change                    -0.1      1.0      5.0      0.5     -2.1      1.4

Government Employment (000s)           239.0    503.0    719.5    352.5    709.9    154.9
  Q/Q Percent Change                     0.3     -0.1      1.4      0.4     -0.1      0.3
  Y/Y Percent Change                    -1.7     -0.4      0.1      1.3     -0.3      0.4

Civilian Labor Force (000s)            367.7  3,110.2  4,666.5  2,170.6  4,234.8    790.1
  Q/Q Percent Change                    -0.1     -0.4     -0.4     -0.4     -0.1     -0.5
  Y/Y Percent Change                    -1.3     -0.9     -1.2     -0.7      0.1     -2.1

Unemployment Rate (%)                    7.8      6.2      7.2      6.8      5.3      6.2
  Q3:13                                  8.3      6.6      7.9      7.5      5.6      6.5
  Q4:12                                  8.7      6.9      9.0      8.5      5.8      7.2

Real Personal Income ($Bil)             45.0    299.9    355.9    159.9    375.5     61.5
  Q/Q Percent Change                     0.1      0.2      0.6      0.6      0.1     -0.1
  Y/Y Percent Change                    -0.3     -0.5      0.9      1.2     -0.9     -0.5

Building Permits                         963    4,308   12,330    5,653    5,713      425
  Q/Q Percent Change                   -11.0    -13.7      2.6     -9.6    -29.2    -31.1
  Y/Y Percent Change                   -38.3     10.9     -4.2     23.7    -16.6    -12.6

House Price Index (1980=100)           656.2    414.5    305.1    306.7    402.5    220.3
  Q/Q Percent Change                     0.8      0.4      0.0     -0.1      0.4      0.1
  Y/Y Percent Change                     9.9      2.1      1.2      0.8      1.6      1.7


[Charts, change from prior year, Fourth Quarter 2002 - Fourth Quarter 2013. Fifth District vs. United States: Nonfarm Employment; Unemployment Rate; Real Personal Income; Building Permits; House Prices; FRB-Richmond Manufacturing Composite Index; FRB-Richmond Services Revenues Index. Major metro areas (Washington, Baltimore, Charlotte): Nonfarm Employment; Unemployment Rate.]

Notes:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms
reporting increase minus the percentage reporting decrease. The manufacturing composite index is a
weighted average of the shipments, new orders, and employment indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.

Sources:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment Rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building Permits: U.S. Census Bureau, http://www.census.gov.
House Prices: Federal Housing Finance Agency, http://www.fhfa.gov.

For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org
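The notes above describe how the survey indexes are built; the construction can be sketched as follows (a minimal sketch in Python — the equal component weights in the composite are an assumption for illustration, since the notes say only that it is a weighted average of the shipments, new orders, and employment indexes):

```python
def diffusion_index(pct_increase: float, pct_decrease: float) -> float:
    """Share of firms reporting an increase minus share reporting a decrease."""
    return pct_increase - pct_decrease

def manufacturing_composite(shipments: float, new_orders: float,
                            employment: float,
                            weights=(1/3, 1/3, 1/3)) -> float:
    """Weighted average of the three component diffusion indexes.
    Equal weights are assumed here; the survey's actual weights are
    not given in the notes."""
    w_s, w_n, w_e = weights
    return w_s * shipments + w_n * new_orders + w_e * employment

# If 40 percent of firms report higher shipments and 25 percent report
# lower shipments, the shipments diffusion index is +15.
shipments_idx = diffusion_index(40.0, 25.0)
```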


Metropolitan Area Data, Q4:13

                              Washington, DC   Baltimore, MD   Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)            2,507.1         1,328.9                           104.3
  Q/Q Percent Change                     0.8             1.8                             0.7
  Y/Y Percent Change                     0.5             1.5                             0.3
Unemployment Rate (%)                    5.1             6.4                             6.7
  Q3:13                                  5.4             6.7                             7.0
  Q4:12                                  5.6             7.2                             7.8
Building Permits                       5,575           1,349                             234
  Q/Q Percent Change                    -3.1           -43.9                            -9.7
  Y/Y Percent Change                   -12.8           -21.3                            -7.1

                               Asheville, NC   Charlotte, NC   Durham, NC
Nonfarm Employment (000s)              178.0           892.0        289.9
  Q/Q Percent Change                     2.2             2.6          2.6
  Y/Y Percent Change                     2.7             2.4          2.1
Unemployment Rate (%)                    5.5             7.2          5.5
  Q3:13                                  6.1             7.9          6.0
  Q4:12                                  7.3             9.1          7.0
Building Permits                         371           4,156          689
  Q/Q Percent Change                   -11.2            42.8        -48.2
  Y/Y Percent Change                    40.0            33.6         32.8

                    Greensboro-High Point, NC   Raleigh, NC   Wilmington, NC
Nonfarm Employment (000s)               351.3         555.1            143.4
  Q/Q Percent Change                      2.2           2.2              0.3
  Y/Y Percent Change                      1.2           4.0              2.7
Unemployment Rate (%)                     7.5           5.7              7.3
  Q3:13                                   8.2           6.2              8.1
  Q4:12                                   9.4           7.3              9.2
Building Permits                          493         3,092            1,028
  Q/Q Percent Change                    -26.0          32.6             12.0
  Y/Y Percent Change                     24.5         -36.0             52.5

                           Winston-Salem, NC   Charleston, SC   Columbia, SC
Nonfarm Employment (000s)              211.5            311.9          366.4
  Q/Q Percent Change                     2.1             -0.1            1.8
  Y/Y Percent Change                     1.9              1.1            0.9
Unemployment Rate (%)                    6.6              5.7            6.1
  Q3:13                                  7.2              6.2            6.7
  Q4:12                                  8.5              6.9            7.4
Building Permits                         201            1,069            892
  Q/Q Percent Change                   -66.6            -16.9           -2.0
  Y/Y Percent Change                    26.4              2.6            1.7

                              Greenville, SC   Richmond, VA   Roanoke, VA
Nonfarm Employment (000s)              317.4          639.1         160.0
  Q/Q Percent Change                     1.7            0.8           0.9
  Y/Y Percent Change                     2.8            0.8           0.4
Unemployment Rate (%)                    5.7            5.5           5.4
  Q3:13                                  6.3            5.8           5.8
  Q4:12                                  7.0            6.2           6.0
Building Permits                         660            970           166
  Q/Q Percent Change                   -17.7          -40.1          52.3
  Y/Y Percent Change                    -2.7          -22.1          58.1

                  Virginia Beach-Norfolk, VA   Charleston, WV   Huntington, WV
Nonfarm Employment (000s)              752.9            145.6            115.5
  Q/Q Percent Change                    -0.9             -0.4              2.6
  Y/Y Percent Change                     0.0             -1.4              0.2
Unemployment Rate (%)                    5.7              5.6              6.7
  Q3:13                                  6.0              5.8              7.0
  Q4:12                                  6.3              6.8              7.2
Building Permits                         811               24               50
  Q/Q Percent Change                   -67.6            -52.9            163.2
  Y/Y Percent Change                   -27.6            -36.8            525.0

For more information, contact Jamie Feik at (804) 697-8927 or e-mail Jamie.Feik@rich.frb.org


Opinion

Moral Hazard and Measurement Hazard
By John A. Weinberg

One of the most fundamental features of insurance markets is the possibility that providing insurance
against a specific hazard will increase the incidence of the hazard being insured. Someone who is at least partially
protected from a specific loss will generally have a reduced incentive to take costly actions to avoid the loss — the
consequence being a higher probability of a loss than if there were no insurance. This phenomenon has long been
recognized by practitioners of insurance and academics who study insurance markets as the "moral hazard problem."

If the term sounds a bit, well, moralistic, that's because it's an old term and may have originally been used to describe
things that we might be more likely to see in moral terms — intentionally setting a fire to make a fraudulent insurance
claim, for instance. But in its modern usage, it applies more broadly to the incentive effects of insurance, including
cases in which moral judgment might not be so obviously called for. Risk avoidance is costly, and neither maximum
avoidance (which you would tend to get with no insurance) nor minimal avoidance (resulting from full insurance) is
likely to represent the most efficient insurance contract. This basic trade-off between incentives for risk avoidance
and financial protection against risk shows up in any insurance setting, including those in which insurance is provided
by the public sector. Deposit insurance for banks generally makes banks and their creditors less likely to pay attention
to risks that could lead to balance sheet losses — pushing the banking industry at least somewhat in the direction of
a greater probability of suffering losses. And unemployment insurance (UI) affects the job-seeking incentives of the
unemployed, pushing the labor market at least somewhat in the direction of more unemployment.

In the unemployment case, as in any other, reasoning about the direction of the effect on incentives is one thing.
Discerning the magnitude of the effect is more difficult. The question in the case of UI, especially since the Great
Recession, is whether and to what extent insurance benefits — and in particular, extended benefits — have affected
employment. During the Great Recession, the federal government extended the maximum duration for unemployment
benefits to 99 weeks in most states. Thus far, studies have found fairly modest effects from this change on
unemployment: It appears to have contributed between one-tenth and one-half of a percentage point to the overall
unemployment rate. (See "Expanding Unemployment Insurance," p. 20.)

Here in the Fifth Federal Reserve District, effective July 1, 2013, North Carolina's legislature dramatically reduced
its UI benefit payout and duration. Consequently, North Carolina became ineligible for federal extended UI benefits
six months before they expired for the rest of the country. This has invited comparisons between North Carolina's
labor market performance and the performance of other states with access to extended UI.

Supporters of North Carolina's decision argue that the swift decline in its unemployment rate since July 2013 is
evidence that cutting UI benefits reduced moral hazard and prompted unemployed individuals to more actively seek
work. But critics of the cuts note that labor force participation in North Carolina also fell during the same period,
suggesting that some job seekers who lost UI benefits gave up looking rather than found work. They also note that
North Carolina's performance was similar to neighboring states that did not cut benefits early.

In this debate, a few words of caution are in order. First, and most broadly, it's always tricky to draw conclusions
from a single example. North Carolina's labor market is a small sample within the whole United States, and
attempting to apply its experience to the other 49 states is unlikely to provide enough evidence to conclusively
determine the effects of extended UI. Second, problems analyzing data tend to grow as geographic coverage shrinks,
especially in the case of unemployment numbers. The Current Population Survey used by the Bureau of Labor
Statistics to track unemployment relies on a sample of households designed to be representative of the entire
country. Disaggregating these data to estimate state-level statistics introduces some imprecision. Furthermore, labor
market data at the state level are often more "noisy" than countrywide data. For example, decisions by a single major
employer can have a large impact on state employment and mask the effects of policy changes like adjustments to UI.

Finally, it's important to bear in mind that assessing the effects of extended UI benefits on overall employment is
one input to, but not the same thing as, assessing the desirability of that policy. If society places greater value on UI
as a means to improve the welfare of the involuntarily unemployed, it may be more willing to tolerate some broader
negative effects like increased unemployment duration. As with all insurance problems where there is an element of
moral hazard, an optimal insurance scheme is one that weighs the benefits of cushioning the insured from some
losses against the costs of altered incentives. EF

John A. Weinberg is senior vice president and director of research at the Federal Reserve Bank of Richmond.

Next Issue
Dropouts

The economic case for finishing high school is overwhelming:
Dropouts, on average, face higher unemployment, lower wages,
and poorer health. Yet one in five U.S. high school students fails
to graduate on time. The evidence indicates that policies to solve
the dropout problem must start long before high school.

Minimum wage

Calls to raise the minimum wage are commonplace. But does a
higher minimum wage actually make low-income workers better
off? Economists used to be nearly unanimous that it would
increase unemployment. Today, the profession isn't quite as sure,
but a higher minimum wage still may not be an effective antipoverty tool.

Colonial-era land speculation

When history books talk about the motivating factors of the
American Revolution, they typically focus on tax issues. But in
the 1760s and 1770s, Britain’s attempts to curb land speculation
in the trans-Appalachian region became a major threat to the
political rights and economic interests of the Colonial elite.

Federal Reserve
Financial reporters often sound a lot like
bird watchers, classifying Fed officials as
hawks or doves. Hawks are assumed to be
more worried about the inflation side of
the Fed’s dual mandate, while doves are
portrayed as being more concerned about
unemployment. Where did these terms
come from, and do they accurately portray
the disagreements among monetary
policymakers?

Policy Update
The U.S. Justice Department’s “Operation
Chokepoint” initiative is intended to
crack down on banks doing business with
industries potentially engaged in fraud or
illegal activities. But it has drawn criticism
from lawmakers and affected industries,
such as payday lenders, who argue that the
Justice Department is targeting disfavored,
but legal, entities.

Interview
Dani Rodrik, an economist at the Institute
for Advanced Study, discusses globalization,
development, and factors that make
governments more likely to implement
successful economic policies. (Editor's Note:
This interview, which was originally scheduled
to appear in the present issue, will run in the
next issue.)

Visit us online:
www.richmondfed.org
• To view each issue's articles and Web-exclusive content
• To view related Web links of additional readings and references
• To subscribe to our magazine
• To request an email alert of our online issue postings

Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261

Change Service Requested

To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.

Our Perspective Series
Read about the Richmond Fed’s view of key financial and economic issues.

Too Big to Fail
Monetary Policy Independence
Housing Finance Policy
Price Stability and
Monetary Policy
Labor Market Conditions
and Policy
Workforce Development

Forthcoming:
Our Perspective on
Financial Education


To access the Our Perspective series visit www.richmondfed.org/research/our_perspective/