
SECOND QUARTER 2015

FEDERAL RESERVE BANK OF RICHMOND

Private Debt and its Public Effect
When can household and corporate debt harm the economy?

What's a Life Worth?

The Fed's 1994 Rate Hikes

Interview with James Poterba

VOLUME 19
NUMBER 2
SECOND QUARTER 2015

COVER STORY

11  The Public Perils of Private Debt
    Debt makes the wheels of commerce turn. But under certain circumstances, it can also heighten financial crises and recessions

FEATURES

16  Building a Smarter Grid
    Can "smart grid" technology change the way we use electricity?

20  What's a Life Worth?
    How to allocate our health care dollars is a challenging question, but economics could help

DEPARTMENTS

1   President's Message/Scaling Back Debt Subsidies
2   Upfront/Regional News at a Glance
3   The Profession/Economists Learn to Grapple with Big Data
4   Federal Reserve/Shifting Into Neutral
8   Jargon Alert/Real Interest Rate
9   Research Spotlight/Where Are the Grocery Stores?
10  Policy Update/Net Neutrality 2.0
24  Interview/James Poterba
30  Economic History/Transformation of Hilton Head Island
34  Around the Fed/The Effect of the 'Polar Vortex' on Economic Activity
35  Book Review/On Inequality
36  District Digest/Show and TEL: Are Tax and Expenditure Limitations Effective?
44  Opinion/Why Do College Graduates Earn More?

Econ Focus is the economics magazine of the Federal Reserve Bank of Richmond. It covers economic issues affecting the Fifth Federal Reserve District and the nation and is published on a quarterly basis by the Bank's Research Department. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia.

DIRECTOR OF RESEARCH
Kartik Athreya

EDITORIAL ADVISER
Aaron Steelman

EDITOR
Renee Haltom

SENIOR EDITOR
David A. Price

MANAGING EDITOR/DESIGN LEAD
Kathy Constant

STAFF WRITERS
Helen Fessenden
Jessie Romero
Tim Sablik

EDITORIAL ASSOCIATE
Lisa Kenney

CONTRIBUTORS
Joseph Mengedoth
Eamon O'Keefe
Santiago Pinto
Franco Ponce de Leon
Karl Rhodes
Michael Stanley

DESIGN
Janin/Cliff Design, Inc.

Published quarterly by the Federal Reserve Bank of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org
www.twitter.com/RichFedResearch

Subscriptions and additional copies: Available free of charge through our website at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565.

Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Econ Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Econ Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 2327-0241 (Print)
ISSN 2327-025x (Online)

PRESIDENT'S MESSAGE

Scaling Back Debt Subsidies

As individuals, we love debt and we hate it. More
precisely, we love what it enables us to do — use tomorrow’s income to pay for something we want today —
while we don’t like the burdens it places on us, especially if we
haven’t managed it well or we’ve been financially unlucky.
Private debt has constructive uses, such as allowing households to pay for large purchases like housing or education
over time and allowing owners of firms to borrow against
future earnings to finance projects. Meanwhile, lenders enjoy
a steady stream of interest payments, which is attractive to
more risk-averse investors. But private debt can be very costly
as well. As we saw during the financial crisis of 2007-2008,
highly leveraged balance sheets made it very difficult for
households and firms to adjust to the unexpected shock to
the housing market. Many households were unable to keep
up with their mortgage payments and were forced into foreclosure or bankruptcy. Financial firms that had taken on large
amounts of short-term debt to finance long-term investments
found themselves under significant stress when credit markets suddenly dried up.
Why did those households and firms become so highly
leveraged in the first place? One contributing factor is that
the United States — like many other countries — encourages the use of debt through its tax code. For example,
households are able to deduct the interest payments on their
home mortgages from their taxable incomes. While this policy has remained in place to encourage greater homeownership, it is likely not the most effective way of achieving that
goal. It encourages households that do decide to purchase
a home to take out larger mortgages than they otherwise
would, leaving them more vulnerable to adverse movements
in housing prices. Another criticism is that the tax break is
regressive, since it mostly benefits more affluent households
that can afford to buy homes in the first place.
Private firms also enjoy favorable tax treatment for debt.
The interest they pay on debt is considered a deductible
business expense — unlike dividends paid out on equity.
Economic research suggests that firms do respond to this
incentive. As their marginal tax rate increases, so does their
ratio of debt to assets. And even though banks are subject
to minimum capital requirements, research by economists
at the International Monetary Fund suggests that they also
increase their leverage as a result of this tax distortion. By
encouraging financial and nonfinancial firms to take on
greater leverage, these tax policies increase the risk of insolvency in the event of economic shocks, as we saw during the
financial crisis. Moreover, banks made use of hybrid borrowing arrangements that qualified as capital for regulatory
purposes but qualified as debt for tax purposes.
Of course, tax policy is not the only factor that encourages private-sector overindebtedness. Financial firms that

feel either implicitly or explicitly protected from losses by
government guarantees have
greater incentives to increase
leverage and rely on risky
funding. And prior to the
housing market crash, government home mortgage guarantees contributed to lowered
lending standards that helped
fuel home mortgage borrowing. Additionally, some economists have argued that there
are inherent characteristics of debt that encourage its overuse (see “The Public Perils of Private Debt,” p. 11), although
I am a bit skeptical of these claims.
At a minimum, subsidizing debt through the tax code is
likely to exacerbate these problems. In my view, we would be
better off scaling back the tax preferences that favor the use
of debt over equity. For housing, there are ways to encourage
homeownership (assuming that is a goal policymakers want
to pursue) without encouraging the buildup of private debt.
Establishing tax-preferred savings vehicles that homebuyers
can use as down payments would encourage them to build
equity instead of debt, which would better insulate the economy from the negative effects of price changes in the housing
market. The government already does this to some degree by
allowing first-time homebuyers to withdraw some funds without penalty from their IRA to help make a down payment.
For firms, either eliminating or capping the corporate
interest deduction would help to remove the artificial bias
toward debt financing. Alternatively, the government could
give equity financing equal treatment by providing an equivalent deduction for dividends. A recent study of six large
countries in the European Union by economists at the
European Commission’s Joint Research Center suggests
that fully eliminating the corporate debt bias could cut the
financial losses associated with banking crises by as much
as half. Moreover, reducing excessive household indebtedness would reduce the likelihood of costly and burdensome
workouts when borrowers get in trouble. Regardless of the
exact size of the effect, it seems clear that reducing the tax
favoritism for debt would help reduce the negative effects of
credit booms and busts.
EF

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND


UPFRONT

Regional News at a Glance

BY FRANCO PONCE DE LEON

MARYLAND — Montgomery and Prince George’s counties agreed in August
to contribute more funding to the Washington Metro Purple Line, a proposed
16-mile light rail line extending from Bethesda to New Carrollton. Gov. Larry
Hogan had demanded more county money as a condition for the project to
proceed. The state has committed $168 million, compared with $347 million from
the two counties. More federal and private funding is necessary to fully finance
the project, estimated at $2.45 billion. If private money comes through and
Congress approves federal funding later this year, construction may begin as early
as May 2016.
NORTH CAROLINA — Blue Cross and Blue Shield, North Carolina’s largest
health insurer, announced in August that it wanted a premium hike of 34.6 percent
for customers under 65 who buy individual plans under the Affordable Care Act
(ACA). The company said the reason was that older and sicker customers continue
to outnumber the healthy ones and use more expensive health care services than
anticipated. The proposed rate increase, effective Jan. 1, would apply only to
individual plans under the ACA, not employer plans, and for most customers, much
of the cost would be covered by ACA subsidies. The North Carolina Department of
Insurance reviewed the request and announced its approval on Nov. 1.
SOUTH CAROLINA — Project leaders in South Carolina and Georgia agreed
in August to start seeking permits so the Army Corps of Engineers can begin
surveying the land on which the Jasper Ocean Terminal, a 1,500-acre facility that
will be located on the South Carolina side of the Savannah River, will be built.
The $4.5 billion project, which is expected to be completed in 2029, will be
the largest single-site container terminal upon completion, easing volume off
neighboring ports in Charleston and Savannah. By 2040, it has the potential to
support more than 1 million jobs in both states, according to a 2010 study by the
University of Georgia and the consulting firm Wilbur Smith & Associates.
VIRGINIA — According to the Virginia Department of Motor Vehicles,
registered in-state ride-sharing drivers for companies like Uber and Lyft are close
to 19,000 as of Aug. 3. Last February, Gov. Terry McAuliffe signed a bill legalizing
ride-sharing services provided they abide by state regulations, and since then they
have expanded throughout the state. Meanwhile, Uber has cut prices in Richmond
and is looking at ways to price its services with more flexibility in different parts
of Virginia.
WASHINGTON, D.C. — City officials approved in July a proposal to place a
minimum-wage hike on the ballot in November 2016. If the measure is adopted,
D.C.’s minimum wage — which currently stands at $10.50 per hour — would
rise to $15 per hour by 2020, making it one of the highest minimum wages in the
country. The minimum wage would also be annually indexed to inflation. The
next step for supporters is to gather the 23,200 signatures needed to ensure the
proposal is placed in front of voters.
WEST VIRGINIA — Declining coal production and lower natural gas
prices have caused West Virginia’s tax receipts to fall, according to the state’s
Department of Revenue. In July, the first month of the 2016 fiscal year, receipts
totaled $251.78 million, an 8.2 percent drop from the previous July. The state
attributed this to a shortfall in severance tax collection, which has fallen by almost
half, year-on-year, due to lower gas and coal prices and declining coal production.

THE PROFESSION

Economists Learn to Grapple with Big Data

BY JESSIE ROMERO AND AARON STEELMAN

Most of us have had the experience of shopping
online and receiving a recommendation for an
item that caught our eye — and wondered how
the suggestion was generated. You have “big data” to thank
for it. If you go on to make the additional purchase, so does
the retailer. And increasingly, it isn’t just companies that are
interested in big data; economists are too.
Our browsing and purchasing experiences, along with
those of other consumers with similar tastes and interests,
generate an enormous amount of information that can be
collected and filtered to provide us with something like
a tailor-made shopping environment. As gathering that
information has gotten easier and analyzing it has gotten
cheaper, businesses are aiming to use it to boost sales. The
recommendations we receive may not always be hits, but
they don’t have to be. In baseball, a batter who is successful just 30 percent of the time is often an all-star — and
most retailers would be happy with an average much lower
than that.
What exactly is “big data”? There is no consensus
definition, in part because big data is relatively new and
in part because people use it for many different purposes.
But for economists who work with big data, there is broad
agreement on what features make it valuable. Linnet Taylor
of the University of Amsterdam and Ralph Schroeder and
Eric Meyer of the Oxford Internet Institute have surveyed
more than 125 social scientists on various issues related to
big data. In a 2014 article published in the journal Big Data
& Society, they reported that the economists they have
talked to are most interested in “granular, population-level
data with multiple dimensions that allow researchers to
analyse cases along many variables,” which permit them “to
test theories of behaviour that were previously untestable,
creating a new set of metrics for issues of economic interest
which were previously in the realm of theory.”
Economists have worked with big data, both public and
private, in almost all areas of their discipline, from historical tax data to look at economic inequality to Medicare
rolls to examine the efficiency of the health care industry.
But if there is one area where big data may be especially
promising, it is labor economics. Indeed, John Horton and
Prasanna Tambe of New York University’s Stern School
of Business recently noted that we are “clearly entering a
golden age for empirical labor market research,” one where
there is “a growing opportunity to revisit old questions
with new and better data and to answer new questions
raised directly by these new contexts.”
Horton and Tambe point to several recent papers that
have employed big data to interesting effect. For instance,
Ioana Marinescu of the University of Chicago and Roland

Rathelot of the University of Warwick used ZIP code level
data from 500,000 job seekers who sent out more than
5 million applications through CareerBuilder.com in 2012
to examine the extent to which geographic mismatch is
a driver of unemployment. They found that job seekers
are 35 percent less likely to apply for a job 10 miles away
from their ZIP code of residence — but because there are
enough job openings on average, this local preference has
been fairly unimportant in the aggregate. In a similar vein,
Scott Baker of Northwestern University’s Kellogg School of
Management and Andrey Fradkin, a postdoctoral associate
at MIT’s Sloan School of Management, used Google search
data to look at online job search patterns in Texas and how
those patterns change as people come close to exhausting
their unemployment insurance benefits.
Of interest to macroeconomists, including monetary
policymakers, is the Billion Prices Project at MIT, which
collects price data daily from hundreds of online retailers.
Those data are used for a variety of research purposes, perhaps most notably to construct a complementary inflation
measure to the Consumer Price Index. Such data are collected internationally, too, and could be particularly helpful for
countries that do not have as reliable government measures.
Most economists are optimistic about the use of big data
in academic research and in the evaluation of public policy.
At the same time, most also agree that the size and complexity of some of those data sets will require new statistical
techniques to get beyond mere correlation and to the identification of causal relationships that help us test theory. As
Jonathan Levin of Stanford University notes, “Everywhere
you look you can generate an interesting fact. But figuring
out how to turn that into … a researchable question is really
challenging.”
Privacy concerns will loom large, too, as researchers
avail themselves of data sets containing sensitive information. “De-identification” methods will need to be robust to
ensure appropriate anonymity. Moreover, the proper use of
predictive modeling to achieve public policy ends will need
to be determined. Big data could be helpful, suggest Levin
and his colleague Liran Einav, in helping the government
identify people with a high marginal propensity to consume
— people who could then be targeted for tax rebates as part
of an “economic stimulus” package. Private firms routinely
engage in similar activities, of course, but people’s reaction
to such measures by the public sector likely would be more
circumspect.
What is not in doubt is that big data is going to keep
getting bigger. As it does, economists will have to figure out,
often in collaboration with colleagues in other disciplines,
such as computer science, how to make the best use of it. EF

FEDERAL RESERVE
Shifting Into Neutral
BY HELEN FESSENDEN

In 1994, the Federal Reserve launched a pre-emptive strike against inflation in a series of interest-rate hikes that drew controversy at the time

How does a central bank normalize monetary policy after
a long spell of unusually low
interest rates? This may seem like a
question very much of the present, as
Fed leaders ponder interest-rate policy
following the Great Recession of 2007-2009 and the tepid U.S. recovery. But it's
also a challenge the Fed confronted two
decades ago. In 1994, the Federal Open
Market Committee (FOMC) wrestled
with a similar dilemma as it considered
emerging from a sustained period of low
interest rates, amid signs of a reviving
economy, growing aggregate demand,
and no obvious signals of inflation. At
the time, a pre-emptive strike had never
been done before. As then-Chairman
Alan Greenspan put it in his 2007 memoir, The Age of Turbulence, such a strategy
carried great risk. “Let’s jump out of
this sixty-story building and try to land
on our feet,” is how he described the
feeling.
Many Americans remember the
1990s as remarkable boom years, when
unemployment dropped to record lows,
productivity kept climbing, and inflation barely budged. But the early years
of the 1990s were a different story. In
1990-1991, the United States suffered a

[FIGURE: A "Soft Landing" (the Fed raised rates in 1994 and the economic recovery continued): federal funds rate, unemployment rate, and core CPI, in percent, 1990-2002. NOTE: Grey bars denote recessions. The consumer price index (CPI) is used as the chief inflation indicator. The personal consumption expenditures (PCE) price index is the FOMC's preferred measure today. SOURCE: Economic Research Division, Federal Reserve Bank of St. Louis]


recession, followed by a sluggish recovery and rising unemployment. Facing
this environment, the FOMC repeatedly cut the federal funds rate until real
short-term interest rates had effectively
dropped to zero in the fall of 1992.
The picture improved substantially
in 1993, especially by the fall, most notably in business investment and housing
starts, while leading indicators of inflation — such as low inventory levels and
rising inflation expectations reflected in
longer-term bonds — began to appear.
By year-end, the FOMC coalesced
around the view that such historically
low rates were no longer needed to
spur spending and investment. But the
good economic news also posed a new
dilemma: How gradually should the
FOMC dial back its accommodative
stance, given that the recession and high
unemployment were still recent memories? How could it take its message to
the public when inflation appeared contained? And what would be the impact
of making such a move given that it had
been five years since the FOMC last
tightened policy?
Over the course of the next year,
from January 1994 to January 1995, the
FOMC raised the fed funds rate seven
times, from 3 percent to 6 percent, in
what was later seen as a turning point.
Many economists view this episode
as the first major tightening action by
the FOMC that was truly pre-emptive,
moving ahead of concrete evidence of
inflation. The 1994 cycle was also the
first time that the FOMC issued a
statement to announce policy changes
as a way to explain its decision to the
public as well as signal its anti-inflation
commitment to markets. Even though
the FOMC at the time did not view
the statement as a sea change, it turned
out to be the first in a series of moves
establishing greater transparency and
anchoring public expectations about
monetary policy over the medium and
long run.

The 1994 hikes provoked strong political resistance
— especially in Congress — as well as ongoing turmoil in
bond markets. But as the year went on, the fundamentals
bore out the FOMC’s assessment that reverting to tighter
monetary policy would not stop the recovery in its tracks.
The economy continued to expand, rising from 2.7 percent
real GDP growth in 1993 to 4 percent in 1994. Growth
did eventually decelerate in 1995 — as Fed forecasts had
expected — but it was a “soft landing” rather than a hard
fall. In fact, the economy expanded by 2.7 percent in 1995,
although it slowed down in the fourth quarter to less than
1 percent. Meanwhile, inflation stayed contained, with the
core consumer price index (which omits volatile food and
energy prices) generally hovering around or below 3 percent
in 1994 and 1995. To the surprise of many, the unemployment rate kept on falling, from 6.6 percent in January 1994
to 5.6 percent in early 1995 — more than a full percentage
point below FOMC forecasts in early 1994. (See chart.) And
long-term bond rates — after rising in the spring and summer — started falling by late 1994 and eventually stabilized
by early 1996. This movement indicated to the FOMC that
long-term inflation expectations had been anchored by the
series of hikes and the accompanying announcements. The
Fed’s oft-stated intention that it would contain inflation
appeared at long last to be attaining credibility.
J. Alfred Broaddus Jr., president of the Richmond Fed
from 1993 to 2004, had his first rotation as an FOMC voting
member in 1994. The tightening decision, he explains, came
out of a fundamental shift in the 1980s, as economists began
thinking about how Fed monetary policy could stabilize
prices by focusing on inflation expectations and central bank
credibility.
“What you say and what you do before inflation breaks
out affects the Fed’s ability to control inflation at minimum
cost,” Broaddus says. “This understanding grew out of the
mistakes of the 1970s. Back then, inflation psychology was
so embedded, the only thing that would break inflation was
a hit to the economy. By taking a pre-emptive approach, we
learned, you can minimize the fallout from tightening.”

A Gradual Healing
A chief source of the concern to the FOMC in late 1993 and
early 1994 was that the federal funds rate had been unusually
low for more than a year. At 3 percent, that rate might not
seem so low relative to today’s near-zero levels. However,
the FOMC compares the inflation-adjusted (or “real”) fed
funds rate to the economy’s long-run “natural” real rate —
the rate that will neither stimulate nor depress economic
activity — and it sets the real fed funds rate relatively low to
support economic activity during a recession. (See “Jargon
Alert,” p. 8.) Since the Great Recession, economists estimate the natural rate has fallen close to zero, but in the
early 1990s, most calculations put it at 3 percent or higher.
In 1994, with roughly 3 percent inflation, the 3 percent fed
funds rate thus represented an “accommodative” stance
rather than a neutral one.
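
As a minimal numerical sketch of that comparison (the inputs are the approximate figures cited in this article, rounded for illustration, not official estimates):

    # Classify the policy stance by comparing the real fed funds rate with an
    # assumed natural real rate, using rough early-1994 figures from the text.
    fed_funds_nominal = 0.03   # fed funds rate of 3 percent
    inflation = 0.03           # inflation of roughly 3 percent
    natural_real_rate = 0.03   # early-1990s estimates put the natural rate near 3 percent

    real_fed_funds = fed_funds_nominal - inflation  # roughly 0 percent

    if real_fed_funds < natural_real_rate:
        stance = "accommodative"
    elif real_fed_funds > natural_real_rate:
        stance = "restrictive"
    else:
        stance = "neutral"

    print(f"Real fed funds rate: {real_fed_funds:.1%}; stance: {stance}")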

One reason for the persistence of such low rates was
the legacy of the 1990-1991 recession, which saw real GDP
contract in the fourth quarter of 1990 by an annualized
3.5 percent, followed by a 2 percent drop in the next quarter.
FOMC members were especially concerned over the troubled banking and thrift industry, as hundreds of financial
institutions collapsed in the late 1980s and early 1990s under
the weight of bad loans. Another factor was the recessionary
effect of the spike in oil prices following the Iraqi invasion of
Kuwait in 1990. Regional downturns in places such as New
England and Texas were especially severe.
The FOMC had responded to these conditions by cutting
the fed funds rate a full 3 percentage points, from 6 percent
to 3 percent, from mid-1991 to late 1992. Despite the official
end to the recession in March 1991, however, employers kept
shedding jobs, causing unemployment to rise through June
1992, up to 7.8 percent. Absent any early signs of inflation,
and with the FOMC’s internal “Greenbook” forecast pointing to ongoing slack in the labor market, these conditions
had convinced Greenspan and a majority of FOMC members that low real interest rates were appropriate, especially
since businesses and households were struggling to repair
their balance sheets.
By 1993, however, the economy had turned the corner.
GDP growth rapidly picked up, while the unemployment
rate fell below 7 percent by fall 1993. Meanwhile, several
leading indicators caught the FOMC’s notice, notably,
the yield on 30-year Treasuries, which jumped by almost
half a percentage point in the fourth quarter. To some
members, this indicated that long-term inflation expectations were on the rise even though measured inflation
was holding steady.

‘A Slightly Shabby Notion’
Although the FOMC was largely united on the need for a
policy shift by the winter of 1993, many on the committee,
including Greenspan, were concerned about the market
impact of even a modest tightening. As a way to ease the
surprise, Greenspan decided to make public comments just
ahead of the FOMC’s first meeting of the year. “Short term
rates are abnormally low,” he stated in congressional testimony in January 1994. “At some point, absent an unexpected
and prolonged weakening of economic activity, we will need
to move them.”
When the FOMC gathered for its first meeting of the
year on Feb. 3-4, the discussion focused on the fourth-quarter
strength of a number of indicators, including housing starts,
consumer durables, business fixed investment, and a jump
in the hours of an average workweek. Business inventories
remained at low levels, which caused some members to worry
that tight supply would not be able to keep up with growing
consumer demand. More broadly, the accelerating pace of
GDP growth in the fourth quarter — initially estimated at
an annualized 5.9 percent, later revised to 7.0 percent — suggested to the committee that the economy was ready for a
shift away from zero real interest rates.

“We have had an extraordinarily successful run in restoring
balance to a disturbed economic system,” Greenspan told his
FOMC colleagues at the February meeting as he concluded
his case for a rate hike. “We haven’t raised interest rates in
five years, which is in itself almost unimaginable … The presumption that inflation is quiescent is getting to be a slightly
shabby notion.”
Greenspan laid out two possibilities to the committee.
One was to raise the federal funds rate by half a percentage
point, or 50 basis points, from 3 percent to 3.5 percent. The
economic fundamentals, in Greenspan’s view, merited such
a shift, but this risked considerable market disruption, given
that it would be the first rise in five years. The other risk
was that 50 basis points might be seen as a one-off measure,
when in fact the FOMC expected that it would have to conduct a series of hikes through the year that were commensurate with rising output and rising demand.
A better approach, argued Greenspan, would be to lift
the fed funds rate by only 25 basis points but then make
an announcement after the meeting — an unprecedented
move at the time — to signal that this was a first step in a
broader strategy to move ahead of inflationary pressures.
Furthermore, he argued, the shock to markets would be
less severe than with an immediate move of 50 basis points.
As the discussion unfolded, this view prevailed among the
committee members, including those who thought the
initial hike should be higher, and they voted unanimously
in favor.
Despite its great significance in retrospect, the FOMC
members generally viewed the decision to state the policy
change publicly as an ad hoc move that addressed the specific conditions of their announcement. Greenspan, who
had in the past opposed the idea of public announcements
on grounds that they limited the Fed’s flexibility, made clear
to the committee he did not view this move as establishing
a new practice.
“We don’t have to announce our policy moves; there’s
nothing forcing us to do so,” argued Greenspan. “The issue
is not whether if we do something, we will be forced to do it
again. I think we can avoid that. … I see no reason for such
an announcement to be a precedent.”
As it turned out, the decision to issue an announcement was the first in many steps toward greater transparency during Greenspan’s tenure. It was not only the
first time the FOMC offered to the public a summary and
brief explanation after meetings that formalized a policy
change; it was also the first year that most policy changes
were made at the meetings. Previously, it was common for
the FOMC to make the policy decisions during conference calls between sessions. Another major step occurred
in 1999, when the FOMC decided to issue statements after every meeting, not just after those when a policy change occurred, along with more precise language on its near- and mid-term policy intentions (or "tilt"). And three
years later, it started making the vote count and dissents
public.

The ‘Bond Bloodbath’
Despite Greenspan’s public signals and the calming intention behind the first-ever statement, markets met the news
of the rate hike with surprise, with the Dow Jones Industrial
Average dropping almost 2.5 percent. A more unusual reaction, however, occurred in the bond markets. The yield on
30-year Treasuries — which had been one measure, in part,
of longer-term inflation expectations and generally was not
prone to sudden movements — jumped by 40 basis points,
compared with only 25 for short-term Treasuries. This move
was the start of a long, massive sell-off in the U.S. bond
market, which wound up losing $600 billion in value from
January to September of that year. (Because bond values
and yields have an inverse relationship, the increase in yields
corresponds to a decline in bond prices.)
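
As a rough illustration of that inverse relationship, the short sketch below prices a hypothetical 30-year bond at two different yields; the 6 percent coupon and the 40-basis-point move are stand-in figures chosen for illustration, not actual 1994 market data.

    # Price a plain annual-coupon bond by discounting its cash flows at the yield.
    def bond_price(face, coupon_rate, yield_rate, years):
        coupons = sum(face * coupon_rate / (1 + yield_rate) ** t
                      for t in range(1, years + 1))
        principal = face / (1 + yield_rate) ** years
        return coupons + principal

    before = bond_price(face=1000, coupon_rate=0.06, yield_rate=0.060, years=30)
    after = bond_price(face=1000, coupon_rate=0.06, yield_rate=0.064, years=30)

    print(f"Price at a 6.0 percent yield: {before:,.2f}")
    print(f"Price at a 6.4 percent yield: {after:,.2f}")  # higher yield, lower price
    print(f"Change in price: {after / before - 1:.1%}")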
To some on the committee, including Broaddus, that
jump meant the FOMC’s move had not been sufficient in
anchoring expectations firmly on the Fed’s anti-inflation
commitment and, in fact, showed that the markets still
believed that long-term inflation was a threat. In retrospect, what also may have been going on was an early harbinger of the dynamics at play in 2008, although on a much
smaller scale. Highly leveraged institutions such as hedge
funds had been borrowing short-term to buy longer-term,
higher-yielding debt. As long as bond spreads were stable,
these firms could offer their investors double-digit returns
because they could keep on financing their debt. But once
the Fed moved in February 1994, even a modest rise in short-term financing costs could upend this strategy. As a result,
these bondholders were forced to sell the securities they held,
including higher-yielding long bonds, to cover the borrowing
costs of their short-term debt. A long-bond sell-off, in turn,
drove those yields higher and steepened the yield curve. Banks
and insurance companies were also badly affected. That year is
still known among investors as “the bond bloodbath.”
The FOMC discussed this turbulence as it weighed its
options at its March meeting. Generally, the indicators that
it noted in late 1993 — housing, business fixed investment
— were still strong. Inflation itself remained moderate, as
were wage gains. The consensus was that the FOMC should
announce a further tightening, with the only question being
how much.
Assessing the recent market turmoil, Greenspan said he
saw an analogy to the 1987 crash, which he said “stripped out
a high degree of overheating.” Before the Feb. 4 decision, he
said, “I don’t think we were aware of the apparent underlying
speculative elements involved in the markets on a worldwide
basis that … our February move unearthed.” But the pattern
was otherwise similar. “While this capital gains bubble in
all financial assets had to come down, instead of the decline
being concentrated in the stock area, it shifted over into the
bond area,” he argued.
Given this market volatility, Greenspan concluded that
the FOMC should only take another modest 25 basis point
step. The committee agreed, although this time two members — Broaddus and Cleveland Fed President Jerry Jordan —

dissented on grounds that a 50 basis point increase was
needed to adequately pre-empt emerging inflation pressures.
Looking back today, Broaddus said he still believes his
dissent was the right decision, in large part due to his concern over the movement in long bond yields. But he also
notes that one tool economists today use to gauge inflation
expectations — the difference between yields on a particular
inflation-protected Treasury (known as TIPS) and non-indexed
Treasuries — was not available yet; TIPS were not issued
until 1996.
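
The gauge itself is just a difference of two market yields, as in this minimal sketch; the numbers are made-up illustrative values rather than historical quotes.

    # "Breakeven" inflation: the spread between a nominal Treasury yield and a
    # TIPS yield of the same maturity, read as the market's expected inflation.
    nominal_10yr_yield = 0.045  # conventional 10-year Treasury (illustrative)
    tips_10yr_yield = 0.020     # 10-year inflation-protected Treasury (illustrative)

    breakeven_inflation = nominal_10yr_yield - tips_10yr_yield
    print(f"Implied expected inflation: {breakeven_inflation:.1%}")  # 2.5%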
“Instead of TIPS, we had to look at long bond rates. But
this was enough to put us on alert,” he says. “That spring and
summer, I still thought we needed to be more aggressive.”

The Glide Path
By July 1994, the FOMC had taken the fed funds rate to
4.25 percent and decided to take a pause at its meeting that
month on the grounds that the effects of its action in the
spring were starting to be felt. More broadly, global currency
markets had become volatile, and the committee did not
want to add to those pressures. The committee also decided
to hold off on issuing a public announcement this time —
since there was no policy change to announce — and to
revisit later the broader question of issuing statements. (It
decided in January 1995 to issue public statements when it
had voted for a policy change and to reserve the right to issue
statements even when there was no policy change.)
Regardless of the intent of greater transparency, the
move did not mitigate broader criticism from lawmakers
over the change in policy. During Greenspan’s semiannual
Humphrey-Hawkins testimony before Congress in July,
Sen. Paul Sarbanes (D-Md.) charged that the Fed had “engineered a slowdown in the economy despite the absence of an
inflation problem. The domestic economy is generating less
inflation than it has in three decades.”
Another target was the Fed's 12 regional Reserve Bank presidents, who were brought in to testify over the course of that year and were seen by some lawmakers as excessively hawkish. Rep. Henry Gonzalez (D-Texas) introduced a bill that would, among other things, remove the presidents as voting members of the FOMC. The legislation failed to gain traction, but some of its other proposals, including a broader audit of the Fed, remain in circulation today.
The FOMC went on to lift rates again in August and November as the economy looked increasingly robust. Altogether, the fed funds rate had risen to 5.5 percent by year-end, with the committee voting for one final increase in January to bring the rate to 6 percent. Unemployment was steadily dropping, while consumer spending and business fixed investment stayed at brisk levels. Despite higher mortgage interest rates (a result of rising long-term bond yields), the housing market was picking up. Reflecting the committee's ongoing concern that fall, the November hike, in fact, was the biggest of the year — a full 75 basis points.
Former Fed Vice Chairman Alan Blinder, who served on the FOMC from 1994 to 1996, describes the episode as remarkable for several reasons. One was the degree of unity in the final votes, even though the actual debates preceding them had "a lot less cohesion," in his words.
"Greenspan was very good in using the wording of the statement to tack one way or the other, making sure as many members were on board as possible," says Blinder. "He would use the 'bias' very skillfully. But there was a lot more disagreement in those debates than the final votes would suggest."
During his time on the FOMC, Blinder notes, he had his differences with Greenspan over transparency issues. But he still considers the 1994 cycle "a complete success in capping inflation."
"We held inflation at 3 percent while engineering a soft landing with the economy at full employment," he says. "That is as perfect as you could get."
EF
Readings

“Minutes of the Federal Open Market Committee.” Board of
Governors of the Federal Reserve System, Feb. 3-4, 1994;
March 22, 1994.

Greenspan, Alan. The Age of Turbulence: Adventures in a New
World. New York: Penguin Press, 2007.
Goodfriend, Marvin. "The Phases of U.S. Monetary Policy: 1987
to 2001.” Federal Reserve Bank of Richmond Economic Quarterly,
Fall 2002, vol. 88, no. 4, pp. 1-17.

Sellon, Gordon H. “Monetary Policy Transparency and Private
Sector Forecasts: Evidence From Survey Data.” Federal Reserve
Bank of Kansas City Economic Review, Third Quarter 2008,
pp. 7-34.

Pakko, Michael. “The FOMC in 1993 and 1994: Monetary Policy
in Transition." Federal Reserve Bank of St. Louis Review, March-April 1995, vol. 77, no. 2, pp. 3-25.

Walsh, Carl. “What Caused the 1990-1991 Recession?” Federal
Reserve Bank of San Francisco Economic Review, 1993, no. 2,
pp. 33-48.



JARGON ALERT

Real Interest Rate

BY HELEN FESSENDEN

When baseball great Yogi Berra noted that "a
nickel ain’t worth a dime anymore,” he was
restating a central fact of economics: Inflation
erodes the purchasing power of money. By extension, we
need to adjust interest rates for inflation to understand their
value over time. The nominal interest rate is the stated rate
you pay on a loan, or that a bank pays on a deposit. The real
interest rate is the nominal rate adjusted for the change in
purchasing power over time, or inflation. The real interest
rate is what truly affects borrowing, lending, and investment.
One of the first economists to closely examine the interaction between interest rates and inflation was Irving Fisher
(1867-1947). He concluded that inflation and nominal rates
are closely associated: When the money supply goes up, both
inflation and the nominal interest rate
rise in the long run, a relationship known
as the Fisher effect.
Even though inflation and nominal
rates are closely tied, real and nominal
interest rates can diverge, namely, when
prices change quickly and dramatically.
For example, when U.S. inflation (as
measured by the consumer price index)
began picking up in 1973, nominal interest rates went up while real rates fell, as
inflation rose more quickly than nominal rates. Real interest rates fell below
zero and generally stayed there until 1980, when nominal
rates (as measured by the three-month Treasury bill) reached
15 percent. Then inflation finally began to fall. By autumn
1983, the real interest rate had risen to more than 4 percent
(compared with a 9 percent nominal interest rate), as inflation fell more rapidly than nominal interest rates.
Conversely, deflation will push real interest rates above
nominal interest rates. Japan has held nominal interest rates
near zero since the mid-1990s, while its economy has gone
through spells of deflation. In 2013, for example, when the
average bank deposit interest rate was 0.5 percent, Japan’s
real interest rate reached 1.9 percent, according to the
World Bank. In such cases, economists become concerned
that relatively high real interest rates will dampen growth in
an environment that is already trending toward deflation.
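
Here is a minimal sketch of that arithmetic; the inflation inputs are rough approximations back-solved from the figures quoted above, not official statistics.

    # Real rate via the exact Fisher relation, plus the common shortcut
    # real = nominal - inflation. All inputs are illustrative approximations.
    def real_rate(nominal, inflation):
        return (1 + nominal) / (1 + inflation) - 1

    episodes = {
        "U.S., autumn 1983 (approx.)": (0.09, 0.045),  # ~9% T-bill yield, ~4.5% inflation
        "Japan, 2013 (approx.)": (0.005, -0.014),      # ~0.5% deposit rate, mild deflation
    }

    for label, (nominal, inflation) in episodes.items():
        exact = real_rate(nominal, inflation)
        shortcut = nominal - inflation
        print(f"{label}: real rate {exact:.1%} (shortcut {shortcut:.1%})")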
The real interest rate reflects the true return on savings as well as the true cost of investment and therefore is
the key rate that influences the economy. For example, an
investor assessing a capital investment decision makes that
calculation by adjusting the rate of return for expected inflation. However, as many economists, including former Fed
Chairman Ben Bernanke, point out, monetary policy does
not determine the real interest rate in the long run. Rather, a
range of factors, including an economy’s potential for growth
and the productivity of its workforce, establish the real rate
over the medium to long term. Under a concept introduced in
1898 by the Swedish economist Knut Wicksell, this long-run
rate, the level at which the real interest rate settles when labor and capital are fully utilized, is known as the equilibrium or "natural" interest rate. In a robust economy, the equilibrium
rate is high because the return on investment is high, while
the opposite holds in a sluggish economy. For central bankers,
Bernanke argued recently on his blog, the goal is to influence
market rates so that they match up with the equilibrium rate.
Today, economists are engaged in a debate over how to
measure the equilibrium rate, including which variables to
use and how to disentangle long-run factors from short-term ones. Richmond Fed economists
Thomas Lubik and Christian Matthes
recently analyzed three variables — real
gross domestic product growth, the
core personal consumption expenditures inflation rate (adjusted for energy- and food-price fluctuation), and the real
interest rate — and found that the natural rate has fallen from about 3.5 percent
in the early 1980s to 0.5 percent in the second
quarter of 2015, while never dropping
below zero. In their calculation, the
natural rate has stayed above the real
rate since 2009, which can support the idea that monetary
policy may have been too loose.
While the best measure of the real natural rate is under
debate, a longer-term trend is clear: Both real and nominal
rates have been falling across the globe. Some drivers are
transitory factors tied to the financial crisis response, such as
quantitative easing policies (which lowered long-term bond
yields) and private-sector deleveraging (which dampened consumption). But this drop started well before 2008 (in some
countries, it began as early as the 1980s) and has also affected
long-term rates. For these reasons, argue some economists,
the trend is a sign of factors that are bound to persist for a
while. Possible explanations include expectations of sluggish
long-term growth, especially in China, and slowing global productivity. Demographics are also in play, as aging populations
save more and spend less. Meanwhile, Bernanke has pointed to
a “savings glut” in emerging markets, especially in Asia, while
former Treasury Secretary Lawrence Summers has argued
that a trend known as “secular stagnation” is at work, in which
aggregate global demand has become suppressed. In short, the
persistence of low rates can be a good thing — cheaper borrowing for public and private investment, for example — but it
could also be a symptom of underlying economic fragility. EF


POLICY UPDATE

Net Neutrality 2.0
BY TIM SABLIK

On Feb. 26, 2015, the Federal Communications
Commission (FCC) announced a new version of
what it calls its Open Internet rules. The rules, which
went into effect on June 12, reclassify broadband Internet as
a “telecommunications service” and make fixed and wireless
Internet Service Providers (ISPs) subject to Title II of the
Communications Act. Under these rules, ISPs are prohibited
from blocking or slowing any legal Internet content or delivering some content faster in exchange for payment from the
content provider (known as “paid prioritization”).
Collectively, these principles are often referred to as
“network neutrality” or “net neutrality,” an idea that has
been a point of contention in the United States for roughly
a decade. The FCC issued its first rules aimed at enforcing
net neutrality in 2010, but they were struck down by the U.S.
Court of Appeals for the D.C. Circuit in a 2014 decision.
The court held that the FCC did not have the authority to
ban paid prioritization under its existing classification of
ISPs. Reclassifying ISPs as “common carriers” under Title II
is intended to give the FCC that authority.
Proponents of this regulation say that, in the absence of
such rules, ISPs with market power could act as gatekeepers
of Internet content. Currently, content providers pay only
their own ISPs to transmit content. Without a net neutrality
rule, content providers might also have to pay a fee to consumers’ ISPs to avoid having their content transmitted more
slowly. Alternatively, ISPs could block content providers
who refuse to pay.
“Practically speaking, the ISPs would be able to determine the leading company in various sectors, such as search,
video, and so on,” says Nicholas Economides, a New York
University economist who studies net neutrality.
The fixed broadband market is highly concentrated.
While nearly all urban residents have at least two providers
to choose from, fewer than 60 percent of rural residents
do, according to the National Broadband Map maintained
by the National Telecommunications and Information
Administration in collaboration with the FCC. And only
60 percent of urban and 20 percent of rural areas have at
least three providers.
Economides says that ISPs have a strong incentive to
delay all but the highest paying content producer, creating monopolies in all of the various content sectors.
“Monopolists make the highest profits. So as an ISP, if I
create a content monopolist, I will be able to reap a large percentage of his profits through paid prioritization,” he says.
But not everyone agrees that ISPs could get away with
such behavior. If wireless providers are included, the market looks much more competitive: Nearly all urban and
about 70 percent of rural residents have access to at least
five ISPs. “How you view the market and its structure is
really key to what you think about the FCC and what it has
done,” says Robert Litan, formerly a nonresident senior
fellow at the Brookings Institution.
Economides notes that wireless is not currently a perfect
substitute for fixed broadband given its much higher cost for
comparable service. But he agrees that greater competition
would likely prevent many of the concerns raised by net
neutrality proponents. “If we had more competition in fixed
broadband, it would be a different story,” he says.
Critics of the new rules argue that, in spite of this, there
have been relatively few cases of anticompetitive behavior
by ISPs over the last decade. Moreover, Litan and others
say that any anticompetitive actions could be handled on
a case-by-case basis through existing rules and regulators.
The FCC has used its Enforcement Bureau to investigate ISPs and address claims of anticompetitive behavior in the past. And former Federal Trade Commission
Commissioner Joshua Wright testified before the House
Judiciary Committee in May 2015 that the new rules are
unnecessary because existing antitrust laws are already
“well-suited to handle any such problems as they arise.”
The new rules could also lead to unintended costs. Under
Title II, the FCC has the authority to regulate ISP prices
or mandate the unbundling of services. Although the FCC
explicitly stated that it would not use these powers on
broadband ISPs, Litan and others argue that it has nevertheless had a chilling effect on network investments. Capital
expenditures by several major broadband ISPs declined in
the first half of 2015, after the rules were announced. That
has only happened in two other periods: the dot-com crash
and the Great Recession. Some have suggested this is just
a response to recent changes in consumer behavior such
as cable “cord cutting,” but there is evidence that a similar
decline in investment occurred when Title II was applied to
telephone companies in the mid-1990s.
The net effect of paid prioritization on innovation by
content producers is also unclear, according to a 2014 paper
by Litan and Hal Singer of the Progressive Policy Institute.
While some startups might be discouraged from competing
with prioritized incumbents, the availability of “fast lanes”
could also encourage the development of some high-value,
speed-dependent applications like telemedicine. “I view
paid prioritization as price discrimination based on different
levels of service, which is a core feature in all kinds of markets that are competitive,” says Litan, pointing to different
tiers of package shipping as an example.
The ultimate impact of the FCC’s new rules remains to
be seen. Like the original 2010 rules, they are facing legal
challenge in federal court.
EF

The Public Perils of Private Debt

Debt makes the wheels of commerce turn. But under certain circumstances, it can also heighten financial crises and recessions

BY TIM SABLIK

The story of the Great Recession is, in many ways, a story about debt — private debt that borrowers did not repay.

In the United States, household debt grew rapidly in the
1990s and 2000s. In the early 1990s, average household debt
burden was about 80 percent of disposable personal income.
By 2000, it had reached 90 percent, and in 2007 it peaked at
129 percent. Most of this increase came in the form of housing debt, which grew from about $6 trillion in 2004 to nearly
$10 trillion in 2008, according to data from the New York
Fed. As a percentage of gross domestic product (GDP), nonfinancial corporate debt also grew in the years leading up to
the recession of 2007-2009 (see charts on next page).
These developments were not unique to the United
States. A 2014 study by Òscar Jordà of the San Francisco
Fed, Moritz Schularick of the University of Bonn, and Alan
Taylor of the University of California, Davis examined the
growth in public and private debt in 17 advanced economies
between 1870 and 2011. For the first half of the 20th century,
public debt surpassed bank lending (an indicator of private
debt) as a percentage of GDP. But starting in the 1960s, private debt began outpacing public debt rapidly. By the 2000s,
private debt was well over 100 percent of GDP, while public
debt remained close to 70 percent.
“There seems to be a striking difference between what
was going on before World War II and what has been going
on since then,” says Jordà.

[FIGURE: Household Debt Relative to Disposable Personal Income, and Nonfinancial Corporate Debt Relative to Gross Domestic Product. Two line charts, in percent, 1952-2014. SOURCE: Federal Reserve Board of Governors and Haver Analytics]

The authors found that economic expansions characterized by rapid growth in private debt were often followed by deeper recessions with slower recoveries. Comparing the dot-com crash of 2000 to the recession of 2007-2009 illustrates this. In the former case, losses were concentrated in corporate equities, while in the latter they were concentrated in real estate. Both fell by similar magnitudes: about $5 trillion for stocks from 1999 to 2002, and $5.5 trillion for real estate from 2006 to 2009. Yet the dot-com crash resulted in only a mild recession, while the recession of 2007-2009 was the most severe since the Great Depression.

How can the debt held by individuals and firms affect the overall economy so dramatically? And if debt is truly so damaging, why is it so widely used?

Credit Boom

Individuals generally prefer to smooth their consumption over time, and debt helps make that possible. In general, younger households borrow more than older households because they anticipate that their peak earning years are in the future. Rather than scrape by today and live large tomorrow, borrowing helps them enjoy a comfortable lifestyle in both periods. Similarly, firms might borrow to finance investments that they expect will pay off in the future. Debt also plays an important role in the financial system. If both parties have faith in the collateral underlying debt contracts, that debt can act as a private form of money to facilitate transactions even if the parties don't have information about the collateral's fundamental value, according to work by Gary Gorton of the Yale School of Management.

Collateral, which helps ensure debt repayment, in turn influences borrowers' access to credit. For example, a private firm might issue debt backed by the value of its machinery or factories. If the value of those assets goes up, the firm can borrow more against them. Moreover, those assets can serve a dual role, according to a seminal 1997 Journal of Political Economy article by Nobuhiro Kiyotaki of Princeton University and John Moore of the University of Edinburgh. Kiyotaki and Moore analyzed a model in which certain assets served as both collateral and factors of production for firms. In their model, productive firms borrow to increase their investments in those assets, and the increased demand increases asset prices. That allows those assets to then be used as collateral for more borrowing to fund more investment, which in turn pushes asset prices up further. Kiyotaki and Moore show how this feedback loop can multiply the effects of an initial price increase for the assets, leading to a credit "boom."

A similar dynamic can be seen in household debt during the housing boom of the early 2000s. Households were able to borrow more against the value of their appreciating homes. According to the 2014 book House of Debt by Atif Mian of Princeton University and Amir Sufi of the University of Chicago Booth School of Business, "Over half of the increase in debt for home owners from 2002 to 2006 can be directly attributed to borrowing against the rise in home equity." Some of those funds were then reinvested in home improvements. But when the value of the assets underlying all this debt falls suddenly or is called into question, as happened with housing, the boom turns to bust.
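To see how such a feedback loop amplifies an initial shock, here is a stylized numerical sketch (not the Kiyotaki-Moore model itself); the loan-to-value ratio, price-impact coefficient, and initial shock are arbitrary illustrative values.

    # Toy feedback loop: a price rise relaxes collateral constraints, the extra
    # borrowing finances extra demand, and that demand pushes the price up again.
    initial_shock = 5.0    # initial rise in the asset's price (dollars)
    loan_to_value = 0.8    # extra borrowing allowed per dollar of extra collateral value
    price_impact = 0.5     # price rise per dollar of extra credit-financed demand

    total_gain = initial_shock
    extra = initial_shock
    for round_number in range(1, 6):
        extra = price_impact * loan_to_value * extra  # feedback from the previous round
        total_gain += extra
        print(f"Round {round_number}: cumulative price gain {total_gain:.2f}")

    # The gain converges toward initial_shock / (1 - price_impact * loan_to_value),
    # so the initial $5 shock is multiplied into a rise of roughly $8.33.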

Collateral Damage
In the financial system, uncertainty over the true value of
collateral breaks down the mutual trust that allowed securitized debt to function as currency. Bengt Holmstrom of
the Massachusetts Institute of Technology (MIT) noted in
a 2015 paper that because debt may be opaque in ordinary

times, there is no infrastructure to verify its true value in a
crisis, and financial markets panic.
The collapse in assets serving as collateral also hurts the
firms and households that invested most heavily in those
assets. Additionally, their ability to borrow further against
those declining assets is constrained, cutting off one means
of servicing their debt. Debt contracts are designed to be
fairly rigid to enforce repayment. Most require regular
minimum payments for the borrower to avoid default. And
many financial debt contracts require borrowers to put up
additional collateral or cash if the existing collateral loses
value, increasing the costs of falling collateral for borrowers.
The debt built up by some firms and households during
the boom weighs on their spending during downturns. In a
2009 paper, Mian and Sufi found that households with the
highest debt growth going into the Great Recession cut
their consumption sooner and more deeply than households
with less debt. Similarly, highly leveraged firms were the
first to make cuts. Xavier Giroud of MIT’s Sloan School of
Management and Holger Mueller of New York University’s
Stern School of Business found in a 2015 working paper that
highly leveraged firms were more likely to lay off employees in response to falling consumer spending; in contrast,
low-leverage firms were able to borrow to cover shortfalls
and avoid cutbacks. Moreover, highly leveraged firms may
forgo investing in profitable projects because they know
that most of the proceeds would go to pay their creditors.
Economists call this effect “debt overhang,” and it can also
slow recovery from a recession.
Some believe that when borrowers cannot cut spending
enough to meet their obligations and are forced to default
or sell assets into a distressed market, prices could fall
through “fire sales,” as other borrowers and creditors are
unloading similar assets on the market at the same time.
Andrei Shleifer of Harvard University and Robert Vishny of
the University of Chicago’s Booth School of Business wrote
in a 2011 Journal of Economic Perspectives article that fire sales
occur in part because the buyers that would place the highest
value on the assets being sold are in the same boat as the sellers. They too are highly leveraged from investing during the
credit boom and are also liquidating assets. The only available buyers, Shleifer and Vishny wrote, are “nonspecialists”
who place a much lower price on the assets.
Such distress sales lower the prices other sellers can
receive for similar assets. “It creates a chain reaction where
the price of the asset you’re trying to sell just keeps spiraling
down,” explains Jordà. “Pretty quickly, everyone is caught in
the same net.”
Yale University economist Irving Fisher first described
such a downward spiral in 1933. He argued that “debt-deflation” cycles could explain how a financial shock turns into
a recession or depression. In his view, the first wave of fire
sales is driven by the most cash-strapped households and
firms. Their actions depress the prices on similar assets,
which increases the burden on the households and firms
with the next highest level of debt, starting the cycle anew.
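To make the mechanics concrete, here is a minimal sketch in Python of the kind of cascade Fisher described; every number in it is invented for illustration and none of it is drawn from the research discussed in this article.

def fire_sale_cascade(debts, price=1.0, price_impact=0.04):
    """Each holder owns one unit of the asset, financed partly by debt.
    A holder is forced to sell once the price falls below its debt, and
    each wave of distressed sales pushes the price down further."""
    remaining = sorted(debts, reverse=True)  # most leveraged holders first
    path = [price]
    while True:
        sellers = [d for d in remaining if d > price]  # now-underwater holders
        if not sellers:
            break
        remaining = [d for d in remaining if d <= price]
        price -= price_impact * len(sellers)  # distressed sales depress the price
        path.append(round(price, 2))
    return path

# Ten holders carrying debt of 70 to 97 cents per dollar of asset value; a small
# initial decline in the price sets off successive waves of forced selling.
holders = [0.70 + 0.03 * i for i in range(10)]
print(fire_sale_cascade(holders, price=0.95))  # the price ratchets down wave by wave

With these illustrative parameters, the price falls from 0.95 to roughly 0.59 over six waves of selling before the cascade exhausts itself.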


Economists disagree about the effects of fire sales on the
markets for those assets. To study the effect of fire sales on
the housing market during the Great Recession, Mian and
Sufi along with Francesco Trebbi of the University of British
Columbia compared states with different foreclosure laws.
Some require mortgage lenders to go through the courts to
evict defaulted borrowers, while other states do not. In the
latter case, foreclosures can happen more quickly, and Mian,
Sufi, and Trebbi found that house prices fell more deeply
in those states during the recession of 2007-2009. On the
other hand, a 2012 working paper by Kristopher Gerardi of
the Atlanta Fed, Eric Rosenblatt and Vincent Yao of Fannie
Mae, and Paul Willen of the Boston Fed found that the negative effect of foreclosed houses on nearby properties was fairly small, reducing sale prices by between half a percent and slightly more than 1 percent.
Regardless of magnitude, it seems that higher levels of
household debt wreaked at least some harm on economic
growth. Such an effect “is the opposite of the traditional
view,” says Mian. “In the traditional model, if you see higher
household debt today, it must be that people are smoothing consumption by borrowing against even higher future
income. So higher household debt growth predicts higher
income going forward. But that’s not what we find in the
data at all. That tells us that there is something missing from
those traditional models.”

Debt Externalities?
What’s missing from some standard models, says Mian,
is the possibility that borrowing could be too high from a
social perspective. Recently, some economists have proposed models where agents overborrow during credit booms
because they ignore or underestimate the costs that their
deleveraging will have on the rest of the economy during
a downturn. A 2012 Quarterly Journal of Economics paper by
Gauti Eggertsson of Brown University and Nobel laureate
Paul Krugman of the City University of New York proposes
one such model of these “aggregate demand externalities.” When borrowers cut consumption to reduce their
debt, interest rates fall as the demand for debt goes down.
Eventually, low interest rates lead households and firms that
did not borrow previously to begin borrowing, which helps
counteract the drop in demand. Eggertsson and Krugman
argue that, in the recent crisis, private debt had grown
so substantially that the subsequent deleveraging pushed
interest rates to zero, and this created new challenges for
monetary policymakers.
“A key insight of these models is that when people are
deciding how much to borrow at the individual level, they
are less likely to take into account the implications of their
decisions for the macroeconomy,” says Mian. “So in a decentralized world where financial markets allow people to borrow as much as they like, you can often end up in situations
where they overborrow from a macro perspective.”
Mian and his colleagues also view fire sales as a potential
source of debt’s social costs. If debt is priced in a manner that
ignores the possibility of fire sales, they argue, borrowers and
creditors could use debt in a way that contributes to a deflationary spiral in asset prices during a downturn. On the other
hand, there is some evidence that borrowers and lenders do
consider the costs that future fire sales could have on them
when writing debt contracts, at least to some degree. A 2010
paper by Hernán Ortiz-Molina of the University of British Columbia and Gordon Phillips of the University of Southern California's Marshall School of Business found that firms in
industries with more buyers for their assets (making fire sales
less likely) had lower borrowing costs.
Additionally, the extent to which borrowers and lenders disregard fire-sale risks could be driven more by policy
actions taken to minimize the damage of fire sales after the
fact rather than by inherent characteristics of debt. Like the
moral hazard associated with insurance, protecting borrowers and lenders from fire sales gives them less incentive to
worry about those risks upfront.
Economists generally agree that institutional factors
already play a role in promoting the overuse of debt. In the
United States, many forms of debt enjoy tax subsidies that
encourage their use. Homeowners who itemize can deduct
the interest on their mortgages from their taxable income;
firms can deduct the interest on their debt as a business
expense, but not dividend payments to shareholders. (See
“President’s Message,” p. 1.)
To be sure, equity holders also receive some tax benefits,
which may be partly passed through to firms in the form
of cheaper equity financing. For example, the tax collector
doesn’t recognize increases in the value of stock holdings
as income until the shares are actually sold.
Also, long-term capital gains are
taxed at preferential rates. Still,
the consensus is that the differing
tax treatment of debt and equity has put
debt financing at an artificial advantage.
“At a minimum,” says Mian, “we
should remove the biases favoring debt
currently in place. But there is also good
reason to actually flip that bias in the
opposite direction.”


The Goldilocks Level
During a downturn, equity financing has some advantages
over debt. It spreads the risks of asset price changes more
evenly between both parties, which could help to soften the
blow of economic shocks. From the parties’ point of view,
however, this distribution of risk is not always desirable.
For instance, firms funded entirely by equity would have
an incentive to conceal the truth about their prospects in
order to pay less to their shareholders. Managers of those
firms may be less likely to take on profitable risks if most of
the rewards would accrue to shareholders. It would be prohibitively costly for investors to constantly monitor those
firms to ensure they are behaving honestly at all times. As
MIT economist Robert Townsend demonstrated in a seminal 1979 Journal of Economic Theory article, debt aligns the
incentives of creditors and borrowers and its fixed payment
structure removes the need for constant monitoring. (For
a more detailed discussion, see “Building a Better Market,”
Region Focus, Winter 2008.)
That makes determining the right balance between debt
and equity from society’s perspective “a very difficult question to address,” says Taylor. “At the moment, there’s
no theory to say what the ‘optimal’ level of debt is.” A
2011 paper by Stephen Cecchetti of Brandeis University’s
International Business School and Madhusudan Mohanty
and Fabrizio Zampolli of the Bank for International
Settlements attempted to shed some empirical light on that
question by studying debt levels in 18 developed countries
between 1980 and 2010. The authors estimated that household debt starts to become a drag on economic growth once
it reaches 85 percent of GDP, but they noted the effect was
very imprecisely measured.
Even if it were possible to calculate the optimal level of
debt, Taylor says that that figure would likely vary dramatically “across countries and possibly across time.”

A New Kind of Contract
Since debt is here to stay, should policymakers attempt to
contain its negative amplifying effects during a crisis? One
option proposed in the immediate aftermath of the recession
of 2007-2009 was to encourage lenders to renegotiate mortgages with borrowers. Modifying the terms of the mortgage in line with the borrower's ability to pay would reduce the
need for them to cut consumption, which, in turn, would
reduce the number of defaults. Since foreclosures and forced
sales can depress the value of similar assets, it may even be in
a lender’s best interests to renegotiate rather than attempt
to sell the collateral into a depressed market.
But lenders, like borrowers, will fail to internalize the
macroeconomic costs of their decisions. It may be preferable from a lender’s perspective not to renegotiate a loan,
since doing so opens the door to renegotiations with other
borrowers. Seizing and selling collateral from borrowers
who default can also be the optimal choice for an individual
lender, even if such decisions impose costs on the rest of the
economy.
If the drop in home prices is exacerbated solely because of
the inability to swiftly renegotiate loan terms, policies spurring such dealmaking could offer social benefits. Indeed, the
Making Home Affordable Program of 2009 was adopted with
such a goal in mind. But despite the policy, few renegotiations
took place. Many attributed this to the securitization of
loans, which split individual mortgages into securities held by
many different parties; most borrowers could not just negotiate with their local bank to modify their mortgages. Even in
the absence of such obstacles, policymakers must also weigh
the possibility that changing the terms of debt contracts after
the fact could have unintended consequences on the pricing
and availability of loans in the future.
Given these challenges, some have suggested that a better approach might be to restructure debt contracts so that
they adopt the risk-sharing characteristics of equity during
downturns, potentially preventing spillovers from occurring
in the first place. Unlike firms, individuals cannot readily
issue equity to finance long-term purchases or investments.
Hybrid contracts such as these could grant them access to
the beneficial risk-sharing aspects of equity during a crisis, while retaining the positive contractual form of debt in
normal times. In their book, Mian and Sufi proposed such a
change for mortgages. The “shared-responsibility mortgage,”
as they call it, would tie mortgage repayments to local house
price indices. When prices are steady or increasing, the
mortgages act as traditional debt. But when housing prices in
an area fall, homeowners’ monthly payments would automatically shrink by the same proportion. Tying this adjustment
to a local index preserves the incentives homeowners have
to maintain their property, since they cannot influence their
payments by reducing their home’s value alone.
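A minimal sketch of that payment rule follows, assuming purely illustrative index values and, for the lender's upside, a hypothetical 5 percent share of capital gains; the proposal's actual terms are laid out in Mian and Sufi's book.

def monthly_payment(base_payment, index_now, index_at_origination):
    """Standard payment while the local house price index is at or above its
    level at origination; proportionally reduced payment when it has fallen."""
    return base_payment * min(1.0, index_now / index_at_origination)

def lender_share_of_gain(sale_price, purchase_price, share=0.05):
    """Illustrative upside for the lender: a hypothetical slice of any capital
    gain realized when the home is sold or refinanced."""
    return share * max(0.0, sale_price - purchase_price)

base = 1_000  # dollars per month
for index in (100, 90, 80, 105):  # local house price index, origination level = 100
    print(index, round(monthly_payment(base, index, 100), 2))  # 1000.0, 900.0, 800.0, 1000.0
print(round(lender_share_of_gain(sale_price=260_000, purchase_price=250_000), 2))  # 500.0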
“Our proposal looks like standard debt in most scenarios
because debt is often the optimal contract,” says Mian. “The
economics literature shows that you want to impose risks on
the borrower to the extent those risks are under his or her
control. What we are trying to do in our proposal is address
the negative aspects of debt that are macro in nature.”
To give lenders an incentive to provide this downside
insurance to borrowers, Mian and Sufi propose allowing
lenders to reap some of the reward of rising house prices
by earning a portion of the proceeds when households sell
or refinance their homes. So far, few lenders have experimented with such contracts. Mian suggests that because
the government has historically driven housing policy,
shared-responsibility mortgages might require support
from policymakers before they become more widespread.
At the same time, he acknowledges there may be many
other solutions worth considering.
Ultimately, improving private debt requires a greater
understanding of debt’s role in the economy. And on that
front, Jordà says economists still have much to learn. “I don’t
think we have fully appreciated the role that credit plays in
the economy,” he says. “As a consequence, events like the
recession of 2007-2009 may be more repeatable than we
think.”
EF

Readings
Cecchetti, Stephen G., Madhusudan S. Mohanty, and Fabrizio
Zampolli. “The real effects of debt.” Bank for International
Settlements Working Paper No. 352, September 2011.
Eggertsson, Gauti B., and Paul Krugman. “Debt, Deleveraging,
and the Liquidity Trap: A Fisher-Minsky-Koo Approach.”
Quarterly Journal of Economics, August 2012, vol. 127, no. 3,
pp. 1469-1513.
Fisher, Irving. "The Debt-Deflation Theory of Great Depressions." Econometrica, October 1933, vol. 1, no. 4, pp. 337-357.

Jordà, Òscar, Moritz Schularick, and Alan M. Taylor. "Sovereigns versus Banks: Credit, Crises, and Consequences." Federal Reserve Bank of San Francisco Working Paper No. 2013-37, February 2014.
Kiyotaki, Nobuhiro, and John Moore. "Credit Cycles." Journal of Political Economy, April 1997, vol. 105, no. 2, pp. 211-248.
Mian, Atif, and Amir Sufi. House of Debt: How They (And You) Caused the Great Recession, and How We Can Prevent it From Happening Again. Chicago: The University of Chicago Press, 2014.
Townsend, Robert M. "Optimal Contracts and Competitive Markets with Costly State Verification." Journal of Economic Theory, April 1979, vol. 21, no. 2, pp. 265-293.


Building a Smarter Grid
Can "smart grid" technology change the way we use electricity?
BY EAMON O'KEEFE

On the hottest days of summer, when many
Americans turn down their thermostats and crank
up their air conditioners, electric utilities have to
boost production to meet high demand. The power plants
they bring online often are more expensive to operate, yet
electricity prices rarely change. Economists envision an
electricity marketplace in which prices reflect the true cost
of producing electricity so that consumers and producers are
constantly adapting to real-world conditions. When demand
increases, prices would rise and demand would decrease
accordingly. New “smart grid” technologies could make that
vision a reality.
“The ‘smart grid’ encompasses a lot of different things,”
says Paul Joskow, president of the Alfred P. Sloan Foundation
and professor emeritus of economics at the Massachusetts
Institute of Technology. But in general, it covers a variety of
technologies that include computerized metering, control,
and sensors. When implemented in homes, power lines,
electrical substations, and transformers, these technologies
could facilitate better monitoring and management of electricity consumption and distribution throughout the grid.
The goal is to build a grid that allows for two-way communication between electricity consumers and producers. In
addition to time-varying rates that could lead to more efficient energy use, potential benefits of a smart grid include
improving the grid’s resilience and better accommodating
renewable energy sources.
Utilities have begun rolling out components of the smart
grid, and pilot programs for dynamic pricing have begun to
pop up around the country. A host of companies are building new technologies for grid modernization; in the Fifth
District, North Carolina’s Research Triangle has become a
hub for such innovation. Home to more than 50 smart grid
companies and a number of supporting research institutions,
Wake County, N.C., has dubbed itself the “smart grid capital of the world.”
“It’s a driver of the future,” says Michael Haley, director
of business recruitment and expansion for Wake County
Economic Development. “It has a disruptive, exciting,
changing nature to it.”
But building the smart grid is expensive, and changing the way electricity is priced could have unintended
consequences. Can smart grid technology live up to the
expectations?

The History of the Grid
America’s electrical grid began with Thomas Edison and his
Pearl Street Station in New York City. Built in 1882, this
energy system relied on a 100-volt coal-burning generator
to power a few hundred lamps. As demand for electricity
grew and the technology for electrical generation increasingly favored large producers, competition between small
power companies gave way to larger consolidated firms that
began to exercise monopoly power in the market. Federal
regulations in the 1930s reformed these electric power
holding companies by subjecting them to regulation by the
Securities and Exchange Commission or to regulation by state utility commissions if they limited their operations to a single state. These moves ushered in the era of vertically integrated utilities operated as regulated monopolies. Regulated utility companies managed a large portion of the generation, distribution, and retail services in the electricity market for much of the remainder of the 20th century.

Amid growing enthusiasm for free markets in the late 1980s and into the 1990s, the United States began restructuring certain electricity markets to encourage market-based competition. In 1992, the Energy Policy Act allowed for greater competition in electricity generation by opening up access to the transmission system. This encouraged some states to change their regulatory structures to allow for competition in generation and retail services while maintaining strict regulation on transmission and distribution. Today, the electrical grids in these regions are managed by Regional Transmission Organizations (RTOs) or Independent System Operators (ISOs), which are independent from market participants. The California electricity crisis of 2000-2001 slowed the move toward restructuring as the country observed spikes in electricity prices from market manipulation that followed partial deregulation in the state. RTOs and ISOs operate in California and much of the country east of the Rocky Mountains, with the exception of parts of the Southeast. The remaining states have maintained their vertically integrated monopolies, but even many of these areas now allow for more competition in generation by allowing independent power-generating companies to sell electricity under contract to distribution utilities.

Another major change in the electricity market has been the growth of renewable energy. These sources accounted for 13 percent of total U.S. production in 2014 compared with roughly 9 percent in 2004, and the U.S. Energy Information Administration (EIA) estimates that renewable energy will account for 18 percent of total electricity generation by 2040. Renewable energy is highly variable: Unlike a traditional power plant that can be turned on and off as demand changes, wind and solar power generation can fluctuate widely as environmental conditions change. Renewable energy also has contributed to the decentralization of power generation as distributed energy, the term for generating power at one's home, continues to gain popularity in the form of rooftop solar panels. These developments pose challenges to America's aging electricity grid infrastructure, which was not built to accommodate these changes in supply.

The Smart Grid and Pricing

To economists, prices are the fundamental guide to decisionmaking in the economy. When prices go up, economic theory says consumers will respond by demanding less. But the price mechanism is distorted in much of the electricity market because prices fail to reflect the true marginal cost of producing electricity at any given time.

Most consumers are billed a flat rate, or in some instances, "increasing block pricing," in which prices rise on a tiered basis over the course of the billing cycle as a customer uses more energy. But during periods of high demand, such as hot summer afternoons when many households run energy-intensive appliances like air conditioners, base load capacity is inadequate to meet demand. When this happens, utility companies have to bring more costly power plants online. These "peaker" power plants usually run on natural gas, diesel, or jet fuel and, because of their high variable cost, are often more expensive to run than base-load power plants that operate all the time. During these peak demand periods, the marginal cost of electricity is much higher than at other times, but consumers still pay the same price. Because consumers don't pay the true cost of generating electricity, they aren't incentivized to use less energy during peak times.

Economists have been exploring ways to price electricity more efficiently for at least 50 years, but until recently, these attempts have been met with limited success. With the arrival of more sophisticated and cheaper smart grid technology, however, utilities can now know the demand profiles of each customer in nearly real time. Coupled with advances in computing power, this has allowed utilities to develop time-varying pricing schemes that reflect changes in supply and demand. "There are 8,766 hours in a year, and if you read the meter every 10 minutes, that's over 50,000 data points per household per year. That's a lot of data to analyze and match with the billing factors," Joskow says. "You couldn't do it for millions and millions of customers 20 years ago, but now you can."

Types of Time-Varying Pricing Schemes for the Electricity Market
Time-of-Use Pricing (TOU): Utilities set higher peak (and sometimes peak shoulder) prices months in advance, usually for certain predetermined summer afternoons.
Critical Peak Pricing (CPP): Utilities call a certain number of peak demand periods, usually a day before or the day of an event, in which customers pay a higher rate for a certain number of hours.
Real-Time Pricing (RTP): Utilities adjust prices to reflect cost changes in nearly real time, often hourly.
Variable Peak Pricing (VPP): Utilities set predetermined peak periods, like time-of-use pricing, but charge variable rates as in real-time pricing.
Critical Peak Rebates (CPR): Utilities pay customers a predetermined rebate for reducing demand during a peak demand period.
SOURCE: U.S. Department of Energy, Office of Electricity Delivery and Energy Reliability

The most basic time-varying pricing scheme, time-of-use pricing, involves setting time periods, months in advance, during which utilities will charge a higher peak price and a lower off-peak price (and sometimes a moderate peak shoulder price). A more dynamic pricing scheme, called critical peak pricing, allows utilities to designate a certain number of days per year as peak periods right before the event occurs
and then charge a higher price for a few hours. Because utility companies can’t know what days will truly be peak period
situations until shortly before they occur, critical peak pricing ideally helps power companies charge higher peak prices
just on those days that warrant them. In the most dynamic
pricing model, real-time pricing, prices are adjusted hourly
to reflect the true marginal cost of generation.
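As a rough illustration of how the same household consumption translates into bills under flat-rate and time-of-use pricing, consider the sketch below; the rates and kilowatt-hour figures are hypothetical rather than drawn from any utility's tariff.

FLAT_RATE = 0.12  # dollars per kWh, charged in all hours (assumed)
TOU_RATES = {"peak": 0.30, "off_peak": 0.08}  # assumed peak and off-peak prices

usage_kwh = {"peak": 150, "off_peak": 650}  # one month of household usage (assumed)

flat_bill = FLAT_RATE * sum(usage_kwh.values())
tou_bill = sum(TOU_RATES[period] * kwh for period, kwh in usage_kwh.items())

print(f"flat-rate bill:   ${flat_bill:.2f}")  # $96.00
print(f"time-of-use bill: ${tou_bill:.2f}")   # $97.00 for this peak-heavy usage pattern

Shifting even a modest share of the peak usage into off-peak hours flips the comparison, which is exactly the behavior time-varying rates are meant to encourage.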
A fourth pricing program, called peak time rebates,
involves paying consumers for reducing their demand during
a peak usage period. For example, Baltimore Gas and Electric
has an optional “Smart Energy Rewards” program that pays
consumers $1.25 for every kilowatt hour saved during a
peak period compared with one’s typical usage during those
times. But Severin Borenstein, an economist at the Haas
School of Business at the University of California, Berkeley
and director emeritus of the University of California Energy
Institute, notes that peak time rebates may distort incentives because of the baseline used to calculate the reduction
in usage. If customers’ baselines are calculated based on their
usage during other peak periods, they could have an incentive to increase consumption in order to make their baseline
higher than it normally would be.
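The rebate arithmetic, and the baseline problem Borenstein points to, can be sketched as follows; the $1.25 figure is the one cited for Baltimore Gas and Electric, while the baseline and usage numbers are invented.

REBATE_PER_KWH = 1.25  # dollars per kWh saved relative to the customer's baseline

def peak_time_rebate(baseline_kwh, actual_kwh):
    """Customers are paid only for usage below their estimated baseline."""
    return REBATE_PER_KWH * max(0.0, baseline_kwh - actual_kwh)

print(peak_time_rebate(baseline_kwh=10, actual_kwh=6))  # $5.00 for genuine conservation
# If the baseline comes from other peak days, running extra load on those days
# raises the baseline, and the future rebate, for the very same peak-day usage:
print(peak_time_rebate(baseline_kwh=14, actual_kwh=6))  # $10.00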

How Do Consumers Respond?
Studies have found real-time pricing to be more effective
than time-of-use pricing at changing people’s behavior. “But
realistically for most customers, it’s a pretty foreign concept,
and it’s something that most of them are not very excited
about doing because there is a lot of volatility,” Borenstein
says. “There are ways to hedge that volatility, but when
you’re to the point of saying ‘hedging’ to residential customers, you’ve lost 95 percent of them.” Critical peak pricing,
although less granular than real-time pricing, may be easier
for customers to understand and could be an effective transition to more dynamic electricity pricing.
But even with highly dynamic pricing, how likely are
consumers to turn off their air conditioners when prices go
up? Estimates vary for the elasticity of demand for electricity, but Borenstein estimates it could be as little as -0.025,
meaning a 1 percent increase in price leads to only a 0.025
percent reduction in demand. But given that low value, he
still demonstrated in a 2005 study that dynamic pricing
could deliver at least 3 percent to 5 percent cost savings in
electricity generation.
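To see what an elasticity of -0.025 implies, a simple constant-elasticity calculation, using a hypothetical baseline of 1,000 kilowatt-hours, shows how little measured demand moves even when peak prices rise sharply.

ELASTICITY = -0.025  # Borenstein's low-end estimate for electricity demand

def demand_after_price_change(baseline_kwh, price_increase_pct):
    """Constant-elasticity approximation: quantity scales with the price ratio
    raised to the power of the elasticity."""
    return baseline_kwh * (1 + price_increase_pct / 100) ** ELASTICITY

print(round(demand_after_price_change(1000, 1), 1))    # about 999.8 kWh after a 1 percent increase
print(round(demand_after_price_change(1000, 200), 1))  # about 972.9 kWh even if the price triples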
One reason for the low elasticity could be that consumers
don’t pay a lot of attention to their energy use. “Most people
aren’t going to spend their lives in their basements looking
at their meters,” Joskow says. But for consumers who don’t
want to track their usage closely, demand response could
give them the option to have the utility reduce their demand
for them. Southern California Edison, for example, offers a
program called the “Summer Discount Plan” that provides
residential customers with up to $200 in bill credits per
year to give the company the ability to cycle off their home
air-conditioning units.
Some customers have resisted smart metering technology
due to concerns over privacy and the alleged health effects
of radiation from wireless transmissions of digital smart
meters. Opposition is also likely to come from consumers
who use a large quantity of electricity during peak demand
times and thus would see their electricity bills soar, or from
those who simply don’t like the idea of being forced onto a
time-varying price scheme. It’s also possible that confusion
over dynamic pricing might lead to much higher bills for
customers who fail to understand how time-varying pricing
affects the amount they pay for electricity.
Borenstein and other economists have studied how consumers would respond if companies offered an option
to remain on a flat-rate pricing model or transition to a
time-varying one. He found that if the utility didn’t use
profits from one pool to subsidize the other, the flat rate
pool would progressively become more expensive. This is
because customers who don’t strain the grid as much during
peak hours would see the greatest benefit to switching to a
dynamic pricing model, leaving only the highest peak-use
customers in the flat-rate pool and causing the utility to raise
the flat rate. In theory, this could encourage more people to
reduce their energy use and switch to the dynamic pricing
pool. “You get sort of a virtuous cycle,” Borenstein says.
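A stylized simulation of that dynamic, with invented costs and customer profiles, shows how the flat-rate pool can unravel as its least peak-intensive customers switch away.

PEAK_COST, OFF_PEAK_COST = 0.40, 0.06  # assumed marginal costs per kWh
# Share of each customer's 1,000 kWh that is consumed during peak hours (assumed):
customers = [0.05, 0.10, 0.20, 0.30, 0.45]

def cost_to_serve(peak_share):
    return 1000 * (peak_share * PEAK_COST + (1 - peak_share) * OFF_PEAK_COST)

flat_pool = customers[:]
while flat_pool:
    # Flat rate set so the flat-rate pool just covers its own cost of service
    flat_rate = sum(cost_to_serve(s) for s in flat_pool) / (1000 * len(flat_pool))
    # Customers switch to cost-based, time-varying billing when it is cheaper for them
    switchers = [s for s in flat_pool if cost_to_serve(s) < flat_rate * 1000]
    if not switchers:
        break
    flat_pool = [s for s in flat_pool if s not in switchers]
    print(f"flat rate ${flat_rate:.3f}/kWh leads {len(switchers)} customer(s) to switch; {len(flat_pool)} remain")

With these made-up numbers, the break-even flat rate climbs from roughly 13.5 cents to about 21 cents per kilowatt-hour as the pool shrinks to its single most peak-intensive customer.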
Dynamic pricing could have unintended consequences
with regard to energy use. Economists note an interesting
feature of dynamic pricing: During a majority of hours, customers would actually see a lower electricity price because
peak periods don’t occur all that often. Would consumers respond by increasing demand during off-peak hours?
Borenstein thinks that although this might be the case, there
would still be a small overall reduction in demand because
turning off a light during a peak demand time wouldn’t necessarily induce a customer to turn that same light on later
when the price was lower. Still, it’s possible that the overall
effect of the smart grid could be to shift rather than reduce
electricity demand.

Other Benefits
In addition to enabling more efficient pricing, the smart
grid would bring other advantages. One of them would be
better responses to power outages. Without smart grid
technology, many power companies rely on customers to
call in and report an outage. In contrast, two-way communication throughout the distribution system, including at
substations, power lines, and transformers, would allow for "intelligent distribution": Switches would sense power outages immediately and reroute electricity to isolate affected sections of the grid. This "self-healing" network would be
able to almost instantly reroute power so that most consumers would hardly know an outage has occurred. Such
an improvement in grid resilience might help ameliorate
the growing strain on the electrical grid from natural disasters and heavy storms. Such technology could have helped
utilities respond more quickly to outages in 2003, when Hurricane Isabel made landfall on the coast of North Carolina with 100 mph winds and wreaked havoc on the
electrical grid in the affected region. An estimated 3.5 million people in the Fifth District lost power, and some didn’t
see their electricity restored for more than two weeks.
Smart grid technology could also help grid operators
adapt to fluctuating supply from renewable energy sources.
Dynamic pricing would encourage customers to reduce
their demand when a dip in supply from renewable sources
— when the sun isn’t shining or the wind isn’t blowing —
strains the grid. In addition, adjustments to dips in supply
could be enhanced by digital meters that communicate with
household appliances to reduce demand during these times.

Making the Business Case
Utilities are asking themselves a number of questions about
the smart grid. “How is this going to be better for our customers, how are we assured that it’s going to be a reliable
new technology, and is there a business case around it that
we can actually implement?” asks Jason Handley, director of
smart grid technology and operations for Duke Energy.
The case can be difficult to make. Duke Energy in North
Carolina is still a vertically integrated utility, and therefore
any changes to its rate structure have to be approved by the
state’s public service commission. If Duke Energy wants to
roll out digital meters, it has to justify it based on projections
of the company’s ability to recover the cost through rate
increases. But because Duke Energy has already eliminated
the costly process of sending people to read each individual
meter by installing automatic meter reading technology, in
which signals from meters can be picked up from a vehicle,
new smart grid technologies deliver relatively fewer gains.
There are other major roadblocks to implementing the
smart grid, such as a lack of grid interoperability or the
ability of the components of the smart grid to seamlessly
communicate with one another. Today, smart grid technologies are often proprietary, meaning that they weren’t built
to communicate with technology from other companies.
Handley says this lack of shared communication standards is
one of the main challenges of rolling out the smart grid for
many utilities.

There’s also the fact that building a smart grid is very
expensive. A 2011 report by the Electric Power Research
Institute (EPRI) found that the 20-year net investment for
rolling out the smart grid would be between $338 billion
and $476 billion. But in the same report, researchers also
estimated that the technology would deliver $1.2 trillion to
$2 trillion in benefits from lower costs and enhanced reliability, among other aspects.
Some economists remain skeptical of such projections.
Joskow thinks the estimates from the EPRI overvalue the
reliability benefits from smart grid implementation. And
“the payoff for residential is probably not that great in the
short run,” according to Borenstein. This is because a considerable amount of peak demand reduction could come
from dynamic pricing programs with commercial and industrial customers, some of whom have already enrolled in such
programs. Further peak reduction from these consumers
could reduce a fair amount of peak demand without the
costly step of rolling out smart grid technology to residential
customers. But Borenstein also notes that in the long run,
there may be more appliances and smart home devices that
respond automatically, reducing the cost and hassle associated with dynamic pricing for residential customers.
Despite the economic challenges of smart grid implementation, utilities have ramped up their efforts nationwide.
The U.S. EIA estimates that power companies have installed
about 46 million smart meters for residential customers
in the United States as of May 1, 2015. President Obama’s
2009 stimulus package included $4.5 billion for grid modernization, and $8 billion has been invested in 99 smart grid
projects nationwide with the help of combined government
and private sector funds.
The stimulus funds fall far short of the total cost of implementing the smart grid, and it’s not clear that utilities will be
willing to make up the difference. Although better reliability
would be a large benefit for the utilities, Luciano De Castro
of the University of Iowa and Joisa Dutra of Fundação
Getúlio Vargas contended in a 2013 paper that aspects of
reliability have public good characteristics; that is, utilities
may tend to underinvest in reliability because consumers
often aren’t willing to pay for improved reliability for other
customers if they don’t have to.
The smart grid has the potential to improve the reliability
of the electrical grid, better integrate alternative energy, and
facilitate pricing that reflects the marginal cost of generation. What remains uncertain is how consumers will respond
to the promise of dynamic pricing and whether the benefits
of the smart grid will outweigh its considerable cost.
EF

Readings
Borenstein, Severin. “Effective and Equitable Adoption of
Opt-In Residential Dynamic Electricity Pricing.” National Bureau
of Economic Research Working Paper No. 18037, May 2012.
De Castro, Luciano, and Joisa Dutra. “Paying For the Smart Grid.”
Energy Economics, December 2013, vol. 40, no. S1, pp. S74-S84.

Joskow, Paul L. “Creating a Smarter U.S. Electricity Grid.” Journal
of Economic Perspectives, Winter 2012, vol. 26, no. 1, pp. 29-48.
Joskow, Paul L., and Catherine D. Wolfram. “Dynamic Pricing of
Electricity.” American Economic Review, May 2012, vol. 102, no. 3,
pp. 381-385.


What’s a Life Worth?
How to allocate our health care dollars is a
challenging question, but economics could help
BY JESSIE ROMERO

In 2008, an Oregon woman dying of
lung cancer was denied coverage for
Tarceva, a drug costing $4,000 a
month. She received health insurance
through the Oregon Health Plan
(OHP), the state’s Medicaid plan,
which in the early 1990s had made
radical changes to its coverage
decisions in an effort to increase
the number of enrollees while
also curbing spending growth.
One of the most controversial
measures was a list of 668 medical procedures, ranked according
to their cost-effectiveness; the OHP
would cover only the first 568. Tarceva,
which extended life by a few months for a
small percentage of patients, didn’t make the cut.
(In response to the public outcry, the drug’s manufacturer,
Genentech, provided the drug free of charge; the woman
died a short time after starting it.)
Oregon’s list of treatments was based on cost-effectiveness
analysis, a technique used to compare both the efficacy
and cost of different medical treatments. The technique is
politically controversial and methodologically challenging,
but many health care experts believe it is a valuable tool for
helping to allocate resources in the face of mounting health
care spending.

Are We Spending Money Wisely?
Americans spend a lot of money on health care: $2.9 trillion
in 2013 (the most recent year for which the Centers for
Disease Control and Prevention has data), or 17.4 percent of
GDP. That’s an increase from just 5 percent of GDP in 1960,
and the Centers for Medicare and Medicaid Services (CMS)
projects that health spending will continue to outpace GDP,
reaching 19.6 percent of GDP by 2024. Rising spending
reflects the rapid increase in health care costs, which have
been well above overall inflation since the mid-1980s. Health
care inflation slowed somewhat as a result of the 2007-2009
recession, but the CMS expects health care inflation to
return nearly to pre-recession levels over the next five years.
Federal, state, and local governments provide a substantial
portion of health care spending: Medicare, Medicaid, the
Children’s Health Insurance Program, and insurance subsidies from the Affordable Care Act make up about one-quarter
of the federal budget, or $836 billion. (About
two-thirds of that money, $511 billion, went
to Medicare.) In 2013, federal, state, and
local governments paid for 43 percent
of all national health spending, a
share the CMS projects will rise to
47 percent by 2024.
The United States spends
significantly more than other
developed countries. In 2013, for
example, the United States spent
about $8,700 per capita on health
care, compared with an average of
about $3,900 for the other Group
of Seven countries (Canada, France,
Germany, Italy, Japan, and the United
Kingdom). Growth in U.S. per capita spending also has outpaced growth in other countries.
In part, the high level of spending reflects the United States’
relatively high per capita incomes; research has shown that
health spending tends to increase with income. But in a 2008
report, researchers at the McKinsey Global Institute calculated that the United States spends about $2,000 more per
capita than expected based on income levels.
High and increasing health care expenditures are not necessarily a cause for concern in and of themselves. “Increasing
spending is usually a signal that the product or service is one
that brings people more benefits than they could derive from
spending the same amount of money on other available commodities,” says Henry Aaron, a senior fellow in economic
studies at the Brookings Institution. “The issue with health
care is that most of us don’t pay market prices, which can
lead to the purchase of health care services where the value
is less than the total cost of producing them. We may be
consuming some services with only a slight marginal value.”
That view is borne out by multiple studies of Medicare data
showing that regional variation in spending is uncorrelated
with the quality of health care or with health outcomes.
Patients in higher-spending areas see more specialists, get
more tests, and spend more time in the hospital, but they
aren’t healthier. Many researchers believe that the absence
of a link between spending and outcomes reflects a high
level of unnecessary care — as much as 30 percent of all health
care costs, according to the authors of one Medicare study.
Many potential health care reforms, such as high-deductible insurance programs where consumers bear more of the cost, or salaries for doctors rather than fees per service, are aimed at lowering spending overall. That's not necessarily the goal of cost-effectiveness analysis, says Milton
Weinstein, a professor at the Harvard T.H. Chan School
of Public Health and Harvard Medical School. “It’s about
spending money wisely. Whatever we spend on health care,
are we getting the most value that we can?” Still, he notes,
there is the potential to lower spending. “If we reallocated
resources from less cost-effective to more cost-effective
health services, we might end up spending less money and
having better health at the same time.” But determining
what’s cost-effective, and how to make use of that knowledge, is the challenge for researchers and policymakers.

Calculating a “QALY”
In medical research, cost-effectiveness is a ratio that
expresses health outcomes in terms of dollars spent. The
numerator of the ratio is the cost and the denominator is the unit of outcome, such as the number of illnesses prevented by a vaccine or the number of new diagnoses made by a screening test; the ratio thus gives the cost per unit of outcome. One widely used
denominator is a Quality Adjusted Life Year, or QALY,
which takes into account not only extending life, but also
the quality of a person’s health during that life. (Technically,
research using QALYs is a subset of cost-effectiveness analysis known as cost-utility analysis, but researchers generally
use the broader term.)
A QALY is based on a number known as a “health utility,” which runs on a scale of 0 to 1, with 0 being death and
1 being perfect health. This utility value is then multiplied
by a number of years. If a treatment increases health utility,
extends life, or both, the number of QALYs increases. For
example, Aaron Carroll and Stephen Downs of the Indiana
University School of Medicine have estimated that mild
intermittent asthma in children has an average utility value
of .91 and severe seizure disorder in children has a much
lower average utility value of .70. Thus, returning a child
with asthma to perfect health for 60 years would gain 5.4
QALYs, and the child with the seizure disorder 18 QALYs.
In the view of researchers using this approach, such calculations enable doctors and policymakers to compare different health problems and their treatments. Hypothetically,
if curing intermittent asthma and curing severe seizure
disorder both cost $1 million, the cost-effectiveness would
be about $185,000 per QALY for curing asthma and about
$55,500 per QALY for curing severe seizure disorder, making it more cost-effective to cure the latter.
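The arithmetic behind those figures is simple enough to reproduce directly; the sketch below uses the utility values cited above and the hypothetical $1 million cost of each cure.

def qalys_gained(utility_with_condition, years, utility_after=1.0):
    """QALYs gained by raising a patient's health utility for a span of years."""
    return (utility_after - utility_with_condition) * years

asthma_qalys = qalys_gained(0.91, 60)    # 5.4 QALYs
seizure_qalys = qalys_gained(0.70, 60)   # 18.0 QALYs

cost = 1_000_000  # hypothetical cost of either cure
print(f"asthma: {asthma_qalys:.1f} QALYs, about ${cost / asthma_qalys:,.0f} per QALY")
print(f"seizure disorder: {seizure_qalys:.1f} QALYs, about ${cost / seizure_qalys:,.0f} per QALY")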
There are several different techniques for calculating
health utilities. One is based on the “standard gamble,” which
was developed by mathematician John von Neumann and
economist Oskar Morgenstern in their 1944 book, Theory of
Games and Economic Behavior. An individual is given a choice
between a certain health state and a gamble that could lead to
a better or worse outcome. The probability of the better outcome that would make them indifferent between keeping their current state and taking the gamble is the utility of their current state.

Another method is the “time trade-off,” in which individuals are asked how many years of life they would be willing
to give up in order to live without a certain condition. For
example, a recent study that used the time trade-off to calculate health utilities for epilepsy asked respondents to choose
between living for 10 years with frequent seizures or living for
X years in perfect health. They found a utility of .303, meaning
respondents would prefer living for about three years in perfect health to living for 10 years with frequent seizures.
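In code, the two elicitation methods reduce to very simple formulas; the 0.85 standard-gamble answer below is invented for illustration, while the time trade-off numbers are the epilepsy example just described.

def standard_gamble_utility(indifference_probability):
    """The probability of the better outcome at which a respondent is indifferent
    between the sure current state and the gamble is the state's utility."""
    return indifference_probability

def time_tradeoff_utility(years_in_full_health, years_with_condition):
    """Utility implied by indifference between a shorter life in perfect health
    and a longer life spent with the condition."""
    return years_in_full_health / years_with_condition

print(standard_gamble_utility(0.85))              # a hypothetical answer implying utility 0.85
print(round(time_tradeoff_utility(3.03, 10), 3))  # 0.303, the frequent-seizure example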
The standard gamble and time trade-off are both direct
methods, where researchers ask people about specific diseases. But researchers might also use indirect methods, where
people are given a simple questionnaire and asked to rank
generic health states, such as living with reduced mobility or
requiring assistance with daily tasks. Several indirect questionnaires are widely used by researchers. In general, they are
developed by asking a sample of the public how they value a
certain limited number of health states and then applying an
algorithm to map those health states onto other conditions to
derive utility values for a wide range of conditions.
In much of economics, utility is an ordinal value; a consumer might get more utility from buying oranges than
from buying apples, but it’s not possible to actually measure
how much more utility they get. Such ordinal utility values
cannot be compared from person to person. In the QALY
methodology, however, a health utility is a cardinal value; a
utility of .08 is four times better than a utility of .02. As a
result, it is mathematically possible for researchers to compare utilities across individuals and calculate an aggregate
health utility for a given disease state.

Proceed with Caution
Health utilities can vary widely from study to study depending on the method used to calculate them and on the survey
sample. For example, patients already living with a certain
disease tend to place a higher utility value on that health
state than respondents who are asked to imagine living with
that disease. Or a young athlete might assign a much lower
utility value to a torn ligament than an elderly person. In
addition, the standard gamble generally results in higher
utility values than the time trade-off. That’s because people
tend to be risk averse and thus require a high probability of
an improved outcome in order to take the gamble.
QALYs can also vary in context depending on how a
certain technology is used. For example, as Weinstein noted
in a 2005 lecture at Syracuse University, many people who
have suffered a heart attack routinely receive an angiogram to
check for blocked arteries. For patients who are at high risk
of having a blocked artery, the procedure gains between 20
and 50 QALYs per $1 million. But for patients who are at low
risk, the procedure gains less than 10 QALYs per $1 million.
It also can be difficult to determine how effective a
treatment is because, as Weinstein says, “You can’t conduct a randomized controlled trial of every intervention, or
with every potential category of patient.” For that reason,
researchers have begun tapping into other data sources,
such as insurance claims and coordinated medical records,
to establish an evidence base for evaluating effectiveness.
And even treatments deemed to be effective might have
lower-than-expected returns given the deleterious effects
the treatments themselves can have on life quality.
Ethical questions also arise about whether different
weights should be assigned to people of different initial
health states or of different ages. For example, as Steven
Pinkerton of the Medical College of Wisconsin and several
co-authors noted in a 2002 article, people with substance
abuse problems tend to be in worse health on average, so a
given intervention might bring them to a health state with
a lower utility value than the same intervention would for a
person in better health. But by that logic, substance abusers
would be less deserving of health care. And, Weinstein asks,
“Should we assign more weight to people at the end of life
because their remaining years are precious? Or should we
assign more value early in life, because once a person has
reached a certain age they’ve already had an opportunity to
live a healthy life?”
Some researchers have argued that these methodological
questions render the QALY useless as a metric. But many
health care experts believe that while QALYs should be
interpreted with caution, they are a valid tool. “Decisions
about resource allocation are being made all the time,” says
Weinstein. “We can make them on an ad hoc basis, or we
can make them with the benefit of some sensible analysis
about the benefits and harms.”

Cost-Effectiveness in Practice
Many industrialized countries use cost-effectiveness
research to make coverage and reimbursement decisions
for their national health insurance plans. In the United
Kingdom, for example, the National Institute for Health
and Care Excellence (NICE) generally recommends that
treatments be covered by the National Health Service
beneath a threshold of between £20,000-£30,000 ($30,300
-$45,500) per QALY. (NICE’s threshold has been the
source of considerable controversy, particularly with respect
to expensive treatments for rare or terminal illnesses.)
Other countries do not define a threshold as explicitly as
the United Kingdom, although they do have implicit thresholds that inform coverage decisions. The World Health
Organization’s rule of thumb is that one to three times GDP
per capita is cost-effective, which in the United States would
be between roughly $55,000 and $164,000.
But in the United States, cost-effectiveness prompts fears
of rationing and “death panels” that would deny access
to lifesaving treatment. In 1989, Medicare proposed using
cost-effectiveness as one of several criteria, but the proposal
met with significant opposition and was never adopted. In
2010, the Patient Protection and Affordable Care Act (ACA)
created the Patient-Centered Outcomes Research Institute
(PCORI) to conduct comparative effectiveness research,
a method of conducting direct comparisons of different
medical treatments that does not take into account cost. In
establishing PCORI, Congress prohibited the institute from
funding any research that considers cost at all and barred
Medicare and Medicaid from considering cost-effectiveness
as well. (The one exception is the Oregon Health Plan, which
received special federal approval in 1993 for reforms including
a treatment list based on cost-effectiveness and continues
to use a prioritized list of treatments when determining
coverage.)
Aaron notes that although the government monetizes
life in a variety of circumstances, such as when it decides
whether the “cost per life saved” justifies mandating a new
safety standard for automobiles, people tend to find the idea
unsettling. “When one monetizes the value of medical services, one is placing a value on either the extension of life or
on improvements in the quality of life. And that’s something
that a lot of people are very loath to do.” says Aaron.
Cost-effectiveness is widely accepted in the academic
medical community; leading journals regularly publish such studies, to the tune of 567 in 2013,
according to data from the Center for the Evaluation of
Value and Risk in Health, a nonprofit research group. And
among practitioners, says Weinstein, “there is considerably
more acceptance of the need to consider cost and the limitations on resources when making recommendations for
clinical practice.”
In 2007, the American Medical Association endorsed
“value-based decisionmaking” as a strategy to achieve better
value for the amount of spending, and specifically mentioned cost-effectiveness research as “essential” to provide
doctors and patients with the information they need to
make value-based decisions. In 2014, the American College
of Cardiology recommended the use of cost-effectiveness
analysis as one consideration in treatment guidelines, noting
that “Despite [methodological] challenges … the need for
greater transparency and utility in addressing resource issues
has become acute enough that the time has come to include
cost-effectiveness/value assessments and recommendations
in practice guidelines and performance measures.” The
American Society of Clinical Oncology followed suit with a
similar statement in 2015, although it noted that considerable research remains to be done.
Private insurers also may include cost-effectiveness as
one of several factors in deciding what to cover. The clinical policy at Aetna, for example, the third-largest insurer
by market value in the United States, states that “when
effectiveness and safety are equivalent, we may consider the
cost-effectiveness among therapies to determine medical
necessity or to require certain therapies to be tried before
covering equivalent, but more expensive options.” Still,
overall, cost-effectiveness plays a limited role in the United
States health care market.

Does Cost-Effectiveness Work?
Given the complexity of medical care and of the health
care market, it’s difficult to determine how much health
outcomes might improve, or how much money might be saved, if cost-effectiveness were more widely considered by
insurers and practitioners. There are trade-offs with respect
to health outcomes. As Weinstein and Jonathan Skinner,
an economist at Dartmouth College, noted in a 2010 article
in the New England Journal of Medicine, some treatments
for late-stage pancreatic cancer might be considered cost-ineffective, while diabetes treatment is very cost-effective.
Reallocating resources from one to the other might improve
aggregate health outcomes, but it wouldn’t improve outcomes for patients with late-stage pancreatic cancer.
Research suggests the spending benefits could be large.
In a 2009 New England Journal of Medicine article, Elliot
Fisher and Julie Bynum of the Geisel School of Medicine at
Dartmouth College and Jonathan Skinner found significant
regional differences in the growth of Medicare spending,
even after controlling for differences in health outcomes.
Between 1992 and 2006, for example, spending rose 2.4 percent in San Francisco versus 4 percent in East Long Island.
Over the course of the study, that difference accounted for
more than $1 billion in extra Medicare spending just from
East Long Island. If 30 percent of that spending could be cut without worsening health care quality, as other research has
found, considering cost-effectiveness could help slow spending growth. Fisher and his co-authors estimated that reducing overall annual growth in per capita spending from the
national average of 3.5 percent to the rate in San Francisco
could save Medicare $1.42 trillion.
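The power of those differences comes from compounding. The sketch below applies the growth rates cited above to an assumed, purely illustrative starting level of per capita spending.

def compound(spending, annual_growth, years):
    return spending * (1 + annual_growth) ** years

start = 8_000   # assumed per capita Medicare spending, in dollars
years = 15
slow = compound(start, 0.024, years)  # San Francisco's growth rate
fast = compound(start, 0.040, years)  # East Long Island's growth rate
print(f"after {years} years: ${slow:,.0f} versus ${fast:,.0f} per capita")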
At the same time, however, research suggests that the
Oregon Health Plan, the one real example of explicitly using
cost-effectiveness data in the United States, did not succeed
in reducing expenditures. An analysis by the Cascade Policy
Institute, a nonpartisan libertarian research group, found that
growth in Oregon’s Medicaid expenditures closely tracked
the growth across the United States. In addition, the ultimate
benefit of any savings resulting from cost-effectiveness analysis depends on how, or if, those dollars are reallocated to more
cost-effective treatments or to other higher-value uses in the
public or private sector.
Still, the potential is there, and as spending continues to
rise, it will become more important to ensure that the money
is being put to its best use — and that likely means paying
attention to costs.
EF

Readings
Fisher, Elliot S., Julie P. Bynum, and Jonathan S. Skinner.
“Slowing the Growth of Health Care Costs: Lessons from
Regional Variation.” New England Journal of Medicine, Feb. 26,
2009, vol. 360, no. 9, pp. 849-852.

Weinstein, Milton C., and Jonathan S. Skinner. “Comparative
Effectiveness and Health Care Spending: Implications for
Reform.” New England Journal of Medicine, Feb. 4, 2010, vol. 362,
no. 5, pp. 460-465.

von Neumann, John, and Oskar Morgenstern. Theory of Games and
Economic Behavior. Princeton, N.J.: Princeton University Press, 1944.

Richmond Fed Research Digest
is an annual publication that brings the
externally published work of the Bank’s
research department economists
together in one place. It includes brief
summaries, full citations, and links
to the original work.

Visit the digest at www.richmondfed.org/
publications/research/research_digest


INTERVIEW

James Poterba


Over the past generation, retirement finance in the United States has undergone a revolution. While defined benefit plans (pensions that pay retirees a predefined amount) were once commonplace, they are now rare for private-sector workers — having been displaced by defined contribution plans, such as those based on 401(k) accounts and Individual Retirement Accounts (IRAs). Defined contribution plans do not require the long job tenure that is typically needed to earn substantial benefits in defined benefit plans, but they do require workers to make their own investment decisions and to live with the consequences, for better or worse. These changes in the private pension landscape have taken place at the same time that policymakers have been discussing the funding and even the structure of the Social Security system.

James Poterba of the Massachusetts Institute of Technology (MIT) has been a leading researcher of retirement finance since entering the field in the 1990s. His findings have led to a reconsideration of the simplest versions of the “life cycle” model of savings and consumption, in which individuals seek to smooth their consumption over their lifetimes, building assets during high-earning years and drawing them down steadily during retirement. With his frequent collaborators Steven Venti of Dartmouth and David Wise of Harvard, he has found that some households arrive at retirement with few assets, while others continue to maintain high levels of assets throughout much of retirement.

Earlier in his career, as a junior member of the MIT economics faculty, he focused his research primarily on tax policy. His transition from taxation research to a focus on retirement issues began with an examination of tax incentives for retirement saving in 401(k) plans and IRAs.

In addition to his work at MIT, since 2008, Poterba has been president and chief executive officer of the National Bureau of Economic Research. He is also a trustee of the College Retirement Equity Fund (CREF) and an independent director of the TIAA-CREF mutual funds. David A. Price interviewed him in Washington, D.C., in June 2015.

PHOTOGRAPHY: GREG GIBSON, COURTESY OF THE BIPARTISAN POLICY CENTER

EF: How did you become interested in economics?

Poterba: My path to economics began with high school debate. When I was a freshman in high school, the national debate topic was “Resolved: that the federal government should finance primary and secondary education in the United States.” My high school offered a ninth-grade economics course, and my teacher, Paul Larson, encouraged me to join the debate team. When I did, I needed to learn how to discuss issues like whether the value-added tax was regressive and what disincentives for labor supply were created by the income tax. My sophomore year, the high school debate topic was “Resolved: that the federal government should guarantee a minimum annual income to all households.” This topic also involved taxes and transfers and a lot of economic analysis. My senior year in high school, the topic was “Resolved: that an international organization should allocate scarce world resources.” Economics again! I really enjoyed high school debate, in large part because I enjoyed learning about the economic issues, and my debate experience was central to my early interest in economics.

In high school, I also liked science a lot and I thought I might be a chemist or a chemical engineer — a field that relies a lot on equilibrium, as economics does. But when I got to college, I realized the power of economic tools. I had a very engaging freshman economics instructor, Jane Katz, who later worked for many years at the New York Fed. And as a college sophomore, I was just in the right spot at the right time when I got to know Larry Summers, who was then a graduate student at Harvard. Larry was working with Marty Feldstein on several projects. I worked as a research assistant for Larry Summers and Kim Clark. They were studying labor market dynamics. Later, I worked
with Marty Feldstein on issues involving unemployment insurance and taxation policy. Marty and Larry launched me into a research career in economics.

After college, I was fortunate to win a Marshall Scholarship for graduate study in England. When I was there, graduate training in Oxford relied less on coursework than a top U.S. Ph.D. program would have, but it also threw you more into the deep end of the pool in terms of doing research early on. So I knew less economics than a comparably aged U.S. graduate student when I finished my doctorate, but I had a little more experience at doing research because I'd started as an undergrad and I'd been able to continue that work right through my graduate experience.

I have been lucky to live under a charmed star and to have wonderful mentors, terrific colleagues and students, and great opportunities throughout my career.

EF: Much of your early work looked at the economics of taxation. Are the major challenges to tax policy different now than they were then?

Poterba: One difference is that tax policy discussions and research on the economics of tax policy in the late 1970s and early 1980s were set in an environment with marginal tax rates that were significantly higher than those today. The United States had a top tax rate on capital income of 70 percent until 1981. The top marginal tax rate on earned income in the United States at the federal level was 50 percent until 1986. Today, the top statutory rate is 39.6 percent, although with some add-on taxes, the actual rate can be in the low 40s. We have been through periods when the top rate was as low as 28 percent. There was a lot more concern about the distortions associated with the capital income tax and with taxation in general.

At the same time, the opportunities for studying how behavior was affected by the tax system when I started in this field were dramatically different than they are today. We relied primarily on cross-sectional household surveys. It's hard to study how taxation affects behavior when the variation in the tax system is coming in differences in household incomes that place different taxpayers in different tax brackets, because income variation is related to so many other characteristics. Today, by comparison, the field of public finance has moved forward to use large administrative databases from many countries, often including tax returns. It is possible to do a much more refined kind of empirical analysis than when I started.

The other thing that's happened is that we've devoted more attention to spending programs. Public finance in the late 1970s and early 1980s was heavily focused on taxation, at least in the empirical work. But today, health economics is an enormous subfield of public economics, and there is broad interest in Social Security and many other programs. I think this reflects the evolving reality of how important government programs in the United States and other developed countries are in delivering health care, income support, education, and other vital functions.

With regard to entitlement programs, one exciting line of research has compared countries and tried to use as a data point not an individual but in some cases a nation to look at how the labor force participation rate, for example, of men in their early 60s, is related to the generosity of the social security or the disability insurance system. And the combination of access to administrative data plus interesting international comparisons has generated remarkably interesting new insight into the operation of a number of programs the government has managed.

EF: One public finance issue is the home mortgage interest deduction. Many economists oppose the deduction based on equity and efficiency concerns. What do you think should be done about the deduction, if anything?

Poterba: I began studying various aspects of the tax code
and the housing market in my undergraduate thesis research
in 1979-1980. This is an issue that’s near and dear to my
heart. Let me note several things about the way we currently
tax owner-occupied housing in the United States.
First, because mortgage interest is deductible only for
households that are itemizers on their tax returns and then is
deductible at the household’s marginal income tax rate, this
results in a larger subsidy to households at a higher income
and higher marginal tax rate than for those at lower levels.
Second, the real place where the tax code provides a subsidy for owner-occupied housing is not by allowing mortgage
deductibility, because if you or I were to borrow to buy other
assets — for instance, if we bought a portfolio of stocks
and we borrowed to do that — we’d be able to deduct the
interest on that asset purchase, too. If we bought a rental
property, we could deduct the interest we paid on the debt
we incurred in that context. What we don’t get taxed on
under the current income tax system is the income flow that
we effectively earn from our owner-occupied house, what
some people would call the imputed income or the imputed
rent on the house. The simple comparison is that if you buy
an apartment building and rent it out, and you buy a home
and you live in it, the income from the apartment building
would be taxable income, but the “income” from living in
your home — the rent you pay to yourself — is never taxed.
This is the core tax distortion in the housing market: the tax-free rental flow from being your own landlord.
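A quick numerical illustration of the asymmetry Poterba describes may help; the figures are hypothetical, not his. Suppose a house would rent for $15,000 a year and its owner faces a 25 percent marginal tax rate:

\$15{,}000 \times 0.25 = \$3{,}750 \quad \text{(tax due if the house is rented to a tenant)}
\$15{,}000 \times 0 = \$0 \quad \text{(tax due on the same housing services consumed by an owner-occupant)}

On these assumptions, the subsidy comes entirely from excluding the imputed rent, not from the mortgage interest deduction, which any leveraged investor in a taxable asset could claim as well.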
The natural way to fix this would be to compute a measure of imputed income on your home and include that in
the income tax base. As a matter of practical tax policy,
creating an income flow that taxpayers don’t see and saying they’re going to have to report that on their tax return
is probably a nonstarter. A number of European countries tried in the past to do something in this direction, typically in a very simple way, saying something like 3 percent of the value of the home is included in your income for the year. Almost all of those countries have moved away from this. It therefore seems that the tax reform that one might like on conceptual grounds is probably not politically realistic.

Given that situation, other policy reforms that might move in the same direction probably deserve some attention. Property tax rates vary from place to place in the United States, but they are typically proportional to the value of the property. They are currently deductible from the income tax base. Disallowing property tax deductions would be one way of trying to move gently toward a tax system that was closer to one that taxed imputed rent. One could think about other potential reforms along similar lines, but eliminating the mortgage interest deduction turns out not to be the most natural fix here because it would create distortions between borrowing to buy a home and borrowing to buy other assets.

➤ Present Positions
Mitsui Professor of Economics, Massachusetts Institute of Technology; President and Chief Executive Officer, National Bureau of Economic Research

➤ Education
D.Phil. (1983) and M.Phil. (1982), University of Oxford; A.B. (1980), Harvard College

➤ Selected Publications
“Retirement Security in an Aging Population,” American Economic Review, 2014; “The Composition and Drawdown of Wealth in Retirement,” Journal of Economic Perspectives, 2011 (with Steven Venti and David Wise); “Tax Expenditures for Owner-Occupied Housing,” American Economic Review, 2008 (with Todd Sinai); numerous other articles in such journals as the National Tax Journal and the Journal of Public Economics; and numerous book chapters.

EF: If we tried to address the issue of imputed rent in the way that you suggest, what effect would we see on house prices? Or if we tried some of the reforms that have been discussed concerning the mortgage interest deduction itself?

Poterba: Todd Sinai at the Wharton School and I have looked at the consequences of changing some of the tax provisions, and we typically find that if the market was fully forward-looking, and recognized the changes in housing investment that would be associated with tax changes, current house prices would decline by only a few percentage points. There would be variation across types of houses, related to the typical tax circumstances of their buyers. The tax benefits, while important, are not a large fraction of the total cost of an owner-occupied home. Of course, that doesn't say that you'd want to pile on and make a tax reform of this kind when house prices are not performing very well. Today, house prices have recovered somewhat from the financial crisis of 2008-2009, but a better time to adopt a reform like this would have been 2005, after a period of strong price appreciation.

EF: More recently, one of your areas of research has been retirement finance and the investment decisions of workers thinking about their retirement. In recent decades, we've seen a tremendous shift in the private sector from defined benefit retirement programs to defined contribution programs. Was this mainly a response by firms to the tightening of the regulatory environment for defined benefit plans, to changing demand from workers, or to something else?

Poterba: I think it's a bit of everything. A number of factors came together to create an environment in which firms were more comfortable offering defined contribution plans than defined benefit plans. One factor was that when firms began offering defined benefit plans, in World War II and the years following it, the U.S. economy and its population were growing rapidly. The size of the benefit recipient population from these plans relative to the workforce was small. It was also a time when life expectancy for people who were aged 65 was several years less than it is today. Over time, the financial executives at firms came to a greater recognition of the true cost of defined benefit plans.

I also think the fiduciary responsibilities and the financial burdens that were placed on firms under the Employee Retirement Income Security Act of 1974, or ERISA, have discouraged firms from continuing in the defined benefit sector. ERISA corrected a set of imbalances by requiring firms to take more responsibility for the retirement plans they were offering their workers and to fund those plans so that these were not empty promises. ERISA was enacted in the aftermath of some high-profile bankruptcies of major U.S. firms and the discovery that their defined benefit plans were not well-funded, leaving retirees with virtually no pension income.

But ERISA and the growing recognition of the costs of defined benefit plans are probably not the full story. The U.S. labor market has become more dynamic over time, or at least workers think it has, and that has led to fewer workers being well-suited to defined benefit plans. These plans worked very well for workers who had a long career at a single firm. Today, workers may overestimate the degree of dynamism in the labor market. But if they believe it is dynamic, they may place great value on a portable retirement structure that enables them to move from firm to firm and to take their retirement assets with them.

Most workers who are at large firms, firms that have 500 employees or more, have access to defined contribution plans. Unfortunately, we still don't have great coverage at smaller firms, below, say, 50 employees. For workers who will spend a long career at a small firm, the absence of these employer-based plans can make it harder to save for retirement. A key policy priority is pushing the coverage of defined contribution plans further down the firm size distribution. That's hard, because smaller firms are less likely to have the infrastructure in place in their HR departments or to have the spare resources to be able to learn how to establish a defined contribution plan and how to administer it. They are probably also more reluctant to take on the fiduciary burdens and responsibilities that come with offering these plans.

Another concern, within the defined contribution system, is the significant amount of leakage. Money that was originally contributed for retirement may be pulled out before the worker reaches retirement age.

EF: What is causing that?

Poterba: Say you've worked for 10 years at a firm that offers a 401(k) plan and you've been contributing all the way along. You decide to leave that firm. In some cases, the firm you are leaving may encourage you to take the money out of their retirement plan because they may not want to have you around as a legacy participant in their plan. They may not want the fiduciary responsibility of having you in the plan. In this case, the former employer may be encouraging the departing worker to withdraw funds from the retirement space. Sometimes, the worker may choose to move the funds from the prior 401(k) plan to a retirement plan at their new employer, or to an IRA. Those moves keep the funds in the retirement system. But sometimes, the worker just spends the money. When an individual leaves a job, they may experience a spell of unemployment, or they may have health issues. There may be very good reasons for tapping into the 401(k) accumulation. Using the 401(k) system as a source of emergency cash, sort of as the ATM for these crises, diminishes what gets accumulated for retirement.

EF: Did you venture into this area initially simply because you thought it was an interesting set of questions, or was there anything in particular that pushed you in this direction?

Poterba: My interest in retirement saving began with my interest in tax policy. A critical feature of the savings landscape in the United States is the role of tax policy in encouraging various kinds of retirement arrangements. In my research on retirement issues, tax-related questions have continued to attract my interest. I have also become interested, however, in the question of how households formulate and carry out their financial plans, particularly in retirement.

For example, some work that Venti, Wise, and I have done looks at the distribution of asset holdings for individuals who are very close to death. The University of Michigan Health and Retirement Study, which is a comprehensive database on older individuals in the United States, begins tracking survey respondents in their mid-50s. It follows them until they die, so the last survey is typically filed about a year before the individual's death. Nearly half of the respondents in the survey turn out to have very low levels of financial assets, under $20,000, as they get close to death. For any economist who's been steeped in the life cycle model, the notion that you would reach such a low level of asset holdings, even at old ages and when health is poor, is surprising, particularly given the risk of out-of-pocket expenses for medical care or nursing homes. This empirical pattern is a bit of a challenge to the life cycle model of my late colleague Franco Modigliani.

I have been quite interested in how individuals arrive at such low levels of financial assets. Many of those who have very little financial wealth as they approach death also reached retirement age with very little wealth. Nearly half of American retirees rely overwhelmingly on Social Security as their source of income. One often hears references to a three-legged stool of retirement support, which involves Social Security, private saving, and employer-based saving in a retirement plan. The reality is that nearly half of the population is relying on a one-legged stool, with Social Security as the sole leg. Only in the top half of the retiree wealth distribution does one start to see substantial amounts of support from private pension plans, and only in the top quarter is there substantial support from private saving outside retirement accounts.

EF: Knowing what you've learned over the years, what advice would you give to a 30-year-old worker today about retirement?

Poterba: Save early and save a lot.

At MIT, I have a lot of engineering colleagues who are accustomed to answering questions with precise and definitive answers. If I ask one of them how big a solar array I should put on my roof to generate enough energy for my home, they are able to do a calculation that gives a pretty accurate answer to that question. They can design an array so that I'll have energy 95 percent of the time. If they ask me in return how much they should be saving for retirement, I don't think I can give them an answer with an analogous level of precision.

There is a lot of heterogeneity across individuals in their relative tastes for retirement versus pre-retirement consumption. Some people may regard the availability of more time in retirement as an opportunity to ramp up their spending, to travel, or to enjoy a second home. Others, particularly lower-income retirees, may devote more time to shopping sales for groceries and for other products they buy. They may spend more time cooking at home relative
to consuming food away from home. They may scale back
on clothing purchases because they are not required to buy
clothes for work. The notion that spending time can save
money is very evident in the behavior of some retirees.
One of the notable examples of this is that early research
on the well-being of retirees pointed to the fact that expenditures on food declined for a number of retirees lower in
the income distribution. That was often viewed as evidence
that these individuals must be worse off when they retired
than they were when they were working — they could not
even sustain their food consumption. Yet more refined analysis of the food expenditure data found that caloric intake
did not decline very much even for those for whom food
expenditure declined. What happened? They shifted from
buying takeaway meals at the grocery store or stopping at
a restaurant to purchasing more food to prepare at home.
Spending declined, but the ultimate objective — nutritious
meals — was not affected nearly as much as the spending
decline suggested. This is microeconomics in action, right?
When money becomes scarce relative to time, individuals
alter the way they choose to produce things.
Many individuals also have some reason for preserving
financial assets until late in life. Textbook life cycle theory
would lead you to expect that peak assets are basically
observed at the moment when someone retires. After that,
leaving aside bequest considerations and the possible need
for late-life precautionary saving, retirees should begin to
draw down assets as they move toward the end of life. But
in fact, at least in the early years of retirement, the late
60s and into the 70s, many households that have financial
assets experience relatively stable assets over that time.
Some even appear to save more during this period. What’s
happening here? Well, either they are planning to leave
these assets to the next generation or to make charitable
gifts late in life, or they are saving for precautionary reasons
like health care costs.
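For readers who want the textbook benchmark Poterba has in mind, a stripped-down statement of it, under deliberately strong assumptions (no uncertainty, no bequests, a zero interest rate), makes the prediction explicit:

c = \frac{1}{T}\sum_{t=1}^{R} y_t, \qquad A_t = \sum_{s=1}^{t}(y_s - c),

where y_t is earnings (zero after retirement at date R), T is the length of life, c is the constant consumption level, and A_t is asset holdings. In this version assets peak exactly at retirement and are drawn down to zero at T, so both arriving at retirement with almost nothing and holding substantial assets into the 80s and 90s sit outside the simplest model.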
The times when financial assets are drawn down significantly are often when one spouse in a married couple dies,
which may be associated with medical and other costs, and
at the onset of a major medical episode. Health care shocks
may lead to costs for caregivers who may not be covered by
Medicare and other insurance. Retirement is not a homogeneous period from the standpoint of financial behavior:
Behavior for the “young elderly” can be quite different from
the behavior of those who are in their 80s and 90s.
EF: You’ve been called the de facto historian of MIT’s
economics department. What did MIT do differently in
economics that helped it become pre-eminent?
Poterba: Let me first explain why I have been interested
for a long time in the history of MIT economics. I arrived
at MIT in 1982, just before the retirement of the postwar
faculty who built the modern department. As a brand
new assistant professor, I attended retirement parties for
Evsey Domar, Cary Brown, Charlie Kindleberger, and Paul
Samuelson, and then a bit later for Morris Adelman and
Bob Solow. The MIT economics department was a close-knit group of faculty. Attending these retirement parties,
one couldn’t help but be swept up in the incredible sense of
dedication to economics, and dedication to each other, that
this group had in building the department. That got me very
interested in the history of the department.
If you compare a rough ranking of economics departments in 1940 or 1950 with a ranking in 2000, there is a lot of
stability, but the one department that jumps into the ranks
is MIT. MIT has actually had an economics department
for a very long time. The first president of the American
Economic Association, Francis Amasa Walker, was the
president of MIT. He was an economist who was recruited
from Yale to lead MIT, and he introduced a required undergraduate economics course — maybe the first such course at
an American college or university.
The MIT economics department was a service-oriented
undergraduate department until 1940 when it introduced
a master’s program. In the mid-1940s, it started a Ph.D.
program. Paul Samuelson’s arrival at MIT in 1940 coincided
with a ramping up of the department’s interest in graduate
training. There were some important hires in the early postwar years that made it possible to build a core faculty that
was involved in graduate training.
Several things helped MIT. First, because it was a
rapidly growing department, it was possible to hire many
leading young economists and bring them to MIT. This
created a great atmosphere and a critical mass of active,
research-oriented faculty. Some of the key figures had an
enormous influence on the development of the department. I am sure that it wasn’t unique to MIT, but the
faculty consisted of a group of good friends who were all
very active in research, all committed to building a Ph.D.
program, and all engaged in building the department.
Second, MIT’s economics department always had a good
balance between teaching and research. The graduate program was well-integrated with research activity.
Finally, in the 1940s into the 1950s, MIT probably benefited from anti-Semitism that was still prevalent in many
other universities. MIT’s department was prepared to hire
leading economists who happened to be Jewish, and it stole
a march on a number of other departments as a result.
EF: You taught introductory macroeconomics at MIT
last spring for the first time. What was that like?
Poterba: I loved it. When I first came to MIT, I taught
undergraduate statistics, but that’s not a course in which
you can convey a lot of economics to the students. Then for
many years, I had administrative assignments that crowded
out undergraduate teaching. I recently decided that I was at
a career stage when it might be fun to teach a large introductory course, and our department needed someone to cover
the macro course, so I volunteered. I hope the students liked
it as much as I did. I found it invigorating to try to distill the

core questions in macro and bring those questions to the students. There are just so many exciting topics in macro today.
Why are global interest rates so low? What is happening in
the eurozone? How do we think about long-term fiscal policy and sustainability in the United States? Why is growth in
the U.S. economy slower than it has been? How does recent
work on long-term inequality and the relationship between
rates of return and growth rates connect to the changing
distribution of resources in the United States? I hope I
succeeded at least a bit in conveying some of my excitement
about these questions.

EF: You've been president of the NBER since 2008. What do you see as the role of the NBER in economics?

Poterba: The NBER presidency is an extraordinary experience. It's a window on economic research and the economics profession that is very hard to get in any other way. The NBER is devoted to carrying out and to supporting economic research, to disseminating research, and to helping educate the academic, policy, and business communities, and to some degree the public, about economic activity and economic analysis. While the NBER is best known for the dating of the U.S. business cycle, there's an enormous amount of research activity that takes place in the 20 distinct research programs that focus on everything from corporate finance and asset pricing to labor economics, education, and development economics. The span is remarkable.

I look at each working paper that is submitted for distribution in the NBER working paper series. When the NBER was founded, one of the key charter provisions was that it would not make policy recommendations. One of the founders was the chief statistician at AT&T, one of the largest U.S. companies of the day. Another was a Marxist labor organizer. They had rather different views about many economic issues. They had interacted with each other on some commissions during the 1910s that had looked at policy questions such as should there be a minimum wage and should there be an hours limit. They realized that even though they might have different answers to those questions, there wasn't enough data on the distribution of working hours or wages to permit reasoned discussion. Together, they supported the creation, in 1920, of the NBER to collect and disseminate information on the economy.

One reason the NBER is well-regarded is that it doesn't get involved in policy debates, although it certainly carries out research that is relevant for policy. I review working papers to make sure we stay true to the no-policy-recommendation rule. I learn a great deal of economics in the process. In some cases, I need to reach out to the researchers to ask them to drop a passage in their paper that makes a policy statement. Almost always, the researchers are very agreeable and understanding.

The most enjoyable part of the job is trying to launch and direct research projects on particular topics. There have been NBER projects recently on high-skilled immigration, on the macro consequences of the financial crisis, on sovereign debt markets and crises, and on energy infrastructure. These projects provide an opportunity for me to work with an array of researchers to develop research proposals and to seek funding for these initiatives. I also have the chance to shape where the research is headed and what questions will get attention. My NBER role provides a bit of leverage; it's a way of going beyond what I can do myself as a researcher and influencing what others will do as well.

EF: What is the future of public finance economics?
Poterba: I tell incoming graduate students that in the field
of public economics, the questions we confront are always
fresh because economies go through periods of evolving
policy mix, but our underlying analytical tools are remarkably stable. When public finance economists talk about the
optimal design of a tax system, it is worth remembering that
Adam Smith offered four maxims for a good tax system. One
of them is that the tax system should impose the smallest
possible burden beyond the revenue that is collected from
the taxpayer. It’s a very simple statement that the optimal
tax code should minimize deadweight burden, and it remains
a guiding principle that animates research to this day. The
underlying trade-offs in public economics, between equity
and efficiency and between raising revenue and creating distortions, have been with us a long time, and they are likely to
remain the bedrock of the field.
EF

ECONOMICHISTORY

Alchemy Island
Turning Dirt into Gold on Hilton Head
BY KARL RHODES

From the left, Charles Fraser, Jack Nicklaus, and Pete Dye discuss the layout of Harbour Town Golf Links with Donald O'Quinn, who oversaw construction of the course in the 1960s.
PHOTOGRAPHY: COURTESY OF THE SEA PINES RESORT

As a teenager in the late 1940s,
Thomas Barnwell Jr. earned
money raising butter beans and
“bogging” for crabs along the muddy
shores of Skull Creek on Hilton Head
Island, S.C. In those days, before any
bridge connected the island to the mainland, most islanders still lived off the
land and water.
“The land will take care of you,”
Barnwell’s grandfather often advised.
“Don’t sell it. And if you ever have to
sell it — if you hold on long enough —
you can sell it by the foot instead of
by the acre.” Barnwell and his cousins
used to laugh at such an idea. “Who in
the world,” they asked, “would want to
come to Hilton Head and buy this dirt
by the foot?”
Who indeed?
Development of Hilton Head Island,
named by Capt. William Hilton in 1663,
has transformed one of the poorest and
most isolated corners of South Carolina
into a popular refuge for wealthy people
from all over the world. This economic
miracle was set in motion during the
1950s by Charles Fraser, an innovative
young developer whose vision for Hilton
Head set a new standard for upscale
resort, retirement, and residential communities across the nation. He employed
land covenants and deed restrictions to

preserve the natural beauty of the island
and control every aspect of Sea Pines
Plantation, his 5,000-acre masterpiece
of master planning.
“The modern American resort and
retirement community was invented
on Hilton Head by Charles Fraser,”
wrote Michael Danielson in his 1995
book, Profits and Politics in Paradise: The
Development of Hilton Head Island. “Sea
Pines triggered a remarkable and rapid
transformation of Hilton Head into a
world-class resort.”
As Sea Pines won national and international acclaim, Fraser’s ambition,
reputation, and access to financing
grew exponentially. In the early 1970s,
he borrowed hundreds of millions of
dollars to jumpstart similar projects in
Florida, South Carolina, Virginia, and
Puerto Rico. Lenders took over most
of those projects after the mid-1970s
recession, and Fraser lost much of his
personal fortune. But the Sea Pines style
of development created a lot of wealth
for other people, especially Fraser’s former employees. They call themselves
the alumni of “Sea Pines University,”
where they acquired human capital that
has enabled them to continue turning
dirt into gold on Hilton Head and in
many other areas of the United States.
The success of Sea Pines also created a wide socioeconomic gap between
native islanders and the wealthy people
who have flocked to the place since
the 1950s. But unlike many low-income
people in similar situations throughout
the United States, Hilton Head’s native
islanders own a significant share of the
land that surrounds them. And in recent
years, Barnwell and his family have
demonstrated how to tap the economic
potential of that land without selling it.

Before the Bridge
For nearly 100 years, Hilton Head was
populated primarily by descendants
of former slaves who claimed freedom
on the Union-occupied island during
the Civil War. These native islanders, also called Gullah,
farmed and fished and maintained a language and culture that
reflected strong African roots. The Gullah people owned less
than one-third of the land, but they generally ran the whole
island. Hilton Head was isolated from the mainland — not
only was there no bridge, there was no telephone, electricity,
or running water.
Things started to change in 1949 when the Hilton Head
Company, a timber partnership from Hinesville, Ga., purchased a large portion of the island for $60 an acre. Fraser,
son of the company’s majority partner, worked in his father’s
timber camp one summer and fell in love with the place. To
maximize the island’s development potential, he persuaded
his father to preserve many mature pine trees along the
island’s southern shores. The other timber partners also recognized that Hilton Head had strong development potential. They also cut down trees selectively, but none of them
envisioned the island’s future the way Fraser did.
As a student at Yale Law School, he started making
grand plans to develop an upscale resort and residential
community on Hilton Head. “Fraser studied design and
planning as well as law; and he persistently asked ‘law school
colleagues, law and architecture professors what could be
done with four miles of virgin South Carolina beachfront
and adjacent forests,’” Danielson wrote, quoting Fraser. He
was “strongly influenced by a course at Yale called ‘Land Use
Planning and Allocation by Private Agreement’ taught by
Myres McDougal, a specialist in the use of private covenants
to implement comprehensive land use planning.” He also
consulted “hundreds of landowners and planners along the
east coast.”
Fraser returned to Hilton Head in 1956 — the year when
a privately financed toll bridge opened — and urged his
father’s partners to upgrade their plans for traditional beachfront development. He unveiled an ambitious proposal to
build a world-class resort with at least two golf courses. Golf
was vital to Fraser’s alchemic equation — the catalyst that
eventually would turn dirt into gold on the island’s interior.
“We in the development business now assume there was
a golf course in the Garden of Eden, but Charles was really
the guy who figured out how to use golf courses to create
real estate value,” says Peter Rummell, a Sea Pines alumnus who later ran Disney Development and Walt Disney
Imagineering. “He was always forward-looking — always
trying to figure out what’s going to happen next.”
But the partners of the Hilton Head Company didn’t see
Fraser as a visionary. They still viewed him as the little kid
next door in Hinesville. His innovative ideas were “hooted
at in derision by many of the directors,” Fraser wrote. In
particular, they dismissed the notion that the island could
eventually support two golf courses as “the ‘wild visions’ of an
immature 25-year-old.” (Hilton Head now has 21 golf courses.)
The conflict between Fraser and his father’s partners
ultimately tore the former timber company limb from limb.
Soon after the bridge opened, Fraser’s father broke away
from his partners and put his 20-something son in charge of developing the family's acreage on the southern end of the island. They called the project Sea Pines Plantation.

This photo was taken near the spot shown on the preceding page. The Harbour Town golf course, lighthouse, and marina became world-famous features of Sea Pines and Hilton Head.

After the Bridge
Native islanders bristled at the idea of working on anyone’s
“plantation,” but other job opportunities on Hilton Head
were sparse, and it was getting harder to make a living from
small-scale farming and fishing.
Barnwell took a summer job with Sea Pines in the late
1950s helping to clear land for 50 cents an hour, significantly
more than he had been earning raising beans and catching
crabs. He operated a winch on the back of a truck to pick
up tree stumps and haul them to an area where they were
burned. Sea Pines sacrificed a few trees on this altar of economic progress, but Fraser hated to cut down trees.
“Trees were sacred,” Rummell emphasizes. “We didn’t
take down one more tree than we had to.” All that timber
came in handy during the early days of development when
Fraser was desperate to secure financing from the Travelers
Insurance Co. He essentially mortgaged the trees on the
property with a “timber loan” to keep Sea Pines afloat.
Financing high-dollar infrastructure and amenities on a
remote island off the coast of South Carolina was difficult,
so money was extremely tight. Quite often, Sea Pines sold
just enough real estate during the week to make payroll on
Friday. Even so, Fraser insisted that Sea Pines adhere to
high standards of quality and conservation. His blend-with-nature vision arguably was focused more on aesthetics than
ecology, but potential buyers liked what they saw, and word
slowly started to spread about “Charlie Fraser’s island paradise,” a phrase Fortune magazine coined in 1967.
A turning point for publicity came in 1962, when the
Saturday Evening Post ran a photograph of Fraser walking in
perfect lock step with an eight-foot alligator. Newspapers
in South Carolina and North Carolina picked up the story,
and national publications chimed in with glowing reviews.
Fraser was a gifted promoter. He changed the name of
“Horse’s Hole” — a small lake on the island — to “Audubon
Pond.” He called drainage ditches “lagoons.” Subdivisions were “plantations.” And Fraser never sold condos; he marketed “villas” — lots of them. As Sea Pines’ sales accelerated,
Fraser secured significantly more financing from Travelers
and started building Harbour Town, a picturesque marina
surrounded by restaurants, shops, and condos. The marina’s
yacht basin was designed to be a perfect circle, but Fraser
refused to cut down a massive southern live oak that stood
in the way of that plan. So the company spent an extra
$250,000 to preserve the tree on a small spit of land extending into the otherwise circular harbor. It was a defining
moment for Fraser, Sea Pines, and Hilton Head.
Fraser was always willing to invest lavishly in Sea Pines’
visual appeal. For example, he built a 90-foot lighthouse —
not to guide mariners, but to create a dramatic backdrop for
the 18th green of Harbour Town Golf Links, a new course
designed by Pete Dye and Jack Nicklaus. The course was so
highly anticipated that it was placed on the PGA Tour’s 1969
schedule before the course was completed.
By then, other developers of Hilton Head were creating
their own upscale “plantations” by copying many of Fraser’s
ideas. “The Sea Pines or Hilton Head style became a generic
term among architects and landscape designers,” Danielson
noted. Sea Pines’ land-use covenants (including strict rules
against cutting down trees) became a model for residential
developments nationwide.

Sea Pines University
After earning his MBA from the Wharton School at the
University of Pennsylvania, Rummell drove to Hilton Head
for a job interview with Fraser. It was 1971, a year when the
economic transformation of the island was creating dramatic visual contrasts. As he approached Sea Pines’ swanky

William Hilton Inn, Rummell was surprised to see a native
islander plowing oceanfront land with a mule. That was his
first clue that he was about to get an education unlike anything he had encountered at Wharton.
Fraser hired Rummell and many other Ivy League MBAs,
and the company became, in effect, a postgraduate program for master-planned resort and residential development.
“Charles was a huge believer in looking at what other people
were doing,” Rummell recalls. “Just before we went public, he
chartered a DC-9 and flew 45 employees and their spouses to
Southern California to look at other people’s projects. This
was a small company, and he spent a fortune doing that.”
The small company, however, was growing rapidly. Fraser
assigned Rummell to the team that was developing Amelia
Island Plantation off the coast of Florida. Other teams were
cloning the Sea Pines model elsewhere: on the northern
end of Hilton Head; on Kiawah Island, S.C.; at River Hills
Plantation southwest of Charlotte, N.C.; at Brandermill in
the suburbs of Richmond, Va.; and at Palmas del Mar (Sea
Palms) on the southeastern coast of Puerto Rico.
The company’s liabilities soared from $12.6 million in
1969 to $283 million in 1975, and the interest rates Sea Pines
was paying spiked into the teens. The Arab oil embargo and
the recession of 1973-1975 also hit the company hard. Fraser
publicly blamed the company’s problems on Federal Reserve
Chairman Arthur Burns, but the bigger issue was cost overruns at Palmas del Mar, a resort that was extravagant even
by Sea Pines’ standards. At one point, the company placed a
third mortgage on the southern end of Hilton Head to make
payroll at Palmas del Mar.
“Charles never met a debt instrument he didn’t want to
hug,” Rummell says with a laugh. “Once financing became
readily available, he got way ahead of his capability.”

Economic Development in Paradise
Residents of Hilton Head Island created a town government
in 1983 to slow down growth and “preserve paradise” by
imposing tighter land-use controls. Over the years, critics of
this approach have caricatured the town’s initial strategy as:
“Now that we are here, let’s blow up the bridge!”
The bridge, of course, is still standing, and for many years,
Hilton Head continued to grow rapidly. The total assessed
value of the island’s real estate nearly doubled from 1990 to
2000 and doubled again from 2000 to 2010 — due partly
to growth and partly to appreciation of existing property.
During the recession of 2007-2009, however, development
slowed dramatically, and the average value of single-family
homes fell from more than $1 million to less than $670,000,
while the average value of condos dropped from $449,000
to $249,000. Those values have not recovered. Also, the
average age of the town’s residents has increased from 40
in 1990 to 51 in 2010, and it is expected to move higher as
the market continues to transition from second homes to
retirement homes.


In light of these changes, there has been a growing view
among town government leaders that the island should
diversify its economy, attract and retain younger people,
and become a “real town” with a full spectrum of job opportunities and housing options. Toward that end, the town
has eased some zoning restrictions, created an economic
development organization, and hired Don Kirkman as the
organization’s first director.
Kirkman speaks passionately about creating the missing
rungs on the island’s socioeconomic ladder by attracting
small business owners who could live and work virtually anywhere that has good Internet access. “If you can locate your
business anywhere,” he says, “why not locate it in a place
where you would love to live?”
Kirkman says he is optimistic about the island’s future,
but he concedes that it feels strange “to be hired as the
first economic development director for a town that was
formed for the specific purpose of opposing economic
development.”
— Karl Rhodes

At Sea Pines University, students learned what to do and what not
to do by interacting with Fraser. “He had a wonderful optimism that was part of his creativity,” Rummell marvels. “He
had the courage of his convictions, so he was hard to help
when he got his mind set on something. Everybody told him
not to go to Puerto Rico.”
During the next few years, the Sea Pines Company lost
everything except its original properties. Fraser sold the company in 1983 for $10 million, a fraction of what it was once
worth. By then, Sea Pines was more of a resort management
company than a development company, but the Sea Pines
style had become a nationally prominent model for resort and
residential development. Several alumni helped finish some
of the bankrupt projects that Fraser had started, and many
of his protégés developed highly successful projects on their
own, both on Hilton Head and across the country.
Sea Pines alumni say they are the biggest beneficiaries
of Fraser’s genius and Hilton Head’s success. Four of them
— including Rummell — went on to chair the Urban Land
Institute, which gave Fraser a Heritage Award, one of
only nine given in the history of the institute, to recognize
land-planning contributions of lasting importance. Several
Sea Pines alumni remained on Hilton Head, including J.R.
Richardson, a prominent local developer whose family owns
and operates Coligny Plaza, the island’s oldest shopping
center. When asked who benefited most from Fraser’s influence, Richardson just smiles and raises his hand.
Richardson was among the many Sea Pines alumni who
were devastated when Fraser died in a boating accident in
2002. Richardson helped make arrangements to bury his mentor on Hilton Head at the Harbour Town marina — beneath
the southern live oak that Fraser refused to cut down.

Gullah Gold
Development of Hilton Head gradually improved the quality of life for Barnwell and many other Gullah people.

They gained electricity, running water, better roads, better
schools, and better medical care. But the economic gap
between native islanders and wealthy newcomers remains
enormous. At the end of one dirt road, there are small shacks
and trailers about 100 yards from luxury homes that are visible through the woods.
“We have these fantastic people who are here from all
over the world in this world-class community. Yet we still
have people on Gumtree Road who are not connected to
public sewer,” Barnwell says. “We still have people who are
not in the economic mainstream.”
Over the years, many Gullah families have sold their
land — including some prime oceanfront properties — but
some native islanders have retained their acreage, following the advice of Barnwell’s grandfather and other Gullah
elders. Selling land still goes against their culture, and it can
be difficult for native islanders to develop their properties,
partly because much of the land belongs to far-flung family
members who inherited portions of it from generations of
ancestors who died without wills. This encumbered land is
called “heirs” property because it is titled to the unnamed
“heirs” of someone who has died.
Today, a nonprofit organization in Charleston, S.C.
— the Center for Heirs’ Property Preservation — is working to clear “heirs” land titles for families in the region
who cannot afford lawyers and want to benefit economically from their land without selling it. In the meantime,
Barnwell and his family have demonstrated how to use
limited liability companies and long-term land leases to
generate income from their land. The family used both
of those tools to facilitate the development of Bluewater
Resort and Marina, an upscale timeshare on Skull Creek,
where Barnwell used to bog for crabs. He concedes that
turning Gullah dirt into Gullah gold can be tedious and
complicated, but the economic payoff sure beats raising
butter beans and digging up crabs.
EF

Readings
Campbell, Emory S. Gullah Cultural Legacies. Hilton Head Island,
S.C.: Gullah Heritage Consulting Services, 2002.
Danielson, Michael N. Profits and Politics in Paradise: The
Development of Hilton Head Island. Columbia, S.C.: University of
South Carolina Press, 1995.

Fraser, Charles E. The Art of Community Building. 1985.
Fraser, Joseph B., and Margaret Greer. “The Sea Pines Story:
Three-Part Series.” Hilton Head Monthly, April, May, and
June 2005.
Economic Brief

Economic Brief publishes an online essay each
month about a current economic issue.
September 2015 Inflation Targeting: Could Bad Luck
Explain Persistent One-Sided Misses?

October 2015 Calculating the Natural Rate of Interest
To access the Economic Brief and other research publications,
visit www.richmondfed.org/publications/research/

October 2015, EB15-10

Calculating the Natural Rate of Interest:
A Comparison of Two Alternative Approaches
By Thomas A. Lubik and Christian Matthes

The natural rate of interest is a key concept in monetary economics because
its level relative to the real rate of interest allows economists to assess the
stance of monetary policy. However, the natural rate cannot be observed;
it must be calculated using identifying assumptions. This Economic Brief
compares the popular Laubach-Williams approach to calculating the natural
rate with an alternative method that imposes fewer theoretical restrictions.
Both approaches indicate that the natural rate has been above the real rate
for a long time.
The natural rate of interest is one of the key
concepts for understanding and interpreting
macroeconomic relationships and the effects of
monetary policy. Its modern usage dates back
to the Swedish economist Knut Wicksell, who in
1898 defined it as the interest rate that is compatible with a stable price level.1 An increase in
the interest rate above its natural rate contracts
economic activity and leads to lower prices,
while a decline relative to the natural rate has
the opposite effect. In Wicksell’s view, equality
of a market interest rate with its natural counterpart therefore guarantees price and economic
stability.
A century later, Columbia University economist
Michael Woodford brought renewed attention
to the concept of the natural rate and connected
it with modern macroeconomic thought.2 He
demonstrated how a modern New Keynesian
framework, with intertemporally optimizing
and forward-looking consumers and firms that

constantly react to economic shocks, gives rise
to a natural rate of interest akin to Wicksell’s
original concept. Woodford’s innovation was to
show how the natural rate relates to economic
fundamentals such as productivity shocks or
changes in consumers’ preferences. Moreover,
an inflation-targeting central bank can steer the
economy toward the natural rate and price stability by conducting policy through the application of a Taylor rule, which links the policy rate to
measures of economic activity and prices.
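The Brief does not reproduce a specific rule at this point, but as an illustration, the original specification proposed by John Taylor in 1993 sets the nominal policy rate i_t as

i_t = \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,\tilde{y}_t + r^*,

where \pi_t is inflation, \pi^* is the inflation target (2 percent in Taylor's version), \tilde{y}_t is the percentage gap between output and its potential, and r^* is the equilibrium real rate, which Taylor simply assumed to be 2 percent. Estimating that last term rather than assuming it is where the natural-rate calculations discussed in the Brief come in.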
Naturally, monetary policymakers should have a
deep interest in the level of the natural interest
rate because it presents a guidepost as to whether
policy is too tight or too loose, just as in Wicksell’s
original view. The problem is that the natural rate
is fundamentally unobservable. It is a hypothetical construct that cannot be measured directly.
Instead, economists have developed various empirical methods that attempt to derive the natural
rate from actual data.3


AROUNDTHEFED

Effect of the ‘Polar Vortex’ on Economic Activity
BY LISA KENNEY

“The Effect of Winter Weather on U.S. Economic
Activity.” Justin Bloesch and Francois Gourio, Federal
Reserve Bank of Chicago Economic Perspectives, vol. 39,
First Quarter 2015.

The polar vortex that descended on parts of the United
States in the winter of 2013-2014 brought cold temperatures, record snowfalls, and possibly an economic slowdown.
Anecdotes about boats delivering iron ore being unable to
traverse the frozen Great Lakes — thus causing a delay in
steel production — seemed to draw a connection between
the weather and economic activity. But how accurate is that
assumption?
Economists at the Chicago Fed studied whether this
unusual winter actually caused the decline of economic
indicators such as industrial production, employment, and
housing starts from December 2013 to March 2014. They
found that while weather had a significant, but short-lived,
impact on economic activity, the effect was not large enough
to account fully for the weak economy during that period.
They looked at both national and regional data for the
actual winter weather and economic indicators. They also
used historical data to determine if the economy has become
more or less sensitive to weather changes over time.
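A simplified sketch of the kind of regression behind such an exercise appears below: monthly employment growth regressed on temperature anomalies with state and calendar-month fixed effects. The data are simulated and the specification is only a stand-in for the Chicago Fed authors' actual analysis.

```python
# Simplified stand-in for a weather-sensitivity regression: employment growth
# on temperature anomalies with state and calendar-month fixed effects.
# Data are simulated; this is not the Chicago Fed authors' specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states = np.repeat(["DC", "MD", "NC", "SC", "VA", "WV"], 120)   # 10 years of months
month = np.tile(np.arange(1, 13), 60)
temp_anomaly = rng.normal(0, 1, len(states))                    # colder = more negative
emp_growth = 0.10 + 0.03 * temp_anomaly + rng.normal(0, 0.2, len(states))

panel = pd.DataFrame({"state": states, "month": month,
                      "temp_anomaly": temp_anomaly, "emp_growth": emp_growth})

# Calendar-month effects strip out normal seasonality, so the coefficient picks
# up unusually cold or warm weather rather than winter itself.
model = smf.ols("emp_growth ~ temp_anomaly + C(state) + C(month)", data=panel).fit()
print(model.params["temp_anomaly"])
```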
Both national and regional data lead to similar results,
though the national data are less clear because they cannot
take into account regional variations in the weather. Some
patterns can be attributed in part to the weather, but they
cannot explain the magnitude and timing of the slowdown.
Indeed, the researchers find that “an important share of the
slowdown in the first quarter was driven by an inventory
correction and the effect of foreign trade.”
Also, the timing of the decline was uneven across indicators: Some declined in January, others did so in February,
and still others declined in more than one month.
“Job Switching and Wage Growth.” R. Jason Faberman
and Alejandro Justiniano, Federal Reserve Bank of
Chicago Fed Letter No. 337, 2015.

In a recent Chicago Fed Letter, economists Jason Faberman
and Alejandro Justiniano explore whether the worker quit
rate is correlated with wage growth and inflation. They find
it to be not only highly correlated, but also highly predictive
of both future wage growth and future inflation.
Faberman and Justiniano use data from the Job Openings
and Labor Turnover Survey (JOLTS) to estimate the aggregate
quit rate — a proxy for the pace at which workers move to new
jobs — in each month since 2000. They find that the quit rate,
along with wage growth, is highly procyclical, meaning it rises
during economic expansions and falls during recessions.
The authors find that fluctuations in the quit rate appear
to lead changes in wage growth, peaking two to four
quarters ahead. They also find that changes in the quit rate
appear to lead changes in the inflation gap (the difference
between actual inflation and long-run expected inflation).
This suggests the quit rate may be a useful predictor of both
future inflation and future wage growth.
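A minimal sketch of the lead-lag exercise appears below: correlate the quit rate with wage growth several quarters ahead and look for where the correlation peaks. The series are simulated placeholders rather than JOLTS or wage data, and this is not the authors' exact procedure.

```python
# Minimal sketch of a lead-lag check: correlation of today's quit rate with
# wage growth k quarters ahead. Simulated data stand in for JOLTS and wage series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 60  # quarterly observations since 2000
quit_rate = pd.Series(2.0 + np.cumsum(rng.normal(0, 0.05, n)))
# Simulated wage growth that responds to the quit rate with a three-quarter delay.
wage_growth = 1.0 + 0.8 * quit_rate.shift(3) + rng.normal(0, 0.05, n)

for k in range(7):
    corr = quit_rate.corr(wage_growth.shift(-k))  # shift(-k) aligns t with t+k
    print(f"quit rate vs. wage growth {k} quarters ahead: {corr:.2f}")
```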
“Is the Intrinsic Value of Macroeconomic News
Announcements Related to their Asset Price Impact?”
Thomas Gilbert, Chiara Scotti, Georg Strasser, and Clara
Vega, Federal Reserve Board Finance and Economics
Discussion Series No. 2015-046, April 23, 2015.

Some macroeconomic news announcements have a strong
effect on asset prices and some do not. But there is not
much literature on why this is the case. Fed researchers try
to answer that question in a recent Finance and Economics
Discussion Series paper.
First, they define and estimate novel measures of the
intrinsic value of 36 macroeconomic announcements. The
authors’ definition of the intrinsic value of each announcement is its ability to nowcast several fundamentals, namely
GDP, the GDP price deflator, and the federal funds target
rate. (Nowcasting involves a statistical model that produces
predictions about these fundamentals in real time; the actual
measures of these fundamentals are often released only after
a long delay.) Next, the authors decompose each announcement’s intrinsic value into three characteristics: timing of
the announcement, revision noise, and its relation to fundamentals, using the same nowcasting framework. Finally, the
paper relates the intrinsic value and the three characteristics
to the announcements’ effect on asset prices.
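The sketch below illustrates the nowcasting idea in a stripped-down form: an announcement's value is measured by how well its surprise component predicts GDP growth. The data are simulated, the regression is far simpler than the paper's actual framework, and the announcement names are hypothetical.

```python
# Stripped-down illustration of "intrinsic value" as nowcasting power: how well
# does an announcement's surprise predict GDP growth? Simulated data; this is
# not the paper's actual decomposition.
import numpy as np

rng = np.random.default_rng(2)
n = 100  # quarters
surprise_a = rng.normal(0, 1, n)   # hypothetical major announcement
surprise_b = rng.normal(0, 1, n)   # hypothetical minor announcement
gdp_growth = 2.0 + 0.6 * surprise_a + 0.05 * surprise_b + rng.normal(0, 0.5, n)

def r_squared(x, y):
    """R-squared from a univariate least-squares fit of y on x (with intercept)."""
    beta, alpha = np.polyfit(x, y, 1)
    resid = y - (alpha + beta * x)
    return 1 - resid.var() / y.var()

for name, s in [("announcement A", surprise_a), ("announcement B", surprise_b)]:
    print(f"{name}: nowcasting R^2 = {r_squared(s, gdp_growth):.3f}")
```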
They find that their novel measure of intrinsic value
“explains between 8 and 22 percent of the variation in the
heterogeneous response of asset prices.” When they estimate
the importance of each of the three individual characteristics
of the announcement, they find that tardiness — the loss of
intrinsic value due to the time lag between the period covered by the announcement and the announcement’s release
— is the most important factor in explaining the asset price
impact. The announcement’s relation to fundamentals is less
important and the revision noise is found to be insignificant.
Another takeaway from the research is that the relationship between the intrinsic value and the asset price impact is
imperfect. Some announcements have a large impact on asset
prices but are not found to have the biggest intrinsic value,
which leads the authors to conclude that it is possible for
financial markets to overreact to certain announcements. EF

BOOKREVIEW

Share and Share Alike
ON INEQUALITY
BY HARRY G. FRANKFURT
PRINCETON, N.J.: PRINCETON
UNIVERSITY PRESS, 2015, 120 PAGES
REVIEWED BY HELEN FESSENDEN

For most of the postwar era, concerns about economic equality have been relegated to the sidelines
of mainstream macroeconomics. In recent years,
however, equality has become more salient in economics
literature, one recent example being the surprise success
of Thomas Piketty’s Capital in the Twenty-First Century.
Now, one of the country’s most famous philosophers,
Harry Frankfurt, joins this debate by asking the daring
question: Is equality as important a moral good as other
human values?
Frankfurt, a Princeton University professor (now emeritus), asserts that those who oppose economic inequality are
making a misguided assumption. By defining equality as an
inherent moral good, he contends, we mistakenly focus on
a person’s standing relative to others rather than addressing
how we can meet that person’s most basic material needs. As
a result, our target is a certain level of wealth that has nothing to do with a person’s actual circumstances and wants.
When we make such claims, Frankfurt explains, it is
in part because it is much easier to define what is “equal”
(everyone gets the same) than it is to define what is
“enough.” By “enough,” Frankfurt emphasizes that he is
not referring to subsistence levels, but what a person needs
so that he feels reasonably satisfied, so that “he does not
resent his circumstances,” as Frankfurt puts it. Another
common error among inequality opponents, to Frankfurt,
is that they conflate the effects of inequality with inequality itself. “Whenever it is morally important to strive for
equality, it is always because doing so will promote some
other value rather than because equality itself is morally
desirable,” he argues.
But isn’t a needy individual happier and better off if
he or she gets more of a desired good that others have
in abundance? Not necessarily, contends Frankfurt. We
may try to distribute something valuable, such as food or
medicine, to a group of impoverished individuals, and we
can avoid inequality by making sure that everyone gets
the same amount. But if the allotted portion of food isn’t
enough to end nutritional deprivation, or if the dosage of
medicine isn’t enough to bring people back to health, the
group continues to suffer. This is one reason why defining
what is “enough” is a moral imperative for Frankfurt.
Frankfurt goes on to dissect an economics term —

diminishing marginal utility — with the tools of a philosopher. He takes aim at the view of the late economist Abba
Lerner that because one person’s enjoyment of a particular
good declines as he or she acquires more of it, equality will
maximize aggregate happiness as more people share in the
enjoyment of that good. This view is incorrect, Frankfurt
argues, because there are many instances where each marginal unit is still equally desirable if it follows or is joined by
another. A good example would be a collector who acquires
one more item, but is far from being done and “satisfied.”
And sometimes there are cases when enjoyment increases
with consumption — say, addiction.
Frankfurt makes clear to the reader that he is not arguing
from an anti-egalitarian standpoint as such. He contends
that his central case — egalitarianism has no inherent moral
value — does not mean he opposes attempts to reduce
inequality. In fact, he writes, he supports many of these
efforts. But these steps are means to an end, namely, to
achieve “socially or politically desirable aims” that do have
an inherent value.
Frankfurt keeps his focus on the philosophical argument rather than policy prescriptions. But if a lawmaker or
economist were to apply his reasoning to policy, it might
imply that inequality opponents should look to improving
resources and opportunities for the neediest rather than
equalizing the material conditions of those on the middle
and upper tiers of income and wealth.
Frankfurt closes by discussing the concept of respect,
and why it should matter. As he defines it, equal treatment
is quantifiable and unrelated to a person’s circumstances; as
such, equality is wholly impersonal. Respect, by contrast,
is completely personal, because it is the acknowledgement
by one person of another’s unique needs and achievements.
When someone complains that he or she is not respected,
what they mean is that someone is refusing to “acknowledge
the truth about them,” Frankfurt explains. When someone is denied respect, “it is as though his very existence is
reduced.”
The reason why respect and equality need to be jointly
defined and addressed is that most people confuse the two,
Frankfurt concludes. And this personal angle is why the
broader debate over inequality has taken on such resonance.
When someone demands equal treatment, what he or she is
most likely asking for is respect — that is, an acknowledgement of the reality of their personal lives.
With this book, as in his past work, Frankfurt has shown
why it is so important to question common terms that are
too often used reflexively. Regardless of one’s own views on
the past, present, and future of inequality, On Inequality is
a salutary effort to help readers pause and think about the
beliefs that motivate our rhetoric.
EF

DISTRICTDIGEST

Economic Trends Across the Region

Show and TEL: Are Tax and Expenditure Limitations Effective?
By Joseph Mengedoth and Santiago Pinto

More than half of the states in the United States
are subject to some kind of limitation on their
ability to raise taxes, spend money, or incur debt.
Most states, at the same time, impose similar constraints
on their local governments. These measures are commonly
referred to as tax and expenditure limitations (TELs). TELs
are part of a larger set of fiscal rules aimed at disciplining the
budget process with the objective of constraining decisions
made by governments. Recent research has examined the
effectiveness of TELs in achieving their intended objectives.
This research mainly attempts to disentangle the effect of
TELs on fiscal policies, policy outcomes, and economic performance. The findings are mixed: While a few studies assert
that TELs do restrain governments, others find exactly the
opposite. Some research work even finds that TELs have been
detrimental to the states’ financial position.

Why Do TELs Exist?
State and local government budgets are constructed following certain fiscal rules defined in advance. While some of
these rules define specific guidelines that should be obeyed
throughout the budgeting process in order to guarantee fiscal transparency and accountability, others explicitly restrict
the size of the government. Among the latter, TELs are
perhaps the most widely used by state and local governments. Specifically, TELs establish a set of rules typically
defined in terms of limits on the growth of tax revenues,
spending, or both, with the ultimate objective of constraining the growth in the size of government. Other fiscal rules,
such as balanced budget provisions and debt limits, do not
necessarily intend to limit the size of government.
James Poterba of the Massachusetts Institute of
Technology argues that the role of TELs and fiscal rules
in general can be characterized by two contrasting views:
the institutional irrelevance view and the public choice
view. The institutional irrelevance view claims that budgetary institutions simply reflect voters’ preferences and do
not directly affect fiscal policy outcomes. States politically
dominated by electorates manifestly opposed to a strong
government presence in the economy would tend to limit
government revenue and expenditure regardless of the existence of TELs, so in this sense the rules will necessarily be
nonbinding and simply viewed, in Poterba’s words, as “veils,
through which voters and elected officials see, and which
have no impact on ultimate policy outcomes.”
The public choice view, on the other hand, supports the
idea that fiscal rules can constrain fiscal policy outcomes.
This view implies that politicians and governments, driven
by self-interest motives, choose policies biased toward
higher levels of taxes and expenditures, and these choices
do not necessarily benefit the public interest. In this context, fiscal limits, such as TELs, can potentially limit the
set of alternatives that politicians may choose from and,
consequently, influence policy outcomes. Even in this case,
however, it is not clear which rules are effective and how the
system should be designed.
Moreover, the implementation of TELs is challenging
because it is subject to the well-known principal-agent or
delegation problem. The idea is that once voters (the principals) set the limits through TELs, the implementation is
ultimately delegated to politicians or government officials
(the agents), who, as stated earlier, may prefer larger levels
of taxes and spending. In order for TELs to achieve their
intended objectives, voters should be able to follow the
implementation of the rules and monitor governments’
current and future actions. But such monitoring is not only
costly but also imperfect. As a result, governments driven by
self-interest motives might end up adopting alternative and
circumventing actions that will partially offset the effects of
the limitations. For instance, governments may strategically
change their revenue structure and increase reliance on
income sources not subject to limitations.

State-Level TELs
As of 2010, some 30 states have enacted some kind of tax
or expenditure limitation, of which 23 have only spending
limits, four have only tax limits, and three have both spending and tax limits. The institutional differences across state-level TELs include the method of codification, approval
procedures, type of limit, specification of the growth factors, treatment of surplus revenues, and provisions for overriding or waiving the limit. These institutional differences
make some TELs more restrictive and binding than others.
Differences in the means of codification translate into
differences in effectiveness. While in some states TELs
are statutory, in others they are codified in the state constitutions. Statutory TELs can be more easily modified or
rescinded by the legislature, so constitutional TELs are
generally considered more effective tools to restrain the
government’s size.
The methods of approving TELs also vary across states.
In general, one of the following procedures is used: citizen
initiative (or referendum), legislative proposal, or constitutional convention. These alternatives are not mutually exclusive and a combination of the three may also be observed.
For instance, the approval of a citizen initiative may require
the approval by the legislature as well.
Differences in the type of limitation are also, of course,
highly significant. States establish limits on expenditures, revenues, appropriations, or a combination of them. In principle,

since most states also have balanced-budget provisions in place, expenditure limits should be largely equivalent to revenue limits. In practice, however, revenue limits are more restrictive than spending limits, mostly because spending limits do not generally affect all spending categories, and the spending limits usually apply only to general fund expenditures, not special funds. The latter means that the legislature can always avoid the limits imposed by TELs by transferring funding allocations from one fund to the other. The limits on appropriations are typically set as a percentage of the general revenue estimates.
State TELs vary in how they allow tax revenue or spending to grow. TELs generally allow tax revenue or spending to increase according to some combination of three variables: personal income growth, population growth, and inflation. Since personal income growth is generally higher than inflation or population growth, limits based on the former factor are considered less restrictive.
The treatment of budget surpluses is another area of variation. Some state TELs include refund provisions that establish precisely what to do in case of a surplus. The most restrictive TELs require state governments to immediately refund any surplus to taxpayers through rebates. Others mandate governments to use the surplus in other ways such as the retirement of debt, the establishment of rainy day or emergency funds, or budget stabilization funds.
Most TELs also include extraordinary procedures to override the constraints. These procedures include, for instance, a specification of majorities required to change the tax or spending limits. More stringent TELs require supermajorities in typically smaller bodies (such as legislative) and/or simple majorities in larger bodies (such as the electorate).

Local-Level TELs
Currently, 41 states in the United States impose some kind of TELs on their respective local governments. The restrictions may fall on the county, municipal, or school district budgets. The table summarizes the types of TELs that typically apply to local governments. The most common form of TELs at the local level is a property tax rate limitation imposed on specific types of local governments.

Local-Level Tax and Expenditure Limitations (TELs)

Specific limits (property tax limits):
Overall property tax rate limit: Apply to all local governments (applies to aggregate tax rate of all local government). Ceiling on the rate; cannot be exceeded without a vote of electorate.
Specific property tax rate limit: Apply to specific types of local government (municipalities, counties, school districts, and special districts) or specific functions.
Limits on assessment increases: Limit on the ability of local governments to raise revenue by reassessment of property or through natural or administrative increase of property values.
Property tax levy limits: Limit on the total amount of revenue that can be raised from the property tax. Generally enacted as an allowable annual percentage increase in the levy determined by population growth and/or inflation.

General limits:
General revenue increase limits: Limit on the amount of revenue that can be collected during the fiscal year. Usually enacted as a maximum allowed annual percentage increase from previous year or a maximum share of local income; typically tied to population growth and/or inflation.
General expenditure increase limits: Cap on the level of spending during the fiscal year. Usually enacted as a maximum allowed annual percentage increase from previous year or a maximum share of local income; typically tied to population growth and/or inflation.

Full disclosure: Requires public discussion and specific legislative vote prior to the enactment of tax increases. Requires formal vote (generally, simple majority) of the local legislative body to increase the tax.

SOURCE: Authors' analysis

As with state-level TELs, some of the limitations imposed on local governments are more restrictive than others depending on how easy it becomes for governments to circumvent or override the constraints. For instance, consider a limit on the property tax rate. In this case, the restriction operates only when the rate reaches the ceiling. At this point, tax revenue may still increase, but it will be driven solely by the growth of the tax base. In other words, this limit by itself may not constrain tax revenues. To avoid this kind of outcome, most TELs at the local level combine a property tax rate limit with a limit on property assessment increases. The most restrictive limitations are those that apply to increases in total tax revenue (property taxes and other types of local tax revenue) or aggregate spending. Generally, the limit allows a given annual percentage increase in tax revenue or spending determined by population growth, inflation, or local income.

TELs in the Fifth District
North Carolina and South Carolina are the only two states in the District in which the state-level governments are subject to TELs. In 1991, North Carolina adopted a statute that limits general fund operating budget spending to 7 percent of the forecasted total state personal income for that same fiscal year. South Carolina's spending limit is mandated by the state constitution, which limits the annual increase in appropriations based on an economic growth measure that is determined by the general assembly. The current formula prescribes that an increase in appropriations be limited to either the prior fiscal year appropriations multiplied by the three-year average growth in personal income or 9.5 percent of total personal income reported in the previous calendar year, whichever is greater.
A larger number of Fifth District states impose TELs at the local level. In North Carolina, counties and municipalities are subject to property tax rate limits. Maryland also imposes property tax limits; however, the limit is on the assessment increase rather than on the rate. West Virginia has the most potentially restrictive set of TELs in the District, with limits on the overall property tax rate as well as specific property tax rates (for agricultural land, for example) and the amount of property taxes that can be collected. South Carolina and Virginia impose no TELs on their municipalities. The District of Columbia limits annual increases in the total property tax levy. (See map and table.)

[Map: TELs in Fifth District States, showing state-level TELs (spending limit or no TEL) and local-level TELs (levy limit, rate limit, or assessment limit) for DC, MD, NC, SC, VA, and WV. SOURCE: Authors' analysis based on state legislative codes]
Full disclosure laws, which require taxpayers to receive
notice of anticipated tax rate increases, do not directly
restrict or limit revenues or expenditures and are therefore not considered potentially binding. Such laws exist in
three states in the District; Maryland, South Carolina, and
Virginia each enacted full disclosure legislation around the
same time period — between 1975 and 1977. In Maryland
and Virginia, the full disclosure laws extend to counties and
municipalities, while in South Carolina the measure also
includes school districts.
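The South Carolina formula described above lends itself to a short worked sketch: the appropriations ceiling is the greater of last year's appropriations grown by the three-year average rate of personal income growth, or 9.5 percent of total personal income in the previous calendar year. The function and the dollar figures below are illustrative assumptions, not official calculations.

```python
# Worked sketch of the South Carolina appropriations limit as described in the
# text: the greater of prior appropriations grown by average personal income
# growth, or 9.5 percent of prior-year total personal income. Figures are made up.

def sc_appropriations_limit(prior_appropriations, income_growth_rates, prior_personal_income):
    """All dollar figures in billions; growth rates as decimals (0.04 = 4 percent)."""
    avg_growth = sum(income_growth_rates) / len(income_growth_rates)
    growth_based_limit = prior_appropriations * (1 + avg_growth)
    income_share_limit = 0.095 * prior_personal_income
    return max(growth_based_limit, income_share_limit)

# Hypothetical example: $7 billion appropriated last year, personal income growth
# of 3, 4, and 5 percent over three years, and $160 billion of total personal income.
print(sc_appropriations_limit(7.0, [0.03, 0.04, 0.05], 160.0))
# max(7.0 * 1.04, 0.095 * 160.0) = max(7.28, 15.2) = 15.2
```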

Measuring Outcomes of TELs: Challenges
Evaluating the effectiveness of TELs is not easy for several
reasons. First, the empirical analysis is subject to significant
methodological challenges. Second, rules and limitations are
very heterogeneous across states and local governments.
Not only are some rules more restrictive than others, as
highlighted earlier, but they have changed over time as well.
Finally, when assessing the effectiveness of these constraints
on fiscal policies, the evaluation should be performed in relation to their intended objectives.
For instance, do TELs aim to restrict the overall size of
government? Do they intend to limit the growth of certain
specific taxes or expenditures or alter the composition of
government spending and tax revenue?
One of the methodological challenges in research
on the effect of TELs is the problem of endogeneity
or reverse causality. The problem becomes more
significant when examining the impact of TELs on
spending or taxes. It may be, for instance, that jurisdictions with relatively high long-run growth rates
of taxes and spending would more likely adopt TELs
as a tool to achieve stronger fiscal discipline. Ronald
Shadbegian, an economist at the National Center for
Environmental Economics, noted that “if voters in
states with bigger governments are more likely to vote
for a TEL and government spending patterns persist
over time, then I would expect to find a positive relationship between a TEL and government size, even
though a causal relationship does not exist.” Hence,
failure to acknowledge the fact that the decision to
adopt TELs by a government may be endogenous
would seriously bias the conclusions of the analysis.
Second, the presence of unobservable factors,
such as voters’ preferences, which differ systematically across jurisdictions, may bias the results if they
are not controlled for, as pointed out by the public
choice view. This is commonly known as omitted
variable bias. The main problem is that preferences are not
observable. In order to address this issue and differentiate
the effect of TELs on government taxes and expenditures
from the corresponding effect of voters’ preferences, some
research work has relied on panel data regression models.
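A minimal sketch of such a panel regression appears below: state spending regressed on a TEL indicator with state and year fixed effects, so that time-invariant state characteristics (a stand-in for unobserved voter preferences) and common shocks are differenced out. The data are simulated and the specification is generic rather than that of any particular study.

```python
# Minimal sketch of a two-way fixed-effects panel regression: log spending on a
# TEL indicator with state and year effects. State effects absorb time-invariant
# traits such as voter preferences; year effects absorb common shocks.
# Simulated data; not the specification of any particular study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for i in range(20):                                # 20 states, 1980-2010
    taste = rng.normal(0, 0.2)                     # unobserved preference for government size
    adoption_year = 1995 if i < 10 else None       # half the states adopt a TEL in 1995
    for year in range(1980, 2011):
        tel = int(adoption_year is not None and year >= adoption_year)
        log_spend = (8.0 + taste + 0.01 * (year - 1980)
                     - 0.03 * tel + rng.normal(0, 0.05))
        rows.append({"state": f"S{i}", "year": year, "tel": tel, "log_spend": log_spend})
panel = pd.DataFrame(rows)

fe = smf.ols("log_spend ~ tel + C(state) + C(year)", data=panel).fit()
print(fe.params["tel"])   # should recover something close to the simulated -0.03
```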

Measuring Outcomes of TELs: Results
Ideally, as when conducting any kind of policy evaluation,
the effectiveness of TELs should be assessed by comparing
the fiscal outcome with TELs to the counterfactual outcome
that would have occurred in the absence of the limitations.
Since it is not possible to carry out such an ideal experiment,
the impact of TELs is assessed by comparing the outcomes
of the treatment group (TEL states) to those of the control
group (non-TEL states). For instance, the work by Poterba
examines the different responses of TEL and non-TEL
states to negative economic shocks that generate unexpected budget deficits (in his work, he considers the late
1980s and early 1990s).
Research has shown, however, that the robustness of
the results and conclusions of such analysis depend on the
choice of the control group. To overcome some of the weaknesses explained above, recent work by Paul Eliason of Duke
University and Byron Lutz of the Federal Reserve Board of
Governors relies on a novel approach known as the “synthetic control method” to construct the control group. The
objective of their study is to examine the extent to which one
of the most stringent TELs in the United States, Colorado’s
Taxpayer Bill of Rights (TABOR), constrains government
size. Specifically, the synthetic control method relies on
observed data to construct an artificial control group based
on a weighted combination of non-TEL states. The weights
for each state are chosen so that taxes and spending in the

control group match taxes and spending in the treatment
group prior to the implementation of the limitations.
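The weighting step at the heart of the synthetic control method can be sketched compactly: choose nonnegative weights that sum to one so that the weighted average of donor (non-TEL) states tracks the treated state's pre-treatment outcomes as closely as possible. The sketch below uses simulated data and a generic least-squares criterion; it is not Eliason and Lutz's implementation.

```python
# Compact sketch of the synthetic control weighting step: nonnegative weights
# summing to one, chosen so a weighted average of donor states matches the
# treated state before treatment. Simulated data; not Eliason and Lutz's code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_donors, n_pre = 15, 20
donors_pre = rng.normal(0, 1, (n_pre, n_donors))       # donor outcomes, pre-treatment years
true_weights = np.zeros(n_donors)
true_weights[:3] = [0.5, 0.3, 0.2]                     # treated state "looks like" 3 donors
treated_pre = donors_pre @ true_weights + rng.normal(0, 0.05, n_pre)

def pre_treatment_fit(w):
    return np.sum((treated_pre - donors_pre @ w) ** 2)

result = minimize(
    pre_treatment_fit,
    x0=np.full(n_donors, 1.0 / n_donors),
    bounds=[(0.0, 1.0)] * n_donors,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
weights = result.x
print(np.round(weights, 2))  # weight should concentrate on the first three donors
# A post-treatment comparison would then use the weighted donors as the counterfactual.
```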
The earlier literature on TELs focused on how fiscal limits affect government growth. The findings of this research
are mixed. For instance, while the work by Poterba concludes that when faced with fiscal distress, TEL states tend
to increase taxes by less than non-TEL states, the work by
Eliason and Lutz indicates that TABOR does not have any
effect on government taxes or spending. To the extent that
the institutional irrelevance view correctly assesses the effectiveness of budgetary rules, the absence of a strong relationship between TELs and fiscal policy outcomes should not be
surprising. In fact, TELs, according to this view, should not
be effective because they are essentially nonbinding.
The lack of association between TELs and government
growth may also be attributed to other factors, however.
Many researchers highlight the fact that earlier studies did
not account for the rich institutional differences across TELs.
As noted earlier, TELs are very heterogeneous. For instance,
some TELs are more restrictive than others, and it is plausible that the ability of TELs to constrain government size
depends precisely on their stringency. In an effort to account
for this heterogeneity, Barry Poulson, distinguished scholar
at the Americans for Prosperity Foundation, constructed
an index of TEL restrictiveness for each of the 50 states.
This methodology was later adopted and extended by other
researchers. For instance, Lindsay Amiel and Steven Deller,
both at the University of Wisconsin, and Judith Stallmann
of the University of Missouri conducted several studies using
indices like the one developed by Poulson and provided conclusive evidence in favor of following such an approach.
Even when TELs are effective at controlling the growth
of specific tax revenues or expenditures, the implementation
of TELs in a context where voters cannot fully monitor government actions ends up having numerous unintended effects
not fully anticipated or envisioned by their proponents. These
effects usually take place when governments take actions to
avoid or circumvent the rules established by the legislation.
One way governments may circumvent the restrictions
imposed by TELs is by issuing debt. Such a hypothesis is
studied by Deller, Amiel, and Stallmann jointly with Craig
Maher of the University of Nebraska Omaha. Specifically,
they claim that when the limits are imposed only on revenues or only on expenditures, governments would be
induced to issue debt. Unlike previous work, which was
unsuccessful at documenting such a relationship, their work
accounts for the heterogeneity of TELs. Specifically, they
found that more restrictive revenue TELs and expenditure
TELs are associated with higher levels of government debt.
Only TELs that limit revenue and expenditure at the same
time restrict the use of debt.
States may still find ways to operate within the limits
imposed by TELs by shifting some of their fiscal responsibilities to local governments. James Cox of California State
University, Sacramento and David Lowery of Penn State
University study such a possibility. They empirically test this

hypothesis by comparing the behavior of pairs of TEL and
non-TEL states. Their findings do not generally show that
states decentralize responsibilities, with the exception of
South Carolina. When comparing state revenue as a fraction
of total state and local revenue in North Carolina, a non-TEL state at the time of the study, and the corresponding
proportion in South Carolina, a TEL state, they found that
the latter was remarkably lower. The authors also underscore
that South Carolina did not explicitly prohibit the decentralization of fiscal responsibilities to local governments.

Costs of TELs
Even if TELs are successful at achieving their intended goal
of restricting government growth, they may do so at the
expense of generating other negative effects. It has been
claimed, for instance, that TELs might negatively affect
the financial stability of the states. A study by Tucker
Staley of the University of Central Arkansas found that
more restrictive TELs are strongly associated with higher
levels of state revenue volatility. At the local level, work by
Mathew McCubbins of Duke University and Ellen Moule,
then at the University of South Carolina, indicates that the
enforcement of property tax limits has induced state and
local governments to rely on a system of revenues that is generally
more income-dependent, such as income taxes, charges, and
fees. This means revenues would be subject to even greater
fluctuations during the business cycle.
TELs may also affect the quality of services provided by
governments. The relationship between TELs, particularly
limitations imposed on property tax growth and school
quality, has received a lot of attention in the literature. A
few studies have found that reduced funding as a result of
TELs negatively affects student achievement in public K-12
schools. The work of Thomas Downes of Tufts University
and David Figlio of Northwestern University suggests that
TELs “lead to reductions in student outcomes that are far
larger than might be expected given the changes in spending.” Possible explanations for this result include disproportionate cuts in instructional rather than administrative
expenditures, higher student-teacher ratios, and a shift
especially of the more talented students to private schools.
Matt Davis, Andrea Vedder, and Joe Stone of the University
of Oregon claim that, in fact, the lower levels of education
funding could have been offset by school-finance
equalization and other alternative revenues. They argue,
however, that TELs may still have a negative impact on student achievement if these constraints make school funding
more unpredictable and volatile, as suggested earlier.
The use of tax and expenditure limitations has spread
since they were first implemented almost 40 years ago; however, the
effectiveness of TELs in fulfilling their objectives is still in
question. Recent research has led to inconclusive and, at
times, contradictory results. Due to the heterogeneity and
complexity of TELs, significant methodological challenges
remain in answering the question of the effectiveness of
these fiscal rules.
EF

State Data, Q4:14
                                                    DC        MD        NC        SC        VA        WV
Nonfarm Employment (000s)                        761.1   2,636.8   4,188.5   1,971.1   3,790.7     763.0
  Q/Q Percent Change                               1.2       0.6       0.9       1.1       0.4       0.1
  Y/Y Percent Change                               1.2       1.3       2.5       2.6       0.9      -0.2

Manufacturing Employment (000s)                    1.0     102.9     454.7     233.0     232.1      47.6
  Q/Q Percent Change                               0.0      -0.4       1.3       1.2       0.1      -0.1
  Y/Y Percent Change                               0.0      -1.8       2.4       2.9       0.6      -1.5

Professional/Business Services Employment (000s) 160.3     426.6     583.8     260.6     676.0      67.9
  Q/Q Percent Change                               1.5       0.4       1.3       2.2      -0.6       1.1
  Y/Y Percent Change                               2.7       2.0       5.4       4.4       0.4       5.3

Government Employment (000s)                     236.0     506.9     715.0     359.2     707.7     153.8
  Q/Q Percent Change                               0.9       0.6      -0.2       0.6       0.1       0.8
  Y/Y Percent Change                              -1.3       0.7      -0.3       1.7       0.4       0.4

Civilian Labor Force (000s)                      383.7   3,104.6   4,625.7   2,212.5   4,234.3     778.4
  Q/Q Percent Change                               1.2       0.0       0.0       0.7       0.0      -0.8
  Y/Y Percent Change                               3.2      -0.1      -0.4       1.9       0.0      -2.1

Unemployment Rate (%)                              7.7       5.5       5.5       6.6       4.8       6.0
  Q3:14                                            7.8       5.7       6.0       6.5       5.0       6.4
  Q4:13                                            8.1       6.2       6.9       6.6       5.4       6.6

Real Personal Income ($Bil)                       42.6     300.8     363.4     165.4     388.7      61.9
  Q/Q Percent Change                               0.8       1.0       1.3       1.4       1.1       0.9
  Y/Y Percent Change                               2.3       3.5       4.6       4.3       3.1       2.5

Building Permits                                   686     3,778    12,622     6,540     6,896       536
  Q/Q Percent Change                               0.0     -27.5     -11.6      -6.9      -6.1     -17.7
  Y/Y Percent Change                               0.0     -12.3       2.4      15.7      20.7      26.1

House Price Index (1980=100)                     719.8     429.7     315.4     320.1     417.9     227.7
  Q/Q Percent Change                               3.0       0.7       0.4       0.9       1.2       0.6
  Y/Y Percent Change                               9.5       3.9       4.0       4.7       4.3       3.3
NOTES:

SOURCES:

1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms
reporting increase minus the percentage reporting decrease. The manufacturing composite index is a
weighted average of the shipments, new orders, and employment indexes.
2) Building permits and house prices are not seasonally adjusted; all other series are seasonally adjusted.
3) Manufacturing employment for DC is not seasonally adjusted.

Real Personal Income: Bureau of Economic Analysis/Haver Analytics
Unemployment Rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor/Haver Analytics
Building Permits: U.S. Census Bureau/Haver Analytics
House Prices: Federal Housing Finance Agency/Haver Analytics

For more information, contact Michael Stanley at (804) 697-8437 or e-mail michael.stanley@rich.frb.org

[Charts, Fourth Quarter 2003 - Fourth Quarter 2014: Nonfarm Employment (change from prior year), Unemployment Rate, and Real Personal Income (change from prior year), Fifth District vs. United States; Nonfarm Employment (change from prior year) and Unemployment Rate, major metro areas (Charlotte, Baltimore, Washington); Building Permits (change from prior year) and House Prices (change from prior year), Fifth District vs. United States; FRB-Richmond Services Revenues Index; FRB-Richmond Manufacturing Composite Index.]

Metropolitan Area Data, Q4:14

                              Washington, DC    Baltimore, MD    Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)            2,572.6          1,364.3            104.4
  Q/Q Percent Change                     1.4              1.3              1.3
  Y/Y Percent Change                     1.3              1.3             -1.0
Unemployment Rate (%)                    4.8              5.9              5.7
  Q3:14                                  5.1              6.1              6.0
  Q4:13                                  5.3              6.6              6.4
Building Permits                       4,967            1,694              331
  Q/Q Percent Change                   -31.2            -22.6             17.4
  Y/Y Percent Change                   -10.9             25.6             41.5

                              Asheville, NC     Charlotte, NC    Durham, NC
Nonfarm Employment (000s)              182.3          1,089.5            294.2
  Q/Q Percent Change                     3.1              2.7              1.2
  Y/Y Percent Change                     2.8              3.6              1.5
Unemployment Rate (%)                    4.4              5.5              4.7
  Q3:14                                  4.9              6.0              5.0
  Q4:13                                  5.5              7.0              5.5
Building Permits                         320            4,083            1,055
  Q/Q Percent Change                   -14.2            -21.1             19.3
  Y/Y Percent Change                   -13.7             -1.8             53.1

                              Greensboro-High Point, NC    Raleigh, NC    Wilmington, NC
Nonfarm Employment (000s)              355.0            573.4            117.7
  Q/Q Percent Change                     2.5              1.9              1.0
  Y/Y Percent Change                     1.4              3.9              2.7
Unemployment Rate (%)                    5.8              4.5              5.3
  Q3:14                                  6.5              4.9              6.0
  Q4:13                                  7.5              5.6              6.9
Building Permits                         687            3,016              621
  Q/Q Percent Change                    17.2             -5.7             -3.0
  Y/Y Percent Change                    39.4             -2.5            -39.6

                              Winston-Salem, NC    Charleston, SC    Columbia, SC
Nonfarm Employment (000s)              256.7            326.3            377.3
  Q/Q Percent Change                     1.9              1.1              1.4
  Y/Y Percent Change                     1.5              3.1              1.1
Unemployment Rate (%)                    5.3              5.7              6.0
  Q3:14                                  5.9              5.7              6.1
  Q4:13                                  6.8              5.7              6.0
Building Permits                         365            1,395              883
  Q/Q Percent Change                   -46.9             10.4            -33.6
  Y/Y Percent Change                    81.6             30.5             -1.0

                              Greenville, SC    Richmond, VA    Roanoke, VA
Nonfarm Employment (000s)              395.6            640.7            162.2
  Q/Q Percent Change                     2.1              1.4              1.4
  Y/Y Percent Change                     2.0              1.7              1.1
Unemployment Rate (%)                    5.9              5.1              4.8
  Q3:14                                  6.0              5.4              5.2
  Q4:13                                  5.9              5.8              5.5
Building Permits                       1,295              824              136
  Q/Q Percent Change                    24.0            -34.2             17.2
  Y/Y Percent Change                    96.2            -15.1            -18.1

                              Virginia Beach-Norfolk, VA    Charleston, WV    Huntington, WV
Nonfarm Employment (000s)              757.0            124.2            143.1
  Q/Q Percent Change                    -0.3              0.2              2.2
  Y/Y Percent Change                     0.2             -0.2              0.9
Unemployment Rate (%)                    5.3              5.9              6.0
  Q3:14                                  5.6              6.3              6.4
  Q4:13                                  6.0              6.3              7.2
Building Permits                       1,614                5               68
  Q/Q Percent Change                    30.0            -16.7            100.0
  Y/Y Percent Change                    99.0            -79.2             36.0

NOTE:
Nonfarm employment and building permits are not seasonally adjusted. Unemployment rates are seasonally adjusted.

For more information, contact Michael Stanley at (804) 697-8437 or e-mail michael.stanley@rich.frb.org

OPINION

Why Do College Graduates Earn More?
BY KARTIK ATHREYA

In our research and writing on workforce development
and on earnings differences among individuals, the
Richmond Fed has often highlighted the importance
of college-level training for those who are well-prepared
for it. The economic benefits that workers receive from
college completion are well-known: On average, college
graduates earn almost twice as much over their lifetimes
as high school graduates. Moreover, the size of the earnings gap between college (especially post-college) and high
school graduates has been trending upward for decades.
But where does this earnings premium from higher education come from?
The predominant view among economists is that a student’s investment in higher education adds to his or her
“human capital” in the form of new or improved skills.
This interpretation, which economists Gary Becker and
Theodore Schultz set out in the early 1960s, is an intuitive one: A college student who chooses a field of study
wisely and who graduates will increase his or her value to
employers through higher productivity. (Some other ways
of building human capital include work experience and “on
the job” training programs.) On this view, the question for
policymakers is whether such investments are occurring in
an efficient amount, and if not, whether policies like college
subsidies and student-loan programs could achieve this.
The main rival view is the signaling model. This view,
advanced by Michael Spence, Kenneth Arrow, and Joseph
Stiglitz in the mid-1970s, is also intuitive: It holds that
completion of educational programs, such as college, may
simply demonstrate pre-existing attributes of the student,
such as intelligence or motivation. Under the signaling view,
an employer does not look upon a college degree as a sign of
newly acquired skills so much as a clear signal for identifying
workers with these traits, which they already had.
Thus, one disconcerting possibility is that we might see a
college earnings premium even if education were totally useless
in improving people’s skills. This could happen if someone’s
true productivity is not directly observable and if higher education — even if not affecting productivity at all — is harder for
low-productivity people to complete than for high-productivity
people. In this case, the question for policymakers is whether
time-consuming and resource-intensive education is really
the most efficient way to assess someone’s productivity — or
whether policies subsidizing education may in fact
be worsening matters by creating a wasteful arms race.
Both human capital and signaling likely play some role in
the way employers look at education, and in particular, college
degrees. We can readily think of fields in which educational
programs and degrees affect eventual job performance, such
as law, engineering, architecture, and medicine. We can also
point to many jobs that have little to do with any specific college degree. But in perhaps the majority of cases, both human
capital and signaling are driving the college premium. It’s
difficult to reach hard and fast conclusions about their relative
importance, however, because their influence is observed only
indirectly. Worse yet, almost any argument in favor of one
interpretation of the data can be used in support of the other.
From an individual’s perspective, the source of the college
earnings premium doesn’t matter. All that an individual needs
to know is that college can be a worthwhile investment —
depending, among other things, on his or her field and readiness. But from society’s perspective, the question of human
capital versus signaling has important implications. Are we
under-investing in higher education, or over-investing? The
greater the importance of human capital, the more promise
higher education holds as a means of increasing individual
incomes and the economy’s productivity overall. The greater
the importance of signaling, the more central other policies
should become to workforce development.
My research and that of some of my Richmond Fed
colleagues has focused on the human capital model and
what it means for individuals and policymakers. For me,
signaling carries less weight as a compelling explanation in
most cases. This is for a few reasons. First, if the signaling
model were largely true, one might expect more employers
to seek to avoid paying the college premium — by looking
for alternatives to the sheepskin, such as more use of job
testing, apprenticeships, and the like. Second, the idea that
employers derive value from the skills taught in higher education (both job-specific, like engineering skills, and general,
like critical thinking) seems consistent with the trends we’ve
seen in the skills demanded in today’s knowledge-oriented
economy. Lastly, one implication of human capital theory
that signaling does not share is the prediction that earnings will
rise at a diminishing rate for much of working life and then
decline — a pattern observed almost universally in the data.
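As a stylized illustration of that prediction, the sketch below traces out log earnings that are quadratic in labor market experience, the standard Mincer-style shape: growth is fast early on, slows over time, and eventually turns negative. The coefficients are made up for illustration, not estimates.

```python
# Stylized Mincer-style profile: log earnings quadratic in experience, so
# earnings growth slows with experience and eventually turns negative.
# Coefficients are illustrative, not estimates.
import numpy as np

experience = np.arange(0, 46)                                  # years in the workforce
log_earnings = 10.0 + 0.08 * experience - 0.001 * experience ** 2
annual_growth = np.diff(log_earnings)                          # approximate growth rate

print(f"year 1:  {annual_growth[0] * 100:.1f}% growth")        # ~7.9%
print(f"year 20: {annual_growth[19] * 100:.1f}% growth")       # ~4.1%
print(f"year 45: {annual_growth[-1] * 100:.1f}% growth")       # ~-0.9% (declining)
```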
Even if lengthy education serves largely as a signal, it
may still be the most efficient screening method, yielding
gains for the economy. But based on what we know now,
the human capital model seems generally a helpful way to
think about the investments that students make, and society makes, in higher education. And regardless of which
explanation is right, I think most of us would agree it is
still important to ensure that young people have the best
information and preparation needed to make educational
decisions wisely given their own particular attributes and
circumstances.
EF
Kartik Athreya is senior vice president and director of
research at the Federal Reserve Bank of Richmond.

NEXTISSUE
Trade with Cuba

The United States has recently taken steps to normalize relations
with Cuba. While fully lifting the longtime trade embargo
requires an act of Congress, some states (including Virginia) have
been exporting food and medical products to Cuba for over a
decade. Will they have a leg up if U.S. policy reopens a market
that has been mostly closed for 55 years? What will be the
challenges?

Puerto Rico’s Debt Crisis

Puerto Rico defaulted on its debt in the second half of 2015,
with no clear resolution to its budget imbalances or debt crisis in
sight. What are the options for resolving the debt crisis, and how
does the island’s status as a U.S. territory affect the situation?

The Economist in the Machine

Major technology-oriented companies, such as Amazon, eBay,
Google, and Microsoft, have been hiring in-house research
economists. Going beyond corporate economists’ traditional roles,
such as forecasting, these researchers are providing insight into
their companies’ hard problems and, at the same time, publishing
research like their academic counterparts. What’s it like to be one
of this new breed of researchers? What is motivating companies to
bring them on board -- and economists to join them?

Policy Update
In August, the SEC finalized a rule requiring
public companies to disclose the ratio of
their CEOs’ compensation to the median
compensation of their employees. This “pay
ratio” rule is a requirement of
the Dodd-Frank Act and complements the
SEC’s 2011 “say on pay” rule. These additional
disclosures are intended to help shareholders
better understand executive compensation,
but some critics have argued they could
create more confusion than clarity.

Economic History
With the immigration debate front and center
in Europe, this is a good time to look back at
the economic legacy of the wave of mass
migration from Europe to the United States
between the Civil War and World War I. What
was immigration’s role in the rapid growth
of the U.S. economy, as industrialization
and urbanization transformed the country?
And what resources and challenges did
newcomers bring to the United States?

Interview
Emi Nakamura of Columbia University on
new methods of measuring price stickiness,
explanations of why inflation didn’t drop
even further after the Great Recession, and
the difficulty of measuring the effects of
monetary and fiscal policy.

Visit us online:
www.richmondfed.org
•	To view each issue’s articles
and Web-exclusive content
• To view related Web links of
additional readings and
references
• To subscribe to our magazine
•	To request an email alert of
our online issue postings

Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261

Change Service Requested

To subscribe or make subscription changes, please email us at research.publications@rich.frb.org or call 800-322-0565.

A Different Way to Look at Labor Markets
The Hornstein-Kudlyak-Lange
Non-Employment Index
The Non-Employment Index
(NEI) is an alternative to the
unemployment rate that provides
a more comprehensive reading
of labor market health. It is
based on research published
by Richmond Fed economists
Andreas Hornstein and Marianna
Kudlyak, and McGill University
economist Fabian Lange.

Visit richmondfed.org for monthly updates.