View original document

The full text on this page is automatically extracted from the file linked above and may contain errors and inconsistencies.

SPRING 2007

THE FEDERAL RESERVE BANK OF RICHMOND

Making the Grade?
The Debate Over School Choice

• Betting on Prediction Markets
• Decoding the Yield Curve • Interview with Kip Viscusi


VOLUME 11
NUMBER 2
SPRING 2007

COVER STORY

12 Academic Alternatives: The theory of school choice sounds great, but it remains controversial
Evidence from programs like the one in Milwaukee is beginning to move the discussion from the theoretical to the practical.

FEATURES

20 Ask the Market: Companies are leading the way in the use of prediction markets. The public sector may soon follow
Markets can offer incentives for people to reveal what they know and then pool that information to produce the best forecast.

24 Grinding Gears: The jobs bank program has provided greater job security for unionized workers at the Big Three automakers
But the security has come at the expense of greater flexibility in labor markets.

28 Trading Spaces: Conservation efforts get a boost from the market
Using market tools to achieve conservation goals isn’t a new idea, but it is gaining currency as preservation funds dwindle and regulation proves burdensome.

32 A Question of Money: Does money still matter for monetary policy?
In their quest for price stability, central banks debate which policy instrument they should use to keep inflation under control.

37 The Yield Curve is Sending Mixed Messages: What does it imply for banks in the Fifth District and beyond?
The yield curve has been a reliable indicator of recessions. But that may be history.

DEPARTMENTS

1 President’s Message/Lessons of the Phillips Curve
2 Federal Reserve/The Evolution of Fed Communications
6 Jargon Alert/Arbitrage
7 Research Spotlight/Global Warming
8 Policy Update/Interest Rates on Loans to Soldiers Capped
9 Around the Fed/Taxing Questions
10 Short Takes
40 Interview/Kip Viscusi
46 Economic History/Black-Owned Banks
50 Book Review/The Disposable American and The Strategist
52 District/State Economic Conditions
60 Opinion/In Praise of Theory

Our mission is to provide authoritative information and analysis about the Fifth Federal Reserve District economy and the Federal Reserve System. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia. The material appearing in Region Focus is collected and developed by the Research Department of the Federal Reserve Bank of Richmond.

DIRECTOR OF RESEARCH
John A. Weinberg

EDITOR
Aaron Steelman

SENIOR EDITOR
Doug Campbell

MANAGING EDITOR
Kathy Constant

STAFF WRITERS
Charles Gerena
Betty Joyce Nash
Vanessa Sumo

EDITORIAL ASSOCIATE
Julia Ralston Forneris

REGIONAL ANALYSTS
Matt Harris
Matthew Martin
Ray Owens

CONTRIBUTORS
Kartik B. Athreya
Eliana Balla
Kevin Bryan
Robert Carpenter
William Perkins
Ernie Siciliano
Mark Vaughan

DESIGN
Beatley Gravitt Communications

CIRCULATION
Walter Love
Shannell McCall

Published quarterly by the Federal Reserve Bank of Richmond
P.O. Box 27622
Richmond, VA 23261
www.richmondfed.org

Subscriptions and additional copies: Available free of charge by calling the Public Affairs Division at (804) 697-8109.

Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Region Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Region Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 1093-1767

PHOTO: GETTY IMAGES


PRESIDENT’S MESSAGE
Lessons of the Phillips Curve
Recent estimates suggest
that real gross domestic
product increased at a
relatively slow annual rate of
around a half of a percent in
the first quarter of 2007.
Meanwhile, year-over-year core
(PCE) inflation has been fluctuating around 2¼ percent. While
the latter figure may sound
benign, I view it, and the general
upward trend in prices over the
past few years, with caution.
Inflation, in my opinion, has been too high and should be
brought down. But will doing so also lower
economic growth? This raises a fundamental question
facing the Federal Reserve and one that has been at the
core of macroeconomics for the past 50 years. What is
the relationship between growth and inflation?
In 1957, A.W. Phillips looked at data on unemployment
and wage inflation in the United Kingdom and found that as
unemployment went down, wage inflation tended to go up.
This statistical relationship became known as the “Phillips
curve.” Phillips’ work was highly influential, but in
the decades since he published his findings, economists’
understanding of this relationship has evolved significantly,
and I would like to comment on that issue here.
In light of some additional work, many economists were
convinced that Phillips’ empirical findings also held for the
United States, and argued that this implied a set of
choices for society. If you wanted faster economic growth,
you should put more money into the economy. This would
produce higher inflation, but that was a trade-off sometimes
worth making. Conversely, if you felt inflation was getting
too high, you should take money out of the economy. In
such a world, ambitious management of the macroeconomy
seemed possible.
Beginning in the late 1960s, economists came to recognize
the importance of people’s expectations for the relationship
between inflation and real economic indicators such as
unemployment. Inflation that was anticipated would not
stimulate real economic growth, nor would disinflation that
was anticipated slow it. Over the long run, they argued,
economic growth was determined by fundamentals such as
productivity and population growth. The appearance of a
correlation between inflation and unemployment in the
data was the result of episodes in which unanticipated
changes in inflation had temporary real effects.
This theory gained credence in the 1970s, as the U.S.
economy experienced both slow economic growth and
rising inflation. The original Phillips curve seemed to be
breaking down, and the menu of options that policymakers
supposedly had at their disposal no longer seemed useful.


At the same time, a group of economists began to focus on
the forward-looking nature of people’s expectations.
This “rational expectations” approach to the Phillips curve
suggested that the public understands when policymakers
might be tempted to try to exploit the seeming relationship
between inflation and unemployment, and changes its
expectations even before a policy action has been taken.
As a result, an attempt to bring down unemployment by
letting inflation rise a bit will not work — prices will rise but
growth will not.
Modern work builds on this approach by studying
economies in which realistic imperfections in markets
create a short-run relationship between inflation and real
variables similar to what we observe in the data.
These models have the important implication that the
relationship between inflation and real activity is not causal.
Both inflation and unemployment are the outcomes of the
behavior of markets for goods and for labor. In turn, the
behavior of markets is the product of decisions made by
an array of households, firms, and policymakers. If people
are forward-looking, their expectations about the future
conduct of policy will play the dominant role in how
inflation and unemployment interact. This means that
unless policymakers can influence expectations, they will
have only limited ability to fine-tune the economy, even
temporarily, and that maintaining economic stability hinges
largely on people’s confidence in future policy actions.
In the late 1970s and early 1980s, the Federal Reserve
under Paul Volcker began a long and often difficult
campaign to regain the credibility it had lost during the
previous decade. Alan Greenspan continued that fight,
and by the 1990s, the Fed arguably had established such
credibility. Happily, the economy responded well: We
witnessed rapid economic growth without a concomitant
rise in inflation. In light of the modern understanding of the
Phillips curve, the real lesson of the Volcker-Greenspan
disinflation is that the best contribution the Fed can make
to economic growth is to keep inflation low and stable. And
the key to low inflation is the stability of people’s expectations about the future conduct of monetary policy.
Monetary policy works best when it allows the real economy to respond appropriately to economic fundamentals,
rather than attempts to insulate the economy from shocks
by tolerating swings in inflation. This is the lesson of the
modern Phillips curve and of our macroeconomic history
over the last half century.

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND

Spring 2007 • Region Focus



FEDERAL RESERVE
The Evolution of Fed Communications
BY DOUG CAMPBELL

Former Federal Reserve Board
Chairman Alan Greenspan, seen
here testifying before Congress in
February 1994, led a movement
toward greater openness in
Fed communications.


The Fed is evidently capable of
confounding people — absolutely
flummoxing them — by saying what it
plans to do, then doing it, and then
promptly announcing what it had done.
— Dow Jones News Service
Feb. 7, 1994
Just before 11 a.m., Feb. 4, 1994,
the Federal Reserve released
a three-paragraph statement. The
Federal Open Market Committee, Fed
Chairman Alan Greenspan said
in part, had decided to “increase
slightly the degree of pressure on
reserve positions. The action was
expected to be associated with
a small increase in short-term
money market interest rates.”
Though vague by today’s
standards, the release’s import
was clear. It marked the first time
the Fed announced a change in
monetary policy as soon as it was
made. Until the winter of 1994,
the central bank’s stance on the
fed funds rate was signaled
primarily through its operations
in the money market.
The Feb. 4 release was in fact the
beginning of an evolution — if
not quite a revolution — in
Fed communications.
Why did the Fed keep such a
veil of secrecy over its formulation
and stance of monetary policy for so
long, and why has it taken more steps
toward openness recently? Almost all
the main issues are laid out in FOMC
transcripts from that two-day
February meeting. During that landmark session, FOMC members
debated the pros and cons of immediately announcing their policy stance,
wondering aloud about the implications for the Fed’s flexibility and
credibility. Of paramount concern was
the prospect that the FOMC would be
misunderstood, no matter what it did.


Message Moratorium
The Fed has always found ways to
communicate with the public. But until
recently, few of those ways were terribly
direct, and none very immediate.
Beginning in 1935, with the
modern-day creation of the FOMC,
the Fed issued brief summaries of its
policy decisions, called the Records of
Policy Actions, on an annual basis.
At the same time, it kept minutes of
policy deliberations for internal use.
In 1967 came the release of the
FOMC minutes, 90 days after each
meeting. Also published with a 90-day
lag were the Records of Policy
Actions. In 1975, the lag in release of
minutes was shortened to 45 days, and
then in 1976 to 30 days.
Other communication vehicles
included the chairman’s semiannual
reports to Congress, the so-called
Humphrey-Hawkins Report. Since
1983 the Beige Book has publicly summarized economic conditions in each
of the 12 Federal Reserve districts
throughout the country. Finally, there
were speeches by Fed governors and
Reserve bank presidents.
But announcements immediately
following FOMC meetings simply
didn’t exist. The main way the Fed disclosed its policy actions was through
open market operations — chiefly,
daily repurchase agreements in which
the New York Fed’s trading desk
pumps up or drains reserves of the
nation’s banking system. Even there,
information was limited: Only the
amount of the repurchase agreements
transacted was released, with nothing
about rates, prices, or size of
propositions for the overnight loans.
FOMC members justified this
shroud as key to their effectiveness.
They even spelled it out at their
meeting on June 20, 1967. “For years,
Federal Reserve officials argued that
immediate release of policy decisions
would make markets more unstable
and policy implementation more
costly and difficult,” said St. Louis
Fed President William Poole in a
2005 speech, referring to the 1967
meeting. “Creating these effects
through disclosure would obviously
be inconsistent with the Fed’s public
responsibilities.”
Michael Woodford, a Columbia
University economist who has studied
Fed communications, suspects another
motivation as well. “My guess is that
bureaucrats in most organizations
would prefer not to have to explain to
outsiders what they’re doing.”
Views on how much the Fed should
say in public about its formulation and
stance of monetary policy began to
change in the early 1990s. Politicians
were pressuring the Fed to open up.
Henry Gonzalez, then-House Banking
Committee chairman, was demanding
that the Fed make public details of its
deliberations on monetary policy.
Motivating some of this pressure was
the 1993 discovery that the Fed had
kept unedited transcripts of its FOMC
meetings since 1976.
Facing these developments, FOMC
members decided to release lightly
edited versions of those transcripts
with a five-year lag. But it wasn’t
enough to satisfy those seeking more
communication. Milton Friedman, a
frequent critic of the Fed, was one of
the leaders in the early 1990s in calling
for openness. He noted that a cottage
industry of Fed watchers had sprouted
up on Wall Street, reading the tea
leaves of repurchase agreements and
opaque speeches.
“Prompt release of the directive
would deprive the Fed watchers of
their employment but would improve
the operation of the money market by
ensuring that prompt information
was available to all participants alike,”
Friedman wrote in a Wall Street Journal
commentary with his longtime collaborator Anna Schwartz. “It would
also increase the effectiveness of
the Fed’s operations, since better
informed market participants would
have an incentive to speed the attainment of the Fed’s objectives.”


The Debate
So it was that FOMC members began
to air their positions on the merits of
opening up. Members were steadfast
that they wouldn’t bow to political
pressure. At the same time, they were
willing to reconsider FOMC communications with a view toward making it
easier for the public to understand
their stance on monetary policy.
The Feb. 3-4, 1994, FOMC meeting
was widely anticipated as a possible
landmark occasion, both inside and
outside the Fed. Besides the communication issue, it had been five years
since the last rate increase, and two
since any move whatsoever. A few days
earlier, Greenspan had strongly indicated in Congressional testimony that
a rate hike was afoot. Transcripts from
the meeting reveal a lively discussion,
one in which members fretted about
preserving flexibility while living up
to their responsibilities.
Greenspan’s views on an announcement were already known to members.
He favored an immediate public
statement but wanted to make clear
that it would not set precedent.
He opened the monetary policy section
of the Feb. 3 gathering by outlining
his case. “I am particularly concerned
that if we choose to move tomorrow
[meaning, tighten monetary policy],
we make certain that there is no ambiguity about our move,” Greenspan
said. “I would very much like to have
the permission of the committee to
announce that we’re doing it and to
state that the announcement is an
extraordinary event.”
In addition, Greenspan argued,
nothing was forcing the Fed to make
this announcement a regular occurrence. “The issue of whether
something is precedential or not is
under our control. We don’t have to
announce our policy moves; there’s
nothing forcing us to do so, and I
cannot believe that there will be
legislation requiring that.”
Richard Syron, Boston Fed
president, also favored a public
announcement on the upcoming policy
move. But he wondered if the reaction
would serve as a guide on whether to

make future such announcements.
“My own forecast would be that this
would pull the teeth in a longer-term
sense, which we are not resolving now,
on a lot of these issues about disclosure. I know these issues wouldn’t all
go away.”
If there was a precedent to be
set, Greenspan said, it was that
announcements would be expected
when the Fed had acted after a long
time of leaving policy unchanged.
“What I’m saying is that the first
time we move the funds rate after this
extended period, we are hitting a
‘gong.’ ”
San Francisco Fed President Robert
Parry was the first to speak in favor of
a commitment to continued announcements: “We ought to have a discussion
as quickly as is feasible about the
desirability of similar statements in
the future because I think some of us
believe there is some advantage to
doing it on a continued basis.”
Robert Forrestal, president of
the Atlanta Fed, disagreed about the
need for regular announcements.
His suggestion was to make explicit in
the announcement that further
announcements would not necessarily
be forthcoming. “I have a real concern
that there’s a risk that we’re going
to be pushed by pressures — not necessarily legislation but other pressures —
to make this an ongoing operating
procedure. If that’s the case, I think
we would lose some flexibility,”
Forrestal said. “If we can draft a
statement that clearly indicates this is
not a precedent but a one-time
event because of the peculiar circumstances, then I would support your
[Greenspan’s] recommendation.”
Worries about setting precedent
aside, some members noted that
an announcement carried certain
advantages, the main one being that
the FOMC could better control the
message of the day. Jerry Jordan,
Cleveland Fed president, said that
without an announcement, the press
and public might wrongly conclude
that the committee was trying to curb
growth, when in fact such price
stabilization efforts were also pro-growth. “The rationale for it [the
tightening] as a growth-sustaining
move is extremely important. Only by
putting out a statement can we get
that message out there, or at least
make an effort to say that this is not an
antigrowth move but one that is
designed to enhance the longevity of
this expansion.”
Jordan’s comment gets to the heart
of what was really going on in the
boardroom that day. Many FOMC
members were interested in transparency
because they believed it would make
monetary policy more effective.
By announcing why they acted, members
could influence the public’s expectations
about the future course of inflation —
and the Fed’s ability to deal with it.
That may not sound like such a
radical idea in 2007. But for the Fed in
1994, paying attention to public
expectations was still relatively new.
It was former Chairman Paul Volcker
who first implemented the practice.
Starting in 1979, the Fed began a
famous fight against inflation, slowing
the growth of the money supply so as
to bat down rising prices. It took five
years of mostly tight monetary policy
for the public to finally believe that
the Fed was serious and committed
about fighting inflation.
By 1994, open communication was
seen as a tool to further manage
expectations about the future path
of interest rates, and by extension
enhance the Fed’s hard-won credibility.
As economist Woodford put it in a
2005 paper: “Better information on
the part of market participants
about central-bank actions and intentions should increase the degree to
which central-bank policy decisions
can actually affect these expectations,
and so increase the effectiveness
of monetary stabilization policy.”
That’s why it was so important to
Jordan that any statement include an
explanation of the Fed’s rationale for
raising rates.

Around the Table
The debate was not entirely linear, bopping back and forth between the issues
as members were polled. Thomas Hoenig, Kansas City Fed president,
revisited the “precedent” problem,
arguing that there was no way around
one being set. Without saying whether
he favored an announcement in the
first place, he argued that doing so
would essentially back the FOMC into
a corner: “I have a hard time understanding how this would not be
precedential. ... I think it will be difficult from a credibility point of view to
argue against announcing in the future
should we want to make that argument.”
Greenspan responded: “We’re saying
there are different types of changes
[requiring statements]. For example,
in 1979 there was a major change.
Chairman Volcker and his staff went
out and had a big press conference.
There are certain individual events
where periodically the Federal
Reserve has made special statements;
I’m merely stipulating that this is one
of them. Frankly, with the exception of
the stock market crash in October
1987, it’s the first one since I’ve been
here.” If the committee four weeks
later raised rates again, “I don’t see any
reason why a statement would be
appropriate at that later time,”
Greenspan said. In the end, it was a
question of whether the FOMC could
control the issue and, in Greenspan’s
view, it could.
Thomas Melzer, president of
the St. Louis Fed, envisioned some
potentially embarrassing media
coverage with an announcement
that went out of its way to say it was a
one-time thing. “I think there is a
risk of a headline along the lines of
‘In an unprecedented move, the Fed
announced ... saying it wasn’t setting
a precedent,’ ” Melzer said.
And then came an animated back and
forth between Melzer and Greenspan.
Melzer: “Are we obligated to say
anything about the vote, for
example? I’m not sure. Again, I’d
prefer just to say what the action
was. It’s a decision of the committee, but if we get into
disclosing the vote, that begins
to set other types of precedents
that could be relevant when we

get to the point of deciding this
issue on a permanent basis.”
Greenspan: “Look, the main
issue here is that, as far as I’m
concerned, I would like us
to stand up and be counted.
We are the central bank and we
are making a major move.”
Melzer: “Right, I agree.”
Greenspan: “And to do it in
an ambiguous manner I think is
unbecoming of this institution.”
How to get around the precedent
problem? Greenspan suggested that a
partial solution was to have the
announcement made by him, not
by the committee, a proposal that
prompted a few jokes. “Now, if
we decide to do it on a permanent
basis, then it’s a committee issue,”
Greenspan said. “But marginally it’s of
a less precedential nature if I do it.”
Edward Boehne, president of the
Philadelphia Fed, responded: “If it
doesn’t work, the committee could fire
the chairman!”
Parry chimed in: “That’s right.”
“Well, maybe we ought to bring
that issue up before the vote!”
Greenspan said amid laughter.

A Consensus Builds
A few participants spoke up in
favor of the statement, particularly if
it came with some sort of “one-off ”
language. Joan Lovett, manager for
domestic operations with the New
York Fed, put it this way: “I think that
it can’t be harmful ... It tells everybody
what’s happening and it leaves no
room for ambiguity, and if it’s phrased
the way you are suggesting, it’s not
setting a stage for people to have
expectations of an announcement
every time there is a policy change
going forward.”
Gary Stern, president of the
Minneapolis Fed, made the case that
an announcement would level the
playing field in terms of market
participants understanding the Fed’s
message: “I happen to agree with those


who think this will turn out to be
precedential and from my perspective
that’s fine because I think we’ve been
in an awkward situation where we have
kind of acknowledged that people
in the markets get the news and
the signal immediately, but for
those who are not close to the markets
the news kind of dribbles out depending on how quickly they read the
financial press or consult other sources
of information.”
Richmond Fed President Al
Broaddus was among those arguing in
favor of a release: “There are risks of not
doing this [making a statement]. If there
were any confusion tomorrow going
into the weekend or this thing gets
played out in the New York Times
on Saturday and Sunday or on
CNN, I think we would have a real
mess.” And Dallas Fed President
Robert McTeer went so far as to say,
“I personally wouldn’t mind seeing it
become a precedent.”
The afternoon was turning dark and
it was time to wrap up. They would
gather again the next day at 9 a.m.
Adjourning the meeting, Greenspan
warned against leaks of the day’s discussion, alluding to the embarrassing
release of notes from a fall 1993 conference call to Rep. Gonzalez. “I just
beseech you to be as careful as you
possibly can and not even tell your
doorman where you’ve been!”
Nobody leaked, and the next day the
Fed released an announcement just as
planned to an unsuspecting public. It
was, as the Associated Press described,
a bolt from the blue: “In a rare display of
openness, the central bank issued a
three-paragraph statement Friday stating it had begun to clamp down on
credit ... The disclosure took the guesswork out of Fed-watching and caught
the financial markets off guard. For analysts accustomed to appraising
subtle shifts in money market interest
rates for clues to the Fed’s thinking, the
news release was a bombshell.”

More Changes
Despite the initial surprise, market
watchers quickly adjusted to the Fed’s
new openness. More moves toward
transparency followed. In 1995, FOMC
members agreed to release all future
transcripts of their meetings with a
five-year lag. On July 6, 1995, the
FOMC for the first time mentioned
the actual federal funds rate, saying
the policy action reflected “a decline
of 25 basis points.” On Jan. 31, 1996,
came the first mention of the actual
federal funds target rate.
Such announcements were forthcoming every time the FOMC
initiated a policy action. In 1999,
announcements became standard
practice, whether the target rate was
changed or not. From that year on,
there have been immediate announcements following each FOMC meeting.
The language contained in these
regular announcements has also
grown more precise. After meetings
in which there were shifts in FOMC
views about the future, announcements included a “balance of risks”
assessment — whether the risks
were greater with regard to
inflation or to growth.
sentence about the future, such as
whether present policy actions were
likely to be continued. The May 2004
FOMC announcement, for example,
explained that “the committee
believes that policy accommodation
can be removed at a pace that is likely
to be measured.”
More than a decade after the pivotal
FOMC meeting, there is no looking

back. “On the whole it’s been a
successful experiment,” says Columbia
economist Woodford. “People in the
institution have come to understand
that there are advantages to the institution of being clearer about what the
policy targets are and what the Fed is
trying to achieve in the markets.”
Beyond policy announcements,
FOMC minutes now are usually
released three weeks after a meeting.
In addition, each open market operation is followed with a detailed report
of the transaction, including its
amount, the sizes of propositions, and
the stop-out rates and ranges. All of it
has added up to a considerably more
transparent Fed.
Not that there are no longer any
surprises. As recently as 2004, yields on
five-year Treasury notes jumped 25 basis points
— the largest swing in more than a
decade — immediately after an FOMC
announcement that had been widely
anticipated. No policy action was
taken that day. What wasn’t anticipated was the FOMC’s move to eliminate
wording in its announcement that had
indicated no rate changes would be
happening “for a considerable period.”
The Fed had essentially signaled
that it was now closer to lifting interest
rates than it had been before, which is
why Treasury yields rose with
the announcement.
Even now, after more than a decade
of moves toward greater transparency,
the Fed remains capable of confusing
the markets. But Woodford says that
FOMC communications are likely to
become even more explicit, not less.
“There’s still a search for even
better and perhaps more flexible ways
to communicate what the outlook
for future policy is,” Woodford says.
“The recent experience is that it can
be useful to talk about that.”
RF

READINGS
Carlson, John B., Ben Craig, Patrick Higgins, and William R.
Melick. “FOMC Communications and the Predictability of
Near-Term Policy Decisions.” Federal Reserve Bank of Cleveland
Economic Commentary, June 2006.

Poole, William. “FOMC Transparency.” Federal Reserve Bank of
St. Louis Review, January/February 2005, vol. 87, no. 1, pp. 1-9.
Woodford, Michael. “Central Bank Communication and Policy
Effectiveness.” NBER Working Paper no. 11898, December 2005.

Danker, Deborah J., and Matthew M. Luecke. “Background on
FOMC Meeting Minutes.” Federal Reserve Bulletin, Spring 2005.



JARGON ALERT
Arbitrage
An economist is walking to lunch with an old friend.
The friend stops, startled, and calls out, “Look at
that hundred dollar bill on the sidewalk! How about
that?” The economist walks right past it, telling his friend,
“If there had been a hundred dollars there, someone would
have picked it up already.”
This joke gets to the heart of a key economic
principle: Opportunities for risk-free profit in markets
disappear quickly. Such profit is called arbitrage.
More specifically, arbitrage tends to refer to a difference
in pricing of the same commodity or asset in two different
markets. For example, imagine that MP3 players sell for $50
in Thailand and a buyer in California is willing to pay
$100 per player. If shipping costs are $10 per player, a firm
could make $40 per player by buying in Bangkok and
selling in San Diego. This profit opportunity might exist
briefly, but soon other people will catch on, driving up the
prices of MP3 players in Thailand, driving them down in
California, or both.
A more realistic example is “triangular arbitrage” in the
currency market. Imagine you can get a euro for $1.25 from
Broker A, a British pound for 1.5 euros
from Broker B, and a dollar for 50
pence (half of a British pound) from
Broker C. In this case, you could
convert $100 to 80 euros at Broker A,
then convert the euros to 53.33
pounds at Broker B, and finally
convert the pounds to $106.66 for a
profit of $6.66 per cycle. Investment
houses have teams of analysts
constantly on the lookout for these
types of arbitrage cycles.
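The cycle described above is easy to verify with a short script. The following Python sketch uses the hypothetical broker quotes from the example; the constant and function names are ours, for illustration only:

```python
# Hypothetical quotes from the three brokers in the example above.
USD_PER_EUR = 1.25   # Broker A: one euro costs $1.25
EUR_PER_GBP = 1.50   # Broker B: one pound costs 1.5 euros
GBP_PER_USD = 0.50   # Broker C: one dollar costs 50 pence

def triangular_arbitrage_profit(start_usd: float) -> float:
    """Convert dollars -> euros -> pounds -> dollars and return the profit."""
    eur = start_usd / USD_PER_EUR   # $100 buys 80 euros
    gbp = eur / EUR_PER_GBP         # 80 euros buy 53.33 pounds
    usd = gbp / GBP_PER_USD         # 53.33 pounds buy back $106.67
    return usd - start_usd

print(f"${triangular_arbitrage_profit(100):.2f}")  # prints $6.67
```

The exact profit is $6.66⅔ per $100 cycle (the article truncates it to $6.66); in practice the opportunity vanishes as the quotes adjust.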
Many economic ideas are derived
from the fact that arbitrage opportunities do not last.
The concept of “covered interest rate parity” states that a
currency future, or a contract to buy or sell a fixed amount of
currency at some date in the future, can be priced solely by
knowing the risk-free interest rate in both currencies and the
current exchange rate. An example of a nearly risk-free U.S.
interest rate is the rate on a short-term Treasury security, where
default is almost unthinkable.
Imagine that the current exchange rate is $1.25 per euro,
that the annual euro risk-free interest rate is 12 percent, and
that the annual dollar risk-free interest rate is 5 percent.
In this case, a euro-dollar futures contract expiring in 12
months would be $1.172 per euro. Why? Imagine that the
futures contract was $1.20 per euro. A firm could borrow
$100 at 5 percent interest, meaning the firm will owe the

Region Focus • Spring 2007

bank $105 in one year. The firm would then convert $100 to
80 euros at the current exchange rate and invest the euros in
a bond paying 12 percent. In one year, the firm would have
89.6 euros, which it could convert back to dollars at $1.20
per euro, giving it $107.52. After paying the bank $105,
the firm is left with $2.52 in profit. This profit is risk-free
because every component — the interest rates, the current
exchange rate, and the futures rate — was locked in from
the beginning. An equivalent example can be constructed for
futures rates lower than $1.172 per euro, where the investor
would borrow euros and invest in American bonds.
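The fair futures price in this example follows from the standard parity formula F = S × (1 + r_dollar)/(1 + r_euro), where S is the spot rate. A short Python sketch with the article's numbers (the variable names are illustrative, not the paper's notation) reproduces both the $1.172 fair price and the $2.52 risk-free profit available at a $1.20 contract:

```python
SPOT = 1.25                 # current exchange rate, dollars per euro
R_USD, R_EUR = 0.05, 0.12   # one-year risk-free rates in each currency

# Covered interest rate parity: the futures price that rules out arbitrage.
fair_forward = SPOT * (1 + R_USD) / (1 + R_EUR)
print(f"fair one-year futures price: ${fair_forward:.3f} per euro")  # $1.172

def arbitrage_profit(forward: float, borrowed_usd: float = 100.0) -> float:
    """Borrow dollars, invest in euro bonds, convert back at the forward rate."""
    owed = borrowed_usd * (1 + R_USD)            # repay the dollar loan
    euros = (borrowed_usd / SPOT) * (1 + R_EUR)  # euro bond proceeds in a year
    return euros * forward - owed

print(f"profit at a $1.20 contract: ${arbitrage_profit(1.20):.2f}")  # $2.52
```

At the parity price itself the strategy nets exactly zero, which is what "no arbitrage" means.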
The amount of money chasing these arbitrage opportunities is immense. The Bank for International Settlements
estimates that more than $1 trillion in foreign exchange swaps
and futures are traded every day, and foreign exchange is only
one of a vast number of markets with arbitrage possibilities.
Problems can arise, however, when firms chase
price differentials where risks are involved. Some of the
biggest investment houses and hedge funds in the world
have been bankrupted by tantalizing “almost risk-free”
profits. One of the most notorious failures in recent
years is that of Long Term Capital
Management (LTCM).
LTCM was a hedge fund run by a team of top investors,
including two who won Nobel Prizes in economics for their
work on pricing assets. The fund made immense profits in the
mid-1990s through a complex bond price arbitrage. In the summer of
1998, however, Russia defaulted on a
number of its bonds, causing
investors to shift their holdings of
bonds in Europe and Japan into U.S.
Treasury bonds, which were considered the world’s safest.
Though world bond prices eventually returned to values
more in line with economic fundamentals, this flight away
from European and Japanese bonds inflicted losses severe
enough that LTCM required a $3.5 billion bailout, and the
fund was closed for good by early
2000. LTCM's bond purchases were not really arbitrage at
all, since there was unhedged risk that allowed a small chance
for catastrophic losses.
The moral? True arbitrage opportunities are a rarity in the
real world. Many of them would be better described as
entrepreneurial opportunities that may prove profitable but
also carry with them real risk. So the next time someone
presents you with a “can’t-lose” scheme that seems too good
to be true, act like an economist and keep on walking past
that illusory profit.
RF

ILLUSTRATION: TIMOTHY COOK

BY KEVIN BRYAN

RF SPRING 07_1-51_Rev7.9

7/12/07

3:05 PM

Page 7

RESEARCH SPOTLIGHT
Global Warming and American Agriculture
BY BETTY JOYCE NASH AND AARON STEELMAN

Scientists seem to have reached a consensus that global warming is a reality. For instance, the latest report from the Intergovernmental Panel on Climate Change, issued in February, stated: “Warming of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global average sea level.” But how climate change will affect the economy remains a matter of debate.

In a new paper, economists Olivier Deschênes of the University of California at Santa Barbara and Michael Greenstone of the Massachusetts Institute of Technology attempt to measure how the U.S. agricultural sector will fare. Their conclusion: Not as bad as you might expect. “If anything, climate change appears to be slightly beneficial for profits and yields,” they write.

“The Economic Impacts of Climate Change: Evidence from Agricultural Output and Random Fluctuations in Weather” by Olivier Deschênes and Michael Greenstone. American Economic Review, March 2007, vol. 97, no. 1, pp. 354-385.

Previous research has typically employed methods that are likely to produce inaccurate estimates of the economic effect of climate change. The most common method, the hedonic approach, is unable to capture important characteristics, such as soil quality and the option value to convert land to a new purpose, that play a key role in determining agricultural output and land values. Meanwhile, the production function approach does not account for adaptive behavior by farmers in response to climate change. For instance, as temperatures rise, farmers may change their mix of crops or use different fertilizers.

Deschênes and Greenstone propose a new strategy: “Estimate the impacts of temperature and precipitation on agricultural profits and then multiply them by the predicted change in climate to infer the economic impact of climate change in this sector.”

The authors use county-level data from the quinquennial Census of Agriculture from 1987 through 2002. The census data are a measure of the revenue produced with the land and do not include income from federal farm programs or earnings from off the farm. The data are used to estimate the effect of weather on agricultural profits and yields for a given geographic area while accounting for both average weather conditions and unexpected shocks.

The authors also use two standard sets of predictions about climate change. The first doubles concentrations of greenhouse gases by the end of the 21st century, while the second assumes a 250 percent increase. The authors incorporate medium- and long-term climate predictions and temperature and precipitation averages across 2020-2049 and 2070-2099.

Long-run climate change predictions from these agriculture census data and weather models indicate that climate change will increase annual agricultural sector profits by 4 percent, or $1.3 billion (in 2002 dollars). “Additionally, the analysis indicates that the predicted increases in temperature and precipitation will have virtually no effect on yields among the most important crops,” the authors write. This suggests that the effects on profits aren’t because of short-run price increases.

Although the results indicate that the overall effect on U.S. agriculture is likely to be positive, some areas will be hurt by climate change. California, in particular, will be adversely affected. In the Fifth District, North Carolina is also expected to take a big hit.

Climate change will not affect the United States alone. As the Earth’s temperature rises, agricultural production around the globe will be altered. That could cause changes in relative prices, thus affecting markets both internationally and domestically. The authors are unable to account for that possibility.

Similarly, their model does not deal with the potential for catastrophic weather events, which some climatologists argue will result from global warming. If severe droughts or floods occur, their estimates could be significantly off the mark.

Finally, if climate change were to produce significant changes in the agricultural sector, it is not unreasonable to believe that the complex system of federal farm subsidies would also change. This would alter farmers’ incentives and the nation’s agricultural production.

Despite the paper’s limitations, the authors have taken an important and often contentious topic and provided a sober analysis. But more remains to be done, as the authors note. If global warming is, in fact, upon us, the job of economists is to help us understand what it will mean to human welfare. Agriculture is just one piece of the puzzle. The likely impact of climate change on human health, particularly mortality rates in developing countries, is an especially important issue — and one where economists, perhaps in collaboration with their colleagues from the physical and natural sciences, could make an important contribution.
RF



POLICY UPDATE
Interest Rate on Loans to Soldiers Capped
BY VANESSA SUMO

The Tidewater region of Virginia is home not only to various Navy facilities but also to many payday lenders. The Department of Defense thinks that this is not a coincidence. When Congress asked the Pentagon to report on abusive lending practices aimed at military servicemen, it concluded that “predatory” lenders target soldiers and their families through their “ubiquitous presence around military installations.”

Payday lenders typically make small loans of a few hundred dollars due on the borrower’s next payday. In exchange for immediate cash, borrowers write post-dated checks for the amount of the loan plus a fee. In a typical transaction, a borrower pays a $15 finance charge on a loan of $100, to be repaid in two weeks. That works out to an annual interest rate of 390 percent. The cost of a payday loan can quickly balloon if this credit is routinely rolled over. For instance, a borrower would eventually pay back $490 for a mere $100 loan that is renewed or “flipped” every two weeks for an entire year (assuming that a $15 fee is charged each time).

But apart from the financial well-being of its troops, the Department of Defense is also worried about debt troubles that cause soldiers to lose their security clearances, which would prevent highly trained troops from being assigned to posts where they are needed the most.

In response to these concerns, Congress recently passed a 36 percent annual interest rate cap on consumer credit extended to military servicemen and their dependents. The rate cap includes all fees and charges associated with the loan. The Department of Defense must draft implementing rules by October 2007. But a version of the proposed rules released in April has narrowed the definition of consumer credit to include only payday, vehicle title, and tax refund anticipation loans, products which were the focus of the Pentagon report. (Payday lending already has been effectively outlawed in a number of states, including North Carolina.)

The Community Financial Services Association, the national trade group for payday lenders, says that its members will stop offering loans to military personnel under this new law. The maximum fee that payday outfits will be allowed to charge will not be sufficient to cover the costs of extending short-term credit. “Payday lenders can’t offer a loan at 36 percent,” says the association’s spokeswoman.

Some analysts worry that the rate cap will actually end up hurting soldiers and their families. If the rate cap drives lenders out of the market, then this group will lose a sometimes important source of credit. “The likely impact of such a rule would be to make military personnel with short-term credit needs significantly worse off,” said William Brown, an accounting and finance professor at the University of North Carolina at Greensboro, who testified before a Senate Committee last year.

Certain characteristics of the military and its lifestyle may limit a soldier’s ability to handle short-term credit crunches. Many enlisted personnel are young, usually in their early 20s, and thus tend to have little precautionary savings. Soldiers may be deployed abroad for a long period, which would make it difficult to deal with pressing financial demands at home. A report by the RAND Corporation, a nonprofit think tank, finds that a majority of military spouses believe that their frequent and disruptive moves have adversely affected their employment prospects.

In these instances, soldiers or their dependents may prefer a payday loan to bouncing a check, paying late fees on a utility bill or credit card, or going to a pawnshop, options which could turn out to be more costly. Moreover, a survey conducted by Brown with Charles Cushman, a political management professor at George Washington University, finds that military servicemen choose payday loans because of the simplicity and speed of the application process.

Banks don’t compete in this market because they perceive such products as “too high risk to offer profitably except at extremely high interest rates, thus inviting criticism from media, public policy officials, and consumer advocates,” wrote Sheila Bair, at the time a finance professor at the University of Massachusetts and now chairwoman of the Federal Deposit Insurance Corp. (FDIC), in a June 2005 report. Moreover, banks and credit unions may be wary of creating a similar line for fear of cannibalizing their profits from overdraft protection fees, according to Bair’s study.

Despite the hesitation by some banks and credit unions, Bair thinks that they have the tools and infrastructure required to offer relatively low-cost alternatives to payday loans. A conference hosted by the FDIC late last year to discuss “affordable, responsible loans for the military” demonstrated some of the efforts in the industry to develop such products. Additionally, the FDIC plans to give banks Community Reinvestment Act credit for making small-dollar, short-term loans to military members as an alternative to payday loans.
RF
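The loan arithmetic cited in the story (a $15 fee per $100 two-week loan, and a full year of rollovers) can be reproduced in a few lines of Python. This is a simple-interest sketch, which is how the 390 percent annual figure is conventionally computed.

```python
PRINCIPAL = 100.0           # typical payday loan amount
FEE = 15.0                  # finance charge per two-week term
PERIODS_PER_YEAR = 52 // 2  # 26 two-week terms in a year

# Annualized simple-interest rate implied by the fee
apr = (FEE / PRINCIPAL) * PERIODS_PER_YEAR
print(f"annual interest rate: {apr:.0%}")  # 390%

# Total repaid if the loan is "flipped" every two weeks for a full year
total_repaid = PRINCIPAL + FEE * PERIODS_PER_YEAR
print(f"repaid after a year of rollovers: ${total_repaid:.0f}")  # $490
```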



AROUND THE FED
Taxing Questions
BY DOUG CAMPBELL

“Implications of Some Alternatives to Capital Income Taxation.” Kartik B. Athreya and Andrea L. Waddle, Federal Reserve Bank of Richmond Economic Quarterly, Winter 2007, vol. 93, no. 1, pp. 31-55.

Households face many forms of taxation. There are taxes on property, capital income, labor income, and consumption. Economic theory suggests that capital income taxation is probably the worst of the lot. A world with taxes on earnings from investments is a world where people have to set aside more today to receive a given amount of resources tomorrow. Young households interested in building retirement nest eggs, for example, must save enough to overcome the repeated taxation of their investment proceeds, a cost that grows as the household’s planning horizon lengthens.

Economists have looked for ways to shift the tax base away from capital income, thinking almost anything is likely to be better for general consumer welfare. But in a new paper, researchers at the Richmond Fed sound some cautionary notes about a wholesale switch of the tax burden.

Kartik Athreya and Andrea Waddle build a model that tests some intuitive notions about placing taxes exclusively on either labor income (the income taxes filed April 15 each year), consumption spending (usually sales taxes), or some combination of both. The authors examine a world in which households face real risks; people may be laid off or get sick, and unable to work for some time. Lacking comprehensive insurance, the only way to protect against these risks is to accumulate wealth. In such a world, different tax regimes have different risk-sharing repercussions. The authors search for taxation arrangements that raise a required level of revenue but yield the least possible pain for households.

Athreya and Waddle’s most important finding is that there are systematically different effects for welfare across wealth levels. In a world with low risks, for example, wealthy people would welcome a move away from capital income taxation and toward either purely consumption or labor income taxation. In a high-risk world, poor households dislike a pure labor tax, much more so than their wealthy counterparts who don’t rely on jobs for the majority of their income.

There are no across-the-board conclusions, however. “You can’t draw stark conclusions on which regime is best,” Athreya says in an interview. “The usefulness of this paper is to illustrate that even in a relatively simple environment, uninsurable risk has to be taken seriously in evaluating any tax system.”

“The Young, the Old, and the Restless: Demographics and Business Cycle Volatility.” Nir Jaimovich and Henry E. Siu. Federal Reserve Bank of Minneapolis Research Department Staff Report 387, March 2007.

Economists often have attributed the economic stability the United States has experienced since the mid-1980s to three forces: structural change, effective monetary policy, and luck. In a new paper published by the Minneapolis Fed, economists Nir Jaimovich and Henry Siu add a fourth factor — demographics — which, they argue, explains as much as one-third of the reduced volatility experienced during the so-called “Great Moderation.”

The authors note that young workers experience much more volatility in their employment status than the middle-aged, while near-retirees experience something in between. “When an economy is characterized by a large share of young workers, all else equal, these should be periods of greater cyclical volatility,” they write. The demographic profile of U.S. workers since the mid-1980s has tilted away from the “volatile age group,” thus contributing to economic stability.

“Identifying Asymmetry in the Language of the Beige Book: A Mixed Data Sampling Approach.” Michelle T. Armesto et al. Federal Reserve Bank of St. Louis Working Paper No. 2007-010A, March 2007.

Evidence on whether the Federal Reserve’s Beige Book — the anecdotal summary of regional economic conditions published eight times a year — accurately reflects actual economic activity has been mixed. In a new paper published by the St. Louis Fed, a foursome of economists build a model that “not only confirms the predictive power of the Beige Book, but also provides a sense of the asymmetry underlying the language of the Beige Book.”

The “asymmetry” the authors refer to involves the different sorts of information conveyed by optimistic or pessimistic language. They used linguistics software to assess the degree of optimism and pessimism in each Beige Book edition. At the national level, they find that optimistic language sends signals about high frequency fluctuations in economic output while pessimistic language helps to tell us where the economy is in the underlying business cycle. At the regional level, the linguistic style of individual Reserve banks is important. For some regions, pessimistic language is the “key component relating the Beige Book to district employment. In other regions, optimism — or both characteristics — reflects the state of the economy.”
RF



SHORT TAKES
FISCAL SWEETENERS

Did North Carolina Lure Dell Too Much?

In 2004, North Carolina and Virginia were hoping to
attract Dell to construct a computer assembly plant
within their borders. But there was a stark difference in the
incentive packages offered by both states: North Carolina
presented Dell with more than $270 million in tax breaks
and other incentives while Virginia put $37 million on the
table. North Carolina eventually won the deal and the plant
is now operating in Winston-Salem. But looking back at
the large discrepancy in the offers has prompted the
Corporation for Enterprise Development and the North
Carolina Budget and Tax Center, both nonprofit organizations, to study whether the state is getting its money’s worth.
The report says that much of the difference between the
two packages was the result of the set of assumptions and the
models used to measure the economic impact of the Dell
plant. For instance, the state of North Carolina’s model relies
heavily on a projected sales figure, $2.3 billion annually, to calculate the factory’s impact on Gross State Product (GSP) and
state revenues. The authors of the report think that this sales
estimate is too high. It implies that each Dell worker
would add $175,000 to the GSP, which is more than twice
what the average job in North Carolina contributed
in 2004.
Moreover, they feel that a model which is mostly driven
by a sales figure might not be appropriate for a multistate
firm. While a portion of the plant’s revenues that goes
toward wages and salaries will likely stay in the state, the
profits generated by the factory will probably go back to
the head company or be paid out to shareholders who don’t
necessarily live in North Carolina.
To arrive at what they feel are better estimates of the
plant’s impact, the authors build various scenarios that
adjust some of the assumptions, tweak some of the features
of North Carolina’s model, and use an alternative one developed by the Iowa Department of Economic Development.
They find that the estimated values from this exercise are
nowhere near the $24.5 billion addition to the GSP and the
$707 million net change in state revenues projected by the
Commerce Department over the 20-year life of the project.
The report’s highest estimate shows a mere $8 billion
addition to GSP and a fall in state revenues of $72 million.
But the authors say that the most obvious omission in the
state’s economic impact model is the failure to take into
account whether firms would have chosen North Carolina
even without an incentive package. Such a consideration
would call for some downward adjustment in the state’s
offer, although it may not be easy to find this critical point.
Even if the report casts some doubt on the power
of incentives, it does not altogether discourage the use


of subsidies in attracting businesses. It asks policymakers to
reconsider the methods and assumptions they use. “What is
needed, instead, is for the state of North Carolina to be a
savvy investor — for its subsidies to match and ideally surpass
its competitors not in largess, but in acumen,” says
the report. But the Commerce Department stands by its
methods. “We think that the model and the numbers we
used are accurate,” says Deborah Barnes, a spokeswoman for
the department.
Matt Martin, a regional economist at the Richmond Fed,
says that state officials can make a key mistake when they
fail to compare the results of an economic impact model to
the next best alternative use of public funds. “We want to
make a comparison of what the world looks like with
and without this [project], but not compared with nothing,”
he says.
Martin thinks that another way of gauging whether a
state is offering too much is to compare the average salary
that these jobs will fetch to the cost per job of the incentives
offered by the state. If that cost is a substantial fraction of
what a worker stands to receive, then the state may not be
getting its money’s worth.
— VANESSA SUMO
NO MORE GAS GUZZLERS IN VIRGINIA AND WASHINGTON, D.C.?

Hotbeds of Hybrid Sales

Everyone’s talking about hybrid cars, and sales of these
electric/gasoline-powered vehicles have increased
every year since they were introduced in 2000. Virginia has
ranked among the top-five states for hybrid vehicle registrations since 2003, while Washington, D.C., had the
fourth-largest number of registrations among metro regions
in 2006. Part of that demand may have been due to a
perk that hybrid car owners had until last July — they could
use high-occupancy vehicle lanes to avoid congestion on
Interstates 95 and 395 in Northern Virginia. Also, hybrid
owners qualify for a federal tax credit of up to $3,150,
depending on the make and model of the vehicle and when
it was purchased.
Could the nationwide popularity of hybrids have
something to do with record-high prices at the gas pump?
That may seem like a no-brainer. In fact, the relationship
between gasoline prices and vehicle preferences isn’t
that simple. Although gasoline prices rose through August
2006, hybrid growth slowed. Registrations of hybrid vehicles increased 28 percent to 254,545 in 2006, compared with
140 percent year-to-year growth in 2005 and 91 percent
growth in 2004, according to data from automotive industry
consultant R.L. Polk & Co.
Automakers claim that Americans aren’t willing to pay a
large price premium for better fuel economy. Michael Allen,
director of public affairs for the Virginia Automobile Dealers


Association, says his members have made a similar observation. “Most people are looking for something that is better
than what they’ve got,” Allen notes. But new models of
traditional cars have been introduced that fare better
against hybrid vehicles.
“A lot of people come to the lot looking at hybrids, [but]
when they see the fuel economy of other vehicles [that sell] at
a lower cost, they are buying those vehicles instead of
the hybrids,” Allen explains. Even SUV lovers can find new
models with improved mileage, particularly “crossover” vehicles that are built on a car chassis and use a car powertrain.
Also, the demand for gasoline tends to be inelastic in the
short run; that is, the quantity consumed doesn’t change
much when prices change. However, it becomes more elastic
in the long run as elevated prices prompt consumers to
rethink driving habits. Demand for fuel-efficient cars like
hybrids would be expected to follow a similar pattern.
There is evidence of this trend based on the preliminary
results of a study by economist Sarah West at Macalester
College. West found that if gasoline prices double, sales of
minivans, trucks, and SUVs fall. However, the differences
weren’t statistically significant until she used lagged prices.
When someone thinks about buying a car, current fuel costs
aren’t the only consideration. Changes in prices over time
have more influence.
There are other factors that influence buying decisions.
George Hoffer, an economist at Virginia Commonwealth
University who has studied the automobile industry, says
zero-percent financing and other incentives can boost the
sales of gas guzzlers even when fuel prices are rising and
demand for better mileage increases. SUVs and other light
trucks have wider profit margins than other automobiles,
giving automakers more room to reduce prices while still
making a hefty return.
Car buying decisions have always involved a combination
of personal preference and practicality. While hybrids are
attracting more interest for their road performance and
styling as well as their fuel economy, they still can’t compete
on price. Only the Toyota Prius is competitive, Allen says,
but that’s because the company eats the added production
costs. General Motors and Ford aren’t in the financial
position to do the same thing.
— CHARLES GERENA
HURRY UP AND WAIT

Competition at Airports Affects Delays

While airline delays got worse in February 2007, with
33 percent of flights late, performance improved in
the spring. Flights were late 26 percent of the time in March;
24 percent in April. For all of 2006, about 25 percent of the
nation’s flights got in late.
Delays and cancellations frustrate travelers. About 67
percent of flights arrived on time in 2006 at the Columbia,
S.C., Metropolitan Airport, compared to the national
average of 75 percent. Across the Fifth District, several

smaller airports’ punctuality in 2006 was worse than
the national average, according to the U.S. Bureau of
Transportation Statistics (BTS).
That’s bad news for travelers making summer plans, a
time when thunderstorms, for example, can wash out flights
and spur cancellations. Passenger traffic, according to the
BTS, rose by 1.7 percent during the first two months of
2007 over the previous year and continues to grow while
airline staffing has declined.
Late arrivals vary from airport to airport. Although
nonhub airports generally have fewer delays, Richmond’s
Richard Byrd International Airport’s flights were late
32 percent of the time. (A flight is considered delayed if it
arrived at or departed the gate 15 minutes or more after
its scheduled time.) Among larger airports, one of the best
performances was at Baltimore/Washington International
Airport where nearly 80 percent of flights arrived within that
15-minute window.
Reasons for delays are more complicated than they
appear. Nicholas Rupp, an economist at East Carolina
University in Greenville, N.C., has studied airline on-time
performance using a variety of data. People expect smaller
airports to be less congested with fewer delays. While that’s
generally true, he says, when a hub airline services the
airport, it can create more congestion.
Competition influences performance, too, with airports
served by a large number of carriers with equal market
share doing a better job. Airports dominated by a single
carrier, like Charlotte with US Airways, or Atlanta with
Delta, may not perform as well. Those airports tend to have
more frequent and longer delays, Rupp says. He and
co-authors Douglas Owens and Wayne Plumly have found
evidence of lower service quality on less competitive
airline routes.
But airports that service more than one hub airline
also can be congested. Since many flights originate from
hub airports, that can translate into delayed flights at
small airports.
Smaller airports also have higher cancellation rates,
“because they’re less able to handle adverse circumstances
than the big airports. If a bad snowstorm comes through, a
big airport is better equipped to handle it. Or a maintenance
issue, a big airport has better access to backup crews, planes,
parts, and maintenance, whereas small airports don’t,”
says Rupp.
Part of the reason for delays at all airports is simple supply
and demand. More carriers are offering more flights, but
without a ramp-up in runway capacity. “That is something
that we’re going to hear more about,” he says. “They don’t
build many new airports.”
— BETTY JOYCE NASH
CORRECTION: In the Winter 2007 Region Focus, the story
“Options on the Outs” incorrectly explained the meaning of
“out of the money” employee stock options. Such options
have exercise prices above the trading prices, which is why
they are worthless to the holders.



Academic
Alternatives
The theory of school choice sounds great, but it remains controversial.
Now, evidence from programs like the one in Milwaukee is beginning
to move the discussion from the theoretical to the practical
Never in the modern history of public education in
the United States have parents had more options
about where to send their kids to school. Vouchers
and charters, magnet schools, and even publicly financed
home schooling — almost every state, major school district,
and large city has some sort of school-choice program or is
considering one.
Washington, D.C., is home to a federally funded effort
that pays private school tuition for more than 1,800 low-income children. North Carolina has one of the country’s
largest charter school programs, now encompassing 92
schools. South Carolina is looking at a number of plans, from
open public school enrollment to private school vouchers.
Utah recently established the nation’s most-encompassing
voucher initiative. Perhaps most significantly, Milwaukee is
in its 17th year of hosting its pioneering choice program.
Just about everything else has been tried to fix public
education, from busing to smaller class sizes to ramped-up
per-pupil spending and teacher salaries. But until recently,
exposing schools to market forces wasn’t one of them.


The theory of school choice, as popularized by
economist Milton Friedman, looks like a clean solution to
the problem of poor-performing schools and the
underachieving students who attend them. Friedman envisioned a publicly funded system based on vouchers: Parents
are given coupons that can be redeemed for their child’s
admission to a school of their choosing. These vouchers
cover the full cost of tuition, and the money used to pay
for them follows students to their schools. The idea is that
with choice, parents create competition among schools,
whether public or private, for students and the money
that is attached to them. This changes the overall
market structure for education, begetting greater
efficiency and better educational outcomes. As a result, kids
learn more.
All of which sounds great. The problem is that, even
with the increasing number of school-choice programs
nationwide, Friedman’s notion remains mostly theoretical.
Most of these programs in the United States are small;
many are just getting started.

PHOTOGRAPHY: ST. MARCUS LUTHERAN

BY DOUG CAMPBELL

In the absence of obvious evidence,
it is difficult to have a civil conversation about the merits of school choice.
Mention “vouchers” and expect
impassioned opinions to be flung your
way. One side is scorned as market
zealots, the other as union shills. Even
the term “choice” is loaded, having
been appropriated by the movement
in favor of vouchers.
Inevitably, policy debates over
school choice bog down in the politics
of race, religion, and organized labor.
But in technical papers and academic
journals, social scientists are studying
U.S. school-choice programs, as limited
as they are, and engaging in lively
discussions. Do voucher programs
really help students learn more?
Do such programs need more
accountability and government
regulation? Or is just the existence
of “choice” a virtue unto itself? As
their findings move closer to broad
consensus — and in fact, they’re
pretty close — and as large programs like the one in Milwaukee
mature, it’s possible to imagine a
not-so-distant future when public
conversations on school choice are
finally based on evidence instead
of opinion.

The Milwaukee Experiment
School choice can take on many forms.
There are charter schools, public institutions that operate with some
autonomy from their district. Some
would also include magnet schools and
open enrollment among public schools
as being in the spirit of choice, at least
in cases when schools compete for students and funding. Finally, there are
vouchers — the gold standard in
school-choice programs.
To see school choice at its most
robust in the United States, go to
Milwaukee. Here, almost every conceivable manifestation of choice is
available — 55 charter schools, open
enrollment among public schools, and
even accessibility to public schools
outside the city.
But the main innovation has been
the $111 million voucher effort, called
the Milwaukee Parental Choice
Program, which now pays private
school tuition for up to one-quarter
of the district’s student population.
Any student whose family lives at less
than 175 percent of the poverty line
qualifies. It doesn’t matter if they
already attend private school; any
low-income student living in the
district is eligible.
Some of the “choice” schools — as
Milwaukee private schools that accept
vouchers are known — are doing
laudable things. Notre Dame Middle
School, on the city’s impoverished,
increasingly Hispanic south side, is a
showcase. This school year it has 103
students, all girls, most Hispanic, in
grades five through eight.

This fall the students and administrators will expand into a new $2 million
building, complete with basketball
gym, life science laboratories, and
flat-screen TVs.
The school measures its success
in many ways, but maybe the most
prominent is the percentage of
students who graduate and eventually
go on to college — more than three-quarters do. A first step in this process
is regular attendance, which is why
the principal regularly hops in a
van to knock on doors seeking
truant students.
But the real key to Notre Dame’s
long-term accomplishment is contained in a large whiteboard mounted
in a cramped first-floor office. On
it are the names of every graduate
of the school — from 1996 to present, 152 so far. Beside each name is
the high school the girl attends or
attended, and the scholarship with which
she aims to continue her education. Students graduate from the
middle school, but Notre Dame
keeps up with them.

Ninety of the 103 current students
pay with vouchers, worth $6,501 for
the 2006-2007 school year. That’s
money that otherwise would have
gone to the public school system.
Vouchers cover about half of the actual
cost of $12,000 per pupil to operate
the school, according to Alvaro
García-Vélez, president of the school
and its chief fund-raiser. (His wife,
Mary, is the principal.) The rest is
made up with private donations.
The campus is open from 7:30 a.m.
to 6 p.m. Classes are small, with no
more than 15 students. Religious education is a big part of the curriculum
and overall feel of the school. A picture
of Jesus Christ greets entering students with the words “Ven y Sígueme”
(Come and follow me). Painted on
various walls in large, block lettering
are slogans like “Love,” “Repent,”
“Faith,” and “Wisdom.” Next door is
the church where students worship at
various times throughout the day.

A Protestant Approach

A few years ago, a man named
Henry Tyson learned about Notre
Dame’s whiteboard. Now, a similar
board occupies wall space in Tyson’s
school, St. Marcus Lutheran, where he
is the principal. He explains the
virtues of St. Marcus in a single piece
of paper. It shows a photo of a girl,
“Jade B.,” who enrolled at St. Marcus
as a fourth-grader. In the fifth grade,
she was in the 55th percentile of students taking a national standardized
test; by seventh grade, she was in the
93rd percentile.
The message is clear: St. Marcus
can perform near miracles with
children, plucking them from
failing inner-city public schools and
transforming them into academic
stars. Founded in 1873, the school (with grades prekindergarten through eight) had fewer than 100
students during the 1990s. With
the introduction of vouchers, the
student population surged, more than
doubling in size in 2001 to 220
students. Now it’s up to 300 and
maxed out, Tyson says. About 85 percent of students are using vouchers
for tuition. The proportion of the
student body that is black is about
the same.

Walking into the building is like
stepping into a different world.
Outside is a tough neighborhood, with
boarded-up storefronts and the occasional prostitute or drug dealer. Inside
is security. Students wear blazers and

Another Choice: Charters
Just a couple of decades ago, charter
schools didn’t exist in this country.
Minnesota passed the first charter school
law in 1991, followed quickly by
California. Today, 40 states, the District
of Columbia, and Puerto Rico together
have more than 1 million students who
attend more than 3,500 charter schools,
far outnumbering voucher programs in
this country. In D.C., more than one-quarter of all public school students are
enrolled in charter schools.
Charter schools differ from voucher
programs in several important ways. They
are public, for starters, and can’t be religious in nature. But like private schools,
they are exempt from some state and
local regulations, as well as from contracts with
teachers’ unions. They get their charters,
which typically last for three to five years,
from some governing body — usually the
local school board, but also states, cities,
and schools of higher education. Charter
school students usually take the same
state and federal standardized tests as
their public school counterparts. Funding
depends on how many students enroll,
and often (but not always) the funding
follows students instead of remaining in
the overall school district budget.
Helen Ladd, an economist at Duke
University, has studied one of the nation’s
largest charter programs. North Carolina
law allows up to 100 charter schools and
this school year had 92, with about
27,000 students enrolled. The schools
operate under the auspices of the State
Board of Education.
According to Ladd, the results so
far haven’t been positive. Charter school
students in her studies make smaller
achievement gains than they would have
in traditional public schools. She attributes much of this negative effect to high
rates of student turnover. “In a choice
system, an unintended side effect is
greater mobility of students moving in
and out of schools,” Ladd says. “That’s not
particularly good for either the schools or
the students.” Additionally, given that
charter schools enroll less than 2 percent
of the total North Carolina student population, the opportunity for creating
beneficial competitive effects is limited.
“Choice is something to be valued in
its own right. It’s empowerment,” Ladd
says. “But if we are going to use choice to
empower parents, then I want those
choices to be good choices.”
Harvard University economist
Caroline Hoxby disputes Ladd’s findings
(which are similar to those reached by
other economists studying other charter programs). Hoxby argues that random
assignment models are the only way to
measure achievement differences.
Otherwise, Hoxby argues, the sample of
students attending charter schools is
biased by the likelihood that most of
those attending were low-achieving to
begin with. To get around that measurement problem, Hoxby focuses on
oversubscribed charter schools where lotteries determine admission. The pool of
enrollees is thus likely to be more random
and a better comparison to the regular
school attendees.
“Charter schools are inherently harder
to analyze than school vouchers,” Hoxby
says. She is referring to the difficulty in
finding places where there are enough
charter schools to create competitive
effects and for which there is enough
demand that a researcher can get around
the self-selection problem to draw random samples of students for comparison.
“You have to do more work to make sure
you’re picking up the charter impact and
not some time-related impact.”
— DOUG CAMPBELL


either slacks or skirts. They don different neckties (both boys and girls)
based on their academic achievement
level, with those posting more than
a 3.5 grade-point average earning
coveted red and blue stripes.
Students get a lot of gospel and
no room for misbehavior. Saying
“no” to a grown-up is grounds for
suspension. The day starts at 6:30 a.m.
and ends at 5 p.m. in study
hall. At St. Marcus, the pre-K kids —
4-year-olds — are reading first-grade-level books. Students sign a “covenant”
that they will complete their
homework; if the covenant is broken, they can be
expelled. Teachers are on call 24/7.
“It’s a hard-nosed, high-discipline,
high-expectations, lots of love,
religion-based approach,” Tyson says.
“It’s possible. You’ve just got to expect
it and then have a curriculum
that supports it.”
Entry-level St. Marcus teachers are
paid about the same as their Milwaukee
public school counterparts — about
$32,500 per year. But more experienced St. Marcus teachers trail their
public counterparts. While a 20-year
St. Marcus veteran teacher earns about
$47,000, the average pay for a
Milwaukee public school teacher is
more than $50,000. Tyson says
schools like St. Marcus are able to pay
less because “monetary compensation
is an afterthought for most of our
teachers beyond the need to survive.”
Tyson, a former public school teacher
himself, says he took a 20 percent pay
cut to work at St. Marcus. “So what?
I knew I would be doing what I love
to do.”
For their tax dollars, parents who
might never have hoped to see their
children even graduate from high
school can get a highly disciplined
program, the likes of which hardly
exist in public schools but for which
there is clearly demand in the inner
city. Last fall, 400 parents lined up to
get their kids into a lottery for admission to the school. About 300 were
turned away because of lack of space.
By all accounts, Notre Dame
Middle School and St. Marcus
Lutheran are exceptional. But for
every good school, Milwaukee’s
voucher program has a horror story
counterpart. Perhaps the most
infamous was Alex’s Academics of
Excellence, which at one point
enrolled 175 students (but has since
closed) despite the fact that the
school’s founder was a convicted
rapist and despite allegations of employee
drug use on campus.

Questions
Dueling anecdotes aside, there are a
number of legitimate questions about
the potential impact of opening school
systems to market forces. There are
questions of accountability — who will
make sure schools are teaching
students the fundamentals if they are
not subject to mandatory testing?
Then there is the problem that public
schools — and the blameless students
who attend them — could be made
worse off if they lose significant
funding to private schools. And there
exists the potential that private
schools could “skim” the best students
from public schools, shutting out
children with special needs or others
who would lower the aggregate
achievement level. Finally, why should
public taxpayer funds support private,
sometimes religious, schools?
All of these questions are being
investigated by economists, and will be
discussed below, save for the last.
That one has been addressed by the
Supreme Court in a 2002 ruling on
Cleveland’s publicly funded scholarship
program. The court decided that using
vouchers for religious schools was permissible because such programs allow
parents to choose between religious
and secular schools, meaning there was
no bias either for or against religion.
This question also has an economic
rejoinder: In a market system, distinctions between private and public,
sectarian and nonsectarian, aren’t all
that important. If parents don’t
want their kids to go to religious
schools, they can put them in nonreligious schools, which the market
should produce given sufficient
demand. Religious affiliation is just
another choice.

Page 15

To many economists, the important thing is making the choices
available to everybody. Friedman
argued that the “neighborhood
effects” of education justified government sponsorship. That is, because
society gains from an educated
population, the government ought to
finance a minimum level of schooling.
But government intervention can stop
right there, Friedman said, with no
need for actual administration of
schools. The “externality” he hoped
to capture was an educated populace.
Whether that population was
educated in religious or nonreligious
schools doesn’t matter so long as a
baseline education is acquired.
How these ideas play out in the real
world, however, raises some valid questions about whether they really work.

Milwaukee’s Case for Choice
The traditional system, the “choice”
argument goes, isn’t doing very well at
providing this baseline education,
despite some innovations. Student-to-teacher ratios have shrunk from 25.8 in
1960 to 16 in 2000; the median number of years of teacher experience is up
from 11 to 15 during that time; and
spending per pupil has grown threefold. But none of it has made a dent in
student achievement, which during
the past four decades has been flat,
as measured by the performance
of 17-year-olds on the National
Assessment of Educational Progress.
Meanwhile, school districts have consolidated and grown larger over the
years, putting parents further away
from monolithic decisionmaking. Yes,
there is an abundance of fantastic public schools. But in poorer districts, in
particular, public education is not
meeting expectations.
In the 1980s, the situation in
Milwaukee was dire. Less than half the
students who entered high school in
the district eventually enrolled as
seniors. It took an improbable 1980s
alliance between then-Gov. Tommy
Thompson, a Republican, and Polly
Williams, a Democrat who entered
the state Legislature primarily on the
platform of promoting school choice.

(Williams is no longer active in the
organized choice movement and did
not respond to telephone messages
requesting an interview for this story.)
With support from both the inner
city and the suburbs, in 1990, the city
launched the Milwaukee Parental
Choice Program, providing up to
1,000 low-income students with
vouchers to pay for private school.
This initial group represented about
1 percent of total students in the
district and was limited to those from
low-income families.
Support from Milwaukee’s business
community was crucial to the
program’s growth. The concern was
about the quality of the local work
force: The public schools were
producing an “army of illiterates,” as
one prominent chamber member put
it. The Metropolitan Milwaukee
Association of Commerce sank
$500,000 into a lobbying campaign,
aiming to expand the voucher
program. “[School choice] is not a
panacea, but we are all of the opinion
the program is necessary,” says
Tim Sheehy, chamber president,
describing the organization’s mindset
in the mid-1990s. “We decided we
weren’t going to see this kind of
change and educational opportunity
without a system where parents are
fundamentally customers.”
Bolstered by the lobbying,
the voucher program was expanded in
1995 to allow up to 15 percent of public
school students to use vouchers, and
to use them even at religious schools.
Then in 2006, Gov. Jim Doyle signed a
bill that raised the cap to 22,500
students, or about 25 percent of the
district population.
There is still some room before the
cap is reached. This past school year,
17,410 children used vouchers to
attend one of 121 private schools.
Vouchers pay $6,501, compared with
a per-pupil cost of $12,000 for
the Milwaukee public system.
Of course, vouchers don’t cover the
full cost of education at many of
the choice schools — such as at
St. Marcus and Notre Dame — so
private fund-raising at each school
pays for the rest; parents don’t
have to pay another dime. This is a
requirement of the program: that the
vouchers fulfill all of a student’s
tuition obligations.
New voucher schools have driven
about 40 percent of the overall growth
in the program since 1999. Opening a
voucher school in Milwaukee today
mainly involves meeting some basic
administrative requirements from the
Department of Public Instruction.
Though private, Milwaukee schools
accepting voucher students must still
follow state requirements for providing basic instruction in reading,
language arts, math, social studies, and
science. (Private schools don’t have to
participate in the voucher program if
they don’t want to, and they generally
get to set the number of voucher students they will accept. For this reason,
elite prep schools participating in the
program tend to accept only a handful
of voucher students each year.)
Voucher schools must also provide
evidence of financial stability, and
schools entering the program must go
through an accreditation process.
But the regulations are limited
compared to public schools.

Choice advocates think this relatively hands-off approach is one of the
Milwaukee program’s best features.
Otherwise, they fear, private voucher
schools might be saddled with
regulations that could decrease their
quality. “We have focused on financial
viability as a means of solving
the problems that we encountered
with the program,” says Susan
Mitchell, president of School Choice
Wisconsin, a nonprofit group set up
to advocate the program. “We want to
stay out of academic regulation.”

Early Results
The data from the program’s first
five years were quite limited, given
its small size. It’s fair
to question whether meaningful
conclusions could be drawn from a
program that involved about 1 percent
of the district’s student population.
The first study came in 1995
and was required by the law that established the voucher program. It was led
by University of Wisconsin-Madison
political scientist John Witte, who
found no significant difference in
achievement between public school
and voucher students. But a follow-up

paper led by political scientists Jay
Greene (then at the University of
Houston) and Paul Peterson of
Harvard University aimed to adjust for
self-selection bias, with the idea that
Witte’s results were skewed by the
likelihood that mainly low-achieving
students would be applying for the
program, thus all but ensuring that
their performances would still trail
those of public school students. They
assumed that low-achieving students
would be the main voucher applicants
because satisfied parents wouldn’t
bother pulling their children out of
public school.
Peterson and Greene compared
voucher students to those who had
applied for the program but were
rejected and saw significant test score
gains in reading and math. Finally,
there was Princeton University economist Cecilia Rouse: She found voucher
students posted faster gains in math
scores, but none in reading.
So all in all, the first batch of
studies reported a mixed bag, though
more recently, one study found a
sample of voucher students with twice
the graduation rates of their public
school counterparts.

Schools that rely on voucher students for the majority of their enrollment have sprouted up across
Milwaukee since the inception of the city’s pioneering program in 1990. Two of the most admired are
Notre Dame Middle School (left), an all-girl campus whose famed whiteboard keeps tabs on all of its
graduates; and St. Marcus Lutheran (right), which has seen enrollment triple over the past five years
with its “tough love” approach to education.

PHOTOGRAPHY: DOUG CAMPBELL, ST. MARCUS LUTHERAN

Public School Impact

Understanding the impact of vouchers
requires looking not only at private
schools but also at public ones. The
worry is that public schools will be
hurt if their funding is drained with
an exodus of voucher students to
private schools.
In Milwaukee, no negative effect
on public schools appears to have
occurred. In fact, the upshot may be
positive. The leading research on this
topic has been performed by Harvard
economist Caroline Hoxby, who
studied whether competition between
public and private schools in
Milwaukee improved public school
student achievement and public
school productivity overall.
Milwaukee circa the late 1990s
made an excellent test case for several
reasons, Hoxby says. First, it contained students who before choice
were constrained to attend schools
that “unconstrained” students — ones
wealthy enough to live elsewhere or go
to private school — avoided. These
were the students who were most
likely to be affected by the introduction of new options for schooling.
In addition, Milwaukee residents since
1990 had heard a lot about school
choice, but until 1998 — when the
1995 law raising the cap went into
effect — most couldn’t participate.
The release of vouchers served as a
sort of “shock” to the educational
environment, allowing researchers to
observe a supply response (how public
schools would react).
Hoxby focused on two groups:
Milwaukee public schools that were
likely to face the most competition
from voucher-infused private schools
(by looking at public schools with the
largest populations of voucher-eligible
students); and Milwaukee public
schools that were less likely to face
stiff competition.
What she found was that the
students in the former group of
schools posted test scores that
“improved quite dramatically over
the first three years after school
choice was unleashed.” In other
words, competition from voucher
schools made public schools better, which is consistent with
theory. (Hoxby’s findings have not
gone unchallenged; a 2004 paper by
Princeton University economist
Jesse Rothstein concluded that “a
fair reading of the evidence does
not support claims of a large or
significant effect.”)
“I am encouraged,” Hoxby says
about the results and their indication
that school choice is working as theory
predicts. “The reason is that where I
really expected to see the results, I
have seen the results ... I haven’t
expected to see results everywhere. If
1 percent of kids can leave for a charter
school, I would be surprised that it
would do anything.”
How did these gains that Hoxby
sees actually come about on the
ground? Ken Johnson, who served
as president of the Milwaukee Public
Schools board of directors in
2005 and 2006, points to several
changes, all of which he attributes to
the leverage created by school choice.
After 1999, the district switched
to per-pupil funding, in which
dollars followed students even within
the public school system (which has
open enrollment under Wisconsin
law). Each school also was given
the power to create its own
governance council. These councils
are primarily led by parents,
who have annual authority to
review and sign off on their
schools’ budgets.
Then there was the innovation of
site-based hiring, allowing principals
to bring in teachers they wanted
instead of having to accept applicants
because of seniority. Site-based hiring
ended the “annual dance of the
lemons,” in which teachers who had
quietly been pushed out of one school
demanded to be offered positions at
others, even if the schools didn’t want
them. It was a “climate change” in how
Milwaukee public schools operated,
Johnson says.
Johnson is not popular in the
Milwaukee public school system.
His unpopularity grew when he spoke
last year in radio advertisements
supporting the lift of the cap on
vouchers. He is not running for
re-election this year and vacates his
seat in the spring. “If something
is going on in school choice that
increases school achievement, then
we try to meet and beat that.
There’s nothing bad about that. If we
can compete and close them [the
voucher schools] down, great,”
Johnson says.

By contrast, a federally funded
Washington, D.C., program is unlikely
to have an impact on public schools
because of its limited size, even supporters agree. Although many people
refer to the Washington Scholarship
Fund as a voucher program, it’s strictly
a federal grant program through which
1,800 low-income students (in a
60,000-student district) receive
$7,500 to pay for private school.

Contrary Findings
The Public Policy Forum, a nonpartisan think tank in Milwaukee, has
identified a few problems with the
city’s voucher program. Its analysis of
the impact of school choice is more
ambiguous than the sort usually cited
by the pro-school choice crowd.
For one, the Public Policy
Forum suggests that a chief
beneficiary of the voucher system
has been religious schools. Today,
about 80 percent of voucher
students are enrolled in religious
schools, with the largest denominations being Catholic (37 percent)
and Lutheran (17 percent). For the
most part, these schools were
struggling to attract students
before vouchers provided a
financial windfall, says Anneliese
Dickman, research director at the
Public Policy Forum.
And this windfall may not be
having the beneficial competitive
effect that choice advocates seek.
After all, it’s possible that students
attending religious schools — with
their emphasis on discipline and
faith — would never go to public
schools in the first place. So how does
that create competition?
Consider that in the 2006 school
year, after the 15,000-student cap was
lifted, the voucher program grew
by 2,516 new pupils. But private
school enrollment grew by just
620 students; 60 percent
of new voucher users weren’t new to
private schools. “The availability of
more vouchers didn’t result in a
ton of kids coming into the religious
schools who weren’t there before,”
Dickman says.


Milwaukee’s Experiment with School Vouchers

■ Name of Program: Milwaukee Parental Choice Program
■ Year Introduced: 1990
■ Voucher: $6,501 per student for the 2006-2007 academic year
■ Eligibility: Students living in the Milwaukee public school district who currently attend either public or private schools, with family incomes below 175 percent of the poverty level (currently roughly $35,500 for a family of four)
■ Cap: 22,500 students, about 25 percent of students living in the Milwaukee public school district

SOURCE: Milwaukee Department of Public Instruction

Accountability is another concern.
The theory in school-choice programs
is that accountability is largely
taken care of by
student mobility: If parents don’t like
the results, they can move their child
to another school.
Yet evidence from Milwaukee
makes a pretty good case that this
sort of accountability may not be
sufficient. There is an information
gap. For example: In Milwaukee’s
voucher schools, turnover is a big
problem, Dickman says, with the
annual rate of students dropping
out of the voucher program at 25
percent — even as voucher school
enrollment has increased. This presents a problem to parents trying to
choose the best schools for their
kids. “The parents don’t know that
half the class isn’t coming back the
following year,” she says.
But for the most part, Dickman
says, “bad” schools — such as the
rapist-founded Alex’s Academics —
were filled with kids just before they
closed. How can parents properly
decide where to send their kids if they
don’t have comparable achievement
data from both public and voucher
schools? “This is not to blame the parents, but they just don’t have all the
information they need, so to put
the entire responsibility of accountability on them just isn’t fair,”
she says.
“Skimming,” however, doesn’t seem
to be a significant problem. In
Milwaukee, the program is designed to
prevent schools from turning away
low-achieving students. Private schools
have to accept all voucher applicants for
whom they have slots, and then choose
by lottery once they fill. Also, the
voucher pays the full price of tuition,
even if the actual cost of enrollment is
higher. “Parents are choosing vouchers
because they are very unhappy about
where their child was before. And perhaps if they’re unhappy it’s because
they weren’t doing well,” Dickman
says. “My guess is creaming [skimming] is not happening, but we don’t
know for sure.”

The Wrong Market?
At the center of almost any discussion
about vouchers in Milwaukee — or
anywhere else for that matter — is the
teachers’ union. Certainly no group is
more aggrieved by the choice program.
The Milwaukee Teachers’ Education
Association’s list of problems with
voucher schools is lengthy. Among the
main concerns: Public school teachers
must go through licensing that private
school teachers don’t. Parents choose
private schools not for educational
purposes but for discipline or religion
(which isn’t necessarily what society
hopes to gain from funding schools).
Private schools don’t tend to enroll
students with special needs, who are
more expensive to educate. (By the
union’s count, voucher schools now
take fewer than 500 students with
special needs, compared with about
15,000 in the public schools.)
“The idea of applying market forces
to education is a bad idea to me,”
says Dennis Oulahan, president
of the Milwaukee teachers’ union.
“To me, education is not a commodity,

it’s a right. And when you apply market
forces to it, we say there will be
winners and losers. We can’t afford to
have any losers when talking about
educating our children.”
Of course, another obvious reason
for the teachers’ union to oppose
vouchers is that they threaten
job security as well as salaries.
Nationwide, public school salaries
are about 60 percent higher than
those offered in private schools.
Competition between those schools
should put pressure on the higher
public school salaries. For voucher
advocates, this is precisely the point.
Eric Hanushek, an economist
at Stanford University’s Hoover
Institution, says that the key to good
schools is good teachers, more so than
other factors. But trying to get good
teachers by requiring extra licensing
and regulation doesn’t seem to be
working. The data show that high
teacher quality is important in fostering student achievement, but that
teacher quality is uncorrelated with
certification and even experience.
What’s needed, Hanushek says, is a
competitive system in which schools
essentially bid for the services of good
teachers. In time, this system could
boost teacher pay, in addition to
making schools better.
Teachers’ unions “don’t want competition, any more than Ford Motors
wants competition,” Hanushek says.
“The puzzle to me is why particularly
the
minority
community
and
disadvantaged populations in large
urban areas are willing to put up
with the regular public schools and
not demand more choice.”

A Definitive Study?
Is it too optimistic to hope that a consensus among economists — either in
favor of or opposed to market-based
education systems — could break the
stalemate? There will soon be a
study that aims to provide all
the data that a parent, teacher,
policy wonk, or academic could want.
As part of the legislation to lift
Milwaukee’s voucher cap, a team of
researchers was commissioned to
conduct a five-year evaluation of the
program. It is the first to attempt an
achievement comparison of voucher
and public school (including charter)
students since 1995.
“We’re really open to any and all
possibilities, trying to go in without any
strong priors, just in a spirit of explanation,” says University of Arkansas
economist Patrick Wolf, who is heading
the five-year investigation along with
fellow researchers Jay Greene at the
University of Arkansas and John Witte
at the University of Wisconsin-Madison. “It’s almost like this great
wilderness was discovered a decade ago
and nobody rediscovered it.”
So the results from this study
should settle matters, right? Probably
not. Already, the teachers’ union has
labeled the research team as biased in
favor of choice, and indeed some of
the research team’s past findings on
choice programs have largely been
positive. Oulahan also questioned
whether the testing — which is
different from that given to public
school students — will accurately
reflect the groups’ relative achievement levels.


Even Hoxby, the economist whose
research is most cited by advocates
of school choice, is pessimistic.
She says it will take a clean, big
natural experiment to truly answer
all the questions. Milwaukee no longer
resembles such an experiment, as the
“shock” of having choice available is no
longer there. A better study might be
one that soon looks at results in Utah,
which is now embarking on a statewide
voucher program.
“We tend to get messy experiments
in the United States. That’s the way
politics is,” Hoxby says. “The result is
that we have to work especially hard
with economics to try to understand
and get the information out of these
somewhat messy experiments.”

Studies come and go. Howard Fuller,
a Milwaukee native who in the early
1990s served as the district’s superintendent, has been involved with school
choice from the beginning. Fuller
has read all the studies and surveys.
He knows they are messy. He believes
school choice in Milwaukee requires
some tweaking. But to him, what
matters most is the principle involved:
choice.
“It has given parents who would
not otherwise have one, an option,”
says Fuller, who now serves as an
education professor at Marquette
University. “It’s not an issue of whether
it’s superior to the traditional system
or not. The issue is — did you
give low-income and working-class
blacks some opportunity to choose?
That’s the issue.”
Economic theory says that choice
should increase customer satisfaction.
In a recent poll, 80 percent of
Milwaukee parents using school
vouchers described themselves as
satisfied or very satisfied with the
program. For many in the nation’s
largest laboratory for school choice, no
further studies are necessary.
RF


READINGS
“Are Voucher Schools Putting the Squeeze on MPS?” Public Policy
Forum Research Brief, February 2007, vol. 95, no. 1.
Bifulco, Robert, and Helen F. Ladd. “The Impacts of Charter
Schools on Student Achievement: Evidence from North Carolina.”
Terry Sanford Institute of Public Policy Working Papers Series,
August 2004.
Friedman, Milton. Capitalism and Freedom. Chicago: University
of Chicago Press, 1962.
Greene, Jay P., Paul E. Peterson, and Jiangtao Du. “Effectiveness
of School Choice: The Milwaukee Experiment.” Harvard
University Education Policy and Governance Occasional
Paper Series, March 1997.
Hanushek, Eric A. “Choice, Charters, and Public School
Competition.” Federal Reserve Bank of Cleveland
Economic Commentary, March 15, 2006.
___, John F. Kain, and Steven G. Rivkin. “Teachers, Schools, and
Academic Achievement.” Econometrica, March 2005, vol. 73, no. 2,
pp. 417-458.

Hoxby, Caroline M. “School Choice and School Competition:
Evidence From the United States.” Swedish Economic Policy Review,
2003, vol. 10, no. 2, pp. 11-67.
___. “Does Competition Among Public Schools Benefit Students and
Taxpayers?” American Economic Review, December 2000, vol. 90,
no. 5, pp. 1209-1238.
Rothstein, Jesse. “Does Competition Among Public Schools
Benefit Students and Taxpayers? A Comment on Hoxby (2000).”
National Bureau of Economic Research Working Paper no. 11215,
March 2005.
Rouse, Cecilia Elena. “Private School Vouchers and Student
Achievement: An Evaluation of the Milwaukee Parental Choice
Program.” Quarterly Journal of Economics, May 1998, vol. 113, no. 2,
pp. 553-602.
Witte, John F., Troy D. Sterr, and Christopher A. Thorn. “Fifth-Year
Report: Milwaukee Parental Choice Program.” Department of
Political Science and The Robert M. La Follette Institute of Public
Affairs, University of Wisconsin-Madison, December 1995.

Spring 2007 • Region Focus


Ask the Market
Companies are leading the way in the use of prediction
markets. The public sector may soon follow
BY VANESSA SUMO

Every week, the Centers for Disease Control and Prevention (CDC) produces a five-color map, each color
representing the gravity of the flu in each state, from
yellow (no activity) to red (widespread activity). It’s useful
so far as it goes, but the information is often a week old.
There might be a way to gather timelier information.
Say that a nurse in a public health clinic in North Carolina
usually sees about one or two patients come in each day with
flulike symptoms in the month of December. But one day
that number goes up to four, and then to five the following
day. Sensing that the flu is quietly spreading, the nurse
places a bet that the CDC will upgrade the state’s flu alert
from green (sporadic activity) to purple (local activity) or
even to blue (regional activity). A doctor, a lab technician, a
pharmacist, a nursing student, and other traders in the flu
prediction market likewise throw in their hunches, based
on their own observations. Together, they come up with
their best prediction of how widespread the flu will be in the
coming weeks.
The University of Iowa has been running such a flu
prediction market for the state of Iowa for the past four
years. During the October 2006 to April 2007 flu season, it
added a new market for North Carolina because of strong
interest from state epidemiologists.
Here is how it works. Each participant is given 100 “flu
dollars” with which to trade. This amount is equivalent to a
real money educational grant of $100, which grows or
shrinks during the season depending on the accuracy of their
predictions. Participants buy and sell five color-coded
shares, or contracts, each one corresponding to a level of flu
activity based on the CDC’s surveillance system.


For instance, if the price of a red contract that expires in
two weeks is 80 flu cents, then the market’s collective guess
is that there is an 80 percent probability that the flu will
be widespread in a couple of weeks. If many traders believe
likewise, they will buy more red
contracts, bidding up their price.
Thus, the market price of each contract
indicates the likelihood that the spread of the seasonal bug
will reach a certain level in a particular week. If the CDC
eventually reports a “red week,” then those holding on to red
contracts will be rewarded one flu dollar. Contracts of losing
bets expire worthless. Participants get a check at the end of
the season depending on how well they did.
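The contract arithmetic described above can be sketched in a few lines of Python. This is a hypothetical illustration of the payoff rule the article describes — the actual market's trading rules and color set may differ in detail:

```python
# Hypothetical sketch of the flu market's contract mechanics:
# prices are in "flu cents"; a winning contract pays 100 flu cents (one flu dollar).

def implied_probabilities(prices):
    """Interpret each contract's price (in flu cents) as the market's
    probability that the corresponding activity level will be reported."""
    return {color: price / 100 for color, price in prices.items()}

def settle(holdings, reported_color):
    """Pay one flu dollar (100 flu cents) per winning contract held;
    contracts on losing colors expire worthless."""
    return 100 * holdings.get(reported_color, 0)

# The article's example: a red contract trading at 80 flu cents means the
# market sees an 80 percent chance of a "red week."
prices = {"yellow": 2, "green": 5, "purple": 8, "blue": 5, "red": 80}
probs = implied_probabilities(prices)
assert probs["red"] == 0.80

# A trader holding 3 red contracts collects 300 flu cents if the CDC
# eventually reports a red week; the blue contract pays nothing.
payout = settle({"red": 3, "blue": 1}, "red")
```

Note that if prices are fair probabilities, the five contract prices should sum to roughly 100 flu cents, since exactly one color is reported each week.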
Predictions from the Iowa flu market (where there are
more data to analyze) have been remarkably accurate. About
half of the time, it has been able to correctly predict the
extent of flu activity one to two weeks in advance, according
to a study by the managers of the flu prediction market. The
record is even better if allowed some wiggle room. “Doctors
say we don’t have to be exactly right, they just want to know,
for instance, that [the prediction] is green rather than red,
and we’re [correct] there 90 percent of the time,” says
George Neumann, an economist at the University of Iowa
and one of the market’s managers.
The value of such a prediction market is that one can
often get a very good idea of how severe the flu season will
be, even up to five weeks in advance. That’s enough time for
health care workers to spring into action — to mobilize
resources toward vaccinating high-risk individuals and to
prepare hospitals to anticipate more patients.
The principle behind prediction markets is simple.
By designing contracts for which payoffs depend on some
unknown future event, markets can
offer incentives for people to reveal
what they know, and then pool this
information to produce the best forecast. Prediction markets provide an
effective way to bring together what
writer James Surowiecki calls the “wisdom of crowds.” His book (which
bears the same title) is a treatise
on how the collective intelligence of
people is often better at predicting the
future, and, therefore, at making
better decisions, than calling on a few
experts. This claim has been shown
to be true in many domains. In the
case of flu prediction markets,
epidemiologists are happy to tap into
the wisdom of anyone from doctors to
nursing students.
Political and media uproar a few
years ago about a Department of
Defense-funded project has somewhat
frozen interest in using these markets
for public policy. One of two appointed projects, the Policy Analysis
Market (PAM), was accused of being a
market for predicting when the next
terrorist attack would occur, something that was thought to be offensive
and morally wrong. PAM was promptly
shut down even before it was launched.
In fact, PAM was a market that
would have allowed traders to speculate, for instance, on how the country’s
financial aid and military involvement
would affect economic and political
stability in the Middle East, and how
conditions in those countries could
affect the United States. “It wasn’t a
market about terrorist attacks,” says
Robin Hanson, an economist at
George Mason University and one
of the architects of PAM. But it
might have demonstrated how well
these markets can make forecasts
in comparison with other means of
gathering intelligence.
With the demise of PAM, public
policymakers may have become
hesitant to adopt prediction markets.
“The government got shy,” says
Hanson. Instead, companies are
leading the way, turning to
the power of these markets to peer
into the future to help them make
better decisions.


The Market as a Crystal Ball
Prediction markets have been used to
forecast election and sports outcomes,
the weather, Oscar winners, future
technologies, the direction of the fed
funds rate, and almost any event that
people care about. It is tempting to
look at these markets as just a fancy
form of gambling or an entertaining
pastime. But as a new book on information markets (another name for
prediction markets) by the AEI-Brookings Joint Center for Regulatory
Studies notes, these markets are beginning to acquire some respect. They
seem to deliver forecasts that are as
good as or even better than other well-known prediction mechanisms.
For instance, the Iowa Electronic
Markets (IEM), a prediction market
institution at the University of Iowa
for almost two decades, has consistently done a better job at calling the
winners of presidential elections than
opinion polls. One of the contracts
offered at the IEM allows traders to
bet on a candidate’s share of the total
votes, which makes it easy to compare
the market’s prediction to the actual
vote share won by each nominee.
These contracts pay off a penny for
each vote share earned by a candidate.
For instance, if the Democratic nominee gets 40 percent of all Democratic
and Republican votes, then that
contract pays 40 cents. The markets
are open to all traders, except for some
“classroom markets” that are limited
to academic traders.
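The vote-share payoff rule lends itself to a one-line calculation. The sketch below is an illustration of the rule as the article states it, not the IEM's official contract specification:

```python
# Sketch of the IEM vote-share payoff rule: a contract pays one cent
# for each percentage point of the two-party vote its candidate receives.

def vote_share_payoff(two_party_share_pct):
    """Payoff in dollars per contract for a given two-party vote share (percent)."""
    return two_party_share_pct / 100  # one cent per percentage point

# The article's example: 40 percent of the two-party vote pays 40 cents.
assert vote_share_payoff(40) == 0.40

# Before the election, a contract's price is the market's vote-share
# forecast: a price of $0.52 implies a predicted 52 percent two-party share.
```

This one-to-one mapping between price and predicted vote share is what makes the market's forecast directly comparable to the election result and to opinion polls.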
Joyce Berg and Thomas Rietz, both
of the University of Iowa, have studied
the performance of the IEM so far.
They find that on the eve of the election, the predicted presidential vote
shares missed the actual vote shares by
1.33 percentage points. This is smaller than
the average error of 2 percentage points for opinion
polls (for elections prior to 2004).
Moreover, Berg and Rietz find that
IEM prices for the 2004 presidential
elections were “more stable than polls,
respond less to transient events than
polls, and were closer to election
outcomes than the average poll when
the election was more than one
week away.”

The IEM is probably the best place
to look for evidence of the
forecasting ability of these markets
because it has been around for a long
time. The evidence is still coming in
from other corners of the field, but the
results look encouraging so far. One
piece of evidence comes from
data for the first two and a half
years of Economic Derivatives,
a prediction market that bets on
the future path of economic
variables like nonfarm payrolls and
retail sales. A recent analysis by Refet
Gürkaynak of Bilkent University in
Turkey and Justin Wolfers of the
University of Pennsylvania shows
that market-based forecasts “mildly
dominate” the consensus forecasts of
professional economists working in
financial markets.
The power of prediction markets
to successfully aggregate and summarize
information relies a great deal on giving
participants monetary incentives to
truthfully reveal their beliefs, by making people “put their money where
their mouth is.” This reward entices
people to come forward and trade, to
toss their bets and information in the
ring. The more confident a trader is in
his beliefs, the bigger his bet, thus
giving more weight to what he knows.
Because he will be rewarded for
being correct, a trader will have
the incentive to constantly watch
the markets and to jump on any
opportunity when prices fall out of
line with his predictions. He will
also be motivated to continually seek
information to improve upon his bets
and therefore the market’s forecasts.
The Fed, for instance, can look
at various surveys of inflation expectations, something that it is keen on
following because it directly affects its
decision on where the fed funds
rate, its monetary policy instrument,
will go. However, as a San Francisco Fed
Economic Letter points out, “survey estimates suffer a bit from the ‘talk is
cheap’ problem.” A better way is to
look at what the market thinks the
future course of inflation will be.
One indicator is the difference
between the rate of return on
conventional bonds and the rate of
return on inflation-indexed bonds: a
measure of the public’s expectations of
inflation. The prices of these financial
instruments should represent the
market’s best forecast. There is every
incentive not to lie because there is
money at stake. Consequently, the
difference in yields between these two
instruments has been shown to be
a better predictor of inflation than
survey-based estimates.
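The bond-based gauge described here is often called "breakeven inflation." As a rough sketch with hypothetical yields (and ignoring the risk and liquidity premia that also enter the spread in practice):

```python
# Rough sketch of the market-based inflation gauge described above:
# expected inflation ~ nominal Treasury yield minus inflation-indexed yield.
# This ignores risk and liquidity premia, which also affect the spread.

def breakeven_inflation(nominal_yield_pct, indexed_yield_pct):
    """Approximate expected inflation implied by the yield spread, in percent."""
    return nominal_yield_pct - indexed_yield_pct

# Hypothetical numbers: a 4.8% nominal yield and a 2.4% indexed-bond yield
# imply roughly 2.4% expected inflation over the bonds' common horizon.
expected_inflation = breakeven_inflation(4.8, 2.4)
```

The logic mirrors the prediction-market contracts above: because real money is at stake, the spread embodies traders' honest forecast rather than cheap talk.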
But does money always matter in
prediction markets? This question is
especially important in the United
States where gambling is mostly illegal,
which makes it difficult to set up a prediction market that involves real
money. The prediction market puzzle
is that even markets trading in pretend
dollars can make very accurate predictions, which seems to undermine
the profit motive that makes these
markets work so well.
For instance, the Hollywood Stock
Exchange, a play money site, has been
shown to have a very good record
of predicting Oscar winners and boxoffice successes. Even when the stakes
are limited such as at the IEM, which
trades with real money but has an
investment limit of $500, accuracy has
not suffered. (The IEM can legally
operate with real money because it has
been given a “no action” letter from
the Commodity Futures Trading
Commission, on condition that it does
business in a way that it has indicated
to the commission, including accepting
an investment no greater than $500
for each participant.)
Why do play money prediction
markets do as well? In real money
markets, the quality of the information or the skill of the trader is
reflected in the amount he is willing to
bet. However, the amount he is willing
to put down can be determined in part
by the depth of his pocket. In a play
money world, however, one can make
large bets only after amassing a fortune
of play money, which in turn is only
possible by making a series of good
trades. This feature may help make up
for any accuracy that could be lost by
not trading with real money.

And there are other reasons that
could motivate people to trade on
the information that they have. In
companies that run internal prediction markets, for instance, managers
would probably never ask employees
to put up their own money. “What we
come to think of as real money is when
[employees] pay out of their own
pocket to participate. And inside the
company, that usually doesn’t happen,”
says Emile Servan-Schreiber, CEO
and co-founder of Baltimore, Md.-based NewsFutures, a firm that
sets up internal prediction markets
for companies.
While businesses sometimes provide some small monetary rewards like
cash or gift certificates, reputation or
recognition within the group could be
a stronger incentive. “Your performance in the market is going to reflect
well upon you in the minds of the
higher-ups, which is probably more
important than the material reward,”
says Servan-Schreiber. Another factor
is how much the employee cares about
the event to be predicted in the
market. (For instance, will product X
be launched on time?) “If you care
about the question because it’s part of
your job and you think you know a lot
about it, then that will be its own
reward,” Servan-Schreiber says.

Markets vs. Meetings
A good forecast is extremely
valuable for companies, one
that translates into better
decisions and higher profits.
Google is running internal
prediction markets to
predict product release dates. Arcelor
Mittal, the largest steelmaker in
the world, has one to forecast sales and
the price of steel. The pharmaceuticals
industry has also been keen on prediction markets because choosing a new
drug to place its money on can be very
risky. “The problem of a pharmaceutical
company is that it has many ideas that
it could bet on, but it needs to bet on
the right one early on, otherwise
it could be wasting billions of
dollars on the wrong course,”
Servan-Schreiber says.

In companies, just like in any
organization, there is a lot of information that managers may find helpful,
and the challenge lies in how to bring
the pieces of the puzzle together.
Of course, probably the oldest and the
most widely used method, the staff
meeting, is one way to get everyone in
the same room and exchange what
they know, says Wolfers, but this is
probably not the best way to extract
what employees really believe.
Someone who is only interested
in pleasing his boss might say what he
believes the boss wants to hear, in
which case his information is useless
and even distracting. Nobody wants to
be the bearer of bad news, so someone
who might think that a project will
not be launched on time will hold back
saying so. And then there is the insufferable employee who will say his
opinion about everything but in fact
knows nothing. And in the corner at
the back of the room there may be
someone who is uncomfortable about
speaking up but really knows a lot.
In this type of situation, the
information that is laid out in front of
the manager will be erroneously
weighed according to who has the
loudest voice or who wants to curry
the most favor with the boss, which is
surely not the best way to aggregate
information. Prediction markets offer
a better way by giving employees
equal opportunity to place
bets on their beliefs and
have what they know count
more according to how strong their
opinions are.
But who gets to participate in the
company’s internal prediction market?
Should it be limited to the smartest
guys in the room? The problem is that
companies often do not know who the
smart guys are; if they did, then they
would just go up to them and ask.
Prediction markets make it possible
for experts to step forward and reveal
themselves. “We may be surprised that
the guys from the loading dock are
actually the ones who know how many
orders went out that week,” says
Wolfers. “So I wouldn’t want to
exclude the loading guy ever, because
he may be smart, and he’s the only one
who knows that.”
But what happens if those who
think they are experts but really
are not likewise come forward? This
actually makes it even more appealing
for the smart guys to trade. “You can
think about uninformed money as sort
of the honey that attracts the bees,”
says Wolfers. “It’s the reward for
intelligence and good trading.”
Similarly, manipulators, or those
who would lose trades intentionally to
move prices in their favor, can also be
thought of as “noise” traders. For
instance, if a company awards
resources to a division based on what
prediction markets say, then employees may be tempted to try to
manipulate prices. However, it is
the nature of markets to offer rewards
to those who spot these “noises”
early on.

A Not-So-Scary Proposition
Besides manipulation, there are other
circumstances when prediction markets might fail to yield the right
forecast. Markets may be biased
toward “favorites” and “long shots,” or
the tendency to undervalue near
certainties and overvalue small probabilities. Prediction “bubbles” may also
be possible if investors irrationally
inflate the probabilities of certain
outcomes. These anomalies are not
different from those observed in
financial markets. Also, if the quality
of available information is very poor,
then the prediction will simply reflect
the market’s collective ignorance.
But no system of forecasting is
error free, and so the relevant question
is how the errors of this mechanism
compare to the errors of other
forecasting mechanisms. So far, prediction markets have done at least as
well as the alternatives.
Even so, prediction markets are not
meant to prematurely replace other
methods of forecasting and gathering
information, if at all, but can initially
take on an advisory role. “We don’t
have to put the market directly in
charge, [but] you would slowly rely on
it as you came to trust its judgment
more,” says Hanson. Hence, it is
important to continue to compare
the accuracy of prediction markets
with alternative institutions. That may
assuage some fears of handing
over an organization’s decision-making
capabilities entirely to the market,
especially in the arena of public policy.
And there are many uses for public
policy. Just like company managers,
the problem a policymaker faces is
how to find those who will truthfully
reveal their beliefs about a certain
policy and how to best aggregate and
weigh those beliefs. One clever way is
to design a set of contracts that can be
traded in prediction markets to allow
policymakers to compare the outcome

of competing policies, or what Hanson
calls “decision markets.” For instance,
decision markets can be used to
compare murder rates with and
without capital punishment, children’s
test scores with and without school
choice, road congestion with and
without the expansion of a highway,
and a host of other public-choice questions.
Will governments ever use prediction markets in this way? Perhaps, but
not soon. “Many useful institutions
took a long time to become adopted,”
says Hanson. Life insurance is one
such institution: it took a while to
become accepted because people
thought it was like gambling on
death, and that put people off.
The Unlawful Internet Gambling
Enforcement Act passed last year
may also inadvertently keep prediction
markets from flourishing, insofar
as these markets are seen as gambling.
In the business world, interest in
prediction markets is growing fast
even if the acceptance is a little slower,
mostly because it takes a while to
see results. But Servan-Schreiber is
optimistic about their place in
corporate circles. “Prediction markets
are going to become a fixture of
management in the 21st century.
That’s pretty sure,” he says.
“[Prediction markets] work and
there’s a demand out there for the
wisdom of crowds.”
RF

READINGS
Gürkaynak, Refet, and Justin Wolfers. “Macroeconomic
Derivatives: An Initial Analysis of Market-Based Macro Forecasts,
Uncertainty, and Risk.” NBER Working Paper No. 11929,
January 2006.
Hahn, Robert W., and Paul C. Tetlock, eds. “Information Markets:
A New Way of Making Decisions.” AEI-Brookings Joint Center
for Regulatory Studies, 2006.
Hanson, Robin. “Shall We Vote on Values, But Bet on Beliefs?”
Working Paper, Department of Economics, George Mason
University, September 2003.
Kwan, Simon. “Inflation Expectations: How the Market Speaks.”
Federal Reserve Bank of San Francisco Economic Letter, Oct. 3,
2005, no. 2005-25.

Polgreen, Philip M., Forrest D. Nelson, and George R. Neumann.
“Use of Prediction Markets to Forecast Infectious Disease
Activity.” Clinical Infectious Diseases, 2007, vol. 44, pp. 272-279.
Servan-Schreiber, Emile, Justin Wolfers, David M. Pennock, and
Brian Galebach. “Prediction Markets: Does Money Matter?”
Electronic Markets, September 2004, vol. 14, no. 3.
Surowiecki, James. The Wisdom of Crowds. New York: Random
House, 2004.
Wolfers, Justin, and Eric Zitzewitz. “Prediction Markets.” Journal
of Economic Perspectives, Spring 2004, vol. 18, no. 2, pp. 107-126.
___. “Prediction Markets in Theory and Practice.” New Palgrave
Dictionary of Economics, 2nd ed., forthcoming.


GRINDING GEARS

The jobs bank program has provided greater job security for unionized workers at
the Big Three automakers, but at the expense of greater flexibility in labor markets

BY CHARLES GERENA

You have probably seen this scene played out before
in the news: Laid-off workers, forlorn, leave their
shift at a manufacturing plant for the last time.
The meaningful relationships and lengthy careers they had
built up are gone, and the workers know it.
That was the scene at General Motors’ assembly plant in
Baltimore, Md., on May 13, 2005. The 70-year-old facility
rolled out its last vehicle, a van tagged with a cardboard sign
reading “The End.” It was a difficult day for the 1,400 plant
workers facing an uncertain future.
Displaced workers often suffer long periods of unemployment. When they
finally get another job, it is more likely to be
lower-paying and part-time. Even those who land full-time
jobs are less likely to match their previous salaries, especially
when they enter a different line of work. That’s bad news for
people in industries such as automobile manufacturing who
have permanently lost positions due to automation and the
competitive pressures of a global economy.
Blunting the impact of layoffs is one of the reasons why a
“jobs bank” for displaced autoworkers was created in the
early 1980s. The program was the result of contract negotiations between the United Automobile, Aerospace, and
Agricultural Implement Workers of America (UAW) and the
“Big Three” — Ford Motor Company, General Motors, and


the Chrysler Group. In the jobs bank program, laid-off
autoworkers typically receive all or at least a large share of
their usual wages and benefits indefinitely. Eventually, the
idea was, these idled workers would be plugged into new
jobs when they became available.
Automakers expected to avoid extensive job cuts and plant
closings in the future, so they figured the number of people in
the jobs bank wouldn’t be overwhelming. They also believed
that most employees would be there for only a short stay.
Unionized workers, in turn, were willing to accept pay freezes
and other concessions in exchange for greater job security.
“We had to look at ways of protecting people, even if we
were going to do some givebacks,” says Fred Swanner, president of UAW Local 239 based in Baltimore. His union
represents the 100 people who remain in the jobs bank two
years after General Motors closed its assembly plant in the
city. “It was a way of protecting the employees and keeping
the corporations from outsourcing [work] overseas, or at least
try to slow them down by attaching some liabilities to it.”
The program appears to have given autoworkers
what they wanted. Swanner says his members appreciated
the added stability in their employment and have used
their stretch in the jobs bank to volunteer or go back
to school.


But the Big Three automakers have
paid a heavy price with the jobs bank
program. Although they have been able
to shed workers through attrition,
automakers have laid off more people
than expected. This has forced the
companies to pay millions of dollars to
idle their employees, as well as to offer
increasingly large buyouts to persuade
jobs bank participants and other workers
to leave.
Since 2005, vehicle and parts producers have announced more than
280,000 layoffs, according to surveys
by employment consultant Challenger,
Gray & Christmas. Against this backdrop, everything is up for renegotiation
during this summer’s UAW contract
talks, including the jobs bank.

Great Expectations
When it was created under the 1984-1987 UAW contract, the jobs bank
covered workers who lost their jobs as
a result of technological change, outsourcing, corporate reorganization, or
productivity improvements. It was
expanded in the 1990-1993 contract to
include workers who were laid off due
to declining sales.
Under the contract that expires
this September, laid-off autoworkers
receive supplementary unemployment
benefits on top of their state benefits
for 48 weeks. Then they enter a jobs
bank designated for each automaker.
Depending on the specifics of the
union contract at a plant, jobs bank
participants can sometimes continue
to receive up to 100 percent of their
salary and benefits until they leave the
company, accept another position at
their plant, or transfer to a different
facility within a 50-mile radius.
What happens in the meantime? It
varies from plant to plant. Sometimes,
workers are allowed to use their downtime to take classes or volunteer in the
community. Or they may be asked to
help the company in another capacity,
such as delivering vehicles at dealerships or covering for other workers
who are receiving training.
In a few cases, there isn’t anything
productive for the jobs bank inhabitants
to do. When General Motors
closed its Baltimore plant in May
2005 and displaced 1,400 workers, the
union hall near the plant was reportedly
a popular hangout for banked workers
waiting for odd jobs.
Being stuck in a jobs bank may not
seem attractive to someone accustomed to punching a clock at a factory
every day. But in the early 1980s, UAW
officials wanted some degree of
job security for their members after
several years of mass layoffs. A sluggish
economy and a surge in imports had
prompted automakers to shed close to
300,000 positions, or one-third of their
work force, between 1979 and 1982.
Besides, the UAW figured the
jobs bank would discourage companies
from cutting jobs and speed the
reassignment of those who were laid off.
Companies weren’t expected to pay
people not to build vehicles for an
extended period of time. In fact, that is
exactly what the Big Three have done,
slowing down assembly lines, eliminating shifts, and, eventually, closing plants
despite the additional expense of
putting workers in the jobs bank.
Laurie Harbour-Felax, a manufacturing consultant and president of
Harbour-Felax Group, says there have
been dramatic improvements in productivity in the last few decades and a
greater focus on quality, which drives
higher efficiency. The result: less labor
required to produce the same number
of cars. “As people are retiring,
automakers are hiring back fewer
people,” she adds.
Labor demand has been eroded by
overseas competition as well. Asian
automakers like Toyota, Hyundai, and
Nissan have lured consumers away
from the Big Three, reducing the latter’s share of the domestic market.
They also operate American plants,
taking up some of the labor slack they
helped create, but not all of it.
“Throughout the 1980s and 1990s, the
Japanese would put in a plant to make
the same volume as a Big Three plant,
but do it with dramatically fewer
people,” Harbour-Felax says.
This pushed U.S. automakers to be
even more productive and reduce their
labor needs. “The globalization that

has gone on in the last 25 years has put
tremendous pressure on this country,”
she notes.
The Big Three automakers didn’t
envision this situation, either. “The
thing we didn’t plan for was the loss of
market share and the need to close
plants,” says Dan Flores, General
Motors’ spokesman for manufacturing
and labor. “There weren’t jobs that
people could go to.”
When they renewed their UAW
contracts in the mid-1980s, the Big
Three automakers were optimistic that
their long-run growth since World War
II would continue — their combined
profits had jumped from $6.3 billion
in 1983 to $9.8 billion in 1984. “We
thought that [the jobs bank] would be a
temporary bullpen, where employees
would go before they were assigned to
a different job or plant,” Flores says.
Automakers were also willing to
create the jobs bank in exchange for
union concessions on the use of
automation and flexible scheduling.
Lastly, the added level of job security
was supposed to improve productivity
by enhancing workers’ motivation
and commitment.
Improving the worker-employer
relationship was expected to have one
more benefit, adds Cornell University
economist Harry Katz. “[Automakers]
hoped workers would be more willing
to move around the system and live
with plant closings, line adjustments,
work-rule changes … and other things
they were doing in exchange for
longer-term security,” he notes.
Katz says it’s hard to tell whether
the creation of the jobs bank
contributed to productivity improvements in the automotive industry.
It hasn’t led to substantially greater
flexibility, in his opinion.
What the program has done is add
to the ranks of the underemployed.
Chrysler had 2,000 people in its
jobs bank at the end of 2006.
(As of press time, the private equity
firm Cerberus Capital Management
LP had agreed to buy 80.1 percent
of the Chrysler Group from
DaimlerChrysler AG.) Ford and its former parts subsidiary, Visteon, had a combined total of 1,100 jobs bank participants. Published estimates for
General Motors and Delphi, a parts
supplier spun off from GM, indicate
that they have the largest jobs banks —
a combined 11,100 people as of
March 2006. In total, more than
14,000 autoworkers have been laid off
but are still on company payrolls. The
number is expected to drop in 2007 as
workers finally accept severance and
retirement packages.

Stuck in Neutral
Automakers have waved numerous
enticements in front of idled workers
to prompt their departure. In 2006,
General Motors offered $140,000
buyouts to workers with 10 years of
seniority, including those in the jobs
bank. Ford also offered buyouts of
up to $140,000 last year, as well as
annual tuition assistance of $15,000.
In March 2007, Chrysler started
offering early retirement payouts of
$70,000 and lump-sum buyouts
of $100,000 to help reduce its North
American headcount.

The Details
The size of the Big Three automakers’ jobs bank
program is expected to decline in 2007. It will
depend on how many laid-off workers accept buyout packages or choose to retire. Here is a snapshot
of where the program stood at the end of 2006.
Chrysler Group
Jobs Bank Name: Job and Income Security
Funding Committed Under UAW Contract: $451 million
Estimated No. of Participants: 2,000 (2,100 in 2005)
Ford Motor Company
Jobs Bank Name: Protected Employee Program (PEP)
Funding Committed Under UAW Contract: $944 million
Estimated No. of Participants: 1,100 (1,275 in 2005;
includes PEP workers at former parts subsidiary
Visteon)
General Motors Corp.
Jobs Bank Name: Job Opportunity Bank-Security (JOBS)
Funding Committed Under UAW Contract: $2.7 billion
Estimated No. of Participants: 11,100 (9,000 in 2005;
includes JOBS workers at former parts subsidiary
Delphi)
SOURCES: Company estimates, news reports, and
UAW Web site

— CHARLES GERENA

Many people in the jobs bank seize
the opportunity to move on with their
lives. About 600 of the 1,400 people
let go from General Motors’ Baltimore
facility in 2005 have retired or
accepted buyouts, according to the
UAW’s Fred Swanner. About 700
people accepted transfers to other
GM facilities, including a transmission
plant in Baltimore County and an
assembly plant in Wilmington, Del.
That leaves about 100 workers in the
jobs bank by Swanner’s calculations.
The workers who accept buyouts or
transfers are usually the ones who are
the most mobile. They are willing and
able to start anew, whether it’s at a
plant in a different city or in an entirely
different career. This includes younger
people with lots of healthy, productive
years ahead of them, as well as older,
experienced workers who are confident in their marketability.
Then there are the workers who
don’t move on. One group of General
Motors’ employees reportedly stayed
in the jobs bank 12 years after their
plant in Van Nuys, Calif., closed in
1992. Some Delphi workers in Flint,
Mich., have been idling for more than
six years.
Based on his experience, Swanner
says the people who stay the longest in
the jobs bank aren’t usually production
workers who stamp out or assemble
parts. Rather, they are the millwrights
who fix conveyors, the electricians
who maintain assembly line robots,
and other skilled trades workers.
“We’re down to our trades guys, who
are hard to place. Those openings only
occur in some factories one or two at a
time,” Swanner notes.
There are other reasons why people
choose to remain in the jobs bank.
Workers close to retirement may be
willing to stay in limbo for a few more
years until they can collect their
pension and health-care benefits.
Others may cling to the hope that
automakers will regain their market
share and ask them to return — several
people at General Motors’ Baltimore
plant told the Baltimore Sun last year
they didn’t want to accept buyout
offers and give up a chance to work at

GM’s transmission plant in Baltimore
County.
Similarly, some workers may be
holding out for a buyout that is worth
giving up their current salary and benefits. In general, the comparatively
high compensation of autoworkers —
even within the manufacturing sector
— makes staying put in the jobs bank a
better choice, even if it only delays
dealing with the harsh realities that lie
beyond the automotive industry.
For one thing, the jobs bank sets a
high bar for a worker’s reservation
wage, the minimum salary for which
people are willing to supply their labor.
Economists believe that a person’s
reservation wage depends on the probability of getting a job offer, the
expected range of wages available in
the market, and the availability of
alternative income sources, such as
state unemployment benefits and
personal savings. So, the higher the
alternative income — in this case, the
salary that laid-off workers in the jobs
bank continue to receive — the higher
the reservation wage, all other things
being equal.
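The decision rule described above can be sketched in a few lines of Python. This is an illustrative toy model, not anything from the article or the economics literature's formal search models: the hourly figures echo numbers cited elsewhere in the article, while the uniform offer distribution and the one-offer-per-period search rule are assumptions of the sketch.

```python
import random

def accepts_offer(offer: float, reservation_wage: float) -> bool:
    """A worker takes an outside job only if it pays at least the reservation wage."""
    return offer >= reservation_wage

def simulate_search(reservation_wage, wage_low, wage_high, periods=100, seed=1):
    """Draw one wage offer per period; return the period of acceptance, or None."""
    rng = random.Random(seed)
    for t in range(periods):
        offer = rng.uniform(wage_low, wage_high)
        if accepts_offer(offer, reservation_wage):
            return t
    return None  # never accepted: the worker stays in the jobs bank

# In the simplest case, the pay a worker keeps while idled sets the floor:
# no one rationally accepts less than what staying already provides.
jobs_bank_pay = 28.0          # $/hour kept while banked (hypothetical, order-of-magnitude)
outside_range = (12.0, 22.0)  # assumed range of outside offers

print(simulate_search(jobs_bank_pay, *outside_range))  # None: every offer falls short
print(simulate_search(15.0, *outside_range))           # a lower floor ends the search
```

When idle pay exceeds the entire range of outside offers, the search never ends — which is exactly the staying-put behavior the article describes.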
Also, autoworkers may not get the
same wages and benefits if they transfer
to another plant. The labor market has
undergone a significant shift from
unionized positions at American firms
in the “Rust Belt” to positions at foreign
producers with plants in lower-cost,
“right-to-work” Southern states where
unionization is less common.
For example, BMW Manufacturing
has been hiring people at its plant in
Spartanburg County, S.C. But the
nonunionized facility needs only temporary workers and pays $12 to $13 an
hour compared to the industry average
of $22 an hour.
Finally, laid-off workers may feel
less secure about their employability
and be afraid to take chances and
switch careers. “We get some who will
be in that mode. Their self-esteem
is lower,” Swanner notes. “They might
not feel they can make another
career change at the age they are
now.” Or they might feel that they
lack the necessary skills to handle
today’s technology.


Buyout Blitz
Automakers looking to downsize have been working hard to persuade their workers to voluntarily leave their jobs. Unionized workers have required more encouragement, especially those in the jobs bank programs at Ford, GM, and Chrysler.

Chrysler Group (unionized)
Date of Buyout Offer: Mar 2007
Scope of Buyout Offer: 11 plants nationwide
Lump-Sum Payments Offered:
• $100,000 + six months of benefits for workers with at least one year of seniority
• $70,000 + full benefits for workers eligible for early retirement

Ford Motor Company (unionized)
Date of Buyout Offer: Sep 2006
Scope of Buyout Offer: All active employees
Lump-Sum Payments Offered:
• $140,000 for any worker with 30 yrs. of service or age 55 with 10 yrs. of service; future benefits forfeited
• $100,000 + six months of benefits for workers eligible for retirement

General Motors Corp. (unionized)
Date of Buyout Offer: Mar 2006
Scope of Buyout Offer: All active employees
Lump-Sum Payments Offered:
• $140,000 for workers with 10 yrs. of service or more; future benefits forfeited
• $70,000 for workers with less than 10 yrs. of service; future benefits forfeited
• $35,000 for normal or early retirements retroactive to 10/1/05; future benefits retained

Mitsubishi Motors North America (nonunionized)
Date of Buyout Offer: Jul 2006
Scope of Buyout Offer: Plant in Normal, Ill.
Lump-Sum Payments Offered:
• $85,000 + 3 months of benefits

Nissan North America (nonunionized)
Date of Buyout Offer: Feb 2007
Scope of Buyout Offer: Plants in Decherd and Smyrna, Tenn.
Lump-Sum Payments Offered:
• $45,000 + $500/year of service

SOURCE: News reports and press releases

Automakers provide educational programs for laid-off workers to learn new skills. However, the type of training varies from plant to plant — one General Motors facility in Flint reportedly offered classes on working in a casino, while a Ford plant in Wayne, Mich., taught classes in bicycle repair and silk flower arranging. The career paths that workers are being prepared for may not be as financially rewarding as staying in the automotive industry or in manufacturing.

When 2,300 workers stop making F-150 trucks at Ford’s Norfolk, Va., plant this year, few alternatives will be available for those who decide to enter a jobs bank. “There is nothing around here,” says Chris Kimmons, president of the plant’s union. They could join the International Longshoremen’s Association and make $16 an hour at a wharf, but that compares poorly to the $28 an hour they make now. “You do the best you can for your family and yourself.”

The Price of Layoffs
Laid-off workers are likely feeling the pressure to get out of the jobs bank, with the UAW’s contracts with the Big Three expiring in September. Apparently, most of the workers at

Ford’s truck assembly plant in Norfolk
don’t want to take any chances when
the facility shuts down. About 1,900
workers have accepted one of the
severance packages that Ford offered
to workers in 2006. Another 360 will
transfer to the company’s truck plant
in Dearborn, Mich. Kimmons estimates
that fewer than 50 workers will enter
the jobs bank after receiving 48 weeks
of unemployment benefits.
“They wanted to take the money
and run,” Kimmons says. “Maybe they
feel the auto industry isn’t as sound as
it used to be. There is a lot of devastation going on.”
While the UAW will likely resist
any changes to the jobs bank,
automakers have good reasons to push
for a compromise. With $4.1 billion
committed to funding the jobs bank
under the current UAW contract, the
program adds another burden to
the long list of challenges facing
automakers, from rising health-care
costs to designing vehicles that can
compete with imports on quality and
customer satisfaction.
As consumer tastes change and
demand fluctuates, automakers constantly reevaluate their production
levels and employment needs.
According to GM spokesman Dan
Flores, there has been room for the

company to make short-term tweaks,
such as eliminating a shift at a plant
producing an unpopular model.
Flores admits that the jobs bank
did raise the cost of long-term layoffs
enough that it has affected GM’s production plans. But he downplays its
impact on GM’s decisionmaking
process, saying it is just one of many
considerations. “Certainly, there is a
significant cost associated with [the
jobs bank],” he notes. However,
“there are a lot of factors that come
into play.”
In an October 2006 report, the
Harbour-Felax Group calculated the
price tag for permanently closing a
typical assembly plant that produces
1,000 vehicles a day. Its analysis
assumed a total of 4,450 workers —
both at the plant and in related
component, engine, and transmission
plants — would end up in a jobs bank
for one year.
The bottom line: Chrysler would
have to devote $175 of every vehicle it
sells in North America to pay for the
plant closing, while Ford would have
to cough up $134 per vehicle and GM
would have to pay $88 per vehicle. To
put these costs in perspective,
Chrysler lost an average of $950 per
vehicle sold in North America last
continued on page 31


Trading Spaces
Conservation efforts get a boost from the market

BY BETTY JOYCE NASH

Old pine trees: Home of the endangered red-cockaded woodpecker

PHOTOGRAPHY: U.S. FISH AND WILDLIFE SERVICE

Red-cockaded woodpeckers choose real estate carefully. They prefer drilling cavities in old pine trees — longleaf if they can get it — amid open space. A golf course, for instance. They’re picky, territorial, and because they’re endangered, they influence private land transactions in a big way. They live in 10 Southeastern states on a sliver (3 percent) of their original longleaf pine habitat. Even after federal protection in 1973, their numbers continued to slide, in part because frustrated landowners tried to deter or get rid of the birds. Some cut forests before maturity to avoid woodpecker cavities or allowed dense hardwood growth to spoil foraging area under the pines. All this to avoid future limitations on how property owners could use their land.

Something was very wrong with the incentives here: Habitat and species preservation demanded a truce. Owners of the preferred pine tracts needed encouragement to manage forests in a woodpecker-friendly way without liability. A conservation tool called Safe Harbor does that. Landowners voluntarily agree to restore woodpecker habitat; in return, the U.S. Fish and Wildlife Service frees them from regulatory limits should those management practices attract additional groups of birds beyond the original “baseline.” Voila — incentive.

“It removes regulatory risk and allows landowners to engage in practices beneficial to them and the woodpecker,” says Michael Bean, an attorney at Environmental Defense, the nonprofit environmental group that has pioneered Safe Harbor and other conservation incentives.

Natural Remedy
Conservation increasingly pits private interests against public. Conflicts will only intensify as development continues to chop up open land and species habitat. Incentives, however, have demonstrated over the past decade that they can turn environmental liabilities into assets. Under some programs, farmers can sever and sell development rights. In others, developer obligations can be transferred to “mitigation banks” that sell credits from private, certified-restored natural areas. Both create tradable commodities. Both achieve social goals through market enterprises.

Such incentives can inspire landowners to maintain land and correct the negative consequences of development. Now, red-cockaded woodpeckers are multiplying in the Southeast, with the help of incentive-based agreements like Safe Harbor. The Sustainable Land Fund is ironing out details for a mitigation bank near Elkins, W.Va., for the threatened Cheat Mountain salamander and West Virginia northern flying squirrel. And a market for transferable development rights in Maryland has preserved 17,500 acres of farmland in Calvert County.

“We know incentives inform what landowners decide to do with their properties, so the idea of now turning those around and saying, ‘How can we use that same technology, that same financial set of tools to create a longer-term, sustainable future?’ is by my way of thinking just an appropriate new mechanism we need to adopt to do more than we can possibly do with the old tools,” says William Ginn. He directs the Global Forest Partnership at The Nature Conservancy and wrote the 2005 book, Investing in Nature.
Using market tools to achieve
conservation goals isn’t a new idea, but
it is gaining currency as preservation
funds dwindle and regulation proves
inadequate.
Environmental regulation tries to
make up for costs that affect society,
costs that aren’t borne by firms
or landowners, called externalities.
Like pollution. Or doing in the last
red-cockaded woodpecker. But a
one-size-fits-all standard may not
work as envisioned. (Economists John
List, Michael Margolis, and Daniel
Osgood have written a paper suggesting that the U.S. Endangered Species
Act has accelerated development,
leading to habitat and even species
decline. Property owners, the authors
argue, are forward-looking and when
they see the “act as a threat to their
development rights,” they may
respond “by developing preemptively,” that is, before restrictions imposed by the act are applied to them.)

The Birdie
Pinehurst, a resort in the Sandhills of
south central North Carolina, was in
on Safe Harbor from the get-go, says
Brad Kocher, vice president of
grounds and golf course management.
His interest dates back to 1995, when
the resort’s No. 8 golf course was
under construction. A shame, he
recalls thinking, that the resort couldn’t
do something to attract more birds.
“But if I did something to encourage
the species on No. 8, I [would have]
encumbered anybody within a half-mile radius of that tree, and they were
not going to be very happy with me.”
Safe Harbor will protect landowners
from future restrictions once the original group of birds is documented.
Owners also must enhance the habitat. Pinehurst has about 11 families of
red-cockaded woodpeckers, and has
won awards for stewardship. The
agreement extends to neighboring
properties affected if new woodpeckers
are drawn by Pinehurst habitat.
Incentives can prompt improbable
acts: After Hurricane Fran in 1996, a

Page 29

landowner reported a downed cavity
tree and insisted that a biologist drill
an artificial cavity ASAP so the woodpeckers would stay. Today, about 100
Safe Harbor agreements cover about
50,000 acres, according to Susan Ladd
Miller of the U.S. Fish and Wildlife
Service. And six new groups of red-cockaded woodpeckers have settled in
the North Carolina Sandhills.
“What it did was allow us to not
just be the bad guy, the regulatory guy,
but allowed us to have positive
relationships with these landowners,”
Miller says. Safe Harbor and other
conservation plans have inspired
some private landowners to put
land in conservation easements, one
adjacent to Ft. Bragg, desperate for
noise buffers and critter habitat.
(By law, federal lands must recover
endangered species.)

Banking on Conservation
Birds can be “banked.” Two elderly
women in North Carolina needed to
sell timber to pay medical expenses
but were hindered by the discovery of
three red-cockaded woodpeckers.
Those birds were moved to Nature
Conservancy land in Sussex County,
Va. The Piney Grove Preserve, 2,700
acres of primo foraging habitat, has
twice been a mitigation bank for
the woodpeckers.
Ralph Costa of the U.S. Fish and
Wildlife Service brokered the deal. He
tracks the red-cockaded woodpeckers’
progress and works out agreements
with landowners all over the Southeast
— public and private. Between 100
and 200 groups of birds are growing
annually throughout the region on
public and private land, Costa
says. Landowners must pay the
mitigation costs.
Costa explains: “I get a phone
call … ‘Got two groups to get off
my property — I need the money.’
I give them the names of all the
mitigation banks and contacts within
their recovery unit and that’s the end
of my involvement. At that point, they
call the bank and negotiate the
price. I don’t care if it’s free or a
million dollars.”

One cluster of birds sold for
$100,000 and went to a conservation
easement owned by the University of
South Carolina, Costa says. The birds
came from property being developed
on the coast of South Carolina. “What
drives the price [is the] value of
the timber and/or the dirt for development on the mitigation property,” he
says. “We have some prices floating
around right now approaching
$250,000 for a developer who has a
group they want to get rid of and it’s
because the land could be used
for timber.” Few high-dollar groups of
woodpeckers remain on coastal high-dollar dirt, Costa says, making such
transactions rare.
While such informal trades don’t
constitute a true conservation or
mitigation bank with active trading,
that concept is gaining ground.
California established rules for the
first endangered species banks in 1995,
and in 2003, the U.S. Department of
the Interior issued guidelines. Under a
command and control system, if an
endangered species were found during
development, a protracted process
ensued that often led to piecemeal
preservation. Using endangered
species mitigation banks, an “enviropreneur” may buy and manage land,
gaining appropriate agency approvals.
(Those in the business say that, for
now, they’re overregulated but expect
that to diminish as the banks prove
themselves.)
A California firm, Wildlands, Inc.,
has opened seven mitigation banks
since November; the latest one
preserves habitat for the giant garter
snake in an eight-county area near
Sacramento, Calif. Wayne White
works as a consultant for mitigation
banks, having learned the ropes during
his career with the U.S. Fish and
Wildlife Service.
A new mitigation bank carries risk
just like any other business. You need
to know your market and how many
credits you’ll have to sell to break even
or to make a profit, White says.
The bank improves and monitors the land, and establishes an endowment to ensure management in perpetuity. And here’s the trickiest part, he says: “Can you
sell the credits?”
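White’s break-even question is, at bottom, simple arithmetic: up-front costs divided by the price a credit fetches. A minimal sketch, with entirely hypothetical dollar amounts — actual costs and credit prices vary widely by site and species:

```python
import math

def breakeven_credits(land_cost, restoration_cost, endowment, price_per_credit):
    """Number of credits a mitigation bank must sell to recover its up-front costs."""
    total_cost = land_cost + restoration_cost + endowment
    return math.ceil(total_cost / price_per_credit)

# Hypothetical inputs for illustration only.
print(breakeven_credits(
    land_cost=1_500_000,       # purchase of the habitat parcel
    restoration_cost=400_000,  # habitat improvement work
    endowment=600_000,         # fund for monitoring and management in perpetuity
    price_per_credit=125_000,  # assumed market price of one species credit
))  # → 20
```

The risk White describes lives in the denominator: if demand is thin and the achievable credit price drops, the break-even count climbs, and the bank may never sell enough credits to cover the endowment it is obliged to fund.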
Conservation banks seem sound,
but they need tweaking. California’s
first bank, formed in 1995, went
bankrupt in 2005. “Agencies and
bankers learned you have
to have a good financial portfolio,” White says. Since then,
state and federal wildlife
agencies have developed tools to
monitor financial performance.

Red-cockaded woodpeckers average about eight inches in length and feed mostly on insects. Landowners face fewer restrictions on property use in return for habitat maintenance.

Flexible Farmland in Calvert County
Perhaps the oldest conservation
incentive tool comes in the form
of “rights” that can be sold off
farm properties to offset additional
density elsewhere.
From the top half of Calvert
County, Md., you can commute
to Washington, D.C., in about 35
minutes. This Southern Maryland county
has been a prime bedroom community
even as far back as the early 1980s.
Between 1990 and 2000, Calvert
was the fastest-growing county
in Maryland.
“We were one of the first to feel the
effect of the concept of a bedroom
community,” says Susan Hance-Wells.
Her family has lived on Calvert farmland since it was established some 300
years ago. Today, she grows corn,
soybeans, and oats, and she breeds
Friesian horses.
Worried about disappearing farmland, Hance-Wells and her father,
Maryland’s first secretary of agriculture, enrolled in Calvert County’s
transferable development rights
(TDR) program back in 1979. Their
fears illustrate the externality that isn’t
accounted for by a builder — in this
case, the reduction of open space and
loss of farmland.
In a TDR market, development
potential transfers from one parcel to
another. It can be used to preserve
natural or historic areas as well as
species habitat. Zoning, a typical
regulatory response, often doesn’t
work the way it’s supposed to because
of what economists call “rent-seeking.” If owners feel a classification

Region Focus • Spring 2007

deprives them of income, they’ll
pressure local authorities for change.
Paul Thorsnes of the University of
Otago in New Zealand and Gerald
Simons of Grand Valley State
University in Michigan have studied
the issue and written: “In short,
however efficient the allocation of
land, the inequitable distribution
of costs and benefits plagues open-space zoning.” But creating a market
for development rights is preserving
land, particularly in Calvert and
Montgomery counties.
Prices are determined through
supply and demand, not appraisals.
“If the builders are building, and they
need those rights to increase density in
the subdivisions, then the price goes
up,” Hance-Wells explains.

The Power of Prices
Channeling development through
a TDR market draws on the same
principles as free market environmentalism. In theory, people choose
based on self-interest and everybody
benefits. “Regulation to a standard
means forcing some people to be at a
position they’d rather not be,”
says Margaret Walls, an economist
at Resources for the Future, a
Washington, D.C., think tank.
“Whether it has to do with acres
of land or power plant emissions,
market-based instruments tend to
have more flexibility.”
Walls, with co-authors Virginia
McConnell and Elizabeth Kopits,
has studied the market for
development rights in Calvert County.
“It creates a price for selling these
rights, an incentive for people to
preserve this land permanently,”
McConnell notes.
One measure of a TDR program’s
success lies in trading activity,
McConnell says. “So often the
programs are on the books but nobody
makes transactions; supply and
demand are out of whack. If the
market works in the sense that people
are participating, then you know land
is being preserved.”
In Calvert County, any landowner
with productive soil may enroll

and sell TDRs. Owners also can develop
their land or buy rights and develop
beyond base zoning limits. A single
TDR preserves one land parcel.
McConnell points out that one of the
downsides to TDR programs is
adverse selection. Conceivably, some
owners sell rights who may never have
intended to sell or develop their
property in the first place. That could
lead to more density than would occur
under a straight zoning regime.
About 142 TDR programs are
ongoing in the United States, some
more successful than others. Getting
supply and demand in sync is critical.
If people don’t know who is selling
development rights, there are problems
matching willing buyers and sellers.
To remedy a thin market and
price fluctuation, Calvert County
began in 1993 to buy TDRs annually
at announced prices. The county also now publishes a newsletter to keep information flowing.
With the uncertainty reduced,
McConnell says, trading increased.
A recent study of TDR programs found
that market stimulation through such
public purchases helps balance supply
and demand and is characteristic of
successful TDR markets.
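The county’s standing offer works like a price floor in a thin market: a seller always has at least one buyer at a known price. A minimal sketch — the floor value echoes the article’s early average price, and the bid lists are hypothetical:

```python
def realized_price(private_bids, county_floor):
    """A landowner sells to the best bidder; the county's announced standing
    offer guarantees a sale even when private demand is thin."""
    best_private = max(private_bids) if private_bids else 0
    return max(best_private, county_floor)

floor = 2_500  # per-TDR price, loosely based on the program's early average

print(realized_price([], floor))              # no private buyers: county purchase at 2500
print(realized_price([1_800, 2_100], floor))  # weak bids: the floor still binds → 2500
print(realized_price([3_200], floor))         # strong demand: the market price prevails → 3200
```

The floor removes the seller’s worst case without capping the upside, which is one way to read McConnell’s observation that reduced uncertainty brought more trading.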
Purchases vary from year to year
and prices have increased from
an average of $2,500 in 2001 to $7,500
in 2005. Maryland’s TDR program
has preserved 12,000 acres, and
Calvert County has bought 5,500 acres
to retire permanently. Separately,
the state has bought easements that
have preserved 7,000 acres.

Not Perfect
Critics point out, though, that development in Calvert County sprouted in
the rural communities anyway, demonstrating low demand for dense
residential development. While the
development pattern isn’t perfect,
flexibility may have worked to the
county’s advantage.
“A lot of programs try to dictate
that the sales go into more highdensity areas; as a result it sometimes
limits the demand for them,”
Walls says. “TDRs are not a growth-limiting tool but a growth allocation, spatial allocation tool.”
High land prices have brought new owners to TDRs, says Susan Hance-Wells, but many are “farmettes” rather than large-tract farms. “Some of the land that will get in is not what we originally intended, but they preserve farm communities. It’s insulating the farms in that community against increased density,” she says. “You’re going to have cases that don’t suit what you’re looking for or don’t accomplish the goals, but they’re not going to be in the majority.”
OK, so maybe the development
isn’t ideally situated. And perhaps it
isn’t a true market, as Thorsnes
points out, because the zones are
predetermined by planners. But
the fact remains that farmland is
being preserved.
Likewise, maybe Safe Harbor won’t

satisfy everybody, but it’s increasing
populations of the red-cockaded
woodpecker. And mitigation banks are
criticized for “enabling” development.
Yet the private banks, the successful
ones, are preserving larger sites and
providing permanent management for
endangered animals. As markets for
endangered species and open land
mature and go mainstream, they may
reveal nature’s true value.
RF

READINGS
Balsdon, Ed. “Incentives for Resource Conservation through the
Capitalization of Environmental Value: An Evaluation of Open
Space Mitigation Requirements.” Discussion Paper 03-04, Center
for Public Economics, San Diego State University, January 2003.
Boyd, James, Kathryn Caballero, and R. David Simpson. “The Law and Economics of Habitat Conservation: Lessons from an Analysis of Easement Acquisitions.” Discussion Paper 99-32, Resources for the Future, April 1999.
Dehart, Grant H., and Rob Etgen. “The Feasibility of Successful
TDR Programs for Maryland’s Eastern Shore.” The Maryland
Center for Agro-Ecology, Inc. January 2007.
Ginn, William. Investing in Nature: Case Studies of Land
Conservation in Collaboration with Business. Washington, D.C.:
Island Press, 2005.


List, John A., Michael Margolis, and Daniel E. Osgood. “Is the
Endangered Species Act Endangering Species?” NBER Working
Paper no. 12777, December 2006.
McConnell, Virginia, Elizabeth Kopits, and Margaret Walls. “Using
Markets for Land Preservation: Results of a TDR Program.”
Journal of Environmental Planning and Management, September
2006, vol. 49, no. 5, pp. 631-651.
Shogren, Jason F., et al. “Why Economics Matters for Endangered Species Protection.” Conservation Biology, December 1999, vol. 13, no. 6, pp. 1257-1261.
Thorsnes, Paul, and Gerald P. W. Simons. “Letting the Market
Preserve Land: The Case for a Market-Driven Transfer of
Development Rights Program.” Contemporary Economic Policy,
April 1999, vol. 17, no. 2, pp. 256-266.

GRINDING GEARS • continued from page 27

year, while Ford lost $2,015 per vehicle
and GM lost $335 per vehicle.
Some industry observers argue that
if it weren’t for the cost of the jobs
bank, mass layoffs would have been
more common in the automotive industry. Sean McAlinden, chief economist
at the Center for Automotive Research
in Ann Arbor, Mich., agreed with this
argument in a June 2004 report.
Had the Big Three held firm on prices during the onset of the 2001 recession, McAlinden noted, they would have laid off tens of thousands of workers, who would have collected
supplementary unemployment benefits and, eventually, full pay and
benefits in the jobs bank. “The
companies, already facing pension
shortfalls, and remembering the disastrous cash drain of such layoffs in
1992 for GM, cut prices instead of

production and employment.”
The resulting overcapacity has been partially
masked by strong sales of high-margin
trucks and sport-utility vehicles. But
continued poor sales of the Big
Three’s cars are forcing automakers to
make more drastic changes. With or
without the jobs bank, it’s a more
challenging environment for both
American automakers and the people
they employ.
RF

READINGS
Fahey, Jonathan, and Joann Muller. “Idling.” Forbes, Oct. 17, 2005,
pp. 110-112.
Farber, Henry S. “Job Loss in the United States, 1981-2001.”
Princeton University Industrial Relations Section Working Paper
no. 471, January 2003.
Harbour, Jim, and Laurie Harbour-Felax. “Automotive Competitive
Challenges: Going Beyond Lean.” Harbour-Felax Group,
October 2006.

Katz, Harry C. “The Restructuring of Industrial Relations in the
United States.” Cornell University Center for Advanced Human
Resource Studies Working Paper no. 91-20, May 1991.
___. “Industrial Relations in the U.S. Automobile Industry: An
Illustration of Increased Decentralization and Diversity.”
Economic and Labour Relations Review, December 1997, vol. 8, no. 2,
pp. 192-220.
McAlinden, Sean P. “The Meaning of the 2003 UAW-Automotive
Pattern Agreement.” Center for Automotive Research, June 2004.

Spring 2007 • Region Focus

31

RF SPRING 07_1-51_Rev7.9

7/12/07

3:06 PM

Page 32

A Question of MONEY
Does money still matter for monetary policy?
BY VANESSA SUMO

In its early days, the Bank of England used a weather
vane to predict when commercial ships would arrive at
the port of London. Variable winds on the river Thames
made docking times uncertain, causing sudden shifts in the
demand for money and credit. But with the weather vane’s
guidance, central bankers could overcome this uncertainty
and wield more prudent control over the quantity of money
in the economy.
Such a gauge would be useful today. Central bankers find
the link between money and inflation to be fickle in practice.
This is partly because the definition of money has been evolving along with the financial landscape. Ideally, central bankers
would like to predict shifts in the demand for money that
people use to purchase goods and services. But financial
deregulation and innovation have allowed banks, for instance,
to create new types of products. This has blurred the line
between what is “transactions” money and what is not. And
so it has become harder to pin down money demand.
Central bankers all over the world, including the Federal
Reserve, often cite this as the reason they have paid increasingly little attention to money when conducting
monetary policy. Indeed, The Economist noted in 2006 that
former Fed Chairman Alan Greenspan did not mention the
word “money” in 10 speeches. His successor, Ben Bernanke,
however, did talk about money in a speech at a European
Central Bank conference late last year, but only to confirm
that “monetary and credit aggregates have not played a central
role in the formulation of U.S. monetary policy” since the
monetarist experiment that brought down inflation under
Paul Volcker more than a quarter of a century ago.


Is money still relevant for monetary policy? It might
seem odd that money does not occupy a more prominent
place. After all, Milton Friedman’s proposition that
“inflation is always and everywhere a monetary phenomenon” is widely accepted as a general principle, with some
qualifications — which would suggest that the key to stabilizing inflation is to control the growth of money. As Mervyn
King, the Governor of the Bank of England, asks, “How do
we explain the apparent contradiction that the acceptance
of the idea that inflation is a monetary phenomenon has
been accompanied by the lack of any reference to money in
the conduct of monetary policy?” The paradox is that as
central banks recognize price stability as their main objective, they seem to be giving a smaller role to money. This is
nowhere more apparent than in central banks’ principal
choice of a policy instrument.

Instruments and Rules
Achieving price and output stability is the main objective of
central banks, but these are not the variables that they can
directly control. Rather, central banks operate through
targets and instruments to reach their ultimate objectives.
Insofar as there is a reliable relationship between instruments and goals, central banks can tweak their instruments
to achieve their desired impact on the economy.
Most central banks prefer to use interest rates as an
instrument through which they can carry out monetary
policy. For instance, the Fed’s Federal Open Market
Committee (FOMC) sets the fed funds rate, its policy
instrument of choice. Depending on its outlook for the economy, the FOMC meets eight times a year and decides whether to raise, lower, or keep the fed funds rate constant.
But the interest rate is not the only
instrument that central banks can
use. They can also opt to control the
quantity of money circulating in
the economy. In other words, central
banks can either choose to set the
quantity of money or set the price of
holding money; that is, the interest
rate. Thus, fixing the interest rate
means that the amount of money
would have to adjust in response to the
level of the interest rate. Likewise, if
central banks decide to control money,
then they would have to let the
interest rate fluctuate as it will. Under
certain conditions, these may not be
equivalent strategies.
The classic 1970 analysis by
William Poole, now president of the
St. Louis Fed, shows us that if the central bank has to choose a policy
instrument before observing the
disturbances to the goods and money
markets, then setting either the interest rate or the stock of money can
lead to smaller output fluctuations —
variability in gross domestic product
— depending on the type of shocks
that are present. If there is an aggregate demand shock (say, a huge
increase in government spending),
then fixing the interest rate will lead to
larger output fluctuations than controlling money supply and letting the
interest rate rise, which automatically
stabilizes output. On the other hand,
if the demand for money is unruly (as
explained earlier), then fixing the
stock of money can lead to more
volatile interest rates and larger output
fluctuations, such that keeping the
interest rate constant is more desirable.
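Poole’s comparison can be illustrated with a small simulation. This is only a stylized sketch: the linear IS and LM equations, the unit parameter values, and the shock sizes below are illustrative assumptions, not Poole’s original calibration.

```python
import random

random.seed(0)

# Stylized IS-LM economy (illustrative unit parameters):
#   IS (goods market):  y = -a*i + u      u = aggregate demand shock
#   LM (money market):  m =  y - b*i + v  v = money-demand shock
a, b = 1.0, 1.0
N = 100_000


def output_variance(sigma_u, sigma_v):
    """Return output variance under an interest-rate peg and a money peg."""
    y_rate, y_money = [], []
    for _ in range(N):
        u = random.gauss(0.0, sigma_u)
        v = random.gauss(0.0, sigma_v)
        # Interest-rate instrument: peg i = 0, so the IS curve gives y = u.
        y_rate.append(u)
        # Money instrument: peg m = 0; solving IS and LM together gives
        # y = (b*u - a*v) / (a + b).
        y_money.append((b * u - a * v) / (a + b))
    var = lambda xs: sum(x * x for x in xs) / len(xs)
    return var(y_rate), var(y_money)


# Only goods-market (aggregate demand) shocks: under the money peg the
# interest rate rises with demand and automatically dampens output swings.
print(output_variance(1.0, 0.0))

# Only money-demand shocks: the interest-rate peg insulates output entirely,
# while the money peg transmits the shock to output.
print(output_variance(0.0, 1.0))
```

Under demand shocks alone, the money rule delivers the smaller output variance; under money-demand shocks alone, the interest-rate peg wins — matching Poole’s conclusion that the better instrument depends on which disturbance dominates.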
Along with these debates on the
choice of a monetary policy instrument were discussions on whether
monetary policy would be better
served by following a rule (to set the
money supply or the interest rate) or
to allow the central bank’s discretion.
Policy rules have the advantage of
being easily understood by the public,
so that they can hold the central
bank accountable for its decisions.

Friedman, for instance, advocated
adhering to a “money rule”: a proposal
to increase the stock of money by a
fixed percentage each month (corresponding to the growth in long-run
output). His preference for a rule-based policy was founded on the
observation that there is a lengthy
interval from measuring current
economic conditions to implementing
policy to affecting the public’s borrowing and spending decisions. By the
time the policy takes effect, the discretionary response may no longer be
appropriate. In this way, Friedman
thought that simply sticking to a
money rule rather than exercising
discretion could do less harm to
the economy.
But the policy rules that are discussed these days are “activist” rules
rather than constant rate of growth
rules like Friedman’s. Activist rules can
be expressed in terms of a formula,
which describes how the value of a policy instrument adjusts or “feeds back”
in response to economic conditions.
Central banks can use both rules
and discretion to varying degrees, and
even if some would appear to lean
more toward discretion (including,
arguably, the Fed), rules play a prominent role not only as a guide to
discretion but also as a benchmark for
outsiders to use when thinking about
the central bank’s monetary policy
stance. And because monetary policy
has evolved over the years such that
the interest rate has become the
preferred instrument, the choice of
rules has likewise tilted in favor of
interest rate rules and away from
money growth rules.

Measure for Measure
In 1993, John Taylor of Stanford
University formulated a policy rule
that closely approximated the Fed’s
policy actions over several years. The
Taylor rule has become very well-known, in part because it specifies a
short-term interest rate and so makes
it easy to compare with the Fed’s
actual policy stance. The Taylor rule
prescribes a nominal interest rate, in
this case the fed funds rate, which

reflects movements around a long-run
real interest rate depending on how
much actual inflation and output
deviate from their respective targets.
While the central bank chooses a
target rate for inflation, the output
“target” is determined by economic
fundamentals, such as productivity
and population growth. This “feedback rule” is designed in such a way
that if actual inflation and output are
above their desired levels, the fed
funds rate should be raised in order to
dampen these inflationary forces. The
bigger the gap between actual inflation and output and their targets, the
higher the fed funds rate and
the tighter monetary policy should be.
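Written out with the coefficients from Taylor’s original 1993 paper (a weight of 0.5 on each gap and a 2 percent long-run real rate), the prescription is easy to compute. The inflation and output readings below are hypothetical:

```python
def taylor_rule(inflation, output_gap, real_rate=2.0, inflation_target=2.0):
    """Taylor (1993): i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap),
    all measured in percent."""
    return (real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)


# Hypothetical readings: inflation running at 3 percent and output
# 1 percent above potential call for a 6 percent fed funds rate.
print(taylor_rule(inflation=3.0, output_gap=1.0))  # 6.0
```

When both gaps are zero, the rule returns the “neutral” setting of the real rate plus inflation — 4 percent at the 2 percent target — and each percentage point of excess inflation or output raises the prescribed rate by half a point.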
Several years before the Taylor rule,
Bennett McCallum, an economist at
Carnegie Mellon University and a
visiting scholar at the Richmond Fed,
devised a rule that is often used alongside Taylor’s to track the Fed’s
monetary policy stance. Unlike the
Taylor rule, which sets the nominal
interest rate, the McCallum rule
specifies a growth rate for base money,
which is typically the amount of
currency plus commercial bank
reserves kept at the central bank. By
prescribing how much the base money
should rise or fall, a central bank is able
to influence the stock of money in
the financial system.
Base money, according to this
“monetarist” rule, should expand as
fast as the desired growth in nominal
income — similar to Friedman’s policy
prescription — and adjust for deviations of actual income growth from
this preferred rate. For instance, if
nominal income is moving sluggishly,
then the base money should be
allowed to grow faster in order to
stimulate the economy. Thus, while
the Taylor rule feeds back from deviations of output and inflation from their
targets, the McCallum rule feeds back
from movements away from the
desired path of nominal income.
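A simplified sketch of the rule (in percentage points) follows. The 0.5 feedback coefficient matches McCallum’s usual choice, but the velocity term is collapsed into a single argument — his published rule subtracts average base-velocity growth over the previous four years — and the sample numbers are hypothetical:

```python
def mccallum_rule(target_income_growth, actual_income_growth,
                  velocity_growth=0.0, feedback=0.5):
    """Base-money growth = target nominal income growth, minus trend growth
    in base velocity, plus a feedback term that speeds up base growth when
    actual nominal income growth falls short of the target."""
    return (target_income_growth - velocity_growth
            + feedback * (target_income_growth - actual_income_growth))


# Hypothetical readings: with a 5 percent nominal income target and sluggish
# 3 percent actual growth, the rule calls for faster base growth.
print(mccallum_rule(target_income_growth=5.0, actual_income_growth=3.0))  # 6.0
```

When actual growth matches the target and velocity is flat, base money simply grows at the target rate, echoing Friedman’s constant-growth prescription; the feedback term is what makes the rule “activist.”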
Because the McCallum rule is a
“money rule,” the conventional
wisdom has been that rules like
Taylor’s are better able to capture the
practice of well-run interest-rate-setting central banks — such as at the Fed. This is not necessarily accurate.

[Figure: Monetary Policy Rules. If monetary policy were set using the Taylor rule, then it would have prescribed a higher interest rate in recent years. The McCallum rule, on the other hand, would have recommended a looser policy stance. Two panels, 1997–2006: the federal funds rate implied by the Taylor rule and monetary base growth implied by the McCallum rule, each plotted against actual values for target inflation rates of 0 to 4 percent. SOURCE: St. Louis Fed Monetary Trends, April 2007]
Using Greenspan’s tenure at the
helm of the Fed as a benchmark for a
successful period of monetary policy,
McCallum has compared how close
the policy prescriptions of the two
rules are to the actual fed funds rate
set by the Fed and the actual base
money growth. In a note for the
Shadow Open Market Committee, a
group of economists who meet regularly to evaluate the policy choices of
the Fed, McCallum finds that “actual
policy during the Greenspan era has
not differed from that recommended
by the McCallum rule by a significantly
greater extent than is the case for the
Taylor rule.”
If the same exercise is used to gauge
the tightness or looseness of a central
bank’s monetary policy, then the two
rules have been giving different perspectives over the last four years. In
the St. Louis Fed’s Monetary Trends
report, which regularly tracks the settings prescribed by the two rules (see
above), the Taylor rule has for several
years suggested that monetary
policy has been too loose while the
McCallum rule has suggested that
monetary policy has been too tight,
assuming that the Fed has set an
implicit 2 percent inflation target.
However, the McCallum base money
growth rate has been closer to the
actual values. (Some observers would
say that the fall in global real interest
rates means that the real interest rate
indicated in the Taylor rule — 2 percent — should be adjusted downward,
which would allow it to come closer to
the actual fed funds rate.)
Another way to compare the two
rules is to look at points in history
when central banks may have followed
a policy that was either too loose or
too tight. For instance, both rules correctly suggest that monetary policy
was much too loose in the United

States during the great inflation of the
1970s. However, there were episodes
when the base money rule was
dropping the “right” hints while
the interest rate rule was not. “I think
money growth rules give better signals
as to what needs to be done,”
says McCallum in an interview.
For instance, monetary policy at the
Bank of Japan may have been too
loose during the asset price bubble of
the late 1980s, which the McCallum
rule correctly calls. The Taylor rule, on
the other hand, indicated that policy
was too tight or about right during
that period. But McCallum recognizes that the evidence in favor
of money growth rules is not completely decisive. “It’s not an easy
argument to make and not all good
monetary economists agree with it,”
says McCallum.
Nevertheless, McCallum’s base
money growth rule seems to perform
well, and can give “ideal” policy

RF SPRING 07_1-51_Rev7.9

7/12/07

3:06 PM

prescriptions as well as or perhaps
even sometimes better than Taylor’s
interest rate rule. Still, why do central
banks favor interest rates over money
as a policy instrument? The instability of money demand and the difficulty of defining money precisely is one reason, but there are others. Central
banks may prefer to smooth interest
rates, thus making it quite natural for
them to use the interest rate as a policy
instrument. Commercial banks, for
instance, dislike interest rate variability
because it can wreak havoc on the
value of their assets if interest rates
move sharply up and down. “Central
bankers spend a lot of time in the
company of bankers, and they want
to keep the financial market happy,”
says McCallum.
Central banks may also have a
better understanding than before of
how to adjust interest rates in a timely
way. “We have a much better appreciation of what disciplined discretionary
monetary policy is today,” says
Laurence Meyer, a former Federal
Reserve Board governor, who is now
an economist at the consulting firm
Macroeconomic Advisers.
In the 1970s, monetarists argued
that the Fed needed to move away
from its practice of setting the fed
funds rate because it was doing a bad
job of judging where the interest rate
should be in order to bring down
inflation. Hence, it would be better to
target the supply of money. But
because of the important lessons
learned from that period of high
inflation, central bankers have much
more confidence today in their
ability to conduct monetary policy by
choosing the right level of interest
rates. And to the extent that central
banks prefer to smooth interest rates,
then they don’t need to abandon
their instrument of choice. “I think
they’ve reached a better compromise
between the desirability of avoiding
interest rate volatility and the
desirability of making sure interest
rates move up and down when they
need to for economic stability,” says
Edward Nelson, an economist at the
St. Louis Fed.

Follow the Money
The Fed under Paul Volcker was
successful in orchestrating an end to
the high and erratic inflation of
the 1970s, largely because it paid
attention to the band of monetarists
who said that the central bank could
do something about it. It might
seem odd then that money no longer
plays a leading role in monetary
policy today, as if the Fed were
risking the possibility of a return to
runaway inflation.
Does abandoning an emphasis on
money mean that the Fed has forgotten the lessons learned from that
period? Michael Woodford, an
economist at Columbia University,
thinks otherwise. “I would argue that
the most important of these lessons,
and the ones that are of continuing relevance to the conduct of policy today,
are not dependent on the thesis of the
importance of monetary aggregates,”
Woodford said in his remarks at a
European Central Bank conference
late last year.
Monetarists made it clear that central banks could contain inflation, as
opposed to the prevailing view at that
time that inflation was largely a product of too much power being wielded
by labor unions demanding higher
wages and monopolist producers
demanding higher prices. They also
emphasized the importance of a
commitment to a policy rule that
would foster credibility and anchor
inflation expectations. On both counts,
however, Woodford emphasizes that
adhering to a money growth target is
not the only way to bring down inflation and not the only kind of
commitment that a central bank can
make. Central banks that have set an
explicit inflation target, for instance,
bind themselves to a specific numerical target and justify this course of
action to the public. And though the
Fed has no such explicit targets, it
often speaks persuasively about its
commitment to price stability, and its
performance since the early 1980s has
made these statements very credible.
But perhaps the biggest blow to the
case for money is that it has been eased

out of today’s consensus model for
understanding how monetary policy
affects the economy. It used to be that
economists and central bankers
thought of money as the instrument
that influences aggregate spending
decisions and subsequently inflation,
the interest rate being determined by
the demand and supply of money in the
money market. In the new consensus
model, however, an interest rate rule of
the type proposed by Taylor has
replaced the money market. In other
words, the amount of money in
the system is now determined by the
interest rate and not the other way
around, rendering money essentially
superfluous in this model.
Money may not be everything, but
has it become completely dispensable?
Or is it still worthwhile to track the
growth of money, even in some
supporting role? “I’ve come to believe
that the right thing to do is to think of
monetary aggregates as an indicator,
but not necessarily use it as the
[instrument] that the Fed sets from
week to week,” says McCallum.
While Laurence Meyer was at the
Board of Governors, he made it a
practice to meet with money specialists
on the staff before every FOMC meeting. He thought that looking at the
behavior of money was a worthwhile
cross-check for any signals about future
macroeconomic developments which
other data may not have picked up.
However, he often walked away empty-handed. “In five-and-a-half years, I
never got anything out of that meeting
that would have altered my views about
monetary policy,” says Meyer.
Still, McCallum thinks that it is
unwise to ignore money. “It’s got
better information than the interest
rates for central banks,” he says.
Nelson likewise agrees that money
could be a good indicator. For
instance, if money growth is rising but
the interest rate is kept unchanged
(because the central bank sets
the interest rate), then the central
bank has to print more money just
to keep the interest rate constant.
But why has the central bank
needed to print more money?


Perhaps expectations of future income or future inflation have increased, so people are building up their money holdings. “That’s a change
in the economy … that you wouldn’t
see just by looking at interest
rates,” says the St. Louis Fed’s
Nelson. More generally, the amount
of money may be influenced by the
prices of an entire range of assets
and the interest rate is just
one price, so money may reveal

additional information about people’s
spending decisions.
For McCallum and other monetarists, money may still make the
world go around, but they’ve come to
hold a more pragmatic view. “Central
banks use interest rate rules, so we
want to converse with them. John
Taylor’s paper was very helpful … It’s
healthy for academics to talk to central banks,” says McCallum. Even
some of McCallum’s own work reflects

the current state of affairs.
With respect to monetary policy
practice, Fed Chairman Bernanke
believes that it may be “unwise” to rely
heavily on money as a guide to
policy in the United States. But he also
thinks that “money growth may
contain important information
about future economic developments,” enough so that the Fed will
probably continue to keep an eye on
money growth.
RF

READINGS
Bernanke, Ben. “Monetary Aggregates and Monetary Policy at the
Federal Reserve: A Historical Perspective.” Remarks at the Fourth
European Central Bank Central Banking Conference, Frankfurt,
Germany, November 2006.
King, Mervyn. “No Money, No Inflation — The Role of Money in
the Economy.” Bank of England Quarterly Bulletin, Summer 2002,
pp. 162-177.
McCallum, Bennett T. “Robustness Properties of a Rule for
Monetary Policy.” Carnegie-Rochester Conference Series on Public Policy,
1988, vol. 29, pp. 173-204.
___. “Alternative Monetary Policy Rules: A Comparison with
Historical Settings for the United States, the United Kingdom,
and Japan.” Federal Reserve Bank of Richmond Economic Quarterly,
Winter 2000, vol. 86, no. 1, pp. 49-79.
___. “Policy Retrospective on the Greenspan Era.” Shadow Open
Market Committee position paper. May 8, 2006.


Meyer, Laurence H. “Money and Monetary Policy.” Monetary
Policy Insights, Macroeconomic Advisers, November 17, 2006.
Nelson, Edward. “The Future of Monetary Aggregates in
Monetary Policy Analysis.” Journal of Monetary Economics, July
2003, vol. 50, no. 5, pp. 1029-1059.
Poole, William. “Optimal Choice of Monetary Policy Instruments
in a Simple Stochastic Macro Model.” The Quarterly Journal of
Economics, May 1970, vol. 84, no. 2, pp. 197-216.
Taylor, John B. “Discretion versus Policy Rules in Practice.”
Carnegie-Rochester Conference Series on Public Policy, 1993, vol. 39,
pp. 195-214.
“The Issing Link.” The Economist, March 23, 2006.
Woodford, Michael. “How Important is Money in the Conduct of
Monetary Policy?” Remarks at the Fourth European Central Bank
Banking Conference, Frankfurt, Germany, November 2006.


DECODING MESSAGES From the Yield Curve
What does the recent inversion imply for banks in the Fifth District and beyond?
BY ELIANA BALLA, ROBERT CARPENTER, AND MARK VAUGHAN

Bank supervisors get paid to worry — even when there may be little to worry about. Historically, when the yield curve inverts — that is, when short-term interest rates rise above long-term interest rates
— banks have gotten into trouble. Short rates have exceeded
long rates consistently since July 2006, so supervisors
are naturally growing restless. Are banks in the Fifth
District and across the country potentially headed for
problems?
The yield curve plots the return on a given type of debt
instrument — if held to maturity — across a range of maturities. The curve is typically drawn using Treasury bills,
notes, and bonds because U.S. government debt is fairly
homogeneous, enjoying virtually no default risk and active
secondary markets.
The slope of the yield curve, also called the term spread,
is often measured by the difference between the rate on
10-year Treasury notes and three-month Treasury bills.
The term spread reflects the premium demanded by
investors for bearing greater interest-rate risk on long-term
Treasuries, as well as expectations about the future path of
interest rates on short-term Treasuries.
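The spread itself is simple arithmetic. With hypothetical yields — say 5.00 percent on three-month bills and 4.70 percent on 10-year notes, roughly the inverted pattern described here — it works out as:

```python
# Hypothetical Treasury yields, in percent:
ten_year_note = 4.70
three_month_bill = 5.00

# The term spread is the 10-year yield minus the 3-month yield.
term_spread = ten_year_note - three_month_bill

print(round(term_spread, 2))  # -0.3: short rates exceed long rates
print(term_spread < 0)        # True: the yield curve is inverted
```

A positive spread is the normal, upward-sloping case; zero is a flat curve; a negative number is the inversion that worries supervisors.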
The yield curve almost always slopes upward — put
another way, long-term interest rates are generally higher
than short-term rates. But the curve can take other shapes.
It can be flat, for example, indicating that short- and long-term Treasuries offer the same rates. It can also slope
downward or “invert,” indicating that short rates exceed
long rates.
An inverted yield curve is worrisome to bank supervisors
because it has typically put pressure on the margin between
interest earnings and funding costs. Such pressure can, in

turn, tempt bankers to take more risk. Both effects have the
potential to weaken bank conditions.
An inverted yield curve worries supervisors for an
additional reason. Inversion has typically signaled a coming
recession, and recessions undermine the ability of bank
customers to repay their loans.
The chain from inversion to recession to loan losses
shows up clearly in recent data. In July 2000, the yield curve
inverted, and a recession followed, starting in March 2001.
Between year-end 2000 and year-end 2002 the median
charge-off rate for U.S. banks — net loan losses divided by
total loans — grew by 50 percent before peaking at 0.16 percent in December 2002.
The yield curve was relatively flat throughout most of
2006 before inverting last summer. Only twice in the last 20
years has the Treasury yield curve departed from its usual
shape for so long. More ominously, the curve has inverted
prior to every recession since 1960, and only once in the past
70 years has a recession not followed a period of inversion
lasting more than a month.

Term Spreads and NIMs
The term spread is a key driver of net interest margin
(NIM), which, in turn, is an important source of bank
profits. Formally, NIM is measured as the difference
between the interest income from loans and securities and
the interest expense on deposits, divided by interest-earning
assets. Loans tend to have longer maturities than deposits,
so bankers make money when the long-term rates
they charge loan customers exceed the short-term rates they
pay depositors. As the yield curve begins to invert, however,
margins narrow and profitability suffers. The positive relationship can be seen in Figure 1, which traces the term spread and median NIM for U.S. and Fifth District banks from 1984 through March 2007.

[Figure 1: Treasury Yield Curve and Net Interest Margins (NIMs), First Quarter 1984 – First Quarter 2007. The term spread plotted against median NIM for U.S. banks and Fifth District banks. SOURCE: Federal Reserve Economic Data (FRED), Federal Reserve Bank of St. Louis]

A persistently flat or inverted yield curve can also lure bankers into assuming more risk, in the hope that potentially higher returns will offset declining NIM. This phenomenon, called “chasing yield,” can take a number of forms. A bank could, for example, grow its asset portfolio for a given level of equity capital (a so-called leveraged-growth strategy) or purchase securities with greater levels of interest-rate risk. Traditionally, banks have opted to take more credit risk — by weakening lending standards, offering new lending products, or lending in unfamiliar territory. On average, chasing yield has resulted in greater loan losses, with negative consequences for bank conditions.

Whither NIMs?
The current inversion has yet to produce a marked decline in net interest margins because the traditional relationship has weakened since 2001. A close look at Figure 1 shows this weakening. Between 2001 and 2004, the term spread widened to a 13-year high while margins for U.S. and Fifth District banks drifted downward. Then, in early 2004, median NIM began to climb while the spread narrowed dramatically.
Analysis of bank-level data for Maryland, Washington, D.C., West Virginia, Virginia, North Carolina, and South Carolina suggests the factors behind nationwide trends are also at work in the Fifth District. Across the country, banks have insulated margins through more careful asset-liability management and greater reliance on long-term fixed-rate Federal Home Loan Bank advances. Profits also have been shielded from narrowing margins through greater reliance on fees from services, most prominently for larger banks. This trend can be seen in Figure 2. Between year-end 1986 and year-end 2006, median fee dependence (i.e., the portion of bank income derived from fees) rose from 6.45 percent to 9.32 percent in the nation and from 5.23 percent to 9.38 percent in the Fifth District.

[Figure 2: Fee Income as a Proportion of Total Income, First Quarter 1984 – Fourth Quarter 2006. Median dependence on fee income for U.S. banks and Fifth District banks, with recessions shaded. SOURCES: Federal Reserve Economic Data (FRED), Federal Reserve Bank of St. Louis; and the National Bureau of Economic Research]

Although flat-to-negative term spreads have yet to squeeze margins, they still could. Three times since 1984 — in 1989, 2000, and 2006 — the yield curve flattened or inverted and Fifth District NIMs declined sharply, but with a lag. This experience suggests bank margins and profits could still be at risk should the unusual slope of the yield curve persist.

Recession Radar
Assessing the potential for loan losses implied by a flat or inverted yield curve requires a look at the role of monetary policy. The Federal Reserve stabilizes prices by targeting a key short-term interest rate — the federal funds rate. Suppose the Fed raises the federal funds rate to fight inflation. The increase will ripple across all interest rates in financial markets, but the rise in short rates will be larger than the rise in long rates. Long rates will not rise as much because they reflect the average of short rates expected to prevail over time, and the Fed historically has cut the federal funds rate with the passing of the inflation threat. If the rise in the federal funds rate is large, then short rates could move above long rates. Such a hike is also likely to slow the economy.

RF SPRING 07_1-51_Rev7.9

7/12/07

3:06 PM

Page 39

In formal statistical studies, economists Arturo Estrella of the New York Fed and Frederic Mishkin, now a Federal Reserve Board governor, and more recently, Estrella and Mary Trubin, also of the New York Fed, documented the link between the term spread and recession probability. Jonathan Wright, an economist at the Federal Reserve Board, has also found a connection, though his work suggests that the forecasting ability of the term spread improves dramatically when the federal funds rate is taken into account to capture information about current monetary policy. On average, since 1990, the term spread has been 1.72 percent, and the federal funds rate has been 4.37 percent. Using Wright's model, these numbers imply an average probability of recession of 2.5 percent. Since the yield curve inverted in August 2006, the spread has averaged -0.21 percent, and the federal funds rate has averaged 5.25 percent, implying a recession probability of 42.8 percent, though most analysts' forecasts put the chances of recession lower than that.

Banks Are Solid

The recent behavior of the term spread does not necessarily imply that bank supervisors should leap into action to head off loan-quality problems. An inverted yield curve has preceded every recession since 1960, as seen in Figure 3, but a recession has not followed every inversion. In September 1966, for example, the term spread dipped below zero and remained negative into January 1967, yet no recession occurred. In addition, the charge-off rate tends to lag the business cycle, so supervisors should have some time to prepare if economic conditions weaken. Finally, loan quality in the Fifth District and the nation is strong by historical standards. At the end of March 2007, the median charge-off rate was 0.02 percent for Fifth District banks and 0.01 percent for U.S. banks as a whole. These figures compare with the 20-year high of 0.77 percent for the entire banking sector in December 1986.

Supervisors have another factor on their side: the robust levels of equity capital in the banking sector. Equity capital represents the owner's stake in the bank — the more capital, the less temptation to chase yield. As of March 31, 2007, the median leverage ratio — equity capital divided by assets — was 10.16 percent in the Fifth District and 10.00 percent across the nation. Viewed another way, no Fifth District banks and only seven banks nationwide (of more than 7,300) were weakly capitalized, where "weak" corresponds to a simple leverage ratio under 5 percent.

Bottom Line

Taken together, the evidence suggests that the recent inversion of the yield curve may not pose a threat to bank safety and soundness. Moreover, in early June the curve "righted" itself — that is, the term spread headed above zero — for the first time in nearly a year. Still, past experience combined with the lengthy duration of the yield-curve inversion suggests bank supervisors should remain vigilant. RF

[Figure 3: Yield-Curve Inversions and Recessions, June 1953 – March 2007. Difference in 10-year and 3-month yield (%), with recessions shaded. SOURCES: Federal Reserve Economic Data (FRED), Federal Reserve Bank of St. Louis; and the National Bureau of Economic Research]

Eliana Balla, Robert Carpenter, and Mark Vaughan are economists in the Banking Supervision and Regulation Department of the Federal Reserve Bank of Richmond.
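Wright's model takes a probit form: the probability of recession is a standard normal CDF applied to a linear combination of the term spread and the federal funds rate. The sketch below uses illustrative coefficients chosen only to roughly reproduce the two scenarios quoted above — they are not Wright's published estimates.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def recession_probability(term_spread, fed_funds_rate,
                          a=-2.17, b=-0.76, c=0.35):
    """Probit-style recession probability: P = Phi(a + b*spread + c*ffr).
    Coefficients are illustrative values picked to roughly match the two
    scenarios quoted in the article -- not Wright's actual estimates."""
    return norm_cdf(a + b * term_spread + c * fed_funds_rate)

# Post-1990 averages: spread 1.72 percent, funds rate 4.37 percent
p_average = recession_probability(1.72, 4.37)    # roughly 2.5 percent

# Since the August 2006 inversion: spread -0.21 percent, funds rate 5.25 percent
p_inverted = recession_probability(-0.21, 5.25)  # roughly 43 percent
```

Note how a higher funds rate with a flat or inverted spread pushes the probability up sharply, which is the sense in which conditioning on the funds rate "improves" the spread's forecasting ability.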

READINGS
Estrella, Arturo, and Frederic S. Mishkin. “The Yield Curve as a
Predictor of U.S. Recessions.” Federal Reserve Bank of New York
Current Issues in Economics and Finance, June 1996, vol. 2, no. 7.

Federal Deposit Insurance Corporation. “What the Yield Curve
Does (and Doesn’t) Tell Us.” FYI: An Update on Emerging Issues
in Banking, February 22, 2006.

Estrella, Arturo, and Mary R. Trubin. “The Yield Curve as a
Leading Indicator: Some Practical Issues.” Federal Reserve Bank of
New York Current Issues in Economics and Finance, July/August
2006, vol. 12, no. 5.

Wright, Jonathan H. “The Yield Curve and Predicting Recessions.”
Board of Governors of the Federal Reserve System, Finance
and Economics Discussion Series Working Paper 2006-07,
February 2006.

Spring 2007 • Region Focus

39


INTERVIEW
W. Kip Viscusi
Every day, people expose themselves to risk. Whether
it’s driving to work, crossing the street, or climbing a
ladder to change a lightbulb, we engage in activities that
have a small probability of injuring or even killing us.
Some people — miners, construction workers, and firefighters, for instance — take jobs that are considerably
more dangerous than the typical profession. Kip Viscusi
has spent much of his career examining how individuals
evaluate risk exposure and the public policies aimed at
improving worker and consumer safety.

Viscusi has been a leader in the use of cost-benefit analysis to evaluate a wide range of regulations, having served as a consultant to the Environmental Protection Agency, the Occupational Safety and Health Administration, and the Federal Aviation
Administration. He has also looked at unanticipated
behavioral changes prompted by government mandates,
which sometimes render those mandates ineffective or
even counterproductive. In addition, he has carefully
followed the multistate tobacco settlement, arguing
that the agreement has done much to enrich plaintiffs’
attorneys but little to discourage youth smoking, one
of its ostensible goals.
Trained as an economist, Viscusi has taught in both the
economics departments and the law schools of several
leading universities, including Northwestern, Duke,
Harvard, and now Vanderbilt, which has recently
launched a Ph.D. program in law and economics. Viscusi
has authored or co-authored more than 20 books, and
his papers have appeared in leading journals such as
the American Economic Review, the Quarterly Journal
of Economics, and the Journal of Legal Studies. He is
also the founding editor of the Journal of Risk and
Uncertainty. Aaron Steelman interviewed Viscusi at
his office at Vanderbilt on March 29, 2007.

RF: It seems that the public and its political representatives often do not fully appreciate the trade-offs and, in
some cases, the unintended consequences associated
with measures aimed at improving consumer safety.
Could you give a few examples? And why do you think
such regulations are viewed so positively when economists tend to be less sanguine about their virtues?
Viscusi: Congress tends to pass legislation that is supposed to
rid the world of risk. “Let’s get rid of pollution.” “Let’s make
the workplace safe.” That certainly sounds great in speeches.
But very rarely do you see legislative mandates that permit a
balancing of their costs and benefits. The U.S. Department of
Transportation is arguably the main exception.
This doesn’t mean the public doesn’t care about the costs.
In fact, when people are confronted with those costs directly,
they are often opposed. But with most regulations, the costs
are not explicit. There are no price tags attached to them.
Also, the costs are often borne by different parties.
So, for instance, in the case of Superfund cleanups of hazardous wastes, the people who benefit from the cleanups are
not paying the costs directly and thus demand the most
stringent standards possible. The result is that the median
cost per cancer case averted is about $7 billion. It’s off the
charts because you are using the responsible parties’ money
to clean up the site. In contrast, if you look at the amount of
money people are willing to pay for houses that are not
exposed to hazardous waste risks, you don’t observe that
kind of large trade-off at all. It’s more like $5 million rather
than $7 billion. Similarly, the premium that workers require
to work in relatively dangerous jobs is a lot less than what
government agencies spend on regulations.
Another example of the kind of thing that legislators do
not fully consider is the effect of regulation on behavior.
In the case of safety caps, which was one of the first risk
behavior case studies I examined, there were mandatory

safety cap requirements on aspirin and other potentially dangerous products that children might try to get into. So what happened? Because parents thought the safety caps made them risk-free — in fact, they were first called "childproof" caps by the Consumer Product Safety Commission — people started leaving the bottles around in the open rather than storing them, giving kids greater access. In some cases, people left the caps off altogether because they were so hard to grapple with every time you wanted to open the bottle.

As I said earlier, the main agency that seems to care about trade-offs is the Department of Transportation. The costs of the mandates they issue are quite evident in product price. If you require more safety features on cars, that will raise the price, and consumers will see that. But the link is not as direct with environmental or worker safety regulations. Economists see the costs and benefits from the economy-wide standpoint but the consumer doesn't engage in that type of analysis, and I think that's why many regulations are not subject to strict public scrutiny.

W. Kip Viscusi
➤ Present Position
University Distinguished Professor of Law, Economics, and Management, Vanderbilt University
➤ Previous Faculty Appointments
Northwestern University (1976-1981 and 1985-1988), Duke University (1981-1985 and 1988-1996), and Harvard University (1996-2006)
➤ Education
A.B. (1971), M.P.P. (1973), A.M. (1974), Ph.D. (1976), Harvard University
➤ Selected Publications
Author or co-author of such books as Risk by Choice: Regulating Health and Safety in the Workplace (1983); Reforming Products Liability (1991); Fatal Tradeoffs: Public and Private Responsibilities for Risk (1992); Punitive Damages: How Juries Decide (2002); and Smoke-Filled Rooms: A Postmortem on the Tobacco Deal (2002)

RF: How much discipline does the market impose on companies to act in a responsible way with respect to worker safety?

Viscusi: There are three major sources of financial incentives for job safety. By far the most important is the market. Workers on dangerous jobs generally perceive that they are dangerous. This drives up their wages and gives the company an incentive to make the workplace safer. If you look at it empirically, this dwarfs everything else that is going on. The number two player is workers' compensation. The premiums for workers' compensation are now in the $30 billion a year range. Particularly if you are a large enterprise, your workers' compensation bill goes up if you have a bad accident record. We found that in the absence of workers' compensation, worker fatality rates would go up by one-third. So that's a very large effect. Then, third, we get to the Occupational Safety & Health Administration (OSHA), which issues health and safety regulation for the Department of Labor. You are looking at zero effect in the early years of the agency, and maybe something like a 1 percent to 2 percent total effect on safety in recent years. It's very small.

So, overall, the market exerts the most discipline on companies to protect workers. Every death on the job generates significant wage premiums in effect. But if a worker falls to his death because the scaffolding is poorly constructed, OSHA goes in there and imposes negligible fines compared to this. It is just not a big player. Workers' compensation is a different case. In general, I think it is a very good program. The question is who pays for workers' compensation? Even though companies pay the bill directly, that bill is passed down to workers who receive lower wages.

What we found is that workers are willing to accept a wage cut that exceeds the costs of the premiums because they value the insurance more than the actuarial costs of the insurance. It's similar to people who pay more than the expected payout for auto insurance because they value having that protection. Also, workers' compensation is a highly efficient insurance program. It has very low administrative costs, so it pays out something like 80 cents on the dollar, which is tremendous. In addition, companies get value from the program because it protects them from being sued by their employees in case of an accident. They avoid a lot of litigation as a result, and I think that is a significant benefit.

RF: How does one properly derive an estimate of the value of a statistical life for use in cost-benefit analysis? How can those estimates be used to improve public policy?

Viscusi: The main technique used by economists is to look at the money-risk trade-offs reflected in the decisions that people actually make. One context is the labor market, where workers are paid more for dangerous jobs. Another context is the product market, where people pay less money for a relatively unsafe product or more money for a relatively safe product. I have looked at both contexts. But most of my work has focused on the labor market because we have a lot of data on workers' wages, which we can match to the risks of those jobs.

Controlling for other aspects of the job, we find that workers are in fact paid more if they work in hazardous jobs. This is not a new theory. Adam Smith developed this in 1776. But it was only in the 1970s that economists started estimating the relationship. My current estimate puts the value at $7 million per statistical life. What that means is that if you face an annual risk of death on the job of one chance in 10,000, on average you get paid about $700 extra per year.

During the Carter administration, I worked in the Regulatory Analysis Review Group and was the Deputy Director of the Council on Wage and Price Stability, which was responsible for regulatory oversight at that time.

PHOTOGRAPHY: DAN LOFTIN


I suggested to OSHA that they use statistical estimates such
as these to value the benefits of OSHA regulations. They
said, “No. It would be immoral to put a dollar value on
human life. Absolutely not.” Then in 1982, OSHA proposed
a hazard communication regulation that for the first time
would have required the labeling of dangerous chemicals in
the workplace and sent it over to the Office of Management
and Budget (OMB) for review.
President Reagan had just set up this group within OMB
that looked at new regulations and required that the benefits
be greater than the costs. OMB looked at this and said this
is all very interesting but the costs are greater than the
benefits. Because OSHA had argued that putting a dollar
value on life was immoral, they instead said that when calculating the benefits of improved safety due to the regulation,
they were only going to estimate the cost of death. The cost
of death was the present value of lost earnings plus your
medical costs after you are killed on the job. Well, you can
call it the cost of death or you can call it the value of life, but
it’s still the same thing. OSHA appealed the decision to the
vice president, who was in charge of all such appeals. He said
it was a technical issue and needed to be settled by an expert.
I was asked to settle the dispute between the two agencies
over the regulatory impact analysis. It was pretty easy. What
I did was adopt every one of OMB’s assumptions with the
analysis except for one thing: I used my value of life number
instead of the cost of death number. Doing that increased
the benefits by a factor of 10. Once you used the economic
value of life numbers, the regulation had benefits greater
than the costs, and the regulation was issued. So after that,
regulatory agencies started using the numbers. Part of the
reason was that it was good economics. But a big part was
that it often made their benefits look large, and that’s what
carried the day.
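The factor-of-10 swing in that analysis follows directly from the choice of valuation. A sketch with hypothetical figures (the article reports neither the deaths averted nor the regulation's cost):

```python
# All inputs are hypothetical illustrations -- the article does not report
# the actual figures used in the hazard-communication analysis.
deaths_averted_per_year = 20            # assumed
cost_of_death = 700_000                 # PV of lost earnings + medical (assumed)
value_of_statistical_life = 7_000_000   # Viscusi's estimate, per the interview
annual_cost_of_regulation = 50_000_000  # assumed

# Benefits under each valuation of an avoided death
benefits_cod = deaths_averted_per_year * cost_of_death              # $14 million
benefits_vsl = deaths_averted_per_year * value_of_statistical_life  # $140 million

# The same regulation fails a cost-benefit test under one valuation
# and passes under the other.
passes_with_cod = benefits_cod > annual_cost_of_regulation  # fails
passes_with_vsl = benefits_vsl > annual_cost_of_regulation  # passes
```

Because the value of statistical life here is ten times the cost-of-death figure, the benefit estimate scales by exactly the same factor of 10 described in the interview.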
A related issue I have been working on recently is
whether old people’s lives are worth less than young people’s
lives. I was at a conference and suggested that the answer
was yes, because of shorter life expectancy and lower quality
of life. This generated a lot of discussion. Since that time, I
have looked more closely at how the value of a statistical life
varies with age. It turns out that it doesn’t really drop off the
table as you get older. In fact, workers at age 60 have a higher
value per statistical life than workers at age 20 because they
are richer and can do more things that they enjoy. To take
one example, I buy cars with all these additional safety features while my son drives around in a topless Jeep Wrangler.
Why would this make sense if his value per statistical life
was higher than mine?
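The wage-premium arithmetic behind a value per statistical life is straightforward: divide the compensating wage differential by the annual fatality risk it compensates for, using the figures Viscusi quotes ($700 per year for a one-in-10,000 annual risk).

```python
annual_fatality_risk = 1 / 10_000  # one chance in 10,000 per year
extra_pay_per_year = 700           # compensating wage differential, in dollars

# If 10,000 workers each bear this risk and each collects $700, the group
# collectively receives $7 million per expected (statistical) death.
value_of_statistical_life = extra_pay_per_year / annual_fatality_risk
```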
RF: What is your opinion of the compensation policy
toward the families of the victims of the terrorist attacks
of Sept. 11? That policy generated much criticism, but
did it conform to sound economic reasoning?
Viscusi: First of all, you would not want to use the value per
statistical life to compensate people because these are the
values from the standpoint of prevention — for instance,
how much we should pay to prevent the small probability of
death. Instead, we’re trying to figure out what is the optimal
insurance of the losses of the families. So this situation is
much more analogous to a wrongful death case in the courts.
If you are killed by a drunk driver, what should the compensation be? Generally, it’s the present value of lost earnings
minus some deduction for consumption of the deceased. I
think you would want to do the same thing with the victims
of Sept. 11. What they didn’t do, which the courts would do,
was to continue the compensation up the income ladder.
Instead, they capped the compensation at a particular
amount. If you really wanted to provide income replacement
and handle it the way the courts do, there would be no cap
at the top. So in some ways, what they did was institute a
program more similar to workers’ compensation, which also
has caps.
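A court-style award of the kind described — present value of lost earnings, less a deduction for the deceased's own consumption, with or without a cap — can be sketched as follows. All parameter values are hypothetical illustrations; the actual fund used its own schedules.

```python
def wrongful_death_award(annual_earnings, years_remaining,
                         discount_rate=0.03, consumption_share=0.3,
                         cap=None):
    """Present value of lost earnings, net of the deceased's own
    consumption, optionally capped as the Sept. 11 fund's awards were.
    All default parameter values are hypothetical."""
    pv_earnings = sum(annual_earnings / (1 + discount_rate) ** t
                      for t in range(1, years_remaining + 1))
    award = pv_earnings * (1 - consumption_share)
    if cap is not None:
        award = min(award, cap)
    return award

# An uncapped court-style award keeps rising with income;
# a capped award, like the fund's, does not.
uncapped = wrongful_death_award(500_000, 30)
capped = wrongful_death_award(500_000, 30, cap=3_000_000)
```

The cap is exactly the difference Viscusi highlights between the fund's approach and full income replacement in a wrongful death case.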
As to whether the families of the victims should have been
compensated at all, that is a society-wide decision. But it’s
important to consider that the people who were killed on
Sept. 11 were not engaged in any moral hazard. They did
nothing to put themselves at any known risk. So compensation
does not create any incentive effect that would cause concern.
RF: Setting aside the issue of the wars in Iraq and
Afghanistan, how would you assess the public policy
response to the threat of domestic terrorism post-Sept. 11? Are we thinking about the issues in a way that
is roughly correct and weighing the costs and
benefits in a generally rational way?
Viscusi: One problem is that economists think about these
things a lot differently than other people. We are always
thinking about trade-offs and balancing competing concerns.
In the case of responding to terrorism risks, you have two
classes of concerns that tend to be considered sacred by some
people: civil liberties on one hand and people’s safety on the
other. You have people in each camp who say they are willing
to do nothing to compromise those values. Neither one
wants to admit that these things do have a finite value and
that you might have to strike some sort of trade-off. The real
issue is what type of trade-off you want to strike, or how
much you are willing to give up to increase safety.
The reason this is tricky is we don’t have very good numbers
on what these risks are. We just don’t have a lot of data —
unlike, say, the risk of being in an automobile accident. We
know the probability of that with relative precision. But the
estimates of the probability of a terrorist attack or the number of people who are going to die in the coming year are all
over the map. So if you can’t assess the likelihood of a terrorist attack or how deadly it is going to be, it is really hard
to say how much you should spend to try to prevent it.
RF: What do you think of the proposal to establish a
prediction market to help assess the likelihood of a
terrorist attack?


Viscusi: One problem with that proposal is that people can
affect the probability of that outcome. If you can bet on it
and make a lot of money, then people may have an incentive
to launch a terrorist attack so they can collect on their bet.
Also, I’m not sure the information you would get would be
refined enough to help you devise a defense strategy. It
wouldn’t help you much to know that the probability of an
attack has gone up if you don’t know the target. So the markets would need to be very specific, such as the probability
of the Holland Tunnel being blown up in the next month.
RF: What do we know about the economic and legal effects of the 1998 settlement between state governments and tobacco companies? And how, if at all, should that settlement be modified?

Viscusi: I don't think there is much I can say that is good about it. From the standpoint of the industry, the idea was that if they made this settlement, they would be putting all the tobacco litigation behind them. Instead, what they did was hand out billions of dollars to plaintiffs' attorneys who have used that money to finance future litigation. So there has been a wave of lawsuits after the settlement. I would have rather seen them play out the court cases. If they lost and were responsible for all the health-care costs generated by smokers, then they would have paid up. But that was never really decided.

So this was a fairly novel legal concept: If people use a dangerous product that leads to health-care costs in the Medicaid program, you can recoup the costs. That's not true of every product. People are injured in car accidents, for instance, which generate health-care costs also. The trigger here to warrant making the cigarette industry pay the bills is that consumers have to be deceived or victims of fraudulent behavior. If there is no wrongful conduct on the part of the companies, you can't nail them. But, of course, people have known for a very long time that cigarettes were dangerous.

In 1964, the Department of Health, Education, and Welfare issued a report stating that smoking caused lung cancer. Two years later, mandatory warnings were placed on packs of cigarettes indicating that they were dangerous. This is the first mass-marketed consumer product that did not kill you immediately when used as intended which had on-product warnings. A lot of things you take for granted as having warnings, like power tools and household cleaners, did not have warnings back then. What did have warnings were really dangerous chemicals like sulfuric acid and hydrochloric acid, prescription drugs, pesticides and insecticides, and that's just about it. So it's not a state secret that cigarettes are dangerous. In fact, when asked, people vastly overstate the likelihood that a typical smoker will get lung cancer.

None of the cases went to trial. The settlement appeared to be a good idea to executives because whenever there was a rumor of a settlement, stock prices would go up. What they did not anticipate was that they would be funding a lot of other lawsuits against them.

From the standpoint of society, the main selling point was, "We need this for the kids. We are going to take the money and use it to combat youth smoking." The settlement led to what is in effect a 40-cent tax on each pack of cigarettes. The money has flowed to the states, but only a negligible amount has been used for programs aimed at preventing youth smoking. So that's the reality of what has happened, and I think the antismoking groups would agree with that.

Also, part of the settlement restricted the advertising of cigarettes, and some have argued that this has led to anticompetitive effects. The reason is that if you can't advertise your product, it's hard to introduce new brands, and that serves to lock in the existing market shares in an industry that is already highly concentrated. One of the plaintiffs' experts, Joseph Stiglitz, has attacked the agreement, arguing that it is part of a great conspiracy to limit competition. So even those on the antismoking side concede that there are some problems with the way they structured the agreement. On the one hand, they want to limit advertising. On the other hand, they don't want to restrict competition. But you can't do both.

RF: What does your work on jury analysis tell us about
jurors’ risk beliefs? And how do those views compare to
judges’ views? Also, what can be done to reform the
compensation process to better align what juries award
in punitive damages with what would be consistent with
mainstream risk analysis?
Viscusi: Jurors are subject to a variety of behavioral anomalies in terms of how they perceive risk. One of the most
important is hindsight bias. That’s important for accident
cases where jurors will say, “Well, they should have known
that doing this would have caused the accident.” They fail to
perceive that at the time you take the action, there is a probability that something bad will happen, but it’s not definite
the person will get injured. This comes up in a variety of
contexts. For instance, car companies do corporate risk
analysis. They analyze the risks associated with a car and the
costs associated with improving the safety, and if the costs
are greater than the benefits, they don’t do it. If you do that
analysis and decide not to take the extra safety precautions,
juries generally find you to be reckless because you have
thought about the risk and decided the preventive measures
weren’t worth making.



So I ran a series of jury experiments and asked what they would
decide if the companies acted the way
economists would suggest, using the
value per statistical life employed by
the Department of Transportation.
The results actually got worse. Before,
they were awarding punitive damages
of about $1 million. But when
they found out that the company was
valuing a statistical life at $4 million,
they awarded punitive damages of $10
million. So the more responsible the
companies get in how they value life,
the worse they fare, because juries
want to send a message and top the
dollar value that the companies are
using in their analysis. Juries tend to
resist the whole idea of looking at
what the costs are and what the benefits are and then making a rational
decision based on the numbers.
I have also run surveys on judges,
and judges do much better than juries in terms of how accurate their risk beliefs are with respect to the major causes of
death. Jurors tend to overestimate small risks much more
than judges do — even though judges overestimate as well.
Judges are more cognizant of benefits and costs, and less
subject to hindsight bias because they have seen lots of cases
and know that not everything is preventable.
I think one reform we should have is turning over the
setting of punitive damages to judges. Jurors do a much better
job of evaluating whether conduct is bad than assigning a
dollar value to the bad conduct. I don’t particularly fault
jurors for that. When you look at jury instructions, there is
no guidance about how you should come up with a punitive
damages number. The result of this is that the plaintiffs’
attorneys will try to give them an anchor, which is often
totally irrelevant, such as what the company spent on
advertising last year. They are just trying to get a big number
out there for jurors to latch onto because there is no
methodology for coming up with a dollar value. As a result,
you often get ridiculously large awards, which later are
reduced by the courts.
RF: How have people’s preferences for the consumption
of environmental health and beauty changed over time?
Viscusi: There is no question that our valuation of the environment has gone up, and I think much of that has to do
with increased wealth. We can afford to enact stricter environmental standards now. When you look around the world,
the poorer countries do not have as stringent environmental
standards. If you really want to get a sense of what pollution
is, you have to leave the United States.
I think the effect of wealth on preferences is interesting
from the standpoint that a lot of proposals have been made saying we should
not import goods from countries where
the job safety standards are not as
strong as ours or the environmental
standards don’t meet our criteria. The
net effect of these proposals would be
to keep those countries poor. So these
protectionist measures would not do us
any favors — and they certainly wouldn’t
do them any favors. Also, we should
remember that the United States did
not have such a pristine environment
100 years ago, when we had much lower
per-capita income. In fact, most of the
major environmental regulatory agencies — the Environmental Protection
Agency and the Nuclear Regulatory
Commission, for instance — were not
created until the 1970s.
RF: How has economic analysis
affected the way lawyers, judges,
and regulators have looked at policy issues over the
past 35 years? Do they take economic analysis more
seriously now?
Viscusi: Yes, they are heading in the right direction, but
they still have a long way to go. In 1985, I testified before
Congress on Superfund. I was talking about costs and benefits
and one of the representative’s questions was: “Costs and
benefits? Isn’t it just common sense that you want benefits
greater than costs?” I was taken aback. I had never heard a
congressman say something so sensible. So I think some of
the basic ideas have been adopted.
Also, if you look at the curricula of law schools, you can’t
get through those three years without knowing something
about the Coase theorem. In fact, I would say that there are
law-and-economics scholars on the faculty of virtually every
major law school in the United States today. So some core
ideas in law and economics are now routinely taught. In
addition, there are some justices on the Supreme Court,
such as Justice Scalia and Justice Breyer, who know a lot of
economics. And, increasingly, law clerks are coming to their
jobs equipped with the basic tools of economic analysis,
which judges rely on when doing research for their opinions.
So times have changed. I think there is no doubt that the
law-and-economics movement has been the most important
intellectual development to hit law schools in the last
half century.
RF: How, if at all, does recent work by “behavioral economists” — which claims to show that consumers are
often irrational and make systematic errors — complicate risk analysis, which generally assumes that people
are rational given certain constraints?


Viscusi: I have documented a lot of systematic errors
myself. But the fact that you observe systematic errors does
not mean that markets necessarily don’t work, because not
everyone has to understand what is going on for the market
to function pretty well. Also, the fact that you find an error
doesn’t mean it’s a big deal. You can construct a lot of these
experiments that have nothing to do with the real world and
find an anomaly, but that doesn’t mean actual decisions will
be off. In addition, the fact that there is an error doesn’t
mean it’s a market failure and the government needs to do
more. Sometimes, it means the government needs to do less.
One error is that people tend to overestimate small risks.
If that’s true, they won’t be taking enough small
risks. So it doesn’t mean you want to regulate some things
more stringently. In fact, you may want to regulate them
less stringently.
One anomaly that I find interesting is that people are really averse to ambiguous losses. This has an effect on
government policies. Instead of looking at the mean risk
associated with something, they look at the worst-case
scenario. So if it’s an imprecisely understood risk, they focus
on how bad it possibly could be, rather than how bad it probably will be. This is true throughout the federal government,
where the upper bound is used as the risk number. This really
distorts all the risk numbers coming out of the government.
An example is the Superfund cleanup efforts. What’s the
concentration of the chemical at the site? They take the
upper bound for that. What’s the exposure level at the site?
They take the upper bound for that. What’s the frequency of
exposure? They take the upper bound for that. So they multiply four or five upper-bound numbers, which vastly
exaggerates the estimate by the time you are done.
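The compounding effect Viscusi describes can be sketched with a small simulation. The example below is purely illustrative: the lognormal distributions, the spread parameter, and the four-factor structure are assumptions made for this sketch, not actual Superfund figures.

```python
import math
import random

# Hypothetical illustration of compounding upper bounds. A site risk
# estimate is formed by multiplying four uncertain factors (e.g.,
# chemical concentration x exposure level x exposure frequency x
# potency), each known only imprecisely.
random.seed(42)
N_FACTORS, SIGMA, TRIALS = 4, 0.5, 200_000

# Model each factor as lognormal with median 1 (units normalized away),
# and simulate the combined risk across many scenarios.
risks = sorted(
    math.prod(random.lognormvariate(0.0, SIGMA) for _ in range(N_FACTORS))
    for _ in range(TRIALS)
)

mean_risk = sum(risks) / TRIALS          # the expected (mean) risk
p95_risk = risks[int(0.95 * TRIALS)]     # a genuine 95% upper bound on risk
p95_factor = math.exp(1.645 * SIGMA)     # 95th percentile of one factor
bound_product = p95_factor ** N_FACTORS  # multiply four per-factor bounds

print(f"mean risk:                    {mean_risk:.2f}")
print(f"95th percentile of risk:      {p95_risk:.2f}")
print(f"product of per-factor bounds: {bound_product:.2f}")
```

Under these assumptions, the product of the four per-factor upper bounds comes out several times larger than even the 95th percentile of the combined risk, and more than an order of magnitude above its mean, because it treats four individually unlikely extremes as if they all occurred at once.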
RF: In your view, is there a role for normative analysis when working on issues involving risk? For instance, should we place some value — even if, ultimately, it is merely symbolic — on the notion that people have an obligation to behave safely and that legislators and regulators have a duty to try to make them act in that manner?

Viscusi: I have some limited sympathy for that type of argument. We do care about individual health. That's why we spend a lot of money on various government health-care programs. We don't want our fellow citizens to be ill. So that's a legitimate concern. On the other hand, let's say I decide that I don't think anyone should endanger his life by working in a steel mill. In that case, you are imposing your own preferences on someone else and in the process lowering his perceived welfare. That's a type of paternalism that I think is really hard to justify.

When it comes to these types of things, most people really don't understand economics. I was at a conference that was attended by one of the leading health policy experts in Europe — he's even knighted — and he said that we should give health care to everyone. My response was: Would you require cars to be as safe as they possibly could be? And he said yes. But that would mean that even the cheapest cars would cost a lot of money and many people would not be able to buy them. You can't just wave a magic wand and eliminate risk for free, which is what people want to do. If you restrict people from taking jobs, if you restrict the foods that they eat, if you place limits on how much they can weigh, all of these things will reduce their welfare as they perceive it. The proper role of government is to give people enough information so they can make reasonable decisions, and after that step aside and allow them to make their own choices.

RF: There are some law schools where economic analysis is an important part of the curricula and many economics departments where one can work on similar issues. What is the market that you wish to satisfy with the new Ph.D. program in law and economics at Vanderbilt?

Viscusi: Even though most law schools teach law and economics, as we discussed earlier, they don't teach it at the graduate level. Ours is a much more high-powered program than anything I have seen. Students have to take micro theory and the standard econometric sequence, and then we bolster it with additional behavioral techniques and more empirical methods. So our students are going to be better geared up, technically, than the typical J.D./Ph.D. student, who takes the J.D. courses in the law school and then is sent over to the economics department to complete those courses. So the two programs are not really integrated in a meaningful way, and we attempt to bridge that gap with the program at Vanderbilt. Our first entering class will arrive this fall, and within six years they will leave with both a J.D. and a Ph.D. We want our students to be skilled enough to get jobs in economics departments, but the program is really designed to place graduates in teaching positions at law schools where they can apply the integrated skill set that they have acquired.

RF: Which economists have influenced you the most?

Viscusi: The people I am going to mention are the three members of my dissertation committee. And I picked them for a reason, so it wasn't just by accident that they had a big influence. The chairman of the committee was Kenneth Arrow. Even though I am not the same type of theorist that he is, he sets a very high standard and is an inspiration. Also, he did a lot of work on risk and uncertainty that I thought was clever and innovative at the time. Another member was Richard Zeckhauser, who was also my undergraduate thesis adviser. I have worked with him over basically my whole career and we continue to co-author papers. The third member was Richard Freeman, a labor economist, who continues to do inventive things with data and is always moving on to new and interesting topics.
RF

Spring 2007 • Region Focus

45


ECONOMIC HISTORY
Opening the Vault
BY CHARLES GERENA

Black-owned banks have a long history of providing financial services to underserved communities, but how important are they in today's market?

Robert Johnson, America's first black billionaire, decided to get into the financial services industry last year. He purchased a majority stake in a struggling minority-owned bank in Orlando, Fla., infusing millions of dollars into the institution to position it for a future national expansion.
Named Urban Trust Bank, the firm
will target urban residents who don’t
already have accounts and have limited
access to capital, particularly blacks.
In addition to using capital from
Johnson and his network of CEOs,
bank officials want to join the
Treasury Department’s Minority Bank
Deposit Program, which encourages
corporations, federal agencies, and
state and local governments to put
their savings in banks owned by
women and minorities.
Still, the bank’s officers make it
clear that Urban Trust isn’t all about
the black community. They want to
make money by attracting customers
of all races and backgrounds. Single
mothers and people with questionable
credit records will be served alongside
minority students and small-business
owners applying for loans.
More than a century ago, the
economic realities imposed by segregation required blacks to pool their

resources and help each other. Black-owned banks are part of this long
tradition. During their peak between
the end of the Reconstruction era and
the start of the Great Depression, more
than 130 of these institutions opened
for business, providing capital to black
entrepreneurs and prospective homeowners at a time when it was expensive
or impossible to get elsewhere.
Not surprisingly, most of these
banks were in the South, where 90
percent of blacks lived. The Fifth
District accounted for one-third of the
total, with Virginia having the most
of any state.
Today, there are only 44 “black-owned banks,” where African-Americans own at least 51 percent of the voting stock. Three other banks have minority boards of directors and focus on the black community. Again, most operate
in the South. They commanded more
than $6 billion in assets in 2006,
less than 1 percent of the total capital
held at commercial banks.
Whether there is still a need for
financial institutions operated by
blacks for blacks has been hotly
debated. The United States is more
racially integrated, but the challenges
of serving African-Americans and
other unbanked residents in poorer
communities remain.

By the 1950s, Mechanics and Farmers Bank had become a fixture of “Black Wall Street” in Durham, N.C. The black-owned bank celebrates its 100th anniversary next year.

PHOTOGRAPHY: COURTESY OF MECHANICS AND FARMERS BANK

A Penny Saved, A Penny Loaned

In the 19th century, banking activity was fairly widespread. “Capital accumulation in the Southern financial sector in the antebellum period compared favorably with Northern accumulation on a per-capita basis,” notes Howard Bodenhorn, an economist at Lafayette College who has written extensively about banking history.

As for blacks, some of those who were enslaved in the South managed to participate in the economy on a limited basis. They sold their services on

occasion, and a few established small
businesses on the side.
Their credit needs were filled by
blacks who accumulated enough
wealth to buy their freedom. These
informal bankers collected other people’s savings and used the funds to
make small loans. Also, mutual-aid
societies pooled people’s resources to
offer a variety of services, including
financing for black entrepreneurs.
Meanwhile, free blacks in the
North created similar institutions
to provide banking and other services
to each other. As the nation’s
industrialization and westward expansion increased the overall demand for
capital during the mid-19th century,
however, they realized other sources
of credit were needed. Church and
business leaders in the black community gathered in New York City in 1851
to discuss, among other things, the
formation of a mutual savings bank.
They believed the bank would help
blacks buy their own homes and start
businesses, as well as encourage thrift.
Black leaders talked about this idea
again in New York and in Philadelphia
four years later. But the bank was
never created. The tensions leading to
the Civil War in 1861 probably reduced
the viability of their plans.
Bodenhorn has his own theories.
“Economic logic tells me that blacks
just did not have the resources to keep
a bank going,” he explains. “Banks
prosper when they can tap into two
markets — wage earners who need a
depository and entrepreneurs with
potentially lucrative projects.” Black
communities often had neither.
“Workers earning barely more than
subsistence [did not] provide a reliable
source of deposits, and black entrepreneurial projects were not the
most promising of the available set
of projects.”
During the Civil War, the federal
government established banks administered by Union generals as a safe
place for black soldiers and refugee
camp workers, as well as emancipated
slaves, to park their money. Two of
these banks opened in 1864, in
Beaufort, S.C., and Norfolk, Va.,


where large numbers of black soldiers
were stationed.
After the war, hundreds of thousands of dollars sat in these banks
unclaimed — many depositors either
had died or had returned home without closing out their accounts.
Government and military officials
decided to redirect this capital into a
federally incorporated institution
called the Freedmen’s Savings and Trust
Company in 1865. The bank eventually
opened more than 30 branches —
mostly in Southern states — and accumulated about $3 million in deposits.
The Freedmen’s Bank was supposed
to help newly emancipated blacks.
Instead, it didn’t survive the banking
Panic of 1873, when dozens of private,
commercial banks failed following the
bankruptcy of a prominent railroad
financier. The Freedmen’s Bank went
belly up a year later.
Some historians blame the failure
on a lack of accountability and
mismanagement. Others argue that
the bank was too cautious and focused
on protecting its funds rather than
earning a profit. Branches sent their
deposits straight to the bank’s headquarters to be invested in government
securities, and weren’t permitted to
make loans until 1870.

Left Out
The failure of Freedmen’s Bank left many blacks distrustful of the white banking community, especially since the bank was established and managed by whites, hiring black advisers and employees only later in its history.
Combined with the Panic of 1873, it
undermined the confidence of blacks
in the nation’s financial system.
Yet there was arguably pent-up
demand for capital. The Reconstruction
era, spanning from 1865 to 1877, gave
blacks a taste of civil and economic
freedom. In later years, however,
banks imposed higher interest rates on
black borrowers, or simply rejected
them. In the 1880s, “Jim Crow” laws
in Southern states formalized the
segregation of whites and blacks.
Aside from blacks not being
welcome in mainstream banks because

of the color of their skin, there may
have been economic reasons why
they generally were not attractive
customers. “Deposits by African-Americans tended to be small and not
always cost-effective,” noted Nicholas
Lash, a finance professor at Loyola
University Chicago, in a 2005 journal
article. “Also, loan profitability
would be constrained by the small
scale, illiquidity, and high risk of the
loans.” Still, there are accounts of
blacks with good credit and a solid
banking history being turned down
for a loan.
To fill this gap, black churches,
fraternal organizations, and benevolent societies began supporting the
formation of banks in the late 1880s.
Individuals also started industrial loan
companies, building and loan firms,
and credit unions. Thanks in part to
these institutions, black business and
homeownership rates continued to
rise after the Civil War despite many
social and legal barriers.
The two earliest examples of black-owned banks were in the Fifth
District. The United Order of True
Reformers obtained the first charter
for a black-owned bank in March 1888
from the Virginia General Assembly.
When the Richmond-based bank
eventually opened for business in April
1889, it financed various enterprises,
including a chain of grocery stores that
operated in Virginia and Washington,
D.C. But a series of bad loans and an
embezzlement scandal eventually
forced the state to close the bank
in 1910.
Capital Savings Bank in Washington,
D.C., opened for business in October
1888. The firm paid dividends to its
shareholders and did well in its early
years, but it also succumbed to
mismanagement 14 years later.
Bad judgment, often attributed to
a lack of experience in banking,
contributed to the short life spans of a
large number of black-owned banks
during this period. Many firms either
closed their doors or merged with
other banks within five to eight years.
Economist Howard Bodenhorn
says there are other explanations why


black-owned banks were smaller and
less profitable. “[Blacks] had no legacy
of banking,” he offers. “They had no
substantial collateral [and] earned low
wages. A far smaller percentage of
blacks were literate.”
Along with these financial challenges came a period of continued
volatility and distrust in banking. In
addition to the Panic of 1873, two
major crises struck the industry, the
first in 1893 and the second in 1907.
Banks of all stripes would close their
doors for weeks at a time to head off
runs on their deposits.
Black-owned banks suffered alongside their peers during the Great
Depression, probably more so because
they were located primarily in minority neighborhoods and served minority
clients, says Harold Black, professor of
financial institutions at the University
of Tennessee. “In [black] communities, their patrons felt the effects of
unemployment first and, probably,
harder than the population at large,”
he explains. “The Great Depression
really set back black enterprise.”
At the same time, blacks started
migrating from the South to Northern
states where economic and social
opportunities often were better. This
created new customers for the latter
region, but drained a significant supply
of deposits from the former.
Some banks were strong enough to
survive this challenging period. St.
Luke Penny Savings Bank was founded
in Richmond in 1903 by a black
fraternal organization and managed by
Maggie Walker. The bank offered
low-cost mortgages to blacks and
eventually expanded its services
and influence beyond the black
community, serving as a depository
for Richmond’s utility and tax
payments. It absorbed two of the city’s
black-owned banks during the 1930s
to become Consolidated Bank &
Trust, which is still operating today
as a subsidiary of Abigail Adams
National Bancorp.

A Period of Transition
By the 1930s, only nine black-owned banks were still around. Just five new banks organized between 1934 and 1951, according to one estimate, and many more shut down.
Harold Black says several factors
contributed to a decline in the
growth of economic well-being of
blacks after the Depression and
through the 1960s. The Federal
Housing Administration supported
racially restrictive zoning ordinances
and covenants on homes, resulting
in a drop in black homeownership.
Labor unions worked to improve the wages of their members, but excluded blacks.
Aside from these issues affecting
their customer base, black-owned
banks faced a painful transition
following the civil-rights movement
of the 1960s. The racially segregated
business districts that had created
a captive market for banking
services began to disappear. The
banks that remained in these
districts had to compete for
customers with the mainstream banking industry for the first time.
Additional competition came later
when the Community Reinvestment
Act (CRA) was enacted in 1977 to
encourage lending to low- and moderate-income communities.
In the view of William Bradford, a
professor of business and economic
development at the University of
Washington, these additional pressures pushed black bankers to improve
their customer service, broaden their
solicitation of deposits, and develop
competitive advantages. Few were
able to do so, however. “A number of
them were bought out or failed,”
Bradford says.
Black banking did experience a
resurgence during the 1970s. There
were still underserved markets to
tackle. Some mainstream banks
discriminated against blacks moving
into white suburbs. Others allegedly
didn’t fund development in poor
and minority communities, a practice
sometimes dubbed “redlining.” Also,
the civil-rights movement encouraged
blacks to empower themselves
economically.
Government intervention also

played an important role. The federal
Minority Bank Deposit Program
helped increase the number of
deposits at minority-owned financial
institutions, while the Comptroller
of the Currency pushed for more
national bank charters to be awarded
to blacks.
Nevertheless, Bradford argues that
black-owned banks have become less
necessary. In his opinion, the changes
that have opened up banking markets
to black customers in the last 40
years have reduced the demand for
such institutions.

Where’s the Market?
Supporters of black-owned banks and
others contend that redlining and similar forms of discrimination against
black borrowers still occur, despite
CRA requirements. Various studies of
whether such discrimination exists
have yielded only mixed results.
Finance professor Nicholas Lash of
Loyola University Chicago says that
there is no conclusive evidence of
continuing discrimination against
black borrowers. “People on each side
of the divide would say, ‘Of course
the evidence is conclusive.’ I’m still
agnostic about it.”
In general, economic theory suggests that discrimination isn’t a
rational choice because it leaves
money on the table. Some bankers
may have a “taste for discrimination,”
as economist Gary Becker of the
University of Chicago has argued, but
in a competitive market that preference will cost them.
Further, Lash and others argue that
the role of black-owned banks in community development is limited.
“There may be possible market imperfections, but are [the banks] large
enough and do they have enough of a
presence to have a significant impact
on urban poverty?” Lash questions.
His view is that black-owned banks
don’t appear to be efficient and profitable enough to have an effect, nor do
they exist in sufficient numbers.
Academic studies of black-owned
banks in recent decades have found
that various aspects of their markets


make it difficult to be profitable.
Deposits and loans tend to be small.
Since a transaction can entail the same
costs regardless of its size, this makes
per-unit transaction costs at black-owned banks higher.
The amount of money kept on
deposit in black-owned banks is also
more variable and the volume of transactions is higher. Economist Harold
Black at the University of Tennessee
says that, on average, customers hold
money in their accounts for relatively
short periods. Rather, they deposit
their paycheck and quickly start
withdrawing funds to pay their bills.
In order to ensure that they have
sufficient resources above their
reserve requirements to cover transactions, black-owned banks tend to keep
more of their money in liquid assets
like U.S. government securities. The
drawback to these investment vehicles
is that they yield lower rates of return
compared to corporate bonds or loans.
“Banks always have to balance
between keeping some degree of liquidity and making higher profits on
loans,” Lash adds. “Other things being
equal, the more volatile your source of
funding, the more liquid [you have to
be] and the less lending you can do.”
Finally, black-owned banks make
loans where the return tends to be
lower and more uncertain. Combined
with receiving lower yields on their
investments and higher costs, this has
made it very difficult to make money.
A 1988 study by Robert Clair, formerly of the Dallas Fed, disputes this conclusion, finding that loan losses and operating expenses of black-owned banks were no different from those of their nonminority competitors in the same neighborhoods. (Clair did find, however, that the return on assets of these banks was lower.)


Microlenders that cater to the poor
as an alternative to traditional banking
have found ways to significantly
reduce loan defaults, Lash says. Group
lending uses peer pressure and monitoring to reduce the risk of default,
while progressive lending involves giving small loans initially and increasing
the size of loans as borrowers demonstrate their ability to repay. Still,
microlenders tend to struggle to turn a
profit and are often heavily dependent
on charitable donations.

An Uncertain Future
The continued viability of black-owned banks will depend on meeting
the goals of any business — to provide
a product that people want at a price
they can profit from. But how?
Black bankers say they take the
time to work with customers who
may have less financial sophistication.
“We see education as part of
our mission and purpose,” says
Kim Saunders, former president and
CEO of Consolidated Bank &
Trust in Richmond. Saunders recently
took the helm of black-owned
Mechanics and Farmers Bank, which
opened in 1908 in Durham, N.C.
“When we sit down with a customer, we are going to explain why we are looking for what we are looking for, and why it is important,” she adds. “It is more hand-holding.”
Similarly, black-owned banks aim to
have strong, long-term relationships
with their customers. In turn, as various studies on banking in general have
found, strong relationships provide
additional market knowledge that
helps banks manage their risks.
“As the industry changes and we
add products and services to remain
competitive, we are able to go to a specific customer and identify what might

enhance their business because we
know them,” Saunders notes.
But can’t any community-oriented
bank do these things? Saunders says
black-owned banks have chosen to
stay in the heart of inner cities
while other banks focus on the
suburbs and the fringes of cities.
“That positions black banks to serve a
pivotal role in the development that is
going on in a lot of the urban cities
across the country.” Also, they may be
able to develop and nurture relationships with their customers more easily
than predominantly white-owned
and -managed banks. Lash says it’s in
the gray areas where the riskiness of
a loan isn’t clear that black-owned
banks may have an advantage.
“You’re not just looking at the
financial statements. You talk to
[borrowers] and get a sense of [their]
character and reliability,” he says.
However, the banking industry in
general is more racially diversified than
it used to be. “A lot of banks in the past
didn’t hire black lending officers. Now
they do,” William Bradford says.
Relationship building had been
something that set black-owned banks
apart from their competition, but now
major banks are also forging these relationships by hiring officers and
representatives “who fit the ethnic
profile of their customers.”
Competing against banks large
and small, as well as other businesses
targeting the poor and unbanked,
black-owned banks will need to find a
place in the larger marketplace in
order to survive. So far, a few have
positioned themselves successfully,
but this is relatively rare. “You need
to be socially valuable and economically valuable in order to prosper and
grow over time,” Bradford notes. “It
is a difficult position.”
RF

READINGS
Ammons, Lila. “The Evolution of Black-Owned Banks in the United
States Between the 1880s and 1990s.” Journal of Black Studies, March
1996, vol. 26, no. 4, pp. 467-489.
Clair, Robert T. “The Performance of Black-Owned Banks in
their Primary Market Areas.” Federal Reserve Bank of Dallas
Economic Review, November 1988, pp. 11-20.

Lash, Nicholas A. “Black-Owned Banks: A Survey of Issues.” Journal of
Developmental Entrepreneurship, August 2005, vol. 10, no. 2, pp. 187-202.
Price, Douglas A. “Minority-Owned Banks: History and Trends.”
Federal Reserve Bank of Cleveland Economic Commentary, July 1990.


BOOK REVIEW
Job Security No Longer Job One
THE DISPOSABLE AMERICAN: LAYOFFS AND THEIR CONSEQUENCES
BY LOUIS UCHITELLE
NEW YORK: ALFRED A. KNOPF, 2006, 283 PAGES
REVIEWED BY CHARLES GERENA

Here is a startling statistic: Between 2003 and 2005, the Bureau of Labor Statistics counted 8 million workers who were displaced from their jobs.
They weren’t fired for sleeping on the job or misspelling
the CEO’s name on a press release. Their plant or office
closed or relocated, or their position was eliminated because
there wasn’t enough work for them. Skilled professionals
who thought they were safe from outsourcing and
layoffs are being cut loose, putting them in the same
predicament as blue-collar workers who have been forced to
deal with declining employment in many parts of the
manufacturing sector.
In The Disposable American: Layoffs and Their Consequences,
Louis Uchitelle chronicles how layoffs have become a fact of
economic life and argues that this lack of job security
ultimately hurts everyone. Uchitelle, a business and economics reporter for the New York Times, blends narrative
journalism with economic analysis to tell a compelling story.
The author challenges three arguments — what he calls
“myths” — that support the use of layoffs. Number one on
his list is that layoffs have an economic payoff. Although the
unemployed can experience painful transitions in
the short run, the economy will be stronger in the long
run as a result of having a labor force that can more
easily respond to market changes. In turn, more
people will be employed and earn higher wages. Uchitelle
argues that layoffs have only begotten more layoffs,
undermining the bargaining power of workers.
The second myth is that workers have the power to save
themselves. If they are laid off because their skills have
declined in value, they can acquire new skills to raise their
marketability. Uchitelle contends that the problem isn’t a
lack of workers with the skills that are needed; it’s a lack of
good-paying jobs to reward those skills.
However, one could easily point to industries such as
health care where good people are hard to find. Though he
bemoans inadequate funding for retraining programs to
help transition workers into new fields, Uchitelle likely
underestimates the existing demand for skilled workers
across many sectors of the economy.
The last myth tackled by the author is that the effect of
layoffs is purely and narrowly economic. Uchitelle argues
that layoffs also have a psychological cost that ultimately
hurts companies. They “chip away at human capital,” as he

puts it. Productivity declines among the workers who
remain at the company, as well as among those who are
displaced and find other jobs. In addressing this myth,
Uchitelle makes his strongest indictment of layoffs.
He asserts that people who feel secure in their jobs
are better workers — for example, they are less likely to
object to workplace changes that require new tasks or
longer hours. Also, the author vividly chronicles the
psychological damage of layoffs by following several people
in their post-layoff careers.
Much of the descriptive content in The Disposable
American is on the mark, particularly his overview of the
“golden age” of job security from the late 1880s to the late
1970s. From railroads to retailers to manufacturers, firms
recognized the value of long-term employees. These workers
had better knowledge of a company’s systems and
procedures, and they were motivated by a desire to move
up the corporate ladder, making them easier to manage
within a complex organization. Firms offered pensions and
health insurance for the first time to retain employees.
The book’s analysis, however, is often less sound. Even
when Uchitelle goes astray, his claims can spark discussions
of important issues. For example, he casts a negative light on
the ability of capital to move easily across state borders
when, in fact, this trend has benefited the economy in many
ways. But this does raise a valid question: Is physical capital
significantly more mobile than human capital? If so, then
companies can easily chase cheaper labor costs, giving them
the upper hand in negotiating with workers.
Uchitelle argues that economic development incentives
used by state and local governments to lure companies
to new locations contribute to corporate migration.
He suggests that Congress ban all such incentives.
Why subsidize the movement of economic activity —
displacing workers in the process — when those decisions
might better be left to the market?
Uchitelle spends the last chapter offering other solutions
to mitigate the pace of layoffs. He will win points with economists for not advocating the propping up of moribund
industries to preserve jobs. But his proposals for additional
government intervention to ostensibly protect workers
from today’s changing economy — such as suggestions to
increase the minimum wage and create mandatory severance packages — would raise objections from those who
would point to continental Europe’s rigid labor market
policies to show the damage such measures can cause.
Still, The Disposable American forces the reader to, at least, reevaluate the role of layoffs in today's economy. Uchitelle's message is clear: We as a society need to place more value on taking care of workers whose displacement was due to economic changes beyond their control.
RF


The Not-So-Dismal Science
THE STRATEGIST: THE LIFE AND TIMES OF THOMAS SCHELLING
BY ROBERT DODGE
HOLLIS, N.H.: HOLLIS PUBLISHING, 2006, 244 PAGES
REVIEWED BY AARON STEELMAN

It took many economists by surprise when Thomas
Schelling was awarded the Nobel Prize (along with
Robert Aumann) in 2005. Not that he was undeserving.
Far from it. His contributions have been numerous and
influential. But many people believed that time had passed
him by — if he were going to win the award, it would have
happened many years ago. In 1994, the Nobel committee
had decided to award the prize to three game theorists, yet
Schelling was not included. This despite the fact that his
work, while not highly technical by today’s standards, had
employed game theory to great effect.
Schelling, who spent most of his career at Harvard
University before coming to the University of Maryland in
1990, always asked big questions. And no question was more
important for most of the latter half of the 20th century
than how to avoid nuclear war. Many thought it was
inevitable that the United States and the Soviet Union
(or one of its client states) would eventually engage in such
a confrontation. But, thankfully, it never happened.
To Schelling, this was not as amazing as many thought.
In his work on deterrence theory — which occupied much
of his attention in the 1950s and 1960s and was the topic of
his Nobel address — he concluded that the Americans
and Soviets were actually quite interdependent.
What’s more, their leaders were generally rational and
understood that a nuclear attack would be catastrophic to
both sides. As a result, the “nuclear taboo” was never broken, though other countries have subsequently come close
to crossing that line. How nonstate actors will act should
they acquire those weapons also remains to be seen.
One optimistic scenario holds that insofar as terrorists
wish to eventually “become the government,” they will
refrain from using nuclear weapons in an attempt to
earn international credibility and recognition. But this
remains speculation.
In his new biography of Schelling, Robert Dodge does an
admirable job of describing accurately and clearly Schelling’s
contribution to Cold War diplomacy — both as an academic and as a policy analyst and adviser. Likewise, he concisely
explains Schelling’s contributions to other topics once
thought beyond the purview of economists, such as racial
segregation and self-command, the latter of which, like
much of Schelling’s work, had a connection to his own real-world experience.

Schelling had tried unsuccessfully multiple times to quit
smoking. He knew the difficulty of overcoming addiction.
But he did not think it necessarily required third-party
intervention. Addicts could help themselves. He argued that
the problem could be modeled as a fundamental conflict
between the “present self” — who badly wants to quit smoking,
drinking, or overeating — and the “future self” who will be
tempted to continue to engage in those activities. How to
get those two selves in line? By understanding that the
temptation to revert to old habits will be strong
and by implementing rational, purposeful strategies to avoid
doing so.
One type of strategy is to remove yourself from
situations that you know will be challenging. For instance, if
you are trying to quit smoking, don’t go to places, such as
bars, where many other people will be smoking and where
the desire to light up will be hard to resist. Another type of
strategy is to commit to penalizing yourself if you deviate
from your plan. “One suggested commitment is to make a
large donation to a political candidate you despise; write
a check to the Republican/Democratic Party or whomever
you find offensive, and arrange for it to be out of your
control that the check is sent in your name if you fail,”
Dodge writes. Yet another type of strategy is to simply
disable yourself. A college student who doesn’t “trust himself
to stay in and study on the weekend for an important exam
could put his keys in the mail to himself on Friday, so they’d
be delivered to him on Monday.”
Dodge, a former student of Schelling’s at Harvard’s
Kennedy School of Government, has divided the book
into 27 relatively short and highly readable chapters.
The book also includes an informative foreword by
Schelling’s long-time colleague Richard Zeckhauser, who
aptly writes: “Schelling is the high priest of economists who
draw lessons from life. Just as Leonardo da Vinci drew
remarkable figures of the human anatomy, Schelling sketches
equally remarkable portraits that detail the anatomy of
human interactions.”
This, I suspect, may have been one of the reasons that
Schelling had to wait so long to receive the Nobel Prize.
His insights, while profound, are presented in so
straightforward a fashion, stripped of unnecessary
jargon and mathematics, that they often strike the reader
as almost matter-of-fact. This perhaps led to his work
being insufficiently appreciated by the profession at
large. But to his students, colleagues, and friends,
Schelling’s penetrating mind, his almost infectious
intellectual curiosity, and his easy demeanor have been
impossible to ignore — all of which become clear
to the reader of The Strategist: The Life and Times of
Thomas Schelling.
RF

Spring 2007 • Region Focus

51


DISTRICT ECONOMIC OVERVIEW
BY MATTHEW MARTIN

Fifth District economic conditions strengthened somewhat during the fourth quarter as continued growth in the District’s service sector outweighed lingering weakness in the region’s housing markets and a further softening in manufacturing activity.

Labor Markets Remain Sound
Outside of housing and manufacturing,
reports on the District labor markets
were generally favorable. District job
growth remained strong as continued
expansion in the region’s service sector
helped propel payroll increases over
the final three months of 2006.
Overall, District payrolls expanded
1.7 percent during 2006, matching
the national pace over the period.
Employment gains were particularly
solid in professional and business
services, leisure and hospitality, and
educational and health services, with
payrolls in each sector posting
increases of 2.5 percent or more since
the end of 2005.
The Richmond Fed’s services
sector survey also indicated a strengthening during the fourth quarter —
especially among nonretail firms. The
average index numbers for nonretail
revenues and anticipated demand for
services during the next six months
increased from their third-quarter levels
while the index number for employment dipped slightly, but remained
solid at 14. Adding to the positive tone
for the final quarter of 2006, the
District’s unemployment rate held its
ground at 4.5 percent — down from
4.7 percent a year earlier. Improved
income growth accompanied steady
employment performance across the
District in the fourth quarter. Income
growth in the District had lagged that
of the United States for most of 2006,
but a sharp pickup in real income
growth during the fourth quarter
brought the District more in line with
the national pace.

Softness Lingers in Housing
District housing markets continued to
soften over the final three months of
2006. The total number of residential
building permits issued districtwide
was down 16 percent compared to a
year earlier, with the largest declines in
the District of Columbia, South
Carolina, and Virginia. Weakening in
housing activity was also evident in the

Economic Indicators

                                       4th Qtr. 2006   3rd Qtr. 2006   Percent Change (Year Ago)
Nonfarm Employment (000)
  Fifth District                              13,752          13,680         1.7
  U.S.                                       136,951         136,442         1.7
Real Personal Income ($bil)
  Fifth District                               921.6           909.6         3.5
  U.S.                                       9,603.8         9,472.8         3.6
Building Permits (000)
  Fifth District                                47.2            54.1       -16.5
  U.S.                                         355.3           437.9       -26.0
Unemployment Rate (%)
  Fifth District                                 4.5             4.5          —
  U.S.                                           4.5             4.7          —

home sales data. Existing home sales
across the District fell sharply in the
quarter, off nearly 17 percent from a
year earlier. In addition, the District
posted a sizable increase in the percentage of delinquent mortgages
during the fourth quarter of 2006,
though the uptick followed a period of
particularly low delinquencies.
Nonetheless, assessments of housing
activity were not uniformly negative.
Overall, District home prices continued to post gains during the quarter,
though the rate of appreciation fell
well short of the previous year’s pace.
Manufacturing Sector Slows

The District’s manufacturing sector also weighed on economic
growth during the fourth quarter
of 2006. Following several quarters
of moderate expansion, District
manufacturing activity softened
over the final three months of the
year. The average index numbers
for shipments, new orders, and
employment all fell during the fourth
quarter, pulling the Richmond Fed’s
composite manufacturing index down
to zero. The falloff was particularly
stark in new orders, where the average
index number from the manufacturing
survey fell from 10 in the third quarter
to -1 in the fourth quarter.
Additionally, survey participants
reported that they began to trim
payrolls and reduce hours worked.
Regarding prices, survey results
indicated that District producers faced
higher input costs during the quarter
due to a rise in energy prices.
But, despite weakening conditions in
the fourth quarter, District manufacturers remained optimistic about their
future prospects. Survey respondents
anticipated that manufacturing activity
would expand during the first six
months of 2007. At the same time,
reports on price expectations continued to suggest moderate growth in both
raw material and finished good prices
during the first half of 2007.
RF


[Charts, Fifth District and United States:
Nonfarm Employment, change from prior year, First Quarter 1993 - Fourth Quarter 2006
Unemployment Rate, First Quarter 1993 - Fourth Quarter 2006
Real Personal Income, change from prior year, First Quarter 1993 - Fourth Quarter 2006
Nonfarm Employment, Metropolitan Areas (Charlotte, Baltimore, Washington), change from prior year, First Quarter 1993 - Fourth Quarter 2006
Unemployment Rate, Metropolitan Areas (Charlotte, Baltimore, Washington), First Quarter 1995 - Fourth Quarter 2006
Building Permits, change from prior year, First Quarter 1993 - Fourth Quarter 2006
FRB-Richmond Manufacturing Composite Index, First Quarter 1996 - Fourth Quarter 2006
FRB-Richmond Services Revenues Index, First Quarter 1996 - Fourth Quarter 2006
House Prices, change from prior year, First Quarter 1996 - Fourth Quarter 2006]

NOTES:
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Metropolitan area data, building permits, and house prices are not seasonally adjusted (nsa); all other series are seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building permits: U.S. Census Bureau, http://www.census.gov.
House prices: Office of Federal Housing Enterprise Oversight, http://www.ofheo.gov.
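The diffusion-index arithmetic described in note 1 can be sketched in a few lines. The survey counts and the equal composite weights below are illustrative assumptions, not the Richmond Fed’s actual survey responses or weights.

```python
def diffusion_index(n_increase, n_decrease, n_total):
    """Percent of firms reporting an increase minus percent reporting a decrease."""
    return 100.0 * (n_increase - n_decrease) / n_total

# Hypothetical responses from 100 surveyed firms
shipments = diffusion_index(40, 30, 100)    # 10.0
new_orders = diffusion_index(25, 26, 100)   # -1.0
employment = diffusion_index(30, 16, 100)   # 14.0

# The composite is a weighted average of the three indexes;
# equal weights here are an assumption for illustration only.
weights = (1 / 3, 1 / 3, 1 / 3)
composite = sum(w * x for w, x in zip(weights, (shipments, new_orders, employment)))
```

A reading above zero means more responding firms reported increases than decreases; with these made-up counts the composite works out to about 7.7.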

For more information, contact Matthew Martin at 704-358-2116 or e-mail Matthew.Martin@rich.frb.org.



STATE ECONOMIC CONDITIONS
BY MATTHEW MARTIN

District of Columbia

The District of Columbia’s economy improved at the
end of 2006 as stronger job and income growth
trumped weak performances in several segments of the
local economy.

[Chart: Subprime Loans 30 Days Past Due (% NSA), 1998-2006. NOTES: Subprime borrowers are those with low credit scores or no credit history. NSA stands for Not Seasonally Adjusted. SOURCES: Mortgage Bankers Association and Haver Analytics]

The unemployment rate in the District of Columbia
edged up to 6.1 percent in the fourth quarter as solid job
growth was overshadowed by an especially large increase
in the area’s labor force. Nonetheless, payrolls expanded
at a 4.5 percent annualized rate in the fourth quarter, behind
notable gains in professional and business services and
information sector jobs. In addition, the area experienced
robust increases in construction employment. Payrolls in
the sector grew 5.1 percent during 2006 — the largest
year-over-year jump since early 2003.
Faster job growth during the fourth quarter of 2006
coincided with stronger income growth. Real income
growth in the District of Columbia had lagged the national
rate for the first three quarters of the year, but a rapid
increase in the final quarter of the year helped to erase the
gap. Over the past year, real income in the area increased 3.7
percent, compared to the national rate of 3.6 percent.
Conditions in the District of Columbia’s housing market
during the fourth quarter were more mixed, however. On
one hand, existing home sales in the area fell to their lowest
level since 1998. On the other hand, growth in home prices
strengthened during the period, advancing at a 9.0 percent
annual rate compared to a 4.8 percent rate in the third
quarter. But not all measures strengthened. On the mortgage
front, the delinquency rate for subprime mortgages
increased from 7.0 percent at the beginning of 2006 to 11.2
percent in the fourth quarter — a mark still below the 12.8
percent peak recorded in 2002. However, the overall impact
of such an increase is likely to be somewhat muted given the
fact that the subprime segment accounts for less than 10 percent of all mortgages outstanding in the District of Columbia.
Maryland

Maryland’s overall economic performance improved in the fourth quarter of 2006 as job and income growth edged higher. Payrolls expanded at a 1.4 percent annual rate during the period, following back-to-back quarters of little to no employment growth. Increases were particularly pronounced in the state’s services sector, headlined by a gain of 3,000 additional jobs in educational and health services. Furthermore, payrolls in leisure and hospitality and professional and business services posted stout increases during the fourth quarter, adding 2,300 and 1,600 positions, respectively.
[Chart: Subprime Loans 30 Days Past Due (% NSA), 1998-2006]

Other sectors of the Old Line State’s economy were more
sluggish, however. Maryland’s already weakened manufacturing
sector continued to contract throughout the close of the year
as it trimmed payrolls for the ninth consecutive quarter.
Additionally, the state’s information sector posted job losses
for the fourth straight quarter. Nonetheless, Maryland’s
unemployment rate dropped 0.1 percentage point during the
quarter and, at 3.9 percent, remained the second-lowest in the District behind Virginia. Firmer labor
market conditions in the period were matched by an acceleration in state income growth. Real income grew at a 5.8
percent annual rate — the highest increase among District
jurisdictions during the fourth quarter of 2006.
In less upbeat news, Maryland’s housing market continued
to soften over the final months of the year. Home construction
declined sharply in the fourth quarter as building permits fell
8.3 percent from a year earlier. Existing home sales also
slumped, moving down 20.8 percent compared to the fourth
quarter of 2005. Home prices continued to increase over the
final months of 2006, though the pace of appreciation slowed
somewhat from the previous period.
Driven by a weak performance in the subprime sector,
mortgage delinquencies in the state were elevated as the year
closed. The delinquency rate for subprime mortgages, in particular, increased appreciably throughout 2006 and finished the


year at 12.4 percent, close to the state’s peak level of 13.6 percent
set in 2002. Maryland’s overall foreclosure rate also edged up
during the fourth quarter from 0.46 percent to 0.50 percent.

North Carolina

The North Carolina economy posted a marked improvement during the final quarter of 2006, benefiting from
steady job growth and comparative stability in its housing
markets. The state’s two largest metro areas, Charlotte and
Raleigh, continued to add service jobs at a brisk pace, and the
drag from manufacturing job losses waned from a year earlier.


[Chart: Subprime Loans 30 Days Past Due (% NSA), 1998-2006]

Boosted by a strong showing in the services sector, state
payrolls expanded at a healthy 2.9 percent annual rate in the
last quarter of 2006. Year-over-year increases in professional
and business services and financial services employment of
5.4 percent and 5.2 percent, respectively, were of particular
note. In addition, tentative signs emerged that employment
levels stabilized in durable goods manufacturing as the sector
posted consecutive year-over-year payroll gains during the
final two quarters of 2006. Steady payroll growth during the
quarter was matched by a sizable expansion in North
Carolina’s labor force. As a result, the unemployment rate
remained at 4.9 percent over the final months of 2006. Adding
to the positive tone, real income growth in the Tarheel State
advanced at a 5.3 percent annual rate during the fourth quarter,
well above the rate posted a year earlier.
Compared to its District peers, North Carolina’s housing
markets fared relatively well during the final quarter of 2006.
Existing home sales remained somewhat below the level at
the end of 2005, but the 1.2 percent drop paled in comparison to the double-digit decreases experienced in other
District jurisdictions. Additionally, home prices continued
to appreciate at a healthy clip, advancing at a 9.8 percent
annual rate during the fourth quarter. Also of note, the pace
of new home construction moderated since the end of 2005, though the fourth-quarter decrease was only 4 percent.
Stability in residential real estate markets was accompanied by a relatively solid performance in mortgage activity.
Mortgage delinquencies rose across the state in the fourth
quarter, but the increase was smaller than in other District
jurisdictions. A comparatively stronger performance was
also apparent in the state’s subprime market segment, where
delinquencies and foreclosures were both below recent
peaks despite slight increases during the period.

South Carolina

South Carolina’s economy posted mixed results during the fourth quarter of 2006. Overall job growth was especially robust as payrolls expanded at a 3.7 percent annual rate in the fourth quarter. On the other hand, the Palmetto State continued to be hindered by persistently high unemployment during the period.

[Chart: Subprime Loans 30 Days Past Due (% NSA), 1998-2006]

South Carolina’s strong employment performance during
the close of 2006 was headlined by record increases in
educational and health services and trade and transportation
payrolls. The leisure and hospitality and financial services
sectors also posted solid performances during the fourth
quarter, adding 2,400 and 3,100 jobs, respectively.
Despite the strong job numbers, South Carolina’s unemployment rate inched up to a District-high 6.6 percent as
labor force growth in the state outstripped job growth.
Sustained weakness in the manufacturing sector also
contributed to South Carolina’s high unemployment rate,
especially in its smaller metro and rural areas. Overall, South
Carolina producers cut payrolls by 1.4 percent in the fourth
quarter, marking the ninth straight reduction in state
manufacturing employment. The textile industry, for
example, continued to shed workers throughout 2006,
trimming its already reduced payrolls by an additional
13 percent.



South Carolina’s housing markets also weakened somewhat during the fourth quarter of 2006, driven in part by a
particularly sharp falloff in coastal market activity. Existing
home sales in both the Hilton Head and Myrtle Beach metro
areas declined notably and accounted for the bulk of the 12.8
percent drop in statewide sales during the fourth quarter.
Reports also indicated an acute reduction in median home
prices in the coastal markets, while prices in Columbia and
other more inland markets moved a bit higher compared to
a year earlier. On the mortgage front, South Carolina did not
experience a striking surge in delinquencies or foreclosures,
including within the much-discussed subprime market. The
delinquency rate for the subprime portion of the mortgage
market did increase slightly to 16.6 percent at the end of last
year, but it remained far below the peak level of 25.1 percent
recorded in late 2000.

Virginia

The Virginia economy remained one of the District’s strongest during the final three months of 2006. The state
posted a fourth-quarter unemployment rate of 3 percent —
besting all other District jurisdictions by nearly a full
percentage point. Additionally, payroll growth in the
Commonwealth maintained its recent steady pace, increasing
at a 1.2 percent annual rate during the quarter behind solid
gains in professional and business services and trade and
transportation employment. Virginia’s robust performance in
the fourth quarter also included strong personal income
growth. Real income levels expanded at a 5.6 percent annual
rate during the final quarter of 2006 and registered a 3.4 percent
gain from a year earlier.

[Chart: Subprime Loans 30 Days Past Due (% NSA), 1998-2006]

Other segments of the state’s economy looked more mortal, however. Fourth-quarter data suggested the pullback in housing activity has been a bit more pronounced in Virginia than in other District jurisdictions. Single-family permits fell sharply in the fourth quarter to a level about one-third below that of a year earlier. The decline in existing
home sales was quite large as well, falling 26.2 percent from
the end of 2005. Virginia also experienced a marked slowdown in home price appreciation during the fourth quarter.
While home prices in the state were up 7.7 percent over the
course of the year, appreciation levels did not measure up to
those recorded in 2005.
Virginia also saw an increase in mortgage delinquencies
and foreclosures toward the end of 2006. Increased stress
was particularly apparent in the subprime segment of the
market where the delinquency rate swelled nearly two full
percentage points to 12.1 percent during the fourth quarter.
While that rate was well below the state’s recent peak of 16.1
percent in 2000, it represented a sharp increase from levels
seen earlier in 2006. The foreclosure rate showed a similar
pattern, posting an especially large jump during the close of
2006. The foreclosure rate in Virginia increased from 0.30
percent in the third quarter to 0.37 percent in the fourth
quarter — the largest quarterly rise since the late 1990s.

West Virginia

The West Virginia economy improved somewhat during
the fourth quarter as strong job and income growth outweighed a continued slowdown in the state’s housing market.
Employment growth in the state accelerated over the
final three months of 2006 due in part to sizable
gains in trade and transportation, leisure and hospitality,
and financial services payrolls. The natural resource and
mining sectors contributed as well, with high energy
prices driving employment to levels not seen since
the early 1990s. However, stagnant educational and health
services employment and persistent job losses in manufacturing diluted the overall gains a bit. Manufacturing, in
particular, continued to weigh on the West Virginia
economy as the sector trimmed payrolls for the sixth
consecutive quarter.

[Chart: Subprime Loans 30 Days Past Due (% NSA), 1998-2006]



Nonetheless, the state unemployment rate inched 0.1
percentage point lower to 5.1 percent in the final quarter of
2006. Reports on income growth also signaled an improvement in state economic conditions during the quarter. The
state posted a 3.8 percent increase in real income since the
close of 2005 — the second-highest rate among District
jurisdictions during that period and the largest gain in West
Virginia over any 12-month period since late 2001.
Not all the fourth-quarter economic news in West
Virginia was positive, however. Residential real estate
activity contracted somewhat during the final quarter of
2006. Existing home sales were down 7.3 percent in the
quarter and new building permit issuance dropped even
more sharply, posting a decline of 16.1 percent since the
fourth quarter of last year. Home appreciation also
moderated, with median prices registering little to no change in the fourth quarter, following solid growth in
the previous period. Additionally, the overall mortgage
delinquency rate edged up in the quarter, increasing
1.0 percentage point to 7.4 percent — the state’s highest
overall rate since the end of 1988. Looking below
the surface, the data show that a sizable portion of the
increase in fourth-quarter delinquencies came from
the prime market segment, rather than the subprime
segment, as was the case in other District jurisdictions.
The delinquency rate in the prime market rose from 4.3
percent in the third quarter to 5.2 percent in the final
quarter of 2006, reaching its highest mark in several years.
Despite the increase in delinquencies, the state experienced
a slight reduction in foreclosures as the rate edged down
to 1.06 percent during the fourth quarter.
RF

Behind the Numbers: Productivity

Amid all the headlines about employment, Gross Domestic Product, and income growth, one sometimes has to hunt for reports on what is arguably the most important economic measure — productivity. Without gains in productivity, it would be difficult to sustain growth in employment, output, and income.

The Bureau of Labor Statistics breaks down productivity into two parts: (1) labor productivity, measured in output per hour of labor; and (2) multifactor productivity, which gauges output relative to changes in all of the inputs. Multifactor productivity is sometimes called total factor productivity and attempts to capture the part of growth in output not caused by changes in capital, labor, energy, materials, or purchased services.

As a whole, productivity refers to the ability to produce more goods or services at the same effort level. It explains how effectively the resources that businesses take in are churned out. Companies that can increase productivity can generate higher profits, and they can pay their workers more without raising prices. An economy that enjoys productivity growth is one in which people enjoy better products at competitive prices and are generally better off.

The accompanying graph shows labor productivity changes in the nonfarm business sector over the past 40 years. One can easily see how economic booms and busts are associated with improvements or stagnation in productivity. The United States was coming out of a recession in 1983, for example, and experienced dramatic growth in productivity with the economic rebound.

[Chart: Nonfarm Business Output Per Hour, percent change from a year ago, 1967-2006. Source: Bureau of Labor Statistics]

Gains during the last part of the 1990s are generally attributed to advances in information technology. Productivity continued to climb in the early 2000s, despite a recession and pullback in IT investments. Some economists have viewed the resilience of productivity in the early 2000s as partial evidence that the “IT-centered story” of the late 1990s is inaccurate. On the other hand, a recent paper published by the Federal Reserve Board of Governors described this period as encompassing more business sectors than previously thought: “By 2004 the resurgence in productivity growth that started in the mid-1990s was found to have been relatively broad-based and likely still driven by IT.”

Recently, productivity gains have been smaller, and analysts disagree about the root cause. Some call it a merely cyclical downturn that will rebound over time. Searching for reasons behind the ebbs and flows in productivity remains one of the most fertile fields of inquiry in economics.
— DOUG CAMPBELL
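The labor-productivity measure the sidebar describes — output per hour, tracked as a percent change from a year ago, as in the graph — can be sketched as follows. The output and hours figures are hypothetical, not BLS data.

```python
def labor_productivity(output, hours):
    """Labor productivity: output produced per hour of labor."""
    return output / hours

def yoy_percent_change(current, year_ago):
    """Percent change from a year ago, as plotted in the graph."""
    return 100.0 * (current - year_ago) / year_ago

# Hypothetical economy: output in constant dollars, total hours worked
prod_prev = labor_productivity(output=10_000.0, hours=400.0)  # 25.0 per hour
prod_now = labor_productivity(output=10_400.0, hours=404.0)   # about 25.74 per hour

growth = yoy_percent_change(prod_now, prod_prev)  # about 2.97 percent
```

Output here grows faster than hours, so output per hour rises — the economy produces more with the same effort, which is the productivity gain the sidebar describes.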



State Data, Q4:06

                                            DC        MD        NC        SC        VA        WV
Nonfarm Employment (000)                 691.9   2,594.7   4,055.2   1,915.0   3,737.0     758.5
  Q/Q Percent Change                       2.5       1.4       2.9       3.7       1.2       1.6
  Y/Y Percent Change                       1.1       1.0       2.6       1.8       1.3       1.3

Manufacturing Employment (000)             1.7     135.2     551.6     247.1     285.0      60.2
  Q/Q Percent Change                      -7.6      -2.4      -1.4      -5.8      -4.1      -3.5
  Y/Y Percent Change                     -13.8      -2.6      -1.4      -3.5      -2.9      -2.7

Professional/Business Services
Employment (000)                         155.3     396.6     481.2     219.0     631.9      60.0
  Q/Q Percent Change                       4.5       1.6       4.4       1.6       2.2       1.8
  Y/Y Percent Change                       4.2       2.1       5.2       3.2       2.8       0.8

Government Employment (000)              232.7     472.6     674.8     330.2     673.4     145.0
  Q/Q Percent Change                      -1.3       0.3       1.4       2.0      -1.5       1.5
  Y/Y Percent Change                      -0.5       0.8       1.1      -0.1       1.2       1.1

Civilian Labor Force (000)               317.8   3,030.8   4,510.4   2,144.8   4,028.2     811.7
  Q/Q Percent Change                       3.3       1.5       2.8       3.0       1.5       0.3
  Y/Y Percent Change                       1.4       2.1       2.9       2.0       1.9       1.6

Unemployment Rate (%)                      6.1       3.9       4.9       6.6       3.0       5.1
  Q3:06                                    6.0       4.0       4.9       6.5       3.1       5.2
  Q4:05                                    6.1       4.0       5.1       6.9       3.4       4.9

Personal Income ($bil)                    28.7     218.9     252.5     112.5     264.1      44.8
  Q/Q Percent Change                       5.7       5.8       5.3       4.7       5.6       4.7
  Y/Y Percent Change                       3.7       3.1       4.0       3.6       3.4       3.8

Building Permits                           334     5,971    21,547     9,341     9,098       887
  Q/Q Percent Change                        —     -25.3     -33.6     -63.7     -40.5     -70.6
  Y/Y Percent Change                     -47.5      -8.3      -4.0     -25.6     -31.6     -16.1

House Price Index (1980=100)            658.76    537.67    331.63    318.38    470.95    233.97
  Q/Q Percent Change                       9.0       5.2       9.8      11.2       5.5       3.2
  Y/Y Percent Change                       7.4       9.3       8.3       8.6       7.7       6.0

Sales of Existing Housing Units (000)      8.4       102     235.7       104     127.2      29.3
  Q/Q Percent Change                     -17.6      -7.3      -1.9     -12.8      -6.5      -7.3
  Y/Y Percent Change                     -22.2     -20.8      -1.5     -15.3     -26.2     -19.5

NOTES:
Nonfarm Payroll Employment, thousands of jobs, seasonally adjusted (SA) except in MSAs; Bureau of Labor Statistics (BLS)/Haver Analytics. Manufacturing Employment, thousands of jobs, SA in all but DC and SC; BLS/Haver Analytics. Professional/Business Services Employment, thousands of jobs, SA in all but SC; BLS/Haver Analytics. Government Employment, thousands of jobs, SA; BLS/Haver Analytics. Civilian Labor Force, thousands of persons, SA; BLS/Haver Analytics. Unemployment Rate, percent, SA except in MSAs; BLS/Haver Analytics. Building Permits, number of permits, NSA; U.S. Census Bureau/Haver Analytics. Sales of Existing Housing Units, thousands of units, SA; National Association of Realtors®.



Metropolitan Area Data, Q4:06

                              Washington, DC MSA   Baltimore, MD MSA   Charlotte, NC MSA
Nonfarm Employment (000)                 2,998.3             1,317.1               836.7
  Q/Q Percent Change                         3.5                 4.1                 9.6
  Y/Y Percent Change                         1.5                 0.9                 3.3
Unemployment Rate (%)                        2.9                 3.8                 4.7
  Q3:06                                      3.3                 4.4                 4.9
  Q4:05                                      3.1                 4.0                 4.9
Building Permits                           4,553               1,825               5,999
  Q/Q Percent Change                       -76.2                 9.3               -24.8
  Y/Y Percent Change                       -33.5               -27.6                13.9

                              Raleigh, NC MSA   Charleston, SC MSA   Columbia, SC MSA
Nonfarm Employment (000)                284.6                289.9              366.4
  Q/Q Percent Change                      8.2                  5.0                8.7
  Y/Y Percent Change                      3.6                  2.5                2.5
Unemployment Rate (%)                     3.8                  5.1                5.5
  Q3:06                                   4.1                  5.5                5.9
  Q4:05                                   4.0                  5.2                5.6
Building Permits                          885                1,934              1,366
  Q/Q Percent Change                     28.0                 -9.7              -68.4
  Y/Y Percent Change                     -2.2                -17.2              -19.4

                              Norfolk, VA MSA   Richmond, VA MSA   Charleston, WV MSA
Nonfarm Employment (000)                771.2              634.9                150.6
  Q/Q Percent Change                      0.8                4.4                  0.5
  Y/Y Percent Change                      1.0                2.1                  1.3
Unemployment Rate (%)                     3.1                2.9                  4.2
  Q3:06                                   3.5                3.3                  4.5
  Q4:05                                   3.5                3.4                  4.1
Building Permits                        1,998              1,487                   52
  Q/Q Percent Change                    143.2              -56.5                -67.7
  Y/Y Percent Change                    -26.5              -29.7                -30.7

For more information, contact Matthew Martin at 704-358-2116 or e-mail Matthew.Martin@rich.frb.org.


OPINION
In Praise of Theory
BY KARTIK B. ATHREYA

Here is an interesting story: The pace of personal
bankruptcies rose quickly during the 1990s, even
as the overall economy fared well. What might
we conclude from these facts? One possibility is that
improvements in financial intermediation have made
credit-granting decisions easier and have led to greater
borrowing by risky groups previously denied credit. Another
is that nothing has changed in the lending industry, yet
households anticipated rapid future income growth.
This led them to borrow, but for those whose income failed
to grow as expected, default proved useful, leading overall
bankruptcy rates to rise. Still another explanation is
that neither lender behavior nor income expectations have
changed, but instead that there is no longer any “shame”
in defaulting on debts.
Each of these explanations may partially account for
the facts, and some may fail altogether. But interpreting
historical behavior and predicting future patterns first
requires a theory about how consumers make financial
decisions. What are people considering when they choose
how much to spend, how much to borrow, and how much to
save? By themselves, the data tell us little.
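How little the aggregate series alone can tell us is easy to see in a toy simulation. In the sketch below (every number is invented for illustration), a "credit expansion" story and a "falling stigma" story are rigged to produce the very same rising bankruptcy series, so no amount of staring at that series could distinguish them:

```python
# Toy illustration (invented numbers): two different mechanisms that
# generate the same aggregate bankruptcy series, so the series alone
# cannot tell the stories apart.

def bankruptcies_credit_expansion(years):
    """Mechanism A: lenders admit more risky borrowers each year.
    Risky borrowers default at a fixed 8 percent rate; their share
    of the population grows, so aggregate filings rise."""
    rates = []
    for t in range(years):
        risky_share = 0.10 + 0.02 * t      # credit expands over time
        rates.append(risky_share * 0.08)   # filings = share x default rate
    return rates

def bankruptcies_falling_stigma(years):
    """Mechanism B: the borrower pool is fixed (4 percent hit a bad
    income shock each year), but the 'shame' cost of filing falls, so
    a growing fraction of troubled households choose to file."""
    rates = []
    for t in range(years):
        filing_propensity = 0.20 + 0.04 * t   # stigma erodes over time
        rates.append(0.04 * filing_propensity)
    return rates

a = bankruptcies_credit_expansion(10)
b = bankruptcies_falling_stigma(10)
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)  # the two series coincide up to floating-point error
```

Both mechanisms trace out the same upward path by construction; only a model of the underlying decisions, confronted with richer data, could separate them.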
Modern economics develops theories in the form of mathematical
models of household and firm decisionmaking, in which agents’
collective behavior must be consistent with the feasibility
constraints the model imposes. This is known as an “equilibrium”
approach.
Equilibrium analysis contrasts sharply with an alternative still
prevalent in consumer finance, one that places far less emphasis on
modeling explicit decisionmaking. The latter approach instead
summarizes observed features of the data, usually through regression
analysis, and treats the resulting correlations as informative about
the effects of policy.
Why should we not simply stare at data, perform a
purely statistical analysis, and hope to learn from the results?
Ever since the publication of Robert Lucas’ seminal work in
the 1970s, economists have become sensitive to the pitfalls
of using history to learn about the effects of future
policies, especially those that are novel and far-reaching.
The so-called “Lucas critique” pointed out that many
relationships between economic variables which appeared
structural, or immutable, actually were the products of past
policies and thus subject to change as policies changed.
Lucas’ work forced economists to push expectations to the
forefront of consumption research.
The argument is simple and powerful. If what we see
in the data is to be usefully interpreted as the outcome of
purposeful decisionmaking by the principal actors in the
economy, then both current policies and expectations
about future policies will influence those actors’ decisions.
Consider a football game. If painstaking data analysis
from, say, the 1990s reveals that the instances in which
teams gained the most yardage were on passing plays, would
it make sense for teams to drastically increase their number
of passing plays? A little reflection suggests that it probably
wouldn’t. Most opponents would alter their behavior to
defend against this change in strategy.
While seemingly unrelated to economic policy analysis,
this analogy teaches us two things: first, the data are the outcome
of optimization under a given policy regime; and second, when
policies change, so might behavior. This is a
potentially serious problem for empirical work in
macroeconomics. After all, in most cases we do not have
the luxury of running highly controlled experiments on
citizens to learn how they would respond. Instead, we must
be clever and insist that our models match observed
behavior under current policy. Consequently, to predict
how policies would alter outcomes, we must explicitly
reanalyze household decisionmaking under a proposed
policy, and then compare the results. This process overcomes the
thorny problem of using historical data to learn about the effects
of proposed, but historically novel, policy changes.
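The football analogy above can be made concrete in a few lines. In this sketch (payoff numbers invented for illustration), a look at historical play-by-play data says "pass more," but once the offense actually passes almost every down and the defense best-responds, the advantage disappears:

```python
# Stylized Lucas-critique demo built on the football analogy in the
# text. The payoff functions are invented for illustration.

def yards_per_play(pass_share):
    """Average yards gained by each play type once the defense has
    adapted to the offense's overall pass share: heavier passing
    draws more pass coverage, shrinking passing gains and opening
    up the run."""
    pass_yards = 8.0 - 6.0 * pass_share
    run_yards = 3.0 + 2.0 * pass_share
    return pass_yards, run_yards

# "Historical data": offenses passed 40 percent of the time.
hist_pass, hist_run = yards_per_play(0.40)   # passing looks better: 5.6 vs. 3.8
# Naive extrapolation from the data: pass almost every down.
# Re-solving the game with the defense's response says otherwise:
new_pass, new_run = yards_per_play(0.95)     # passing is now worse: 2.3 vs. 4.9
```

The historical correlation ("passes gain more yards") was an artifact of the old regime; under the new strategy it reverses, which is exactly the trap a purely statistical policy analysis falls into.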
In contrast to a purely statistical analysis, an equilibrium model
delivers decision rules for every conceivable situation that
households and firms may face. In turn, we can learn
more precisely what drives people to borrow, or save, or
file for bankruptcy. We can also have a clearer view
of how they might change their behavior if we
changed policy.
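A deliberately tiny sketch shows what such an exercise can deliver. This is not the model in the article; every parameter below is invented. Households differ in riskiness, a "stigma" cost governs whether a troubled household files, and a break-even lender decides who gets credit. Re-solving the model under each story reveals their different footprints:

```python
# Minimal equilibrium sketch (invented parameters; NOT the article's
# model). Households are indexed by riskiness p in [0, 1], the
# probability of a bad income shock. A shocked household files for
# bankruptcy with propensity (1 - stigma). A competitive lender
# extends credit only where the expected loss p * filing stays under
# a break-even ceiling, which rises as screening costs fall.

def equilibrium(screening_cost, stigma):
    filing = 1.0 - stigma                  # propensity to file when shocked
    ceiling = 0.10 - screening_cost        # better technology -> higher ceiling
    p_max = min(1.0, ceiling / filing)     # riskiest household granted credit
    borrowing = p_max                      # measure of households with credit
    default_rate = filing * p_max / 2.0    # average filings among borrowers
    return borrowing, default_rate

base = equilibrium(screening_cost=0.05, stigma=0.8)
tech = equilibrium(screening_cost=0.02, stigma=0.8)    # better lending tech
shame = equilibrium(screening_cost=0.05, stigma=0.5)   # less stigma

# Better technology raises borrowing AND defaults together. Lower
# stigma alone makes the break-even lender ration credit, so
# borrowing falls rather than rises.
```

The directional contrast, not the particular numbers, is the point: the technology story moves borrowing and default up together, while in this sketch the stigma story cannot, and a regression fit to the baseline data would never have revealed that.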
So we return to the initial question: Why have consumer
default rates risen? Though the data alone may point to
other culprits, equilibrium analysis suggests that improvements in lending technologies are a promising candidate for
explaining both borrowing and default behavior over the
past two decades, while mere reductions in “stigma” are not
able to match the data. In other words, it’s not shame that drove
the rise in bankruptcies, as neat an explanation as that would have
been. Before you can understand the facts,
you first need a good theory.
RF
Kartik B. Athreya is a senior economist at the Federal Reserve
Bank of Richmond. A longer version of this article can be found
on the Bank’s Web site at
www.richmondfed.org/research/economics_of_consumer_finance


NEXT ISSUE
Public Choice

Interview

For much of the 20th century, social scientists modeled
politicians as if they always worked in the public interest.
By contrast, the theory of public choice uses economic
principles to analyze the democratic process. When introduced
in the 1950s, the model was revolutionary: It explained how
politicians were self-interested and voters rationally ignorant,
and thus how economic policies that benefited the few at the
expense of the many were adopted. Now an economist
at George Mason University — the center of public choice
orthodoxy — turns that model on its head. Most voters, he says,
hold irrational economic views and often want the bad policies
that legislators deliver. Can that be right?

We talk with Russell Sobel of West
Virginia University about why the
Mountain State’s economy lags behind its
neighbors, and whether Wal-Mart and
other big-box stores are really so bad for
rural America and small business.

Credit Markets

Federal Reserve

Derivatives. Swaps. Securitization. Credit market innovations
like these should help make the financial system more efficient
and more resilient. But many ask whether these devices truly
make markets more stable. Are they, as famed investor Warren
Buffett sees them, “time bombs” which carry dangers that are
“potentially lethal”? Some of the nation’s leading experts on
credit markets recently gathered in Charlotte to discuss this
very question.

Was the central bank to blame for the
Great Depression?

Economic History
The High Point Market is the world’s biggest
furniture trade show. Over the years many
other cities have tried to steal the spotlight
from High Point. Las Vegas is the latest,
and perhaps most formidable, contender.

Tenure
Today, 62 percent of faculty jobs in American universities
and colleges are off the tenure track, many of them filled by
part-time instructors. Does tenure remain the most efficient
way to nurture research and disseminate ideas to students, a
public good crucial to society’s collective knowledge? Or does
the tenure system need an overhaul?

Banking and Commerce
For more than 50 years, this nation has kept a fairly strict
separation between banking and commerce. But now, amid
some high-profile requests by businesses to gain banking
powers, economists are revisiting the question of whether the
wall has outlived its usefulness.

Visit us online:
www.richmondfed.org
• To view each issue’s articles
and web-exclusive content
• To add your name to our
mailing list
• To request an e-mail alert of
our online issue posting
• To check out our online
weekly update


FRB RICHMOND

Economic Quarterly
The Richmond Fed’s Economic Quarterly contains original research
from the Bank’s economists and visiting scholars. To be added to
the EQ mailing list or to view the issues online, please visit
www.richmondfed.org

Spring 2006: Vol. 92, No. 2
➤ Huberto M. Ennis, The Problem of Small Change in Early Argentina
➤ Michael Dotsey and Andreas Hornstein, Implementation of Optimal Monetary Policy
➤ R. Alton Gilbert, Andrew P. Meyer, and Mark D. Vaughan, Can Feedback from the Jumbo CD Market Improve Bank Surveillance?

Summer 2006: Vol. 92, No. 3
➤ John A. Weinberg, Borrowing by U.S. Households
➤ Margarida Duarte and Diego Restuccia, The Productivity of Nations
➤ Yash P. Mehra, Inflation Uncertainty and the Recent Low Level of the Long Bond Rate
➤ Robert L. Hetzel, Making the Systematic Part of Monetary Policy Transparent

Fall 2006: Vol. 92, No. 4
➤ Hubert P. Janicki and Edward Simpson Prescott, Changes in the Size Distribution of U.S. Banks: 1960–2005
➤ Alexander L. Wolman, Bond Price Premiums
➤ Pierre-Daniel G. Sarte, Stark Optimal Fiscal Policies and Sovereign Lending
➤ John R. Walter, Not Your Father’s Credit Union

Winter 2007: Vol. 93, No. 1
➤ Robert L. Hetzel, The Contributions of Milton Friedman to Economics
➤ Kartik B. Athreya and Andrea L. Waddle, Implications of Some Alternatives to Capital Income Taxation
➤ Margarida Duarte, Diego Restuccia, and Andrea L. Waddle, Exchange Rates and Business Cycles Across Countries
➤ Borys Grochulski, Optimal Nonlinear Income Taxation with Costly Tax Avoidance

Federal Reserve Bank
of Richmond
P.O. Box 27622
Richmond, VA 23261

PRST STD
U.S. POSTAGE PAID
RICHMOND VA
PERMIT NO. 2

Change Service Requested

Please send address label with subscription changes or address corrections to Public Affairs or call (804) 697-8109